<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mark Gubrud &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/author/mgubrud/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Fri, 19 Jan 2018 02:42:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>ICRAC opening statement to the 2015 UN CCW Expert Meeting</title>
		<link>https://www.icrac.net/icrac-opening-statement-to-the-2015-un-ccw-expert-meeting/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Mon, 13 Apr 2015 21:13:06 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[CCW]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2528</guid>

					<description><![CDATA[On Monday April 13, ICRAC guest Dr. Mark A. Gubrud delivered the following statement to the informal meeting of experts at the United Nations in Geneva. International Committee for Robot Arms Control opening statement to the Convention on Conventional Weapons Meeting of Experts on lethal autonomous weapons systems, United Nations Geneva 13 April 2015 I am [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/IMG_20150413_1531471-e1429486206517.jpg"><img data-recalc-dims="1" decoding="async" class=" size-thumbnail wp-image-2529 alignleft" style="border: 0px none; margin: 10px;" src="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/IMG_20150413_1531471-e1429486206517-150x150.jpg?resize=150%2C150" alt="" width="150" height="150" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/IMG_20150413_1531471-e1429486206517.jpg?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/IMG_20150413_1531471-e1429486206517.jpg?zoom=2&amp;resize=150%2C150&amp;ssl=1 300w" sizes="(max-width: 150px) 100vw, 150px" /></a>On Monday April 13, ICRAC guest <a href="http://gubrud.net/">Dr. Mark A. Gubrud</a> delivered the following statement to the <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/6CE049BE22EC75A2C1257C8D00513E26?OpenDocument">informal meeting of experts</a> at the United Nations in Geneva.</p>
<p><strong>International Committee for Robot Arms Control opening statement to the Convention on Conventional Weapons Meeting of Experts on lethal autonomous weapons systems, United Nations Geneva 13 April 2015</strong></p>
<blockquote><p>I am speaking on behalf of the International Committee for Robot Arms Control (or ICRAC, as we are known), a founding NGO of the Campaign to Stop Killer Robots. We would very much like to thank Ambassador Biontino for his preparations in chairing this second meeting of experts and for inviting our members to share their expertise. And we thank all of the States Parties for their participation.</p>
<p>ICRAC is an international association of scientists, technologists, lawyers, and policy experts committed to the peaceful use of robotics in the service of humanity and the regulation of robotic weapons.</p>
<p>ICRAC members have carried out research on various aspects of autonomous weapons systems and published their results in scientific journals as well as at conferences and in mass media.</p>
<p>We would like to stress that ICRAC experts are available and willing to provide technical expertise to the High Contracting Parties as they engage in discussions about autonomous weapons systems.</p>
<p>ICRAC urges the international community to seriously consider the prohibition of autonomous weapons systems in light of the pressing dangers they pose to global peace and security. We have produced a new leaflet, now available, on the problems LAWS pose for global security.</p>
<p>We fear that once such weapons are developed, they will proliferate rapidly, and that if deployed they may interact unpredictably and contribute to regional and global destabilization and arms races.</p>
<p>ICRAC urges nations to be guided by the principles of humanity in their deliberations and to take into account considerations of human security, human rights, human dignity, humanitarian law and the public conscience. The Martens Clause reminds us that such moral principles are the basis for international law. Human judgment and meaningful human control over the use of violence must be made an explicit requirement in international policymaking on autonomous weapons.</p>
<p>ICRAC urges a broader discussion about the arming of autonomous systems beyond just lethal weapons, to include so-called “sub-lethal” or “less-than-lethal” weapons. These could still cause unnecessary suffering to humans. We urge nations to consider the human rights implications of the development and potential use of these weapons in any situation, including domestic policing, border control and internal law enforcement.</p>
<p>In conclusion, ICRAC encourages the CCW to move towards a preemptive ban on the development, production and use of autonomous weapons systems. ICRAC urges delegates to build consensus for negotiating a legally binding instrument to ban autonomous weapons systems.</p></blockquote>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2528</post-id>	</item>
		<item>
		<title>Is Russia Leading the World to Autonomous Weapons?</title>
		<link>https://www.icrac.net/is-russia-leading-the-world-to-autonomous-weapons/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Tue, 06 May 2014 05:32:05 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2453</guid>

					<description><![CDATA[The short answer is no. But Russia is testing and may deploy at its ICBM bases a lethal mobile system which has “automatic and semi-automatic control modes.” Additionally, Deputy Prime Minister Dmitry Rogozin has recently called for “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence [&#8230;]]]></description>
										<content:encoded><![CDATA[<div class="entry">
<p>The short <a href="http://gubrud.net/?p=203">answer is no</a>. But Russia is testing and may deploy at its ICBM bases a lethal mobile system which has “automatic and semi-automatic control modes.” Additionally, Deputy Prime Minister Dmitry Rogozin has recently called for “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike.”</p>
<p>However, <a href="http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_Status_4Nov2013.doc">Russia’s statements at the UN</a> have expressed concern about autonomous weapons as a threat to human life and international law, and Russia will be a participant in the Geneva CCW meeting. Moreover, a critical examination of claims that Russia is notably more aggressive in its early deployments of autonomous weapon systems than other nations, let alone that Russia is “leading a new robotic arms race,” shows these claims to be inflated and unwarranted. I have <a href="http://gubrud.net/?p=203">posted a detailed report at 1.0 Human.</a></p>
</div>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2453</post-id>	</item>
		<item>
		<title>Can an autonomous weapons ban be verified?</title>
		<link>https://www.icrac.net/can-an-autonomous-weapons-ban-be-verified/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Mon, 14 Apr 2014 05:13:57 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2445</guid>

					<description><![CDATA[At the ongoing CCW experts’ meeting on Lethal Autonomous Weapons Systems in Geneva, questions have begun to be raised about the verifiability of a ban on autonomous weapon systems. We would like to highlight the existence of our working paper outlining compliance measures for a ban, including a framework proposal as to how compliance could be [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/compliance.png"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft wp-image-2446 " style="border: 0px none; margin: 10px;" src="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/compliance-150x150.png?resize=176%2C176" alt="compliance" width="176" height="176" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?zoom=2&amp;resize=176%2C176&amp;ssl=1 352w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?zoom=3&amp;resize=176%2C176&amp;ssl=1 528w" sizes="auto, (max-width: 176px) 100vw, 176px" /></a>At the ongoing CCW experts’ meeting on Lethal Autonomous Weapons Systems in Geneva, questions have begun to be raised about the verifiability of a ban on autonomous weapon systems. We would like to highlight the existence of our <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf">working paper</a> outlining compliance measures for a ban, including a framework proposal as to how compliance could be verified.</p>
<p>A common remark is that verification would require inspection of software as well as hardware, and that nations will never permit such intrusive inspections. Moreover, even clearly written and well-documented software can be very difficult to read and interpret, let alone software that has been deliberately obfuscated or encrypted. Nor is the physical form of systems capable of autonomous operation always readily definable or discernible. Where both a cockpit and a communications link to a remote operator are lacking, one may reasonably infer that a system is intended to operate autonomously, but their presence does not ensure that the system is incapable of operating autonomously.</p>
<p>The <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf" target="_blank">solution to this conundrum</a> is already at hand, however, in the increasing emphasis in these discussions on the need for meaningful human control. This approach is increasingly recognized as a conceptual reframing of the problem of banning autonomous weapons, as already proposed in 2010 with the <a href="http://icrac.net/statements/">Berlin Statement</a> (originally titled “The Principle of Human Control of Weapons and All Technology”). That statement asserts positively <em>“That it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war. In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision and its foreseeable consequences.”</em> The emphasis on personal responsibility and legal accountability for the decision to use violent force has become recognized as one of the elements of the concept of meaningful human control, which also emphasizes the role of adequate information and deliberation by the decision maker.</p>
<p>Thus, while it may indeed be impractical to verify compliance with a ban on “autonomous weapons” as such, it may very well be possible to verify compliance with a requirement for accountable and meaningful human control and decision in each use of violent force.</p>
<p>This is not to say that we should not also declaratively ban autonomous weapons, minus a list of exceptions for systems that operate autonomously but do not make significant lethal decisions autonomously, that are purely defensive in nature and defend human life against immediate threats from incoming munitions, or that are to be allowed for other, pragmatic reasons. Certainly, we should ban fully autonomous weapons. But the way to implement and verify such a ban may be better framed in terms of human control.</p>
<p>Two years ago, ICRAC members took part in an effort to consider measures for promoting compliance with an autonomous weapons ban. The result was a working paper, “Compliance Measures for an Autonomous Weapons Convention,” which is <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf">posted here</a>. The work has not received wide recognition, but given that the question has begun to arise, it seems appropriate to highlight it now, rather than witness the emergence of “a ban on killer robots would be nice, but it’s unverifiable” as a persistent canard.</p>
<p>The paper highlights many aspects of ensuring compliance with an autonomous weapons convention, including the enunciation of strong, simple, intuitive principles as the moral foundation for such an agreement, framing in terms of clear definitions, articulation of allowed exceptions, declaration of previously existing autonomous weapon systems, national implementing legislation, and the creation of an international treaty implementing organization (TIO). The role of the TIO in verification is detailed, particularly its support for cryptographic validation of records to tie them to particular weapon systems and the use of force at particular times (and potentially, places). These records, it is proposed, would be held by the compliant States Parties themselves and not released to the TIO or subject to any other possible compromise of military secrets, except in the case of an orderly inquiry into particular suspicious events, and possibly also some quota of routine, random inspections to verify continuous compliance. The cryptographic principles on which the tamper-proofing and time-stamping of such records can be carried out are simple and well-understood, and full encrypted records need not be exposed to the possibility of decryption if only “digital signatures” of the records are supplied to the TIO for archiving.</p>
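<p>To make the record-validation idea above concrete, here is a minimal sketch in Python (our illustration only, not code from the working paper; the record format, the key handling, and the use of an HMAC in place of a full public-key signature are all simplifying assumptions). It shows how a State Party could archive only a tamper-evident digest of an encrypted record with the TIO, and how a later inquiry could check a produced record against that digest:</p>
<pre><code>import hashlib
import hmac
import json
import time

def archive_digest(encrypted_record: bytes, signing_key: bytes) -> dict:
    """Digest and timestamp that the State Party submits to the TIO archive."""
    stamped = {
        "digest": hashlib.sha256(encrypted_record).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(stamped, sort_keys=True).encode()
    # An HMAC stands in here for a real public-key signature (e.g. ECDSA);
    # either way, the full record itself never leaves the State Party.
    stamped["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return stamped

def matches_archive(encrypted_record: bytes, archived: dict) -> bool:
    """During an inquiry: does the produced record match the archived digest?"""
    digest = hashlib.sha256(encrypted_record).hexdigest()
    return hmac.compare_digest(digest, archived["digest"])

record = b"encrypted engagement log, retained by the State Party"
entry = archive_digest(record, signing_key=b"state-party-signing-key")
assert matches_archive(record, entry)  # a tampered or substituted record fails
</code></pre>
<p>Only the digest entry would be archived by the TIO; the record itself would remain with the State Party unless an orderly inquiry were opened.</p>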
<p>We believe that a scheme of this type can support rigorous verification of compliance in cases where it is suspected that a fully autonomous weapon system has been used, which should be a sufficient deterrent to their use. It should be coupled with other transparency and confidence-building measures, including routine on-site inspections of facilities in which remotely operated or nearly autonomous systems are developed, tested, manufactured, stockpiled, deployed or used, and with national means of intelligence, which should suffice to reveal any prohibited activities large enough in scale and scope to pose a significant strategic security threat. Together, these measures should ensure that no State Party will find the risks of non-compliance to be outweighed by uncertain and hypothetical military benefits.</p>
<p>&#8211;<em> by <a href="http://gubrud.net/">Mark Gubrud</a> (<a href="https://twitter.com/mgubrud">@mgubrud</a>) and ICRAC’s Juergen Altmann</em></p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2445</post-id>	</item>
		<item>
		<title>A meme is born: autonomous = secure</title>
		<link>https://www.icrac.net/a-meme-is-born-autonomous-secure/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Fri, 11 Oct 2013 21:46:20 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2332</guid>

					<description><![CDATA[One of Joshua Foust’s assertions in his debate with Heather Roff was that making weapons autonomous was necessary in order to secure them against the threat of hacking. I posted a response after Foust reiterated this surprising argument, providing a few scraps of pseudo-evidence to support it, in an article which seems to have gone semi-viral on the internet, launching what seems [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>One of Joshua Foust’s assertions in his debate with Heather Roff was that making weapons autonomous was necessary in order to secure them against the threat of hacking. I <a title="foustian meme" href="https://medium.com/i-m-h-o/a7c6981915e1">posted a response</a> after Foust <a href="http://www.defenseone.com/technology/2013/10/ready-lethal-autonomous-robot-drones/71492/">reiterated</a> this surprising argument, providing a few scraps of pseudo-evidence to support it, in an article which seems to have <a href="http://www.checkarmaments.com/america-wants-drones-that-kill-without-humans-g757167530?language=en">gone semi-viral</a> on the internet, launching what seems likely to become a persistent meme (canard, for those less fond of neologisms) in this debate.</p>
<p>Briefly, Foust argues that teleoperated drones today are too vulnerable to hacking through their communications links, and that the solution is to make them fully autonomous. And just as briefly, I <a href="https://medium.com/i-m-h-o/a7c6981915e1">show</a> that this is wrong in a number of ways.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2332</post-id>	</item>
		<item>
		<title>NYT warns of killer robot gap</title>
		<link>https://www.icrac.net/nyt-warns-killer-robot-gap/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Sun, 29 Sep 2013 15:08:44 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Front Page]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2174</guid>

					<description><![CDATA[New York Times science writer John Markoff reported on Sept. 23 that the US military “lags” in development of unmanned ground vehicles (UGVs), which is sort of true if you compare the status of UGVs with that of unmanned air vehicles (UAVs). The real reason, as Markoff acknowledges, has to do with the technical difficulty of locomotion on [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>New York Times science writer John Markoff <a href="http://www.nytimes.com/2013/09/24/science/military-lags-in-push-for-robotic-ground-vehicles.html">reported on Sept. 23</a> that the US military “lags” in development of unmanned ground vehicles (UGVs), which is sort of true if you compare the status of UGVs with that of unmanned air vehicles (UAVs). The real reason, as Markoff acknowledges, has to do with the technical difficulty of locomotion on rough and varied terrain, around obstacles and through impediments, as compared with the relative ease of flying through unobstructed air. But that didn’t prevent <a href="http://gawker.com/the-army-desperately-needs-more-killer-robots-1370505757">Gawker</a>, <a href="http://digg.com/search?q=markoff">Digg</a> and numerous tweeters from reading the article as a warning that the US military is falling behind someone in a race for “killer robots.”</p>
<p>Actually, Markoff does clearly say that the Pentagon is falling behind Google. In fact, the entire article seems to suggest that the military has neglected land robots, which is simply untrue, as I explain in <a href="http://gubrud.net/?p=49">my response</a>, posted on my own blog.</p>
<p>I felt compelled to respond because, just days before Markoff’s article was posted (it was also included in the Sept. 24 New York print edition), I had published in the Bulletin of the Atomic Scientists an <a href="http://thebulletin.org/us-killer-robot-policy-full-speed-ahead">analysis of US policy for autonomous weapons</a> which shows that it is actually an aggressive, “full speed ahead” policy. I don’t know if Markoff read my piece before he wrote his, but if you put them side by side and step back until you can only read the headlines, his looks like a rebuttal of mine. He is far too good a writer to say something that is actually wrong, though, and he doesn’t actually contradict anything I wrote.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2174</post-id>	</item>
		<item>
		<title>US killer robot policy: Full speed ahead</title>
		<link>https://www.icrac.net/us-killer-robot-policy-full-speed-ahead/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Sun, 22 Sep 2013 16:25:43 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2070</guid>

					<description><![CDATA[In November 2012, United States Deputy Defense Secretary Ashton Carter signed directive 3000.09, establishing policy for the “design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.” Without fanfare, the world had its first openly declared national policy for killer robots. The policy has [&#8230;]]]></description>
										<content:encoded><![CDATA[<div>
<div id="attachment_2146" style="width: 160px" class="wp-caption alignleft"><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2013/09/doomsday-1024x1024-1024x939-e1380126056885.jpg"><img data-recalc-dims="1" loading="lazy" decoding="async" aria-describedby="caption-attachment-2146" class="wp-image-2146 size-thumbnail" style="margin-right: 5px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2013/09/doomsday-1024x1024-1024x939.jpg?resize=150%2C150&#038;ssl=1" alt="doomsday-1024x1024-1024x939" width="150" height="150" /></a><p id="caption-attachment-2146" class="wp-caption-text">Doomsday Clock</p></div>
<p>In November 2012, United States Deputy Defense Secretary Ashton Carter signed <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf" target="_blank" rel="noopener noreferrer">directive 3000.09</a>, establishing policy for the “design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.” Without fanfare, the world had its first openly declared national policy for killer robots.</p>
<p>The policy has been widely misperceived as one of caution. According to <a href="http://www.wired.com/dangerroom/2012/11/human-robot-kill/" target="_blank" rel="noopener noreferrer">one account</a>, the directive promises that a human will always decide when a robot kills another human. Others even <a href="http://www.hrw.org/news/2013/04/16/us-ban-fully-autonomous-weapons" target="_blank" rel="noopener noreferrer">read it as imposing a 10-year moratorium</a> to allow for <a href="http://www.nytimes.com/2013/03/17/opinion/sunday/keller-smart-drones.html?pagewanted=all" target="_blank" rel="noopener noreferrer">discussion of ethics and safeguards</a>. However, as a Defense Department spokesman confirmed for me, the 10-year expiration date is routine for such directives, and the policy itself is “not a moratorium on anything.”</p>
<p>A careful reading of the directive finds that it lists some broad and imprecise criteria and requires senior officials to certify that these criteria have been met if systems are intended to target and kill people by machine decision alone. But it fully supports developing, testing, and using the technology, without delay. Far from applying the brakes, the policy in effect overrides longstanding resistance within the military, establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.</p>
<p><strong>Did soldiers ask for killer robots?</strong> In the years before this new policy was announced, spokesmen routinely denied that the US military would even consider lethal autonomy for machines. Over the past year, speaking for themselves, some <a href="http://online.wsj.com/article/SB10001424127887324128504578346333246145590.html" target="_blank" rel="noopener noreferrer">retired</a> and even <a href="http://usacac.army.mil/CAC2/MilitaryReview/Archives/English/MilitaryReview_20130430_art005.pdf" target="_blank" rel="noopener noreferrer">active duty</a> officers have written passionately against both autonomous weapons and the overuse of remotely operated drones. In May 2013, the first nationwide poll ever taken on this topic found that Americans opposed to autonomous weapons outnumbered supporters by two to one. Strikingly, the closer people were to the military—family, former military, or active duty—the more likely they were to <a href="http://www.whiteoliphaunt.com/duckofminerva/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons_May2013.pdf" target="_blank" rel="noopener noreferrer">strongly oppose</a> autonomous weapons and support efforts to ban them.</p>
<p>Since the 1990s, the military has exhibited what <a href="http://www.csbaonline.org/wp-content/uploads/2011/06/2007.03.01-Six-Decades-Of-Guided-Weapons.pdf" target="_blank" rel="noopener noreferrer">autonomy proponent Barry Watts has called</a> “a cultural disinclination to turn attack decisions over to software algorithms.” Legacy weapons such as land and sea mines have been deemphasized and some futuristic programs canceled—or altered to provide greater capabilities for human control. Most notably, the Army’s Future Combat Systems program, which was to include a variety of networked drones and robots at an eventual cost estimated as high as $300 billion, was canceled in 2009, with <a href="https://www.cbo.gov/publication/41186" target="_blank" rel="noopener noreferrer">$16 billion already spent</a>.</p>
<p>At the same time, calls for autonomous weapons have been rising both outside the military and from some inside it. In 2001, retired <a href="http://www.carlisle.army.mil/USAWC/parameters/Articles/01winter/adams.htm" target="_blank" rel="noopener noreferrer">Army lieutenant colonel T. K. Adams argued</a> that humans were becoming the most vulnerable, burdensome, and performance-limiting components of manned systems. Communications links for remote operation would be vulnerable to disruption, and full autonomy would be needed as a fallback. Furthermore, warfare would become too fast and too complex for humans to direct. Realistic or not, such thinking, together with budget pressures and the perception that robots are cheaper than people, has supported a steady growth of autonomy research and development in military and contractor-supported labs. In March 2012, the Naval Research Lab opened a <a href="http://www.nrl.navy.mil/media/news-releases/2012/naval-research-laboratory-opens-laboratory-for-autonomous-systems-research" target="_blank" rel="noopener noreferrer">new facility</a> dedicated to development and testing of autonomous systems, complete with simulated rainforest, desert, littoral, and shipboard or urban combat environments. But the killer roboticists’ brainchildren have continued to face what a <a href="http://www.acq.osd.mil/dsb/reports/AutonomyReport.pdf" target="_blank" rel="noopener noreferrer">2012 Defense Science Board report</a>, commissioned by then-Undersecretary Carter, called “material obstacles within the Department that are inhibiting the broad acceptance of autonomy.”</p>
<p><strong>The discrimination problem. </strong>Navy scientist <a href="http://www.sevenhorizons.org/docs/CanningWeaponizedunmannedsystems.pdf" target="_blank" rel="noopener noreferrer">John Canning recounts a 2003 meeting</a> at which high-level lawyers from the Navy and Pentagon <a href="http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4799401" target="_blank" rel="noopener noreferrer">objected</a> to autonomous weapons. They assumed that robots could not comply with <a href="http://www.icrc.org/eng/war-and-law/index.jsp" target="_blank" rel="noopener noreferrer">international humanitarian law</a>, core principles of which include a responsibility to distinguish civilians from combatants and to refrain from attacks that would cause excessive harm to civilians. These principles, and the military rules of engagement intended to implement them, assume a level of awareness, understanding, and judgment that computers simply don’t have. Weapons are also subject to mandated legal review, and indiscriminate weapons—that is, weapons that cannot be selectively directed to attack lawful targets and avoid civilians—are forbidden. The lawyers did not think they would ever be able to sign off on autonomous weapons.</p>
<p>Georgia Tech roboticist Ron Arkin <a href="http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf" target="_blank" rel="noopener noreferrer">has argued</a> that unemotional robots, following rigid programs, could actually be more ethical than human soldiers. But his proposals fail to solve the hard problems of distinguishing civilians, understanding and predicting social and tactical situations, or judging the proportionality of force. Others argue, philosophically, that only humans can make such targeting judgments legitimately. In a world getting used to talking about virtual assistants and <a href="http://www.youtube.com/watch?v=cdgQpa1pUUE" target="_blank" rel="noopener noreferrer">self-driving cars</a>, it may not be obvious what the limits of artificial intelligence will be, or what people will accept, in 10, 20, or 40 years. But for now, and for the immediate future, the robot discrimination problem is hard to dispute.</p>
<p>To break the legal deadlock, Canning suggested that robots might normally be granted autonomy to attack materiel, including other robots, but not humans. Yet in many situations it might be impossible to avoid the risk—or the intent—of killing or injuring people. For such cases, Canning proposed what he called “dial-a-level” autonomy; that is, the robot might ordinarily be required to ask a human what to do, but in some circumstances it could be authorized to take action on its own.</p>
<p>In recent years, autonomy visionaries have stressed human-machine partnerships and flexibility to decide the level of autonomy a weapon may be allowed, based on tactical needs. In a 2011 <a href="http://www.defenseinnovationmarketplace.mil/resources/UnmannedSystemsIntegratedRoadmapFY2011.pdf" target="_blank" rel="noopener noreferrer">roadmap</a>, for example, the Defense Department envisions unmanned systems that seamlessly operate with manned systems while gradually reducing the degree of human control and decision making required. A <a href="http://www.minwara.org/Meetings/2011_05/Presentations/thurspdf/0800/Mining.pdf" target="_blank" rel="noopener noreferrer">2011 Navy presentation</a> depicts decisions about autonomy and control as a continuous tradeoff, explaining that while human control minimizes the risk of attacking unintended targets, machine autonomy maximizes the chance of defeating the intended ones. It seems likely that in desperate combat, autonomy would be dialed up to the highest level.</p>
<p><strong>Appropriate levels of human judgment.</strong> In the spring of 2011, the Defense Department convened a group of uniformed and civilian personnel to begin developing a policy for autonomous weapons. The directive that emerged 18 months later lists a number of requirements for autonomous systems and draws a line at systems intended to autonomously target and engage humans&#8211;or to apply kinetic force (e.g., bullets and bombs) against any targets. But the directive neither states nor implies that this line should not be crossed.</p>
<p>Rather, the line may be crossed if two undersecretaries and the Chairman of the Joint Chiefs of Staff affirm that the listed requirements have been met. In the event of an urgent military need, any of the requirements can be waived—with the exception of a legal review. Furthermore, the line is not as clearly drawn as it may seem to be.</p>
<p>The requirements listed in the directive are not much more stringent than those that apply to any weapon system. Tactics, techniques, and procedures must be developed to specify how an autonomous weapon system should be used. Hardware and software must undergo rigorous verification and validation. Human-machine interfaces must be understandable to trained operators and must provide clear activation and deactivation procedures and have “safeties, anti-tamper mechanisms, and information assurance” that minimize the probability of unintended engagements.</p>
<p>These requirements sound reassuring; they promise to address many of the concerns people have about autonomous weapons. According to the directive, it is Defense Department policy that the measures listed will ensure that the systems will work in realistic environments against adaptive adversaries. But saying it doesn’t necessarily make it so.</p>
<p>In reality, neither mathematical analysis nor field testing can possibly locate every software bug or situation in which such complex systems may fail or behave inappropriately. Adversaries will strive to locate points of vulnerability, and it is terribly hard to anticipate everything that adversaries may do, let alone know how their actions may affect system performance. The notion of information assurance implies a promise to solve problems of software reliability and computer security that bedevil contemporary technology.</p>
<p>The centerpiece of the entire directive is this statement: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Although the phrase is never defined, it does not appear that appropriate levels always require at least one human being to make the decision to kill another. Rather, the appropriate level might well be the decision to dispatch a robot on a mission and let it select the targets to engage. In making such decisions, it appears that the burden of ensuring compliance with rules of engagement and laws of war falls on commanders and operators when the robots themselves are incapable of ensuring this. But in practice, it seems likely that unintended atrocities committed by autonomous weapons will be blamed on technical failures.</p>
<p><strong>Semi-autonomy: Smudging the line. </strong>In theory, as long as three senior officials withhold their signatures, autonomous weapon systems that are intended to target humans or use kinetic or lethal force would be blocked. But the policy green lights—no extra signatures needed—semi-autonomous weapon systems that may apply any kind of force against any targets, including people. The crucial line that the policy draws between semi- and fully autonomous systems is fuzzy and broken. As technology advances, it is likely to be crossed as a matter of course.</p>
<p>The directive defines a semi-autonomous weapon system as one intended to engage only those targets that have been selected by a human operator. But the system itself is allowed to use autonomy to acquire, track and identify potential targets. It can cue the operator, prioritize targets, and decide when to fire. What the operator must do to select targets is left unspecified. Would a verbal OK, gesture, or even neurological interface be acceptable?</p>
<p>A system with such capabilities may not be intended to function without a human operator, but at most it would require a trivial modification to do so—perhaps a hack. <a href="http://spectrum.ieee.org/robotics/military-robots/a-robotic-sentry-for-koreas-demilitarized-zone" target="_blank" rel="noopener noreferrer">At least</a> <a href="http://www.dodaam.com/eng/sub2/menu2_1_4.php" target="_blank" rel="noopener noreferrer">three</a> <a href="http://www.rafael.co.il/Marketing/396-1687-en/Marketing.aspx" target="_blank" rel="noopener noreferrer">companies</a> already market such systems. The policy clears them for immediate use after acquisition, via standard procedures.</p>
<p>The policy also addresses fire-and-forget or lock-on-after-launch homing munitions, which would include many systems in use today. Such munitions have seekers that autonomously find and home on targets. The directive classifies them as semi-autonomous weapon systems, on the theory that the operator selects targets by using tactics, techniques and procedures that “maximize the probability that the only targets within the seeker’s acquisition basket” will be the intended targets. Yet, upon launch, such munitions become, <em>de facto</em>, fully autonomous.</p>
<p>No restrictions are placed on the technology that a seeker may use to find a target and decide whether that is what it was looking for. This opens a clear path for weapons that can be sent on hunt-and-kill missions, limited only by the ability of their onboard sensors and computers to narrow their acquisition baskets to selected targets.</p>
<p><strong>The way forward—to what?</strong> In the mid-2000s, Lockheed Martin was developing <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/factsheet-LOCAAS.pdf" target="_blank" rel="noopener noreferrer">a small autonomous drone missile</a> for the Air Force, and a <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/Product_Card-NLOS.pdf" target="_blank" rel="noopener noreferrer">similar system</a> for the Army. Equipped with several types of onboard sensors, the missiles would fly out to designated areas and wander in search of generic targets, such as tanks, rocket launchers, radars, or personnel, which they would autonomously recognize and attack. Both programs were canceled, amid legal, ethical, and technical questions, to be <a href="http://www.precisionstrike.org/pdf/2005_Oct_and_Dec_newsletter.pdf" target="_blank" rel="noopener noreferrer">superseded</a> by <a href="http://www.avinc.com/uas/adc/switchblade/" target="_blank" rel="noopener noreferrer">systems</a> that combine autonomous capabilities with radio links to human operators. Under the new policy, would such wide-area search munitions be classified as autonomous or semi-autonomous? Either way, the policy establishes that weapons like these may be developed, acquired, and used.</p>
<p>Given the long internal debate and general public opposition to killer robots, this is a highly aggressive policy. The US military never intended to replace foot soldiers with autonomous lethal robots during this decade, particularly not where civilians might be at risk. But funding the development and acquisition of systems that have autonomous targeting and fire-control capabilities—even if they are not intended for fully autonomous killing—will spur the weapons industry, in the United States and elsewhere, to accelerate exploration and investment in the technology of autonomous warfare.</p>
<p>The real issue is whether the world needs to go this way at all. The message of this policy is: full speed ahead.</p>
</div>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2070</post-id>	</item>
		<item>
		<title>Foust’s case for killer robots engaged: Autonomous weapons are no phantom menace</title>
		<link>https://www.icrac.net/weapons-are-no-phantom-menace/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Fri, 21 Jun 2013 03:36:22 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Front Page]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2022</guid>

					<description><![CDATA[ForeignPolicy.com blogger Joshua Foust announced on May 14 that he’d identified a “liberal” case for killer robots, including the seemingly incompatible assessments that they could “do it better”, where “it” means make the decision to kill, and that they are but a “phantom” (therefore not demanding of a serious response, such as discussion of a treaty). Foreign Policy has [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>ForeignPolicy.com blogger Joshua Foust <a title="foustfp" href="http://www.foreignpolicy.com/articles/2013/05/14/a_liberal_case_for_drones">announced</a> on May 14 that he’d identified a “liberal” case for killer robots, including the seemingly incompatible assessments that they could “do it better”, where “it” means make the decision to kill, and that they are but a “phantom” (therefore not demanding of a serious response, such as discussion of a treaty). Foreign Policy has not accepted a response; this post is the one that was offered.</p>
<p>By now we’ve all become accustomed to the idea of somebody sitting in a trailer, gazing at a screen with a joystick in hand, and pushing a button to blow up a person, or usually more than one, somewhere on the other side of the world. Indeed, <a href="http://www.nytimes.com/2002/02/11/world/nation-challenged-raid-s-aftermath-us-troops-search-for-clues-victims-missile.html">when we first heard of this</a>, it was just another bit of disorienting news in the brave new post-9/11 world. George W. Bush was president then, and like it or not, he was going to do it. Then came Barack Obama, the avatar of liberal hopes, to <a href="http://www.foreignpolicy.com/articles/2013/05/14/a_liberal_case_for_drones">double Pentagon procurement spending</a> for drones in his first year in office, and execute <a href="http://www.thebureauinvestigates.com/2012/12/03/the-reaper-presidency-obamas-300th-drone-strike-in-pakistan/">six times as many CIA drone strikes</a> in Pakistan as had Bush, with the <a href="http://www.people-press.org/2013/02/11/continued-support-for-u-s-drone-strikes/">approval of a majority of Democrats</a> as well as Republicans. So it seems odd for Joshua Foust to be announcing now that he’s found “<a href="http://www.foreignpolicy.com/articles/2013/05/14/a_liberal_case_for_drones">a liberal case for drones</a>,” as if liberals (as a political demographic) weren’t already solidly on board with the president who seems to have all but defined himself as the man with the drone <a href="https://www.youtube.com/watch?v=WWKG6ZmgAX4"><i>cojones</i></a>.</p>
<p>However, the drones that Foust is out to build a case for are drones not as we know them. Autonomous drones, or autonomous weapons in general, are the new shock of the new, and not yet fastened in the hearts of liberals, conservatives, or even <a href="http://online.wsj.com/article/SB10001424127887324128504578346333246145590.html">military professionals</a>. In fact, not many people are yet aware that in November the United States became the first nation to have an <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf">openly declared policy</a> for the development, acquisition and use of autonomous weapon systems (AWS), weapons which “once activated, can select and engage targets without further intervention by a human operator.”</p>
<p>Many of those who have heard of this are under the impression that the policy is a <a href="http://www.nytimes.com/2013/03/17/opinion/sunday/keller-smart-drones.html?pagewanted=all&amp;_r=0">moratorium</a> on systems that <a href="http://www.wired.com/dangerroom/2012/11/human-robot-kill/">kill</a> without a “<a href="http://www.hrw.org/news/2013/04/16/us-ban-fully-autonomous-weapons">human in the loop</a>.” It’s not. According to Pentagon spokesman James Gregory, the policy “mandates a review by senior defense leaders [two under-secretaries plus the chairman of the Joint Chiefs] before entering formal development and again before fielding” of systems <i>intended</i> to kill people by autonomous machine decision. “This review process,” says LTC Gregory, “doesn’t really impose a moratorium on anything.”</p>
<p>In fact, the new policy clears the way for aggressive development of a technology that the military has been reluctant to embrace, and has actually backed away from in recent years.</p>
<p>Older systems such as robotic antisubmarine mines have been phased out, and systems in development such as loitering autonomous hunter-killer missiles have been canceled or redirected to include communications links to human operators, while still retaining capabilities for autonomous operation. In 2005, the US Air Force ended work on <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/factsheet-LOCAAS.pdf">LOCAAS</a>, a small air-launched missile that was to hunt for tanks and missile launchers, using its own sensors, and attack them autonomously. It was <a href="http://www.precisionstrike.org/pdf/2005_Oct_and_Dec_newsletter.pdf">superseded</a> by <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/Product_Card-SMACM.pdf">SMACM</a>, a larger missile with a two-way link to an operator. The Army canceled its autonomous <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/Product_Card-NLOS.pdf">Loitering Attack Missile</a>, but has recently been experimenting with <a href="http://www.avinc.com/uas/adc/switchblade/">Switchblade</a>, a small ground-launched missile that can loiter and has both remote-control and semi-autonomous modes.</p>
<p>At the same time, a growing contingent, both inside and outside the Pentagon, has declared the inevitability of AWS and <a href="http://www.defenseinnovationmarketplace.mil/resources/UnmannedSystemsIntegratedRoadmapFY2011.pdf">called for</a> “policies that introduce a higher degree of autonomy to reduce the manpower burden and reliance on full-time high-speed communications links while also reducing decision loop cycle time.” The November 2012 <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf">Directive</a>, which “Establishes DoD policy and assigns responsibilities for the…. design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems,” appears to resolve this debate in favor of a commitment to autonomous technology and AWS development, integration of AWS into “operational mission planning” and use in war.</p>
<p>Foust quotes from the Directive only the one line that probably best summarizes its approach:</p>
<p>“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”</p>
<p>The problem is that the document nowhere explains what the “appropriate levels” are; the strongly implied assumption is that human judgment is not always required at the level of the decision to take a human life—that this is something which can be appropriately delegated to a machine.</p>
<p>While opening the door for delegation of lethal authority to electronic decision makers, the Directive places the burden of responsibility for their decisions on human commanders and operators. If the systems themselves are not able to ensure that they operate “in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE)” it is up to “Persons who authorize the use of, direct the use of, or operate” AWS to ensure this, despite not being able to know exactly what situations the systems will encounter, let alone how they will behave. The same can be said of a commander sending soldiers into a fight, but in most cases the soldiers themselves are supposed to be responsible for not shooting civilians.</p>
<p>The fact that soldiers often fail in this regard is held by Foust, and many other apologists for killer robots, to deflate the basic moral objection to machines that kill autonomously. After all, going by the record, how moral are human beings, especially in war? Perhaps emotionless robots can be programmed to be more scrupulously faithful to international humanitarian law (IHL) and military ROEs, going one better by exposing themselves to greater risk in, for example, checking whether a wounded opponent is still armed and dangerous, instead of just shooting him again to make sure.</p>
<p>While one may imagine such happy scenarios, in practice, for the immediate future, artificial intelligence (particularly in mobile robots) can’t come close to human performance in the cognition, reasoning and judgment required to distinguish combatants from noncombatants, to understand human behavior in combat situations, or to weigh military gains against harm or risk to civilians before deciding on the use of weapons. As long as this is so, giving machines increasing latitude to decide between one action and another, once the action starts, is only loosening the degree of control by responsible human judgment.</p>
<p>Will emotional humans, under stress and in danger, be as responsible in their use of AWS as the Directive demands, or will using AWS become a way to evade responsibility? Foust suggests that determining why an AWS made a bad decision could be “as simple as plugging in a black box,” and that programmers could be held responsible for war crimes caused by bugs in code. But in reality, what investigation would likely reveal is that the technology encountered a condition that had not been anticipated—the kind of thing that leads to millions of software failures every working day, most of which at least do not have fatal consequences.</p>
<p>Foust has it entirely backwards when he says that the Pentagon’s declaration about “appropriate levels of human judgment” implies that “the U.S. government isn’t looking to develop complex behaviors in drones.” What would we be talking about, if that were true? Increasing levels of autonomy implies increasing levels of complexity and capability of weapon systems to classify objects, interpret situations, predict the next move, and to decide how to act. Saying that it is the commanders’ and operators’ responsibility to understand the limitations of these systems, so that they can be used as they are, is clearing the way for further development of these systems without setting any ultimate limits.</p>
<p>This becomes clear in the Directive’s definition of “semi-autonomous weapon systems” (SAWS). These are fully green-lighted for development, acquisition and use without needing any special signatures or high-level oversight. If something is a semi-autonomous weapon system, it can be developed, funded, purchased, integrated into operational planning and used in combat today, or ASAP.</p>
<p>Two kinds of SAWS are defined. The first are systems with capabilities for</p>
<p>“…acquiring, tracking, and identifying potential targets; cueing potential targets to human operators; prioritizing selected targets; timing of when to fire; or providing terminal guidance to home in on selected targets, provided that human control is retained over the decision to select individual targets and specific target groups for engagement.”</p>
<p>Got that? For example, you might find an operator sitting at a console, and on the screen he sees some blurry figures through a FLIR camera. The system places a cursor over the figures and says “TARGET GROUP IDENTIFIED.” The operator says “Engage,” and the system does the rest. Obviously, it takes nothing more than a trivial software modification, or the throw of a switch, to make such a system fully autonomous. At <a href="http://news.cnet.com/8301-17938_105-20010533-1.html">least</a> <a href="http://www.dodaam.com/eng/sub2/menu2_1_4.php">three</a> <a href="http://www.rafael.co.il/Marketing/396-1687-en/Marketing.aspx">companies</a> already produce sentry systems which have capabilities of this sort. Nor is there any reason it could not be a mobile robot sent out to patrol an urban environment.</p>
<p>The second kind of SAWS, also green-lighted by the Directive, comprises “homing munitions” that are sent out to seek a target rather than being locked onto a particular target before launch. These SAWS clearly include missiles, cruise missiles and robotic torpedoes, and there is no clear exclusion of autonomous drones and ground robots (even if you think self-destruction would be required to meet the definition of a “munition”). Here, the operator is supposed to be responsible for following “tactics, techniques and procedures” which “maximize the probability that the only targets within the seeker’s acquisition basket” are the intended targets. However, most seekers have at least some ability to distinguish intended targets from “clutter,” and as this technology develops, the sophistication of target identification and discrimination is expected to increase. First we rely on the technology to seek out missile launchers within some limited search area, and then it gets good enough to distinguish missile launchers from gasoline trucks or the piping in a water treatment plant, so we can expand the search area. This process can be continued without limit, and eventually we are dispatching highly intelligent combat robots on search-and-destroy missions over a wide area as if they were human commandos.</p>
<p>In effect, the second definition of SAWS has erased any meaningful distinction between this kind of weapon and the fully autonomous ones that are supposed to require senior review (the so-called moratorium), since as soon as the missile is launched it becomes fully autonomous. There is thus no red line left to cross between the weapons that are fully approved under the new US policy, set by this administration, and, to put it baldly, the Terminator—or anything in between. It becomes a matter of technology and time.</p>
<p>It is curious that so many who write on both sides of this question jeer at science fiction, as if its warnings could be dismissed just because they were delivered through expressionistic art and scenarios that don’t pass analytical muster. How can we talk about the future, and about how technology may reshape or destabilize our world, without being subject to this particular form of <a href="https://twitter.com/drunkenpredator/status/327134248681209859"><i>ad hominem</i></a> ridicule? Perhaps it may help to draw parallels with the indisputable past.</p>
<p>Obama’s assumption and expansion of the presidential prerogative to drone is eerily reminiscent of Harry Truman’s assumption of the authority to use nuclear weapons and to largely set policy for their further development, production and deployment. Liberals mostly went along with that, too, at least until realizing that the power to initiate nuclear war was the power to end human history. By the time popular (and mostly liberal) <a href="http://www.armscontrol.org/act/2010_12/LookingBack">dissent</a> against an unbounded nuclear arms race rose to the level of a potent political force, it was almost too late. We were very lucky—perhaps owing largely to the <a href="http://www.theatlantic.com/magazine/archive/2013/01/the-real-cuban-missile-crisis/309190/?single_page=true">moderating role</a> of <a href="http://www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter021099b.htm">human judgment</a> in <a href="http://www.cubanmissilecrisis.org/">crises</a> and <a href="http://nuclearfiles.org/menu/key-issues/nuclear-weapons/issues/accidents/20-mishaps-maybe-caused-nuclear-war.htm">accidents</a> at the <a href="http://en.wikipedia.org/wiki/Yom_Kippur_War#Soviet_threat_of_intervention">brink</a> of apocalypse—to survive the Cold War, as well as two decades (and counting) of the new world disorder.</p>
<p>Killer robots are not the only element of the global technological arms race, but they are currently the most salient, rapidly advancing, and fateful. If we continue to allow global security policies to be driven by advancing technology, then the arms race will continue, and it may even reheat to Cold War levels, with multiple players this time. Robotic armed forces controlled by AI systems too complex for anyone to understand will be set in confrontation with each other, and sooner or later, our luck will run out.</p>
<p>We can stop this. The line to draw is clear: No autonomous initiation of violence, no machines deciding on their own to kill human beings or to start or escalate conflicts in which people will be killed. It is a bright red line that everyone can understand, and it defines a strong moral principle that we instinctively know is right. To enshrine this principle in a global ban on autonomous weapons is a necessary step in our unending effort to secure the human future.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2022</post-id>	</item>
		<item>
		<title>DoD Directive on Autonomy in Weapon Systems</title>
		<link>https://www.icrac.net/dod-directive-on-autonomy-in-weapon-systems/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Tue, 27 Nov 2012 01:00:36 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[News]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=688</guid>

					<description><![CDATA[On Nov. 21, as news of Human Rights Watch&#8217;s (HRW) &#8220;Losing Humanity&#8221; report was spreading, the Department of Defense quietly released Directive 3000.09 &#8220;for the development and use of autonomous and semi-autonomous functions in weapon systems&#8221;, making the United States the first nation to have an official policy statement on autonomous weapon systems (AWS). The [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>On Nov. 21, as news of Human Rights Watch&#8217;s (HRW) <a href="http://www.hrw.org/reports/2012/11/19/losing-humanity-0" target="_blank">&#8220;Losing Humanity&#8221; report</a> was spreading, the Department of Defense quietly released <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf" target="_blank">Directive 3000.09</a> &#8220;for the development and use of autonomous and semi-autonomous functions in weapon systems&#8221;, making the United States the first nation to have an official policy statement on autonomous weapon systems (AWS).</p>
<p>The DoD Directive appears at first glance to be a stony rejection of ICRAC and HRW&#8217;s call for a broad AWS ban, setting a presumption that the US will proceed to develop, deploy and use AWS, under certain doctrines and guidelines: &#8220;The Commanders of the Combatant Commands shall&#8230;Use autonomous and semi-autonomous weapon systems in accordance with this Directive&#8230;.&#8221;</p>
<p>On closer examination, a more complicated picture emerges. The policy defines an AWS as &#8220;A weapon system that, once activated, can select and engage targets without further intervention by a human operator.&#8221; The ability of the system to select and engage targets autonomously makes it an AWS even if it is human-supervised with a possible human override; thus &#8220;human on the loop&#8221; is assigned the same status as &#8220;human out of the loop,&#8221; a conservative (good) policy.</p>
<p>The policy distinguishes AWS from &#8220;semi-autonomous weapon system&#8221; (SAWS) which it defines as &#8220;A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.&#8221; The category includes systems that automatically acquire, track, identify and prioritize potential targets or cue humans to their presence, &#8220;provided that human control is retained over the decision to select individual targets and specific target groups for engagement.&#8221; It also includes weapons with terminal homing guidance, and &#8220;fire and forget&#8221; weapons where the target has been human-selected.</p>
<p>The policy basically green-lights the development and use of both lethal and nonlethal SAWS for all targets.</p>
<p>Fully autonomous kinetic weapons, however, are only pre-authorized &#8220;for local defense&#8221; of manned installations and platforms, presumably referring to missile and projectile interception systems. And they have to be human-supervised.</p>
<p>Unsupervised AWS are only authorized for &#8220;non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets&#8230;.&#8221;</p>
<p>This policy is basically, &#8220;Let the machines target other machines; let men target men.&#8221; Since the most compelling arms-race pressures will arise from machine-vs.-machine confrontation, this solution is a thin blanket, but it suggests some level of sensitivity to the issue of robots targeting humans without being able to exercise &#8220;human judgment&#8221; &#8212; a phrase that appears repeatedly in the DoD Directive.</p>
<p>This approach seems calculated to preempt the main thrust of HRW&#8217;s report: that robots cannot satisfy the principles of distinction and proportionality as required by international humanitarian law, and that AWS should therefore never be allowed.</p>
<p>The policy directs that AWS and SAWS &#8220;shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.&#8221;</p>
<p>Assuming that directive has been followed, responsibility for IHL compliance will fall on those commanders and operators: &#8220;Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement (ROE).&#8221;</p>
<p>The policy thus entrenches AWS policy in a strong defensive position against assault on the basis of IHL and the limits of computation.</p>
<p>However, development and fielding of fully autonomous lethal weapons which do engage human targets are not ruled out. Such systems must be approved by two Under Secretaries of Defense and the Chairman of the Joint Chiefs, but a separate set of guidelines is provided for them, suggesting that such approval would not be extraordinary.</p>
<p>For the time being, however, DoD can deny that such programs have been approved and point to this policy to deflect questions about killer robots targeting people, and whether that comports with international law.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">688</post-id>	</item>
		<item>
		<title>The Principle of Humanity in Conflict</title>
		<link>https://www.icrac.net/the-principle-of-humanity-in-conflict/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Mon, 19 Nov 2012 15:49:05 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=625</guid>

					<description><![CDATA[I want to share a personal perspective, which has not been endorsed by ICRAC. I hope to stimulate further discussion on the foundations and framing of the nascent global campaign against autonomous weapons (AW, or systems, AWS). This essay is written to be constructively provocative. Two years ago, in presenting the first version of what [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I want to share a personal perspective, which has not been endorsed by ICRAC. I hope to stimulate further discussion on the foundations and framing of the nascent global campaign against autonomous weapons (AW, or systems, AWS). This essay is written to be constructively provocative.</p>
<p>Two years ago, in presenting the first version of what became ICRAC’s Berlin Statement, I emphasized that the dictum “<em>machines shall not decide to kill</em>” could serve as the “kernel” of a convention on robotic weapons that would naturally encompass ICRAC’s broader goals for robot arms control.  The idea of machines taking the decision to kill people, or initiating violence that causes death and suffering<a title="" href="#_ftn1">[1]</a>, was abhorrent to almost everyone who considered it. This almost universal repugnance could be harnessed as the “engine” of a global movement to stop killer robots, over the opposition of a minority who would argue the inevitability and military necessity of robotic and autonomous weapons.</p>
<p>I called this the “Principle of Human Control,” and felt it important that this should be declared as a <span style="text-decoration: underline;">new principle</span>, consistent with just war theory, the laws of war, and human rights law, but <span style="text-decoration: underline;">not explicitly contained in nor necessarily derived or derivable from existing bodies of philosophy and law</span>.  A new principle was needed for the simple reason that <span style="text-decoration: underline;">the threat against which the principle was raised had not previously existed</span>, and was only becoming imaginable as the march of technology brought us closer to the day when machines might plausibly be deemed capable, and trustworthy, to make lethal decisions autonomously.</p>
<p>Accordingly, whereas ICRAC’s founding Mission Statement would only “<em>propose…</em> <em>that this discussion should consider”</em> that <em>“machines should not be allowed to make the decision to kill people”</em>; the Berlin Statement declared that “<em><span style="text-decoration: underline;">We believe&#8230; it is unacceptable</span> for machines to control, determine, or decide upon the application of force or violence in conflict or war. In all cases where such a decision must be made, at least one human being <span style="text-decoration: underline;">must</span> be held personally responsible and legally accountable for the decision….”</em> [Emphasis added.] No claim was made that our belief could be proven true according to any body of law, philosophy or science. <span style="text-decoration: underline;">We declared a new principle.</span></p>
<p align="center"><strong>Distinction and Proportionality</strong></p>
<p>Nevertheless, as the debate about autonomous weapons takes shape, many arguments revolve around international humanitarian law (IHL), or the law of armed conflict (LOAC), and the principles of this body of law, as well as the deeper philosophical principles of just war theory.  Some authors argue that these venerated principles have stood the test of time, and have proven adaptable as weapons and modes of warfare have evolved. Others claim that 21<sup>st</sup> century weapons and irregular warfare render the existing instruments of LOAC quaint and ripe for revision. Some argue that autonomous weapons cannot fulfill IHL requirements to distinguish between combatants and non-combatants (Principle of Distinction) and to weigh military necessity and objectives against the risk or expectation of collateral harm to non-combatants (Principle of Proportionality). Others maintain that even limited discrimination capabilities might help to reduce harm to non-combatants, and that human commanders, who will decide when and what kinds of AW to use, will remain responsible for the judgment of proportionality.</p>
<p>One argument that AWS are inconsistent with IHL/LOAC rests on the proposition that machines are simply incapable of the judgment required to ensure their compliance.  For example, Protocol I of the Geneva Conventions requires that <em>“Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.”</em> This duty to “at all times distinguish” embodies the Principle of Distinction.  It is implicitly required of whatever agent “shall direct… operations” so that the operations are directed “only against military objectives.” Therefore, if the agent directing “operations” were to be a machine, it would have to be a machine able to “at all times distinguish” between civilians, combatants, civilian objects and military objectives.</p>
<p>Distinction is clearly a challenge for machines, but human capabilities to “at all times distinguish” also have limits. For example, one may not be able to see clearly in the presence of smoke or other obstructions, or to judge quickly and correctly whether a figure in the shadows is a civilian or combatant, particularly when under fire. If human responsibility for adherence to the Principle of Distinction is limited by human capabilities, in various circumstances, would such limits not apply to machines as well? If we don’t expect perfection in all situations, might there not be circumstances in which a machine’s ability to discriminate would be comparable to, or perhaps better than, a human’s?</p>
<p>Questions of interpretation also challenge the argument. Who bears the responsibility of the “Parties to the conflict” to “at all times distinguish” and to “direct their operations only against military objectives”? “Parties” would normally be interpreted to mean the States involved, a fairly high level of direction.  May such “Parties” not lawfully direct that autonomous weapons be used only against combatants and military objectives in some conflict, subject to technical limitations which may result in unintended harm to noncombatants or civilian objects, while themselves still upholding the obligation to “at all times distinguish”?  Alternatively, if responsibility is delegated to a human commander, who directs that an autonomous weapon be used in some tactical situation, and if that person meanwhile upholds the obligation to “at all times distinguish,” can it not be argued that the commander may lawfully direct the operation only against military objectives, subject to technical limitations which might produce an unintended result?</p>
<p>Certainly, the technical limitations of AWS matter, in relation to the circumstances in which they are used. But a reasonable interpretation of the Principle of Distinction cannot demand <em>absolute</em> perfection of machines (since humans are incapable of it) nor forbid a responsible human commander from taking <em>some</em> risk of a mistake. The actual capabilities of machines, and the acceptable level of risk, will become the issues of contention. It is not obvious that the resolution will be to ban autonomous weapons categorically.</p>
<p>Protocol I also requires (in several statements) that <em>“those who plan or decide upon an attack shall… refrain from deciding to launch any attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.”</em> This is known as the Principle of Proportionality, and it poses an even more formidable challenge to technical capabilities, if machines would bear the burden of judging not only what effects upon civilians and civilian objects “may be expected,” but whether those “would be excessive in relation to the… military advantage anticipated.”</p>
<p>Here again, questions of interpretation, particularly in view of human fallibility, challenge the argument. In practice, human commanders have little objective basis on which to judge what is “excessive” other than, perhaps, some set of examples and judicial precedents – which might very well be coded as a database suitable for machine access. In practice, human judgment of what is “excessive” is externally regulated only by the possibility of later review and adverse judgment. This (in theory) motivates commanders to think carefully about whether their decisions will be perceived as reasonable. Could the same mode of regulation not apply equally to the decisions of machines? Surely a machine could be programmed to seek action plans that it assesses are consistent with the rules and precedents coded in its database. If humans judge that machine decisions are reasonable and proportional, at least as consistently as human decisions, might that not satisfy the Principle?</p>
<p>Alternatively, if an autonomous weapon system is programmed to fire under some conditions, and not under others, subject to known limitations of its capabilities to autonomously distinguish conditions, could a human commander who is aware of those limitations not accept responsibility for his or her own judgment of proportionality, taking some calculated risk that the machine might misclassify the actual situation, just as it might be subject to any other error or malfunction? Is this fundamentally different from the situation in which a commander authorizes an attack by human combatants, knowing there is some risk of their autonomously taking some action which would later be judged excessive, due either to their own misperception of the situation or their own misjudgment of proportionality?</p>
<p align="center"><strong>Frames of reference and hard cases</strong></p>
<p>While just war theory separates <em>jus in bello</em> (just conduct in war) from <em>jus ad bellum</em> (justice in going to war), in practice these principles are often entangled. Judgment that some amount of risk or harm to noncombatants is not “excessive in relation to” military objectives likely depends on perceptions or assumptions about the necessity and legitimacy of the conflict itself. Conversely, perceived excesses of violence can help to delegitimize a conflict. This is true whether action is taken on human decision or on the decision of a machine.</p>
<p>Whether the harm caused by a particular “attack” (or weapon, or type of attack) is judged “excessive” will depend on what it is compared with, which may again depend on whether the perceived alternative would also be perceived as plausibly not excessive, given military necessity. High-altitude saturation bombing was largely judged not excessive in World War II, and was far more controversial in Vietnam, but may still set a standard for comparison with the US drone strike campaigns in Pakistan and Yemen, as seen by those who believe the campaigns to be necessary and just. Those less convinced of <em>jus ad bellum</em> in this instance may be more inclined to compare the drone strikes with alternatives such as special operations forces raids on selected high-value targets, or perhaps no military action at all.</p>
<p>If autonomous weapons have only limited capabilities for discrimination and proportionality, they may still be claimed as consistent with<em> jus in bello</em> if they are perceived as alternative to weapons that are completely indiscriminate and almost certain to cost more innocent lives. Compared with a 500-lb bomb dropped on the roof of a building where noncombatants might be found, an autonomous robot that can recognize persons bearing arms, or even (perhaps an easier technical problem) a specific, known individual, could reasonably be expected to reduce the level of harm or risk to noncombatants.  In this scenario, the robot might not have perfect abilities to implement the Principles of Distinction and Proportionality, but a tactical decision to use the robot, as alternative to a blunter weapon, might arguably uphold those principles if they are viewed as practical goals.</p>
<p>A different judgment might be reached if autonomous weapons were compared with armed teleoperated robots, which keep a human “in the loop”, and which might provide another means of assault on a location in which noncombatants may be present.  However, teleoperation might not always be reliable or practical, due to the vagaries and vulnerabilities of communication links, or the need for small size or stealth of the robot. For very small robots, resolution and bandwidth limits in teleoperation may mean that fully autonomous systems which seek a specific individual or object might compare favorably in terms of discrimination, and hence proportionality. Another principle is needed if we are to categorically rule out the use of autonomous weapons when teleoperation is not available.</p>
<p>Similar arguments can be made in defense of missiles equipped with target recognition and terminal homing capabilities, which arguably can be called robotic weapons, autonomous in the sense that after release they are charged to make at least targeting refinement decisions autonomously. Such weapons already exist and have been deployed, albeit with only rudimentary target recognition capabilities. What would be the objection to improving those capabilities to achieve more precise targeting and a lower risk of collateral harm?  It might be argued that such weapons are sent on a one-way mission and do not make an independent “kill decision,” but what would be the objection if the systems were further developed so that, upon failure to recognize an appropriate target, or upon detection of humans in the vicinity of the target (with or without distinction), the weapon decides to abort its mission? It could then safely self-destruct or divert to an open area while disarming its warhead. Even an imperfect “abort decision” capability would appear to be an improvement from an IHL standpoint, if compared with the use of a missile with no such capability. Yet, the machine would in effect be deciding whether to kill.</p>
<p>Note that teleoperation and even “on-the-loop” human intervention might not be possible in the split second prior to impact during which a missile’s target analysis system might have sufficient information to make a decision. Another case in which events occur too rapidly for meaningful human control or even supervision of target identification and fire decision is that of point defense systems such as the Phalanx guns deployed on ships, and other anti-missile, -mortar, and -shell systems employing ballistic or guided interceptor munitions or lasers. Even longer-range missile and air defense systems such as the Patriot and Aegis challenge human capabilities to make crucial target discrimination decisions in seconds. In practice, humans have often failed to exert an “abort” command when operating in an “on-the-loop” role, with deadly results in a number of incidents.</p>
<p>I think that the case for automated fire decision in a point defense system is so compelling that any credible proposal for an autonomous weapons convention will have to carve out an exception for such systems, at least when they are directly defending human lives against immediate threats. Strict and continuous human supervision should of course be required, including accountability for failure to intervene when information indicating a system error is available to those charged to be “on the loop.” Such systems should also be operated in a “normally-off” mode, and only activated upon warning of an incoming attack. But as long as a system is limited to defense against weapons incoming to a given human-inhabited location (including the instantaneous location of an inhabited vehicle), automated fire decisions will likely have to be allowed.</p>
<p>A similar exception was carved out from the Convention on Cluster Munitions in the case of the so-called sensor-fuzed weapon (SFW), or any weapon meeting the CCM’s criteria of having fewer than 10 submunitions, each of which weighs more than four kilograms, is “<em>designed to detect and engage a single target object</em>”, and is equipped with self-deactivating and self-destruct mechanisms. Here again we see the implications of proportionality and distinction as guiding principles for robot arms control. The rationale for excepting SFW-type weapons from the CCM was that such weapons, with their more sophisticated capabilities for target identification, discrimination, self-guidance, selective engagement, and self-deactivation, do not pose the same risk of harm to civilians posed by traditional cluster munitions with their many small and highly unreliable bomblets. Following this logic, further development of the SFW, to incorporate even more sophisticated discrimination and fire/no fire decision making capabilities, would be hard to object to. Yet it is not clear why the SFW would not meet a reasonable, broad definition of “autonomous weapons.”</p>
<p>It is certainly true that early AWS can have only limited capabilities for discrimination, and even less for judging proportionality. Exaggerated perceptions of their precision and selectivity may lead to excesses in their use, as may well be occurring with drones already. Yet this is not, I think, the central concern that is driving either the nascent campaign to ban AWS or the broader public’s unease with the rise of killer robots. If it were, it would suggest the need not for a ban but for regulation of these weapons and their use, and for a go-slow approach to their deployment – until the technology can be perfected, or is good enough to be acceptable in well-understood circumstances.</p>
<p align="center"><strong>So then, What’s so bad about killer robots?</strong></p>
<p><em>We were a family. How&#8217;d it break up and come apart, so that now we&#8217;re turned against each other?</em><em> &#8230; This great evil. Where does it come from? How&#8217;d it steal into the world? What seed, what root did it grow from? Who&#8217;s doin&#8217; this? Who&#8217;s killin&#8217; us? Robbing us of life and light. Mockin&#8217; us with the sight of what we might&#8217;ve known. </em></p>
<p align="right">– Private Witt, in Terrence Malick’s screenplay for <em>The Thin Red Line</em></p>
<p>The cases just considered may be seen as borderline cases for the prohibition of AWS. Borderline cases always exist; we might say that in the vicinity of every bright red line there is a broad grey zone. Those opposed to drawing a line are fond of citing ambiguous cases. It is true that where you place the line in relation to such cases may be somewhat arbitrary. What is important is to draw the line <em>somewhere</em>. If we stand back from the grey and fuzzy border zones to see the big picture, we can see clearly the difference between violent force in human conflict today, and some future in which decisions about the use of violent force are routinely made by machines.</p>
<p>In such a future, we risk the unleashing of conflict itself as an autonomous force, embodied in technology, and divorced from the body of humanity, within which it first arose.</p>
<p>What is conflict? One perspective on conflict is that it arises because each of us has a unique point of view. We also join in community, but humanity as a whole is spread across the globe. The human community is divided and in conflict because of differing points of view. This is not different from saying we have differing and conflicting interests, e.g. in controlling the same territory or limited resources. However, the willingness of human individuals to sacrifice themselves for families, platoon brothers, tribes, nations… or for a cause, shows that self-interest is only one factor in our point of view about what is good and just, and worthy of fighting for, wrong and unjust, and worthy of anger and violence.</p>
<p>In another perspective, conflict is a process which arises within and between us, and which can consume us and escape our control. Because humans have the capacity for anger and violence, because violence easily becomes lethal, and because life and death transcend all else in importance, we easily become caught up in emotions that overpower reason. Community fractures, separating us from them, and we are unable to forgive the terrible things that they have done, unable to consider their claims to justice and agree on a compromise with our own. Across the fault lines of love and reason, we and they speak to each other in the language of violence, wear our masks and play our roles in the Greek tragedy of conflict and war.</p>
<p>Yet until now it has always been true that conflict has consisted solely of willful human deeds. When a weapon is fired, one person deliberately unleashes violent, potentially lethal force upon another. It may be irrational, but it is intentional, and essential. We say that weapons are fired “in anger,”  an animal passion that is rooted in mortality and the struggle for survival. I think this is what the warriors mean when they say that war is deeply human (and somehow, in spite of robot weapons, always will be).</p>
<p>Anger humanizes violence, and its apparent absence is part of what makes remote control killing so deeply disturbing. Yet even in the cool detachment of the drone operator’s padded chair, we find one human being accepting the responsibility for the act of killing another, because the human community is divided and the community to which the “cubicle warrior” is loyal has gravely decided that this killing is a necessary burden of evil. That burden is felt strongly by military veterans and professionals, who correspondingly also feel, surprisingly often, that there is something deeply wrong – and terrifying – about the idea of machines that would usurp from us, or to which we would surrender, the heaviest responsibility ever assumed by human beings: that of deciding when, and under what circumstances, we are justified in injuring or killing others.</p>
<p>If the community is democratic, if it is even truly human, the burden is felt. When the enemy hits back, the pain and loss are felt, too. There is always the possibility of saying “Enough.” As long as conflict remains human conflict, it ends when people finally, for whatever reasons, decide to stop fighting.</p>
<p>In making the process of killing fully autonomous, we risk machines no longer under human control pursuing conflict for its own sake, conflict that is no longer human conflict, no longer about right and wrong. We risk machines mercilessly extinguishing human lives according to programs developed to embody only military doctrines and goals, and the laws and logic of states. We risk the dulling or loss of our ability and responsibility to judge when the price or the risk is too great. Or to know when too much blood has been spilled, either because it is our own blood or because in spilling the blood of others we lost our claim of justice. We risk becoming either tyrants who rule through robot soldiers, or peasants who submit to a robotic regime, or perhaps both at once (already the drones are coming home to roost).</p>
<p>Do I mean to invoke here the specter of Skynet, the artificial intelligence that declared war on humanity in <em>The Terminator</em>, reputedly because it feared we’d turn it off? Or <em>Colossus</em>, the military supercomputer that took control of the world in a brutal coup in order to fulfill its mission of ensuring peace? The scorn that “serious” people direct against these tropes from science fiction betrays their own nervousness.  Artists have mined our apprehensions about the world we’re creating, and projected them before us in gaudy masks and cartoonish story lines that beg to be decoded. The military system is <em>already</em> a kind of machine, pursuing its own agenda, just as states, corporations and institutions of all kinds are. These machines are made of people, and their minds are the minds of people – increasingly augmented by information technology, from clay tablets to search engines. We like to think that this augmentation increases our effective intelligence, but as soon as words are written down, thinking rigidifies. Yet one essential fact remains: each human mind, dazzled and lost in the maze of knowledge, of law and the machinery of institutions, remains tethered intimately and existentially to a human heart. It is that tether which the military complex, enabled by technology, now threatens to break.</p>
<p>No claim is made here of the infallibility or even wisdom of human decision making in conflict. On the contrary, we are all familiar with history’s march of folly, hubris, aggression and anger, the tragic farce of bluster, miscalculation and misunderstanding, the tragedy of right pitted against right, outrage responding to outrage and leading to further outrage, escalation and the madness of war. In full view of this, we might wish for an all-wise and all-powerful super-AI to impose a global <em>pax robotica</em>. But apart from the question of how we would ensure the benevolence of an electronic emperor, there is simply no reason to think our present drift into robotic warfare will lead to peace.</p>
<p>The robotic weapons being created today are just that, weapons. They are fitted into increasingly automated, integrated, networked systems which gather and process “intelligence” to produce action orders following plans and doctrines issued from on high. The tactical officer increasingly consults a computer to learn the next objective, estimate weapons effects and perhaps assess the risk of killing civilians; after the action the officer reports to the system, which updates its model of the conflict.</p>
<p>As military systems are increasingly automated, and the human role is progressively atomized, mechanized, and displaced, these systems remain pitted against each other, embodying the same contradictions of reason and purpose. As machines take over from brains, they will sideline the hearts that whispered: <em>Life is precious.</em> Artificial intelligence will know everything about correlations of force, kill probabilities, stealth and counter-stealth, and perhaps also everything about the economic value of resources, the intricacies of laws and treaties, protocols and codes of conduct, the theory of games and the flow of information through networks. Yet it will understand nothing about the purpose of any of this, nothing about simply <em>being human</em>.</p>
<p>Probably the surest reason why the AI warlords of the future won’t understand what we were trying to protect when we created them is that we won’t tell them. We’ll scrupulously avoid corrupting their military discipline with any hint that, in some cases, we might and should and probably would <em>back down</em>. That we might learn not only that we lack the physical might to impose our will on others, but that we were wrong to even try, or to want to. Would we even know <em>how</em> to tell our machines when and why they should stop fighting, propose or accept a cease-fire, or even withdraw in defeat, rather than lose more, and risk everything? Do we even understand how to tell this to ourselves?</p>
<p>On the contrary, violence and especially war always represent the failure of reason, the tearing and breakdown of community, and a direct negotiation with the primal chaos that civilization sought to expel. Great job for a robot!</p>
<p>If ceding control to an automated war machine seems far removed from military robots performing the dull, dirty and dangerous jobs of today, and even from the possible next step of automatic target selection and autonomous fire decision, it nevertheless represents the logical destination of a program to outsource human responsibility for making decisions about the use of violent force. I believe it epitomizes the concern that underlies the widespread, almost universal view that moving from today’s non-weapons military robots and teleoperated lethal drones to fully autonomous weapons is a step that we should never take – or, if ever, only reluctantly, with great care, and only because it might be necessary or unavoidable. We feel instinctively that the killer robot is no longer a human tool, but has become an enemy of humanity; as depicted in <em>Terminator</em>, a gleaming metal skeleton whose eyes glow with the fire of Hell. A whiff of death lurks in every weapon, but the killer robot embodies death as an animated Other, Death that walks, Death that pursues each of us with a determination of its own.</p>
<p>The reason for banning autonomous weapons is to draw a bright red line across a broad grey zone that lies between us and an out-of-control future. Surely the place to draw this line is at the point where a machine is empowered to decide, by a mechanical process not controlled or even fully understood by any of us, the use of force against a human being. Because violence commits us, we must generalize this to a ban on any autonomous initiation of violence, lethal or nonlethal, against humans or against property, including other machines. Finer details do matter, but what is essential is to draw the line.</p>
<p>If we allow ourselves to cross this line, we will find ourselves driven onward by the imperatives of an arms race. This is another type of conflict process which we struggle to control, and crossing the line already means losing that struggle. If we tell ourselves that we will limit lethal autonomy with reasonable demands for Distinction and Proportionality, we will find that the weapons demand to be freed, in order to confront, and if need be to fight, others of their own kind.</p>
<p>We will increasingly risk the ignition of violence by the unanticipated interactions of proliferating systems pitted in confrontation with one another. The ever-increasing complexity of such confrontations will far surpass the Cold War’s already dangerous and unmanageable correlations of forces. Ensuring stability would be a difficult engineering challenge, but the creators of these systems will be teams in different nations working against each other. The further we go, the more control we cede to machines, the harder it will be to turn back, the more timid we will be to even consider dismantling or tampering with the systems that guard us against the other systems.</p>
<p>We risk being seduced, also, by the promise of imperial war without risk to ourselves, punishing or subjugating others with impunity as long as they do not have the same technical means to use against us. If this could be successful, it would be monstrous enough, but history shows that people do find ways of striking back, and the technologies that would enable AWS are widely spread around the globe. History also shows that imperialism leads to the clash of empires. The norms that we set in our dealings with the weak are the norms we will have to live with when we deal with the strong. We can’t expect to enjoy the comforts of home as our robots fight an endless stream of “asymmetrical” conflicts, when that is exactly the sort of behavior which will conjure up a “peer competitor” to say “You do it, why can’t we?”—and draw us into serious confrontation.</p>
<p align="center"><strong>The Principle of Humanity in Conflict</strong></p>
<p>We say &#8220;No to killer robots&#8221; because they pose a threat to humanity. The threat is new and unanticipated in prior law or philosophy.  We cannot derive our opposition from the principles of just war theory or the codes of international humanitarian law.  We need to declare new principles.</p>
<p>Here is an attempt to formulate a set of principles which address the threat from autonomous weapons. I do not claim that this is the final formulation. I do not claim that the wording here is perfect. I do not claim to have distilled a mathematically minimal set of independent principles. Rather, these principles are interlocking, partially redundant, mutually reflecting, mutually referential, and mutually reinforcing. Together, they can be referred to as the Principle of Humanity in Conflict, unless you have a better name. </p>
<p>In the literature on IHL/LOAC, the principle that in the conduct of war we must not be needlessly cruel, inflict unnecessary suffering, or make use of weapons that are inherently inhumane, is sometimes called the Principle of Humanity. This set of principles can be seen as an expanded Principle of Humanity.</p>
<p><strong>Human Control:</strong> Any use of violent force, whether lethal or sub-lethal, against the body of a human being, or to oppose the will of a human being, must be fully under human control. If violence is initiated, it must be the decision of a human being.</p>
<p>Hence it is unacceptable for machines to control, determine (e.g. by the narrowing of options), or decide upon the application of violent force. This applies whether the target is a human or another machine; we can’t just turn robots loose to fight other robots as a proxy for our conflicts.</p>
<p><strong>Human Responsibility:</strong> We must hold ourselves responsible for the decision to use violent force, and cannot delegate that responsibility to machines. At least one person (a human person), and preferably exactly one (e.g. a commanding officer), must be accountable for each decision to use violent force against a particular person or object at a particular time and place. </p>
<p>Where data has been recorded pertaining to the circumstances of such decisions, it must be retained and made available for judicial review.</p>
<p>A human commander cannot accept responsibility, <em>a priori</em>, for the individual violent actions of autonomous weapons which will occur in circumstances which cannot be fully anticipated; this would be a mere pretense of responsibility, to hide the fact of irresponsibility. To authorize the use of autonomous weapons only within certain boundaries  of space and time, or within a certain conflict, would not create an exception to this principle.</p>
<p><strong>Human Dignity: </strong>It is a human right not to be killed on the decision of machines, nor to be subjected to violent force or pain on the decision of machines, nor to be threatened with violent force, pain or death as a form of coercion on the decision of machines, nor to be ruled in the conduct of life through the agency of overseeing machines that may decide the use of force as coercion.</p>
<p><strong>Human Sovereignty:</strong> Humanity is, and must remain, sovereign. Threats to human sovereignty and security include the process of conflict that arises between us, over which we struggle to retain or regain control. Externalizing that process in an autonomous technology would make it ever more difficult to control, until finally we would have lost even the ability to exert control.</p>
<p><strong>The Principle of Humanity in Conflict:</strong> Here we mean, most fundamentally, the principle that conflict is, and must remain, human. It is between us. When in conflict with one another, we must not lose sight of our humanity (as we do when we are needlessly cruel or inhumane).</p>
<p>We must always try to resolve conflicts nonviolently. Violent force, when unavoidable, must be used only in full respect for the humanity of opponents and recognition of the gravity of the act of killing (for we are all mortal). Taking arms against an opponent always entails the possibility of killing, and of being killed (for none of us is omnipotent).</p>
<p>Use of violent force can be accepted only in conflict between human beings, and only then because, and only when, we have failed to resolve the conflict nonviolently. Allowing machines to assume control of the conduct of conflict, to use violence autonomously, pitilessly and in contempt of the humanity that we share with our opponents, would be inhumane, and inhuman.</p>
<div>
<hr align="left" size="1" width="33%" />
<div>
<p><a title="" href="#_ftnref1">[1]</a> Violence might be initiated on the decision of a machine either as intended by its designers, or as the result of a bug or malfunction, or through the unforeseen interactions of machines in confrontation with each other.</p>
</div>
</div>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">625</post-id>	</item>
		<item>
		<title>AP report: 30% of Pakistan drone strike dead not &#8220;militants&#8221;</title>
		<link>https://www.icrac.net/ap-report-30-of-pakistan-drone-strike-dead-not-militants/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Sat, 25 Feb 2012 21:50:24 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Drone Strikes]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=36</guid>

					<description><![CDATA[The Associated Press reports that an &#8220;on-the-ground investigation&#8221; it conducted in North Waziristan of &#8220;10 of the deadliest [drone] attacks in the past 18 months&#8221; found that &#8220;of at least 194 people killed in the attacks, about 70 percent &#8211; at least 138 &#8211; were militants. The remaining 56 were either civilians or tribal police,&#8221; [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>The <a href="http://www.miamiherald.com/2012/02/25/2660277/ap-impact-new-light-on-drone-wars.html">Associated Press reports</a>  that an &#8220;on-the-ground investigation&#8221; it conducted in North Waziristan of &#8220;10 of the deadliest [drone] attacks in the past 18 months&#8221;  found that  &#8220;of at least 194 people killed in the attacks, about 70 percent &#8211; at least 138 &#8211; were militants.  The remaining 56 were either civilians or tribal police,&#8221; based on interviews with local villagers, of whom &#8220;Many knew the dead civilians personally.&#8221;</p>
<p>According to the report, &#8220;The U.S. has carried out at least 280 attacks since 2004 in Pakistan&#8217;s tribal region.&#8221;</p>
<p>What is curious about the AP report is its spin. Sebastian Abbot&#8217;s story starts with the assertion that &#8220;American drone strikes inside Pakistan are killing far fewer civilians than many in the country are led to believe,&#8221; based on heated statements by Pakistani Islamists that drone strikes &#8220;are killing nearly 100 percent innocent people&#8221; and so on.</p>
<p>In reality, the AP&#8217;s findings are <a href="http://www.thebureauinvestigates.com/category/projects/drones/">fully consistent with reporting by the respected Bureau of Investigative Journalism</a>, which has found that the drone war in Pakistan has killed 464-818 civilians (including 175 children) out of 2,413-3,058 total deaths, a 15%-34% civilian fraction of the dead. Note that the injured, maimed, widowed, and orphaned are not counted among these victims.</p>
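<p>As a quick check, the 15%-34% range follows directly from pairing the extremes of the Bureau&#8217;s reported ranges; the sketch below illustrates the arithmetic (the pairing of bounds is my own illustration, not the Bureau&#8217;s stated method):</p>
<pre><code># Sketch: civilian share implied by the Bureau's Pakistan drone-war tallies.
# Assumption (for illustration): pair the extremes of each reported range.
civ_low, civ_high = 464, 818      # reported civilian deaths (range)
tot_low, tot_high = 2413, 3058    # reported total deaths (range)

frac_min = civ_low / tot_high     # fewest civilians / most deaths  ~= 0.15
frac_max = civ_high / tot_low     # most civilians / fewest deaths ~= 0.34

print(f"civilian share: {frac_min:.0%}-{frac_max:.0%}")  # civilian share: 15%-34%
</code></pre>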
<p><a href="http://www.voanews.com/english/news/asia/Pakistan-Repeats-Condemnation-of-Drone-Strikes-138417439.html">President Obama recently caused a stir</a> with a public remark that &#8220;drones have not caused a huge number of civilian casualties.&#8221;  But they are killing thousands, and it appears that at least about a quarter of the victims are not among the &#8220;militants&#8221; being targeted.  In this light, it seems doubtful that &#8216;Really, it&#8217;s not as many dead kids as you think&#8217; is such a good talking point for proponents of so-called &#8220;targeted killing.&#8221;</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">36</post-id>	</item>
	</channel>
</rss>
