<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Working Papers &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/working-paper/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Tue, 20 Aug 2019 08:40:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>ICRAC Releases New Report on Meaningful Human Control</title>
		<link>https://www.icrac.net/icrac-releases-new-report-on-meaningful-human-control/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 20 Aug 2019 08:38:40 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6292</guid>

					<description><![CDATA[ICRAC Members Daniele Amoroso and Guglielmo Tamburrini have completed a new ICRAC Working Paper #4 on “What makes human control over weapons ‘meaningful’?” The paper was prepared for distribution at the August 2019 meeting of the United Nations CCW GGE on Lethal Autonomous Weapons. The paper can be downloaded from our Resources Page, along with [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>ICRAC Members Daniele Amoroso and Guglielmo Tamburrini have completed a new ICRAC Working Paper #4 on <a href="https://www.icrac.net/wp-content/uploads/2019/08/Amoroso-Tamburrini_Human-Control_ICRAC-WP4.pdf">“What makes human control over weapons ‘meaningful’?”</a> The paper was prepared for distribution at the August 2019 meeting of the United Nations CCW GGE on Lethal Autonomous Weapons. The paper can be downloaded from our <a href="https://www.icrac.net/research/">Resources Page</a>, along with ICRAC&#8217;s other working papers.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6292</post-id>	</item>
		<item>
		<title>ICRAC Working Paper #3 (CCW GGE April 2018): Guidelines for the human control of weapons systems</title>
		<link>https://www.icrac.net/icrac-working-paper-3-ccw-gge-april-2018-guidelines-for-the-human-control-of-weapons-systems/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:33:10 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3998</guid>

					<description><![CDATA[Guidelines for the human control of weapons systems [PDF] Authored by Noel Sharkey, chair of ICRAC[1] Since 2014, high contracting parties to the CCW have expressed interest and concern about the meaningful human control of weapons systems. There is an extensive scientific and engineering literature on the dynamics of human-machine interaction and human supervisory control [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4001" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?resize=300%2C263&#038;ssl=1" alt="" width="300" height="263" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?resize=300%2C263&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/ICRAC-WP3_CCW-GGE-April-2018-1.jpg?w=667&amp;ssl=1 667w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><strong>Guidelines for the human control of weapons systems [<a href="https://www.icrac.net/wp-content/uploads/2018/04/Sharkey_Guideline-for-the-human-control-of-weapons-systems_ICRAC-WP3_GGE-April-2018.pdf">PDF</a>]<br />
</strong></p>
<p>Authored by Noel Sharkey, chair of ICRAC<a href="#_ftn1" name="_ftnref1">[1]</a></p>
<p>Since 2014, high contracting parties to the CCW have expressed interest and concern about the meaningful human control of weapons systems. There is an extensive scientific and engineering literature on the dynamics of human-machine interaction and human supervisory control of machinery. A short guide is presented here consisting of two parts. Part 1 is a simple primer on the psychology of human reasoning. Part 2 outlines different levels for the control of weapons systems, adapted from human-machine interaction research, and discusses them in terms of the properties of human reasoning. This outlines which of the levels can ensure the legality of human control of weapons systems and guarantee that precautionary measures are taken to assess the significance of potential targets, their necessity and appropriateness, as well as the likely incidental and possible accidental effects of the attack.</p>
<ol>
<li><strong> A short primer on human reasoning for the control of weapons</strong></li>
</ol>
<p>A well-established distinction in human psychology, backed by over 100 years of substantial research, divides human reasoning into two types: (i) fast <em>automatic</em> processes needed for routine and/or well-practised tasks like riding a bicycle or playing tennis, and (ii) slower <em>deliberative</em> processes needed for thoughtful reasoning such as making a diplomatic decision.</p>
<p>The drawback of deliberative reasoning is that it requires attention and memory resources, so it can easily be disrupted by stress, distraction, or pressure to make a quick decision.</p>
<p>Automatic processes kick in first, but we can override them if we are operating in novel circumstances or performing tasks that require active control or attention. Automatic processes are essential to our normal functioning, but they have a number of liabilities when it comes to making important decisions such as those required to determine the legitimacy of a target.</p>
<p>Four of the known properties of automatic reasoning<a href="#_ftn2" name="_ftnref2">[2]</a> illustrate why it is problematic for the supervisory control of weapons.</p>
<ul>
<li><strong>neglects ambiguity and suppresses doubt</strong>. Automatic processes jump to conclusions. An unambiguous answer pops up instantly without question. There is no search for alternative interpretations or uncertainty. If something looks like it might be a legitimate target, in ambiguous circumstances, automatic reasoning will be certain that it is legitimate.</li>
<li><strong>infers and invents causes and intentions.</strong> Automatic reasoning rapidly invents coherent causal stories by linking fragments of available information. Events that include people are automatically attributed with intentions that fit a causal story. For example, people loading muckrakes onto a truck could initiate a causal story that they were loading rifles. This is called <em>assimilation bias</em> in the human supervisory control literature.<a href="#_ftn3" name="_ftnref3">[3]</a></li>
<li><strong>is biased to believe and confirm. </strong>Automatic reasoning favours uncritical acceptance of suggestions and maintains a strong bias. If a computer suggests a target to an operator, automatic reasoning alone would make it highly likely to be accepted. This is <em>automation bias</em>.<a href="#_ftn4" name="_ftnref4">[4]</a> <em>Confirmation bias</em><a href="#_ftn5" name="_ftnref5">[5]</a> selects information that confirms a prior belief.</li>
<li><strong>focuses on existing evidence and ignores absent evidence. </strong>Automatic reasoning builds coherent explanatory stories without considering evidence or contextual information that might be missing: What You See Is All There Is (WYSIATI).<a href="#_ftn6" name="_ftnref6">[6]</a> This facilitates the feeling of coherence that makes us confident to accept information as true. For example, a man firing a rifle may be deemed a hostile target under WYSIATI, when a quick look around might reveal that he is shooting at a wolf hunting his goats.</li>
</ul>
<p>&nbsp;</p>
<ol start="2">
<li><strong> Levels of human control and how they impact on human decision-making</strong></li>
</ol>
<p>We can look at levels of human control for weapons systems by adapting research from the human supervisory control literature as shown in Table 1.<a href="#_ftn7" name="_ftnref7">[7]</a></p>
<table>
<tbody>
<tr>
<td width="594">A classification for levels of human supervisory control of weapons</td>
</tr>
<tr>
<td width="594">
<ol>
<li><strong>a human deliberates about a target before initiating any attack </strong></li>
<li><strong>program provides a list of targets and a human chooses which to attack</strong></li>
<li><strong>program selects target and a human must approve before attack</strong></li>
<li><strong>program selects target and a human has restricted time to veto </strong></li>
<li><strong>program selects target and initiates attack without human involvement</strong></li>
</ol>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p><strong>Level 1 control is the ideal</strong>. A human commander (or operator) has full contextual and situational awareness of the target area at the time of a specific attack and is able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack. There is active cognitive participation in the attack and sufficient time for deliberation on the nature of the target, its significance in terms of the necessity and appropriateness, and likely incidental and possible accidental effects. There must also be a means for the rapid suspension or abortion of the attack.</p>
<p><strong>Level 2 control could be acceptable</strong> if it is shown to meet the requirement of deliberating on potential targets. The human operator or commander should deliberatively assess necessity and appropriateness and whether any of the suggested alternatives are permissible objects of attack. Without sufficient time or in a distracting environment the illegitimacy of a target could be overlooked.</p>
<p>A rank ordered list of targets is particularly problematic as automation bias could create a tendency to accept the top ranked target unless sufficient time and attentional space is given for deliberative reasoning.</p>
<p><strong>Level 3 is unacceptable.</strong> This type of control has been experimentally shown to create what is known as <em>automation bias</em>, in which human operators come to trust computer-generated solutions as correct and disregard, or do not search for, contradictory information. Cummings studied automation bias using an interface designed for supervision and resource allocation of in-flight GPS-guided Tomahawk missiles.<a href="#_ftn8" name="_ftnref8">[8]</a> She found that when the computer recommendations were wrong, operators using Level 3 control showed significantly decreased accuracy.</p>
<p><strong>Level 4 is unacceptable</strong> because it does not promote target validation and a short time to veto would reinforce automation bias and leave no room for doubt or deliberation. As the attack will take place <em>unless</em> a human intervenes, this undermines well-established presumptions under international humanitarian law that promote civilian protection.</p>
<p>The time pressure will result in operators neglecting ambiguity and suppressing doubt, inferring and inventing causes and intentions, being biased to believe and confirm, and focusing on existing evidence while ignoring absent but needed evidence. An example of the errors caused by the demands of a fast veto occurred in the 2003 Iraq war, when the U.S. Army&#8217;s Patriot missile system shot down a British Tornado and an American F/A-18, killing three pilots.</p>
<p><strong>Level 5 control</strong> <strong>is unacceptable</strong> as it describes weapons that are autonomous in the critical functions of target selection and the application of violent force.</p>
<p>It should be clear from the above that there are lessons to be drawn both from the psychology of human reasoning and from the literature on human-machine interaction. An understanding of this research is urgently needed to ensure that human-machine interaction is designed to provide the level of human control needed to comply with international law in all circumstances.</p>
<p><strong>Conclusion: Necessary conditions for meaningful human control of weapons.</strong></p>
<p>A commander or operator should</p>
<ul>
<li>have full contextual and situational awareness of the target area at the time of initiating a specific attack;</li>
<li>be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack, such as changes in the legitimacy of the targets;</li>
<li>have active cognitive participation in the attack;</li>
<li>have sufficient time for deliberation on the nature of targets, their significance in terms of the necessity and appropriateness of an attack and the likely incidental and possible accidental effects of the attack; and</li>
<li>have a means for the rapid suspension or abortion of the attack.</li>
</ul>
<p>&#8212;</p>
<p><a href="#_ftnref1" name="_ftn1">[1]</a> Special thanks to Lucy Suchman, Frank Sauer and Amanda Sharkey and members of ICRAC for helpful comments.</p>
<p><a href="#_ftnref2" name="_ftn2">[2]</a> D. Kahneman 2011:, Thinking, Fast and Slow, Penguin Books. He refers to the two processes as System 1 and System 2, These are exactly the same as the terms automatic and deliberative used here for clarity and consistency.</p>
<p><a href="#_ftnref3" name="_ftn3">[3]</a> J.M. Carroll and M.B. Rosson, ‘Paradox of the active user’, in J.M. Carroll (eds.), Interfacing Thought: Cognitive Aspects of Human-Computer Interaction (MIT Press, 1987), 80–111.</p>
<p><a href="#_ftnref4" name="_ftn4">[4]</a> K.L. Mosier and L.J. Skitka 1996: Human decision makers and automated decision aids: made for each other?, in: Mouloua, M. (Eds.): Automation and Human Performance: Theory and Applications, Lawrence Erlbaum Associates, 201–220.</p>
<p><a href="#_ftnref5" name="_ftn5">[5]</a> C.G. Lord, L. Ross and M. Lepper 1979: ‘Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence’, Journal of Personality and Social Psychology, 47, 1231–1243.</p>
<p><a href="#_ftnref6" name="_ftn6">[6]</a> Kaheneman ibid.</p>
<p><a href="#_ftnref7" name="_ftn7">[7]</a> For a more in-depth understanding of these analyses and references see N. Sharkey 2016: Staying in the Loop. Human Supervisory Control of Weapons, in: Bhuta, Nehal et al. (Eds.): Autonomous Weapons Systems. Law, Ethics, Policy. Cambridge University Press, 23-38.</p>
<p><a href="#_ftnref8" name="_ftn8">[8]</a> M.L. Cummings 2006: Automation and Accountability in Decision Support System Interface Design, in: Journal of Technology Studies 32: 1, 23–31.</p>
<p>&nbsp;</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3998</post-id>	</item>
		<item>
		<title>Can an autonomous weapons ban be verified?</title>
		<link>https://www.icrac.net/can-an-autonomous-weapons-ban-be-verified/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Mon, 14 Apr 2014 05:13:57 +0000</pubDate>
				<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2445</guid>

					<description><![CDATA[At the ongoing CCW experts’ meeting on Lethal Autonomous Weapons Systems in Geneva, questions have begun to be raised about the verifiability of a ban on autonomous weapon systems. We would like to highlight the existence of our working paper outlining compliance measures for a ban, including a framework proposal as to how compliance could be [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/compliance.png"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft wp-image-2446 " style="border: 0px none; margin: 10px;" src="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2015/06/compliance-150x150.png?resize=176%2C176" alt="compliance" width="176" height="176" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?zoom=2&amp;resize=176%2C176&amp;ssl=1 352w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2015/06/compliance.png?zoom=3&amp;resize=176%2C176&amp;ssl=1 528w" sizes="auto, (max-width: 176px) 100vw, 176px" /></a>At the ongoing CCW experts’ meeting on Lethal Autonomous Weapons Systems in Geneva, questions have begun to be raised about the verifiability of a ban on autonomous weapon systems. We would like to highlight the existence of our <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf">working paper</a> outlining compliance measures for a ban, including a framework proposal as to how compliance could be verified.</p>
<p>A common remark is that verification would require inspection of software as well as hardware, and that nations will never permit such intrusive inspections. Moreover, even clearly written and well-documented software can be very difficult to read and interpret, let alone software that has been deliberately obscured or encrypted. The physical form of systems capable of operating autonomously may also not always be readily definable or discernible. Where both a cockpit and a communications link to a remote operator are lacking, one may reasonably infer that a system is intended to operate autonomously, but their presence does not ensure that the system is incapable of operating autonomously.</p>
<p>The <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf" target="_blank">solution to this conundrum</a> is already at hand, however, in the increasing emphasis in these discussions on the need for meaningful human control. This approach is increasingly recognized as a conceptual reframing of the problem of banning autonomous weapons, as already proposed in 2010 with the <a href="http://icrac.net/statements/">Berlin Statement</a> (originally titled “The Principle of Human Control of Weapons and All Technology”). That statement asserts positively <em>“That it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war. In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision and its foreseeable consequences.”</em> The emphasis on personal responsibility and legal accountability for the decision to use violent force has become recognized as one of the elements of the concept of meaningful human control, which also emphasizes the role of adequate information and deliberation by the decision maker.</p>
<p>Thus, while it may indeed be impractical to verify compliance with a ban on “autonomous weapons” as such, it may very well be possible to verify compliance with a requirement for accountable and meaningful human control and decision in each use of violent force.</p>
<p>This is not to say that we should not also declaratively ban autonomous weapons–minus a list of exceptions for systems that operate autonomously but do not make significant lethal decisions autonomously, or that are purely defensive in nature and defend human life against immediate threats from incoming munitions, or are to be allowed for other, pragmatic reasons. Certainly, we should ban fully autonomous weapons. But the way to implement and verify such a ban may be better framed in terms of human control.</p>
<p>Two years ago, ICRAC members took part in an effort to consider measures for promoting compliance with an autonomous weapons ban. The result was a working paper, “Compliance Measures for an Autonomous Weapons Convention,” which is <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf">posted here</a>. The work has not received wide recognition, but given that the question has begun to arise, it seems appropriate to highlight it now, rather than witness the emergence of  “a ban on killer robots would be nice, but it’s unverifiable” as a persistent canard.</p>
<p>The paper highlights many aspects of ensuring compliance with an autonomous weapons convention, including the enunciation of strong, simple, intuitive principles as the moral foundation for such an agreement, framing in terms of clear definitions, articulation of allowed exceptions, declaration of previously existing autonomous weapon systems, national implementing legislation, and the creation of an international treaty implementing organization (TIO). The role of the TIO in verification is detailed, particularly its support for cryptographic validation of records to tie them to particular weapon systems and the use of force at particular times (and potentially, places). These records, it is proposed, would be held by the compliant States Parties themselves and not released to the TIO or subject to any other possible compromise of military secrets, except in the case of an orderly inquiry into particular suspicious events, and possibly also some quota of routine, random inspections to verify continuous compliance. The cryptographic principles on which the tamper-proofing and time-stamping of such records can be carried out are simple and well-understood, and full encrypted records need not be exposed to the possibility of decryption if only “digital signatures” of the records are supplied to the TIO for archiving.</p>
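<p>As a minimal illustration of how such record archiving could work (a sketch only: the record fields, the SHA-256 digest and the Ed25519 signature via Python&#8217;s &#8220;cryptography&#8221; library are our assumptions, not details of the working paper), a State Party could keep each full engagement record in national custody and hand only a signed, time-stamped digest to the TIO:</p>
<pre>
# Sketch: hash-and-sign an engagement record so the full record stays in national
# custody while the TIO archives only the digest, timestamp and signature.
# Field names, SHA-256 and Ed25519 are illustrative assumptions, not treaty language.
import json, hashlib, datetime
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

record = {
    "weapon_system_id": "EXAMPLE-001",          # hypothetical identifier
    "engagement_time": "2014-04-14T05:13:57Z",  # time of the use of force
    "human_decision": "operator approval logged",
}

# Canonical serialisation, then a fixed-length digest of the record.
digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

key = Ed25519PrivateKey.generate()              # in practice, a long-term national signing key
timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
signature = key.sign((digest + timestamp).encode())

# Only (digest, timestamp, signature) would be supplied to the TIO for archiving.
print(digest, timestamp, signature.hex())
</pre>
<p>Verification during an inquiry would then amount to the State Party disclosing the record in question, with the TIO recomputing its digest and checking it against the archived signature, so that full records are never held by the TIO.</p>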
<p>We believe that a scheme of this type can support rigorous verification of compliance in cases where it is suspected that a fully autonomous weapon system has been used, which should be a sufficient deterrent to such use. Coupled with other transparency and confidence-building measures, including routine on-site inspections of facilities in which remotely operated or nearly autonomous systems are developed, tested, manufactured, stockpiled, deployed or used, and with national means of intelligence sufficient to reveal any prohibited activities large enough in scale and scope to pose a significant strategic security threat, these measures should ensure that no State Party will find the risks of non-compliance to be outweighed by uncertain and hypothetical military benefits.</p>
<p>&#8211;<em> by <a href="http://gubrud.net/">Mark Gubrud</a> (<a href="https://twitter.com/mgubrud">@mgubrud</a>) and ICRAC’s Juergen Altmann</em></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Mark Gubrud' src='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Mark Gubrud</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2445</post-id>	</item>
		<item>
		<title>Futureproofing Is Never Complete: Ensuring the Arms Trade Treaty Keeps Pace with New Weapons Technology</title>
		<link>https://www.icrac.net/futureproofing-is-never-complete-ensuring-the-arms-trade-treaty-keeps-pace-with-new-weapons-technology/</link>
		
		<dc:creator><![CDATA[mbolton]]></dc:creator>
		<pubDate>Sat, 19 Oct 2013 21:42:42 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2329</guid>

					<description><![CDATA[In a new working paper, International Committee for Robot Arms Control (ICRAC) members Matthew Bolton (Pace University) and Wim Zwijnenburg (IKV Pax Christi) stress the importance of making sure states control new weapons technologies, including robotic weapons, when the Arms Trade Treaty enters into force. The paper outlines strategies for civil society (such as the Control Arms campaign) and concerned states to counter [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In <a href="http://icrac.net/wp-content/uploads/2013/10/Futureproofing-ICRAC-Working-Paper-3-2.pdf">a new working paper</a>, <a href="http://icrac.net/" target="_blank">International Committee for Robot Arms Control (ICRAC) </a>members Matthew Bolton (<a href="http://www.pace.edu/dyson/academic-departments-and-programs/political-science/faculty/matthew-bolton" target="_blank">Pace University</a>) and Wim Zwijnenburg (<a href="http://www.ikvpaxchristi.nl/" target="_blank">IKV Pax Christi</a>) stress the importance of making sure states control new weapons technologies, including robotic weapons, when the <a href="http://www.un.org/disarmament/ATT/" target="_blank">Arms Trade Treaty</a> enters into force. The paper outlines strategies for civil society (such as the <a href="http://controlarms.org/en/" target="_blank">Control Arms campaign</a>) and concerned states to counter potential arguments from states or manufacturers acting in bad faith, who may claim erroneously that the treaty will not apply to robotic weapons. We recommend that civil society and concerned states:</p>
<ol>
<li>Unequivocally assert that the Arms Trade Treaty Scope includes both manned and unmanned conventional arms,</li>
<li>Build on the recent clarifications by the <a href="http://www.un.org/ga/search/view_doc.asp?symbol=A/68/140" target="_blank">Group of Governmental Experts of the UN Register of Conventional Arms </a>of the categories of weapons borrowed by the Arms Trade Treaty in its Scope. The Group authoritatively defined the categories as including armed aerial drones,</li>
<li>Develop and promote comprehensive National Control Lists of the weapons to be controlled by states party to the Arms Trade Treaty,</li>
<li>Influence the interpretation of the Arms Trade Treaty through careful monitoring and calling out states acting in bad faith, and</li>
<li>Build connections between the community working on the Arms Trade Treaty (such as <a href="http://www.controlarms.org/">Control Arms</a>) and those working on related campaigns (such as the <a href="http://www.stopkillerrobots.org/" target="_blank">Campaign to Stop Killer Robots</a>) and control regimes (such as the <a href="http://www.un.org/disarmament/convarms/Register/" target="_blank">UN Register</a>, <a href="http://www.mtcr.info/english/" target="_blank">Missile Technology Control Regime</a>, the <a href="http://www.wassenaar.org/" target="_blank">Wassenaar Arrangement </a>and dual-use equipment control programs).</li>
</ol>
<p><a href="http://icrac.net/wp-content/uploads/2013/10/Futureproofing-ICRAC-Working-Paper-3-2.pdf">Click here to read the full paper</a>.</p>
<p><a href="http://icrac.net/who/">ICRAC</a> is an international committee of experts in robotics technology, robot ethics, international relations, international security, arms control, international humanitarian law, human rights law, and public campaigns, concerned about the pressing dangers that military robots pose to peace and international security and to civilians in war.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2329</post-id>	</item>
		<item>
		<title>ICRAC Working Paper Series launched</title>
		<link>https://www.icrac.net/icrac-working-paper-series-launched/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Fri, 31 May 2013 15:02:52 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Working Papers]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=1010</guid>

					<description><![CDATA[Today, ICRAC launches its new series of working papers. In ICRAC Working Paper #2 (#1 is to follow suit in the near future), Mark Gubrud and Juergen Altmann present &#8220;Compliance Measures for an Autonomous Weapons Convention&#8221;, inter alia containing a first conceptual sketch about how to implement technical verification measures to ensure human control and [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Today, ICRAC launches its new series of working papers. In ICRAC Working Paper #2 (#1 will follow in the near future), Mark Gubrud and Juergen Altmann present &#8220;Compliance Measures for an Autonomous Weapons Convention&#8221;, containing, inter alia, a first conceptual sketch of how to implement technical verification measures to ensure human control and responsibility during the use of force.</p>
<p>The full text of the paper is available here: <a href="http://icrac.net/wp-content/uploads/2013/05/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2.pdf">Gubrud-Altmann_Compliance Measures AWC_ICRAC-WP2</a></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1010</post-id>	</item>
	</channel>
</rss>
