<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Opinion &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/opinion/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 01 Oct 2018 09:46:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons</title>
		<link>https://www.icrac.net/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/</link>
		
		<dc:creator><![CDATA[Lucy Suchman]]></dc:creator>
		<pubDate>Sun, 08 Apr 2018 22:43:38 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3957</guid>

					<description><![CDATA[*Originally published on the &#8220;Robot Futures Blog&#8221; In the lead up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" class="alignnone size-medium wp-image-3958" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=1024%2C7681&amp;ssl=1 1024w" sizes="(max-width: 300px) 100vw, 300px" /><br />
*Originally published on the <a href="https://robotfutures.wordpress.com/2018/04/07/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/">&#8220;Robot Futures Blog&#8221;</a></p>
<p>In the lead-up to the next meeting of the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">CCW’s Group of Governmental Experts</a> at the United Nations, April 9–13 in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled <a href="http://www.unidir.ch/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-artificial-intelligence-en-700.pdf">The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence</a>. Designated <em>a primer for CCW delegates</em>, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based <a href="https://www.cnas.org/">CNAS</a> are well represented.</p>
<p>Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate DeepMind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:</p>
<p style="padding-left: 30px;">Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).</p>
<p>The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.</p>
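<p>To see how much work those qualifications do, consider a minimal reinforcement-learning sketch (in Python; the toy task and all names and parameters are invented for illustration, and are not drawn from the primer). Every precondition the primer names has to be hand-built: a numerically specified goal, a tightly constrained environment, and a simulator cheap enough to run thousands of times.</p>
<pre><code>import random

# A five-cell corridor: cells 0..4, with cell 4 the goal. The "clear goal"
# is a reward of 1 for reaching cell 4; every other step yields 0.
N_STATES = 5
ACTIONS = (-1, +1)                   # step left or step right
alpha, gamma = 0.5, 0.9              # learning rate, discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):           # cheap, repeatable simulated interactions
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)                 # explore at random
        s2 = min(max(s + a, 0), N_STATES - 1)      # simulated transition
        r = 1.0 if s2 == N_STATES - 1 else 0.0     # computationally specified goal
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy is "always step right": trivially correct, but
# only because the goal, the environment, and the simulator were this narrow.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)   # {0: 1, 1: 1, 2: 1, 3: 1}
</code></pre>
<p>Nothing in such a sketch carries over to settings where the goal resists numerical specification or the environment cannot be closed off and simulated, which is exactly the cautionary point above.</p>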
<p>The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that, rather than being the result of demystification, this is itself somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.</p>
<p>We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of <em>if–then </em>rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2), the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique <em>that makes use of labelled training data</em>” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns,” fails to emphasize the role of the human in assessing the relevance and significance of the patterns identified. In the case of reinforcement learning, “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions is sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so-called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labelling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.</p>
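<p>The dependence on human labelling is easiest to see in code. Here is a minimal sketch of supervised learning (in Python; the feature vectors and labels are invented for illustration): the classifier can do no more than reproduce categories that human annotators have already defined and applied.</p>
<pre><code># Minimal sketch of supervised learning: 1-nearest-neighbour classification.
# Each label below stands in for human annotation work done in advance.
training_data = [
    ((0.1, 0.2), "bus"),
    ((0.2, 0.1), "bus"),
    ((0.8, 0.9), "tank"),
    ((0.9, 0.8), "tank"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(x):
    """Return the human-assigned label of the nearest training example."""
    _, label = min(training_data, key=lambda pair: distance(pair[0], x))
    return label

# The model has no independent grasp of what "bus" or "tank" means; it can
# only echo the judgments already encoded in the labelled data.
print(classify((0.15, 0.15)))   # prints "bus"
</code></pre>
<p>An “unsupervised” variant would simply drop the labels and cluster the feature vectors, but a human must still decide which of the resulting clusters are relevant and what they mean – the role the primer passes over.</p>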
<p>Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:</p>
<p style="padding-left: 30px;">Intelligence is a system’s ability to <em>determine the best course of action </em>to achieve its goals. Autonomy is the <em>freedom </em>a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems <em>could </em>be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).</p>
<p>The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?</p>
<p>The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (<a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">CCW/GGE.1/2018/WP.4</a>). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving in the direction of active promotion of LAWS on the grounds of promised increases in precision and greater accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3957</post-id>	</item>
		<item>
		<title>Arms Control for AWS: 2016 and beyond</title>
		<link>https://www.icrac.net/3219-2/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Wed, 07 Dec 2016 16:07:58 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php56-3.dfw3-2.websitetestlink.com/?p=3219</guid>

					<description><![CDATA[After three informal meetings of experts, the Convention on Certain Conventional Weapons, during its Fifth Review Conference taking place from 12 to 16 December 2016 in Geneva, will decide on how to continue work on arms control for autonomous weapon systems. Below is a preview of an article published in the October 2016 issue of Arms Control [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>After three informal meetings of experts, the Convention on Certain Conventional Weapons, during its <a href="http://www.unog.ch/80256EE600585943/(httpPages)/9F975E1E06869679C1257F50004F7E8C?OpenDocument">Fifth Review Conference taking place from 12 to 16 December 2016 in Geneva</a>, will decide on how to continue work on arms control for autonomous weapon systems. Below is a preview of an article published in the October 2016 issue of <a href="https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems">Arms Control Today</a>, outlining the perspectives for future AWS arms control.</p>
<p>Sauer, Frank 2016: Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems, in: Arms Control Today 46 (8): 8-13.</p>
<p><a href="https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems">Click here to read the full article</a>.</p>
<p><a href="https://www.unibw.de/internationalepolitik/professur/team/Sauer/Why%20Now%20Is%20the%20Time%20to%20Ban%20AWS%20-braille.brf/at_download/file">NEW: Click here for the BRF file of the full article</a></p>
<blockquote><p>[F]our possible outcomes can be predicted for the CCW process. The first would be a legally binding and preventive multilateral arms control agreement derived by consensus in the CCW and thus involving the major stakeholders, the outcome referenced as “a ban.” Considering the growing number of states-parties calling for a ban and the large number of governments calling for meaningful human control and expressing considerable unease with the idea of autonomous weapons systems, combined with the fact that no government is openly promoting their development, this seems possible. It would require mustering considerable political will. Verification and compliance for a ban, as well as for weaker restrictions, would then require creative arms control solutions. After all, with full autonomy in a weapons system eventually coming down to merely flipping a software switch, how can one tell if a specific system at a specific time is not operating autonomously? A few arms control experts are already wrapping their heads around these questions.</p>
<div class="SandboxRoot env-bp-350" data-twitter-event-id="0">
<div id="twitter-widget-1" class="EmbeddedTweet EmbeddedTweet--edge EmbeddedTweet--mediaForward media-forward js-clickToOpenTarget js-tweetIdInfo tweet-InformationCircle-widgetParent" lang="en" data-click-to-open-target="https://twitter.com/ArmsControlNow/status/786600390020194304" data-iframe-title="Twitter Tweet" data-dt-full="%{hours12}:%{minutes} %{amPm} - %{day} %{month} %{year}" data-dt-explicit-timestamp="12:12 PM - Oct 13, 2016" data-dt-months="Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec" data-dt-am="AM" data-dt-pm="PM" data-dt-now="now" data-dt-s="s" data-dt-m="m" data-dt-h="h" data-dt-second="second" data-dt-seconds="seconds" data-dt-minute="minute" data-dt-minutes="minutes" data-dt-hour="hour" data-dt-hours="hours" data-dt-abbr="%{number}%{symbol}" data-dt-short="%{day} %{month}" data-dt-long="%{day} %{month} %{year}" data-scribe="page:tweet" data-tweet-id="786600390020194304" data-twitter-event-id="3">
<article class="MediaCard MediaCard--mediaForward customisable-border" dir="ltr" data-scribe="component:card">
<div class="MediaCard-media"></div>
</article>
<div class="tweet-InformationCircle--top tweet-InformationCircle--topEdge tweet-InformationCircle" data-scribe="element:notice">
<p>&nbsp;</p>
<blockquote class="twitter-tweet" data-lang="en">
<p dir="ltr" lang="en">Can <a href="https://twitter.com/hashtag/KillerRobots?src=hash&amp;ref_src=twsrc%5Etfw">#KillerRobots</a> (autonomous weapons systems) work as preventive arms control? More in October&#8217;s <a href="https://twitter.com/hashtag/ArmsControlToday?src=hash&amp;ref_src=twsrc%5Etfw">#ArmsControlToday</a> <a href="https://t.co/E7sDVzdmbn">https://t.co/E7sDVzdmbn</a> <a href="https://t.co/LwPSojH9Gr">pic.twitter.com/LwPSojH9Gr</a></p>
<p>— Arms Control Assoc (@ArmsControlNow) <a href="https://twitter.com/ArmsControlNow/status/786600390020194304?ref_src=twsrc%5Etfw">October 13, 2016</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
</div>
</div>
<div class="resize-sensor"></div>
</div>
<p>The second outcome would be restrictions short of a ban. The details of such an agreement are impossible to predict, but it is conceivable that governments could agree, for example, to limit the use of autonomous weapons systems, such as permitting their use against materiel only.</p>
<p>The third would be a declaratory, nonbinding agreement on best practices. Such a code of conduct would likely emphasize compliance with existing international humanitarian law and rigorous weapons review processes, in accordance with Article 36 of Additional Protocol I to the Geneva Conventions.</p>
<p>Finally, there may be no tangible result, perhaps with one of the technologically leading countries setting a precedent by fielding autonomous weapons systems. That would certainly prompt others to follow, fueling an arms race. In light of some of the most advanced standoff weapons, such as the U.S. Long Range Anti-Ship Missile or the UK Brimstone, each capable of autonomous targeting during terminal flight phase, one might argue that the world is already headed for such an autonomy arms race.</p>
<p>Implementing autonomy, which mainly comes down to software, in systems drawn from a vibrant global ecosystem of unmanned vehicles in various shapes and sizes is a technical challenge, but doable for state and nonstate actors, particularly because so much of the hardware and software is dual use. In short, autonomous weapons systems are extremely prone to proliferation. An unchecked autonomous weapons arms race and the diffusion of autonomous killing capabilities to extremist groups would clearly be detrimental to international peace, stability, and security.</p>
<div class="SandboxRoot env-bp-350" data-twitter-event-id="1">
<div id="twitter-widget-2" class="EmbeddedTweet EmbeddedTweet--edge js-clickToOpenTarget tweet-InformationCircle-widgetParent" lang="en" data-click-to-open-target="https://twitter.com/marywareham/status/788723233709101056" data-iframe-title="Twitter Tweet" data-dt-full="%{hours12}:%{minutes} %{amPm} - %{day} %{month} %{year}" data-dt-explicit-timestamp="8:47 AM - Oct 19, 2016" data-dt-months="Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec" data-dt-am="AM" data-dt-pm="PM" data-dt-now="now" data-dt-s="s" data-dt-m="m" data-dt-h="h" data-dt-second="second" data-dt-seconds="seconds" data-dt-minute="minute" data-dt-minutes="minutes" data-dt-hour="hour" data-dt-hours="hours" data-dt-abbr="%{number}%{symbol}" data-dt-short="%{day} %{month}" data-dt-long="%{day} %{month} %{year}" data-scribe="page:tweet" data-twitter-event-id="4">
<div class="EmbeddedTweet-tweet">
<blockquote class="Tweet h-entry js-tweetIdInfo subject expanded is-deciderHtmlWhitespace" cite="https://twitter.com/marywareham/status/788723233709101056" data-tweet-id="788723233709101056" data-scribe="section:subject">
<div class="Tweet-header u-cf">
<div class="Tweet-brand u-floatRight"></div>
<div class="TweetAuthor js-inViewportScribingTarget " data-scribe="component:author">
<blockquote class="twitter-tweet" data-lang="en">
<p dir="ltr" lang="en">The nascent social taboo against machines autonomously making kill decisions &#8211; Frank Sauer in <a href="https://twitter.com/ArmsControlNow?ref_src=twsrc%5Etfw">@ArmsControlNow</a> <a href="https://t.co/nBTGtXLT5R">https://t.co/nBTGtXLT5R</a> <a href="https://twitter.com/hashtag/CCWUN?src=hash&amp;ref_src=twsrc%5Etfw">#CCWUN</a></p>
<p>— Mary Wareham (@marywareham) <a href="https://twitter.com/marywareham/status/788723233709101056?ref_src=twsrc%5Etfw">October 19, 2016</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
</div>
</div>
</blockquote>
</div>
</div>
<div class="resize-sensor"></div>
</div>
<p>This underlines the importance of the current opportunity for putting a comprehensive, verifiable ban in place. The hurdles are high, but at this point, a ban is clearly the most prudent and thus desirable outcome. After all, as long as no one possesses them, a verifiable ban is the optimal solution. It stops the currently commencing arms race in its tracks, and everyone reaps the benefits. A prime goal of arms control would be fulfilled by facilitating the diversion of resources from military applications toward research and development for peaceful purposes—in the fields of AI and robotics no less, two key future technologies.</p></blockquote>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3219</post-id>	</item>
		<item>
		<title>US killer robot policy: Full speed ahead</title>
		<link>https://www.icrac.net/us-killer-robot-policy-full-speed-ahead/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Sun, 22 Sep 2013 16:25:43 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2070</guid>

					<description><![CDATA[In November 2012, United States Deputy Defense Secretary Ashton Carter signed directive 3000.09, establishing policy for the “design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.”  Without fanfare, the world had its first openly declared national policy for killer robots. The policy has [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Mark Gubrud' src='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Mark Gubrud</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<div>
<div id="attachment_2146" style="width: 160px" class="wp-caption alignleft"><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2013/09/doomsday-1024x1024-1024x939-e1380126056885.jpg"><img data-recalc-dims="1" loading="lazy" decoding="async" aria-describedby="caption-attachment-2146" class="wp-image-2146 size-thumbnail" style="margin-right: 5px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2013/09/doomsday-1024x1024-1024x939.jpg?resize=150%2C150&#038;ssl=1" alt="doomsday-1024x1024-1024x939" width="150" height="150" /></a><p id="caption-attachment-2146" class="wp-caption-text">Doomsday Clock</p></div>
<p>In November 2012, United States Deputy Defense Secretary Ashton Carter signed <a href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf" target="_blank" rel="noopener noreferrer">directive 3000.09</a>, establishing policy for the “design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.”  Without fanfare, the world had its first openly declared national policy for killer robots.</p>
<p>The policy has been widely misperceived as one of caution. According to <a href="http://www.wired.com/dangerroom/2012/11/human-robot-kill/" target="_blank" rel="noopener noreferrer">one account</a>, the directive promises that a human will always decide when a robot kills another human. Others even <a href="http://www.hrw.org/news/2013/04/16/us-ban-fully-autonomous-weapons" target="_blank" rel="noopener noreferrer">read it as imposing a 10-year moratorium</a> to allow for <a href="http://www.nytimes.com/2013/03/17/opinion/sunday/keller-smart-drones.html?pagewanted=all" target="_blank" rel="noopener noreferrer">discussion of ethics and safeguards</a>. However, as a Defense Department spokesman confirmed for me, the 10-year expiration date is routine for such directives, and the policy itself is “not a moratorium on anything.”</p>
<p>A careful reading of the directive finds that it lists some broad and imprecise criteria and requires senior officials to certify that these criteria have been met if systems are intended to target and kill people by machine decision alone. But it fully supports developing, testing, and using the technology, without delay. Far from applying the brakes, the policy in effect overrides longstanding resistance within the military, establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.</p>
<p><strong>Did soldiers ask for killer robots?</strong> In the years before this new policy was announced, spokesmen routinely denied that the US military would even consider lethal autonomy for machines. Over the past year, speaking for themselves, some <a href="http://online.wsj.com/article/SB10001424127887324128504578346333246145590.html" target="_blank" rel="noopener noreferrer">retired</a> and even <a href="http://usacac.army.mil/CAC2/MilitaryReview/Archives/English/MilitaryReview_20130430_art005.pdf" target="_blank" rel="noopener noreferrer">active duty</a> officers have written passionately against both autonomous weapons and the overuse of remotely operated drones. In May 2013, the first nationwide poll ever taken on this topic found that Americans opposed to autonomous weapons outnumbered supporters by two to one. Strikingly, the closer people were to the military—family, former military, or active duty—the more likely they were to <a href="http://www.whiteoliphaunt.com/duckofminerva/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons_May2013.pdf" target="_blank" rel="noopener noreferrer">strongly oppose</a> autonomous weapons and support efforts to ban them.</p>
<p>Since the 1990s, the military has exhibited what <a href="http://www.csbaonline.org/wp-content/uploads/2011/06/2007.03.01-Six-Decades-Of-Guided-Weapons.pdf" target="_blank" rel="noopener noreferrer">autonomy proponent Barry Watts has called</a> “a cultural disinclination to turn attack decisions over to software algorithms.” Legacy weapons such as land and sea mines have been deemphasized and some futuristic programs canceled—or altered to provide greater capabilities for human control. Most notably, the Army’s Future Combat Systems program, which was to include a variety of networked drones and robots at an eventual cost estimated as high as $300 billion, was cancelled in 2009, with <a href="https://www.cbo.gov/publication/41186" target="_blank" rel="noopener noreferrer">$16 billion already spent</a>.</p>
<p>At the same time, calls for autonomous weapons have been rising both outside and from some inside the military. In 2001, retired <a href="http://www.carlisle.army.mil/USAWC/parameters/Articles/01winter/adams.htm" target="_blank" rel="noopener noreferrer">Army lieutenant colonel T. K. Adams argued</a> that humans were becoming the most vulnerable, burdensome, and performance-limiting components of manned systems. Communications links for remote operation would be vulnerable to disruption, and full autonomy would be needed as a fallback. Furthermore, warfare would become too fast and too complex for humans to direct. Realistic or not, such thinking, together with budget pressures and the perception that robots are cheaper than people, has supported a steady growth of autonomy research and development in military and contractor-supported labs. In March 2012, the Naval Research Lab opened a <a href="http://www.nrl.navy.mil/media/news-releases/2012/naval-research-laboratory-opens-laboratory-for-autonomous-systems-research" target="_blank" rel="noopener noreferrer">new facility</a> dedicated to development and testing of autonomous systems, complete with simulated rainforest, desert, littoral, and shipboard or urban combat environments. But the killer roboticists’ brainchildren have continued to face what a <a href="http://www.acq.osd.mil/dsb/reports/AutonomyReport.pdf" target="_blank" rel="noopener noreferrer">2012 Defense Science Board report</a>, commissioned by then-Undersecretary Carter, called “material obstacles within the Department that are inhibiting the broad acceptance of autonomy.”</p>
<p><strong>The discrimination problem. </strong>Navy scientist <a href="http://www.sevenhorizons.org/docs/CanningWeaponizedunmannedsystems.pdf" target="_blank" rel="noopener noreferrer">John Canning recounts a 2003 meeting</a> at which high-level lawyers from the Navy and Pentagon <a href="http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4799401" target="_blank" rel="noopener noreferrer">objected</a> to autonomous weapons. They assumed that robots could not comply with <a href="http://www.icrc.org/eng/war-and-law/index.jsp" target="_blank" rel="noopener noreferrer">international humanitarian law</a>, core principles of which include a responsibility to distinguish civilians from combatants and to refrain from attacks that would cause excessive harm to civilians. These principles, and the military rules of engagement intended to implement them, assume a level of awareness, understanding, and judgment that computers simply don’t have. Weapons are also subject to mandated legal review, and indiscriminate weapons—that is, weapons that cannot be selectively directed to attack lawful targets and avoid civilians—are forbidden. The lawyers did not think they would ever be able to sign off on autonomous weapons.</p>
<p>Georgia Tech roboticist Ron Arkin <a href="http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf" target="_blank" rel="noopener noreferrer">has argued</a> that unemotional robots, following rigid programs, could actually be more ethical than human soldiers. But his proposals fail to solve the hard problems of distinguishing civilians, understanding and predicting social and tactical situations, or judging the proportionality of force. Others argue, philosophically, that only humans can make such targeting judgments legitimately. In a world getting used to talking about virtual assistants and <a href="http://www.youtube.com/watch?v=cdgQpa1pUUE" target="_blank" rel="noopener noreferrer">self-driving cars</a>, it may not be obvious what the limits of artificial intelligence will be, or what people will accept, in 10, 20, or 40 years. But for now, and for the immediate future, the robot discrimination problem is hard to dispute.</p>
<p>To break the legal deadlock, Canning suggested that robots might normally be granted autonomy to attack materiel, including other robots, but not humans. Yet in many situations it might be impossible to avoid the risk—or the intent—of killing or injuring people. For such cases, Canning proposed what he called “dial-a-level” autonomy; that is, the robot might ordinarily be required to ask a human what to do, but in some circumstances it could be authorized to take action on its own.</p>
<p>In recent years, autonomy visionaries have stressed human-machine partnerships and flexibility to decide the level of autonomy a weapon may be allowed, based on tactical needs. In a 2011 <a href="http://www.defenseinnovationmarketplace.mil/resources/UnmannedSystemsIntegratedRoadmapFY2011.pdf" target="_blank" rel="noopener noreferrer">roadmap</a>, for example, the Defense Department envisions unmanned systems that seamlessly operate with manned systems while gradually reducing the degree of human control and decision making required. A <a href="http://www.minwara.org/Meetings/2011_05/Presentations/thurspdf/0800/Mining.pdf" target="_blank" rel="noopener noreferrer">2011 Navy presentation</a> depicts decisions about autonomy and control as a continuous tradeoff, explaining that while human control minimizes the risk of attacking unintended targets, machine autonomy maximizes the chance of defeating the intended ones. It seems likely that in desperate combat, autonomy would be dialed up to the highest level.</p>
<p><strong>Appropriate levels of human judgment.</strong> In the spring of 2011, the Defense Department convened a group of uniformed and civilian personnel to begin developing a policy for autonomous weapons. The directive that emerged 18 months later lists a number of requirements for autonomous systems and draws a line at systems intended to autonomously target and engage humans&#8211;or to apply kinetic force (e.g., bullets and bombs) against any targets. But the directive neither states nor implies that this line should not be crossed.</p>
<p>Rather, the line may be crossed if two undersecretaries and the Chairman of the Joint Chiefs of Staff affirm that the listed requirements have been met. In the event of an urgent military need, any of the requirements can be waived—with the exception of a legal review. Furthermore, the line is not as clearly drawn as it may seem to be.</p>
<p>The requirements listed in the directive are not much more stringent than those that apply to any weapon system. Tactics, techniques, and procedures must be developed to specify how an autonomous weapon system should be used. Hardware and software must undergo rigorous verification and validation. Human-machine interfaces must be understandable to trained operators and must provide clear activation and deactivation procedures and have “safeties, anti-tamper mechanisms, and information assurance” that minimize the probability of unintended engagements.</p>
<p>These requirements sound reassuring; they promise to address many of the concerns people have about autonomous weapons. According to the directive, it is Defense Department policy that the measures listed will ensure that the systems will work in realistic environments against adaptive adversaries. But saying it doesn’t necessarily make it so.</p>
<p>In reality, neither mathematical analysis nor field testing can possibly locate every software bug or situation in which such complex systems may fail or behave inappropriately. Adversaries will strive to locate points of vulnerability, and it is terribly hard to anticipate everything that adversaries may do, let alone know how their actions may affect system performance. The notion of information assurance implies a promise to solve problems of software reliability and computer security that bedevil contemporary technology.</p>
<p>The centerpiece of the entire directive is this statement: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Although the phrase is never defined, it does not appear that appropriate levels always require at least one human being to make the decision to kill another. Rather, the appropriate level might well be the decision to dispatch a robot on a mission and let it select the targets to engage. In making such decisions, it appears that the burden of ensuring compliance with rules of engagement and laws of war falls on commanders and operators when the robots themselves are incapable of ensuring this. But in practice, it seems likely that unintended atrocities committed by autonomous weapons will be blamed on technical failures.</p>
<p><strong>Semi-autonomy: Smudging the line. </strong>In theory, as long as three senior officials withhold their signatures, autonomous weapon systems that are intended to target humans or use kinetic or lethal force would be blocked. But the policy green lights—no extra signatures needed—semi-autonomous weapon systems that may apply any kind of force against any targets, including people. The crucial line that the policy draws between semi- and fully autonomous systems is fuzzy and broken. As technology advances, it is likely to be crossed as a matter of course.</p>
<p>The directive defines a semi-autonomous weapon system as one intended to engage only those targets that have been selected by a human operator. But the system itself is allowed to use autonomy to acquire, track and identify potential targets. It can cue the operator, prioritize targets, and decide when to fire. What the operator must do to select targets is left unspecified. Would a verbal OK, gesture, or even neurological interface be acceptable?</p>
<p>A system with such capabilities may not be intended to function without a human operator, but at most it would require a trivial modification to do so—perhaps a hack. <a href="http://spectrum.ieee.org/robotics/military-robots/a-robotic-sentry-for-koreas-demilitarized-zone" target="_blank" rel="noopener noreferrer">At least</a> <a href="http://www.dodaam.com/eng/sub2/menu2_1_4.php" target="_blank" rel="noopener noreferrer">three</a> <a href="http://www.rafael.co.il/Marketing/396-1687-en/Marketing.aspx" target="_blank" rel="noopener noreferrer">companies</a> already market such systems. The policy clears them for immediate use after acquisition, via standard procedures.</p>
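<p>A schematic sketch (in Python; entirely invented for illustration, drawn neither from the directive nor from any fielded system) shows how thin the line is in software terms: the same engagement loop serves both categories, and a single default flag marks the boundary the policy draws.</p>
<pre><code>def operator_confirms(track):
    """Stand-in for the human operator's decision (a console prompt here)."""
    return input("Engage " + track + "? [y/N] ").strip().lower() == "y"

def engagement_loop(tracks, require_human_consent=True):
    """Decide, for each sensor track, whether the system may engage it."""
    engaged = []
    for track in tracks:
        # Autonomy already handles acquisition, tracking, identification,
        # cueing, and prioritization; only the final consent step differs.
        if require_human_consent and not operator_confirms(track):
            continue
        engaged.append(track)          # the "fire" decision
    return engaged

# Flipping one default turns the "semi-autonomous" loop into a fully
# autonomous one; no new capability is required.
print(engagement_loop(["track-07", "track-12"], require_human_consent=False))
</code></pre>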
<p>The policy also addresses fire-and-forget or lock-on-after-launch homing munitions, which would include many systems in use today. Such munitions have seekers that autonomously find and home on targets. The directive classifies them as semi-autonomous weapon systems, on the theory that the operator selects targets by using tactics, techniques and procedures that “maximize the probability that the only targets within the seeker’s acquisition basket” will be the intended targets. Yet, upon launch, such munitions become, <em>de facto</em>, fully autonomous.</p>
<p>No restrictions are placed on the technology that a seeker may use to find a target and decide whether that is what it was looking for. This opens a clear path for weapons that can be sent on hunt-and-kill missions, limited only by the ability of their onboard sensors and computers to narrow their acquisition baskets to selected targets.</p>
<p><strong>The way forward—to what?</strong> In the mid-2000s, Lockheed Martin was developing <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/factsheet-LOCAAS.pdf" target="_blank" rel="noopener noreferrer">a small autonomous drone missile</a> for the Air Force, and a <a href="https://mfcbastion.external.lmco.com/missilesandfirecontrol/our_news/factsheets/Product_Card-NLOS.pdf" target="_blank" rel="noopener noreferrer">similar system</a> for the Army. Equipped with several types of onboard sensors, the missiles would fly out to designated areas and wander in search of generic targets, such as tanks, rocket launchers, radars, or personnel, which they would autonomously recognize and attack. Both programs were canceled, amid legal, ethical, and technical questions, to be <a href="http://www.precisionstrike.org/pdf/2005_Oct_and_Dec_newsletter.pdf" target="_blank" rel="noopener noreferrer">superseded</a> by <a href="http://www.avinc.com/uas/adc/switchblade/" target="_blank" rel="noopener noreferrer">systems</a> that combine autonomous capabilities with radio links to human operators. Under the new policy, would such wide-area search munitions be classified as autonomous or semi-autonomous? Either way, the policy establishes that weapons like these may be developed, acquired, and used.</p>
<p>Given the long internal debate and general public opposition to killer robots, this is a highly aggressive policy. The US military never intended to replace foot soldiers with autonomous lethal robots during this decade, particularly not where civilians might be at risk. But funding the development and acquisition of systems that have autonomous targeting and fire-control capabilities—even if they are not intended for fully autonomous killing—will spur the weapons industry, in the United States and elsewhere, to accelerate exploration and investment in the technology of autonomous warfare.</p>
<p>The real issue is whether the world needs to go this way at all. The message of this policy is: full speed ahead.</p>
</div>
<div></div>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Mark Gubrud' src='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Mark Gubrud</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2070</post-id>	</item>
		<item>
		<title>The Role of ICRAC in the Arms Trade Treaty Negotiations</title>
		<link>https://www.icrac.net/the-role-of-icrac-in-the-arms-trade-treaty-negotiations/</link>
		
		<dc:creator><![CDATA[mbolton]]></dc:creator>
		<pubDate>Tue, 09 Apr 2013 15:19:17 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Amnesty International]]></category>
		<category><![CDATA[Arms Trade Treaty]]></category>
		<category><![CDATA[Article 36]]></category>
		<category><![CDATA[ATT]]></category>
		<category><![CDATA[ATT Monitor]]></category>
		<category><![CDATA[Autonomous Armed Robots]]></category>
		<category><![CDATA[Control Arms]]></category>
		<category><![CDATA[drones]]></category>
		<category><![CDATA[Futureproofing]]></category>
		<category><![CDATA[Holy See]]></category>
		<category><![CDATA[humanitarian law]]></category>
		<category><![CDATA[ICRAC]]></category>
		<category><![CDATA[IKV Pax Christi]]></category>
		<category><![CDATA[Killer Robots]]></category>
		<category><![CDATA[Matthew Bolton]]></category>
		<category><![CDATA[NGOs]]></category>
		<category><![CDATA[Pace University]]></category>
		<category><![CDATA[Reaching Critical Will]]></category>
		<category><![CDATA[Richard Moyes]]></category>
		<category><![CDATA[robotic weapons]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[Thomas Nash]]></category>
		<category><![CDATA[UAV]]></category>
		<category><![CDATA[UAVs]]></category>
		<category><![CDATA[UN Register on Conventional Weapons]]></category>
		<category><![CDATA[United Nations]]></category>
		<category><![CDATA[Wim Zwijnenburg]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=963</guid>

					<description><![CDATA[Last week the United Nations General Assembly voted overwhelmingly to adopt the Arms Trade Treaty (ATT), which will aim to constrain the flow of conventional weapons to states and organizations that threaten peace and security or engage in gross violations of human rights and humanitarian law. Several members of the International Committee for Robot Arms [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><a href="https://i0.wp.com/www.icrac.net.php53-3.dfw1-2.websitetestlink.com/wp-content/uploads/2013/04/546545-armstreaty.jpg"><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignleft wp-image-2059 size-medium" style="margin-right: 5px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2013/04/546545-armstreaty.jpg?resize=300%2C191&#038;ssl=1" alt="546545-armstreaty" width="300" height="191" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2013/04/546545-armstreaty.jpg?resize=300%2C191&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2013/04/546545-armstreaty.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></a>Last week the United Nations General Assembly <a href="http://www.un.org/News/Press/docs/2013/ga11354.doc.htm">voted</a> overwhelmingly to adopt the <a href="http://www.un.org/disarmament/ATT/docs/Draft_ATT_text_27_Mar_2013-E.pdf">Arms Trade Treaty</a> (ATT), which will aim to constrain the flow of conventional weapons to states and organizations that threaten peace and security or engage in gross violations of human rights and humanitarian law.</p>
<p>Several members of the <a href="http://icrac.net/">International Committee for Robot Arms Control (ICRAC)</a> – <a href="http://www.linkedin.com/pub/wim-zwijnenburg/14/631/1a4">Wim Zwijnenburg</a> of <a href="http://www.ikvpaxchristi.nl/en/home" target="_blank" rel="noopener">IKV Pax Christi</a>, Thomas Nash and Richard Moyes of <a href="http://www.article36.org/">Article 36</a> and <a href="http://pace.academia.edu/MatthewBolton">Matthew Bolton</a> of <a href="http://www.pace.edu/dyson/academic-departments-and-programs/political-science/faculty/matthew-bolton" target="_blank" rel="noopener">Pace University New York City</a> – were engaged in supporting the advocacy work of <a href="http://controlarms.org/en/">Control Arms</a>, the global civil society coalition campaigning for a ‘bulletproof’ treaty.</p>
<p>Pushing states to develop text that would cover emerging weapons technologies was a particular emphasis of ICRAC members’ lobbying at the July 2012 and March 2013 Diplomatic Conferences. Many campaigners and diplomats were concerned that the draft treaty did not include specific provisions for <a href="http://icrac.net/2012/07/draft-arms-trade-treaty-omits-explicit-reference-to-unmanned-weapons/">‘unmanned’ weapons</a>, such as aerial drones, or robotic systems that have ‘dual uses.’ A <a href="http://www.sipri.org/media/newsletter/essay/brueck_holtom_March13">recent report</a> from the Stockholm International Peace Research Institute (SIPRI) raised concerns that the text “looks dangerously likely to be a relic before it ever comes into force.”</p>
<p><b>Futureproofing</b></p>
<p>Drawing on technical advice from other ICRAC members, Zwijnenburg and Bolton wrote a policy brief titled “<a href="http://politicalminefields.files.wordpress.com/2013/03/futureproofing-the-draft-arms-trade-treaty-42.pdf">Futureproofing the Draft Arms Trade Treaty</a>” that called on states to make five “critical changes” to the text “in order to cover the emerging class of robotic, ‘unmanned’ and autonomous weapons.” The paper was distributed widely at the conference and online to governments and civil society organizations and was reprinted in <a href="http://www.reachingcriticalwill.org/" target="_blank" rel="noopener">Reaching Critical Will’s</a> widely read <i><a href="http://politicalminefields.files.wordpress.com/2013/03/futureproofing-the-draft-arms-trade-treaty-42.pdf">ATT Monitor</a></i> newsletter (pp. 3-4). The phrase “futureproofing” caught on and was soon being used widely by Control Arms campaigners, <a href="http://www2.amnesty.org.uk/blogs/campaigns/arms-trade-treaty-history-twenty-years-making?utm_source=aiuk&amp;utm_medium=Homepage&amp;utm_campaign=Arms&amp;utm_content=OllyfinaldayBlog">Amnesty International</a> and even the representative of the <a href="http://www.holyseemission.org/press/release.aspx?id=410">Holy See</a>.</p>
<p>Not all of the changes suggested in Zwijnenburg and Bolton’s “Futureproofing” paper were made in the <a href="http://www.un.org/disarmament/ATT/docs/Draft_ATT_text_27_Mar_2013-E.pdf">final treaty text</a>, and it would be disingenuous to overstate ICRAC’s impact. However, by helping to shape and frame the conversation, the policy brief, amplified by Control Arms lobbying, contributed to efforts that changed the treaty text to allow future conferences of States Parties to the treaty to review “developments in the field of conventional arms” (Article 17) and adopt amendments by three-quarters vote instead of consensus (Article 20). This means that activists and advocacy organizations will be able to push states to amend the treaty to address the development of new weapons technologies. This new text has essentially created a forum in which ICRAC and other stakeholders concerned about emerging weapons technologies can press their case in the future.</p>
<p><b>What Next?</b></p>
<p>The next push for campaigners will be to make sure states sign and ratify the ATT, so that it enters into force as quickly as possible. Another important area for advocacy will be to push for a broadening of the categories used by the <a href="http://www.un.org/disarmament/convarms/Register/">UN Register of Conventional Weapons</a>. The ATT relies on these categories, which at the moment do not explicitly cover many types of robotic weapons. If civil society can push states to include unmanned armed systems in this register before the treaty enters into force, the treaty will cover a broader scope of weapons.</p>
<p>While the ATT and broadening the UN Register have not been the primary focus of ICRAC’s advocacy, they are establishing important precedents and norms that provide foundations for the regulation of robotic weapons. Indeed, passing the treaty by majority vote in the UN General Assembly, instead of by consensus, has <a href="http://www.4disarmament.org/2013/03/30/bustingconsensus/">opened the possibility of developing arms control instruments with high standards</a>, instead of the lowest common denominator.</p>
<p>The ATT is not really a disarmament treaty – it is more of an amalgamation of humanitarian and trade law. Even if it works well, it will only regulate the flows of weapons, not the kinds of weapons in circulation. As a result, those who are concerned about the <a href="https://www.hrw.org/topic/arms/killer-robots">trends toward ‘autonomy’ in robotic weapons</a>, which threaten to reduce direct human control over killing, cannot rely on the ATT to prevent this dangerous possibility. This is one of many reasons why ICRAC is part of a growing number of NGOs and faith groups calling for a <a href="http://nobelwomensinitiative.org/2013/03/stop-killer-robots/">specific ban on fully autonomous armed robots – “killer robots.”</a></p>
<p><a href="http://icrac.net/who/">ICRAC</a> is an international committee of experts in robotics technology, robot ethics, international relations, international security, arms control, international humanitarian law, human rights law, and public campaigns, concerned about the pressing dangers that military robots pose to peace and international security and to civilians in war.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">963</post-id>	</item>
		<item>
		<title>Arms Control for Uninhabited Vehicles: A Detailed Study</title>
		<link>https://www.icrac.net/arms-control-for-uninhabited-vehicles-detailed-study/</link>
		
		<dc:creator><![CDATA[altmann]]></dc:creator>
		<pubDate>Tue, 02 Apr 2013 00:00:47 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=927</guid>

					<description><![CDATA[In a detailed scientific article just published online, physicist and peace researcher Jürgen Altmann (TU Dortmund, Germany) explains that armed uninhabited vehicles (on land, on/under water, in the air) do not exist in a legal vacuum. &#160; For example, they must not be equipped with biological or chemical weapons. In Europe most land and air [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='altmann' src='https://secure.gravatar.com/avatar/5ed6d6c543ca0fd5239769e6539543dc39e8213bb687816bbce09ac5f83520f5?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/5ed6d6c543ca0fd5239769e6539543dc39e8213bb687816bbce09ac5f83520f5?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://e3.physik.tu-dortmund.de/cms/de/AG-Altmann/index.html">altmann</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Jürgen Altmann (PhD) is a physicist and peace researcher (retired) at TU Dortmund University, Germany. Since 1985 he has studied scientific-technical problems of disarmament. An experimental focus is automatic sensor systems for co-operative verification of disarmament and peace agreements and for IAEA safeguards for an underground final repository. The second focus is assessment of new military technologies and preventive arms control, including verification. Studies have dealt with “non-lethal” weapons, civilian and military technologies in aviation, military uses of microsystems technology and of nanotechnology, confidence and security building measures for cyber forces, armed uncrewed vehicles and autonomous weapon systems. He co-founded and chairs the German Research Association for Science, Disarmament and International Security (FONAS) and has authored book chapters on the relationship of natural science, armament and disarmament. He teaches a lecture "Physics and Technology of the Verification of Arms-Limitation Agreements".</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>In a detailed <a href="http://www.springerlink.com/openurl.asp?genre=article&amp;id=doi:10.1007/s10676-013-9314-5">scientific article</a> just published online, physicist and peace researcher <a title="Who We Are" href="http://icrac.net/who/">Jürgen Altmann</a> (TU Dortmund, Germany) explains that armed uninhabited vehicles (on land, on/under water, in the air) do not exist in a legal vacuum.</p>
<p>For example, they must not be equipped with biological or chemical weapons. In Europe most land and air vehicles are covered by the definitions of the Treaty on Conventional Armed Forces in Europe (CFE Treaty); thus they are limited in number and subject to verification. If armed uninhabited vehicles cannot be prohibited outright, then limitations similar to those of the CFE Treaty are needed in other regions of the world. To avoid dangers for international humanitarian law and military stability, autonomous attack (that is, attack without a human decision in each individual case) should be prohibited. Additional prohibitions are needed, among others, for small and very small armed vehicles.</p>
<p>Jürgen Altmann, “Arms control for armed uninhabited vehicles: an ethical issue,” <i>Ethics and Information Technology</i>, 2013, DOI 10.1007/s10676-013-9314-5. <a href="http://link.springer.com/article/10.1007%2Fs10676-013-9314-5">Read the full article (open access, 17 pages) here.</a></p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='altmann' src='https://secure.gravatar.com/avatar/5ed6d6c543ca0fd5239769e6539543dc39e8213bb687816bbce09ac5f83520f5?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/5ed6d6c543ca0fd5239769e6539543dc39e8213bb687816bbce09ac5f83520f5?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://e3.physik.tu-dortmund.de/cms/de/AG-Altmann/index.html">altmann</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Jürgen Altmann (PhD) is a physicist and peace researcher (retired) at TU Dortmund University, Germany. Since 1985 he has studied scientific-technical problems of disarmament. An experimental focus is automatic sensor systems for co-operative verification of disarmament and peace agreements and for IAEA safeguards for an underground final repository. The second focus is assessment of new military technologies and preventive arms control, including verification. Studies have dealt with “non-lethal” weapons, civilian and military technologies in aviation, military uses of microsystems technology and of nanotechnology, confidence and security building measures for cyber forces, armed uncrewed vehicles and autonomous weapon systems. He co-founded and chairs the German Research Association for Science, Disarmament and International Security (FONAS) and has authored book chapters on the relationship of natural science, armament and disarmament. He teaches a lecture "Physics and Technology of the Verification of Arms-Limitation Agreements".</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">927</post-id>	</item>
		<item>
		<title>Smart Robots? Perhaps not smart enough to be called stupid.</title>
		<link>https://www.icrac.net/smart-robots-perhaps-not-smart-enough-to-be-called-stupid/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Mon, 18 Mar 2013 11:17:50 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[ICRAC in the media]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=899</guid>

					<description><![CDATA[The New York Times has entered the discussion about the Campaign to Stop Killer Robots. Columnist Bill Keller has produced a well balanced article that looks at the pros and cons of a ban. For the ban, he notes that The arguments against developing fully autonomous weapons, as they are called, range from moral (“they [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>The New York Times has entered the discussion about the Campaign to Stop Killer Robots. Columnist Bill Keller has produced a well balanced article that looks at the pros and cons of a ban.</p>
<p>For the ban, he notes that</p>
<blockquote><p>The arguments against developing fully autonomous weapons, as they are called, range from moral (“they are evil”) to technical (“they will never be that smart”) to visceral (“they are creepy”).</p>
<p>“This is something people seem to feel at a very gut level is wrong,” says Stephen Goose, director of the arms division of Human Rights Watch, which has assumed a leading role in challenging the dehumanizing of warfare. “The ugh factor comes through really strong.”</p></blockquote>
<p>He then discusses the three international humanitarian law issues with autonomous robot weapons: (i) inability to conform to the principle of distinction; (ii) inability to conform to the principle of proportionality; and (iii) difficulties with accountability for mishaps or war crimes.</p>
<p>He brings out the usual suspect, Ron Arkin, to argue against a ban. Arkin still believes that robots could do better than humans because they don&#8217;t have emotional responses. Others argue that this is one of the main problems. The funniest comment on Keller&#8217;s article was a response to Ron Arkin:</p>
<blockquote><p>&#8220;Professor Arkin argues that automation can also make war more humane.&#8221; This guy has obviously been a civilian all his life. Only a civilian would believe there is a humane way to kill another human being. Does he get out of the house on a regular basis?</p></blockquote>
<p>But Arkin&#8217;s position in other respects does not now seem that far removed from those calling for a ban: &#8220;He advocates a moratorium on deployment and a full-blown discussion of ways to keep humans in charge.&#8221; The emphasis on keeping humans in charge is a subtle change in Arkin&#8217;s position that is greatly appreciated. It moves us some way toward the discussions that should be had.</p>
<p>However, without a ban on the development of and research on these weapon systems, they are going to end up in the US arsenal. Other countries have not said that they will observe a moratorium, and so we can expect an arms race that the US will not be able to resist.</p>
<p>In fact, in terms of a moratorium, Keller appears to have made an error of interpretation with regard to the recent <a title="Department of Defense directive" href="http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf">Department of Defense directive</a> (November 21, 2012): &#8220;Last November the Defense Department issued what amounts to a 10-year moratorium on developing them while it discusses the ethical implications and possible safeguards.&#8221;</p>
<p>ICRAC member Mark Gubrud picks up on this error in a comment after Keller&#8217;s piece:</p>
<blockquote><p>The DoD Directive (3000.09) does not impose any moratorium. It says that the United States will develop and use autonomous weapons.</p>
<p>Although it draws a line at AW that kill humans autonomously, it does not forbid crossing the line; rather, it sets forth the procedure for doing so. Four sub-cabinet level signatures are required. Other than that, the rules for AW that kill humans are essentially the same as for AW that target materiel, which the Directive approves already.</p>
<p>The directive also approves for immediate development and use &#8220;semi-autonomous weapons&#8221; which may automatically acquire, track, identify and prioritize potential targets, cue a human operator to their presence, and upon approval, engage them, automatically determining the timing of when to fire.</p>
<p>So, a semi-autonomous weapon system might detect a group of persons, highlight their dim outlines on a screen, and say to the operator &#8220;target group identified.&#8221; The operator says &#8220;engage&#8221; and the machine kills them.</p>
<p>Such a system already has every capability needed for full lethal autonomy. It has only been programmed to request approval. One trivial software modification will fix that, if the system doesn&#8217;t already have a switch to throw it into full autonomous mode.</p>
<p>DoDD 3000.09 approves such systems for immediate development, acquisition and use.</p>
<p>There is no moratorium; it is a full-speed charge into the unknown.</p></blockquote>
<p>Nonetheless, Keller is clearly on the right side of the issues and shows a clear understanding: &#8220;It&#8217;s a squishy directive, likely to be cast aside in a minute if we learn that China has sold autonomous weapons to Iran.&#8221;</p>
<p>Although Keller is not optimistic about the chances of getting a ban on killer robots, he supports it, and ICRAC appreciates him for that:</p>
<blockquote><p>I don’t hold out a lot of hope for an enforceable ban on death-dealing robots, but I’d love to be proved wrong. If war is made to seem impersonal and safe, about as morally consequential as a video game, I worry that autonomous weapons deplete our humanity. As unsettling as the idea of robots’ becoming more like humans is the prospect that, in the process, we become more like robots.</p></blockquote>
<p>It is well worth reading Bill Keller&#8217;s full story and the comments that come afterwards &#8211; <a title="Smart Robots" href="http://www.nytimes.com/2013/03/17/opinion/sunday/keller-smart-drones.html?pagewanted=all">Smart Robots</a>.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">899</post-id>	</item>
	</channel>
</rss>
