<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Lucy Suchman &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/author/lucy-suchman/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Wed, 26 Sep 2018 14:50:10 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>
	<item>
		<title>Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons</title>
		<link>https://www.icrac.net/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/</link>
		
		<dc:creator><![CDATA[Lucy Suchman]]></dc:creator>
		<pubDate>Sun, 08 Apr 2018 22:43:38 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3957</guid>

					<description><![CDATA[Originally published on the &#8220;Robot Futures Blog&#8221; In the lead-up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" decoding="async" class="alignnone size-medium wp-image-3958" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=1024%2C7681&amp;ssl=1 1024w" sizes="(max-width: 300px) 100vw, 300px" /><br />
Originally published on the <a href="https://robotfutures.wordpress.com/2018/04/07/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/">&#8220;Robot Futures Blog&#8221;</a></p>
<p>In the lead-up to the next meeting of the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">CCW’s Group of Governmental Experts</a> at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled <a href="http://www.unidir.ch/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-artificial-intelligence-en-700.pdf">The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence</a>. Designated <em>a primer for CCW delegates</em>, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based <a href="https://www.cnas.org/">CNAS</a> are well represented.</p>
<p>Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate DeepMind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:</p>
<p style="padding-left: 30px;">Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).</p>
<p>The requirements listed – a clear (read: computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.</p>
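<p>To make these qualifications concrete, here is a minimal sketch of my own devising (it appears nowhere in the primer) of the kind of reinforcement learning that systems like AlphaGo Zero exemplify, reduced to a toy ‘corridor’ world. Each precondition named above is an explicit line of code: the goal is a computationally specified reward, the environment is six enumerated states, and all interaction is simulated.</p>
<pre><code>import random

# A toy 'corridor': states 0..5, with a reward only at state 5.
# The preconditions the primer lists are all explicit here: the goal
# is a number to optimize, the environment is six states, and every
# interaction is simulated rather than lived.
N_STATES = 6
ACTIONS = (-1, 1)                      # step left or step right
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9  # exploration, learning rate, discount

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Simulated environment: returns (next_state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the table, occasionally explore.
        if EPSILON > random.random():
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned policy now heads straight for the goal, but only because
# goal, environment, and simulator were all fully specified in advance.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])</code></pre>
<p>Nothing in this sketch generalizes beyond the six states it enumerates, which is precisely the point: the technique works because, and only insofar as, the world has been made to fit it.</p>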
<p>The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that, as systems become more established, they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that this shift is not the result of demystification but somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.</p>
<p>We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of <em>if–then</em> rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2), the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique <em>that makes use of labelled training data</em>” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions is sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so-called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.</p>
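<p>The role of the human labeller can be made similarly concrete. The following sketch, again my own and using invented data, implements the simplest sort of supervised learner, a nearest-centroid classifier: everything ‘learned’ is arithmetic over labels that a person supplied in advance, and the ‘goal’ is nothing more than a computationally specified distance to be minimized.</p>
<pre><code># Toy supervised learning: every training example carries a label that a
# human supplied in advance. Features and labels are invented for
# illustration only: (hours of daylight, temperature in Celsius).
TRAIN = [
    ((8.0, 2.0), "winter"), ((9.0, 4.0), "winter"),
    ((16.0, 24.0), "summer"), ((15.0, 21.0), "summer"),
]

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# One centroid per human-defined label.
CENTROIDS = {
    label: centroid([x for x, y in TRAIN if y == label])
    for label in {y for _, y in TRAIN}
}

def classify(x):
    # The 'goal' is fully computational: minimize squared distance.
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(CENTROIDS, key=lambda label: dist2(CENTROIDS[label]))

print(classify((10.0, 6.0)))   # prints 'winter'</code></pre>
<p>An ‘unsupervised’ variant would dispense with the labels but not with the human: someone must still judge whether the clusters such a procedure finds mean anything.</p>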
<p>Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:</p>
<p style="padding-left: 30px;">Intelligence is a system’s ability to <em>determine the best course of action </em>to achieve its goals. Autonomy is the <em>freedom </em>a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems <em>could </em>be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).</p>
<p>The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of ‘narrow AI’. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?</p>
<p>The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (<a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">CCW/GGE.1/2018/WP.4</a>). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving toward active promotion of LAWS on the grounds of promised increases in the precision and accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3957</post-id>
	</item>
		<item>
		<title>The vagaries of ‘precision’ in targeted killing</title>
		<link>https://www.icrac.net/the-vagaries-of-precision-in-targeted-killing/</link>
		
		<dc:creator><![CDATA[Lucy Suchman]]></dc:creator>
		<pubDate>Fri, 29 Jun 2012 10:51:31 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<guid isPermaLink="false">http://icrac.net/?p=450</guid>

					<description><![CDATA[Reposted from Robot Futures Two recent events highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and those who challenge that strategy’s legality and morality. The first is the Drone Summit held on April 28-29th in Washington, D.C., co-organized by activist group CodePink, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Reposted from <a title="Robot Futures" href="http://robotfutures.wordpress.com/" target="_blank">Robot Futures</a></p>
<p>Two recent events highlight the striking contrast between the Obama administration’s current policy regarding the use of armed drones as part of the U.S. ‘Counterterrorism Strategy,’ and those who challenge that strategy’s legality and morality.</p>
<p>The first is the <a title="CodePink Drone Summit" href="http://www.codepinkalert.org/article.php?id=6065">Drone Summit</a> held on April 28-29th in Washington, D.C., co-organized by activist group CodePink, the <a title="Center for Constitutional Rights" href="http://ccrjustice.org/">Center for Constitutional Rights</a>, and the UK organization <a title="Reprieve" href="http://www.reprieve.org.uk/">Reprieve</a>. The summit presentations offered compelling testimony, from participants including Pakistani attorney Shahzad Akbar, Reprieve’s Clive Stafford Smith, Chris Woods of the Bureau of Investigative Journalism, Pakistani journalist Madiha Tahir, and Somali activist Sadia Ali Aden, to documented and extensive civilian injury and death from U.S. drone strikes in Pakistan, Yemen and Somalia.  While popular support in the United States is based on the premise (and promise) that strikes only kill ‘militants,’ these speakers underscored the vagaries of the categories that inform the (il)legitimacy of extrajudicial targeted killing.</p>
<p>According to the <a title="Drone war exposed" href="http://www.thebureauinvestigates.com/2011/08/10/most-complete-picture-yet-of-cia-drone-strikes/">Bureau of Investigative Journalism</a>, between 2004 and 2011 the CIA conducted over 300 drone strikes in Pakistan, killing somewhere between 2,372 and 2,997 people.  Waziristan, in the northwest of Pakistan on the frontier with Afghanistan (the so-called Federally Administered Tribal Area), is the focus of these targeted killings. Shahzad Akbar cited estimates that more than 3,000 people have been killed in the area, but its closure to outside journalists adds to the secrecy in which killings are carried out. One recent victim of the strikes, 16-year-old Tariq Aziz, had joined a campaign organized by Akbar’s <a title="Foundation for Fundamental Rights" href="http://rightsadvocacy.org/">Foundation for Fundamental Rights</a> in collaboration with Reprieve to crowdsource documentation of strikes inside Waziristan using cell phones. Within 72 hours of his participation in the training, Aziz himself was killed in a drone strike on the car in which he was traveling with his younger cousin.  Whether Aziz was deliberately targeted or was another innocent casualty remains unknown.</p>
<p>In the targeting of houses believed to shelter ‘militants’, according to Akbar, strikes are concentrated during mealtimes and at night, when families are most likely to be assembled.  Not only do immediate family members die in these strikes, but often those in neighboring houses as well, particularly children hit by shrapnel. So how is the category of ‘militant’ defined?  Clive Stafford Smith of Reprieve points out that targeted killing relies upon the same intelligence that informed the detention of ‘militants’ at Guantanamo, where 80% of those held have been cleared.  He reported as well that the U.S. routinely offers informants $5,000, the equivalent of a quarter of a million dollars to relatively more affluent Americans, for information leading to the identification of ‘bad guys.’</p>
<p>Particularly in those areas where targeted killings are concentrated, being identified as ‘militant,’ even being armed, does not in itself meet the criterion of posing an imminent threat to the United States.  But the U.S. Government has so far refused to release either the criteria or the evidentiary bases for its placement of persons on targeted kill lists.  This problem is intensified by the administration’s recent endorsement of so-called ‘signature’ targeting, where in place of positive identification of individuals who pose a concrete, specific and imminent threat to life (as required by the laws of armed conflict), targeting can be based on patterns of behavior, observed from the air, that correspond with profiles specified as evidence for ‘militancy’. Shahzad Akbar points out that ‘signature’ effectively means profiling, adding that “before they used to arrest and question you, now they just kill you.”  The elision of distinctions between being armed and being a ‘terror suspect’ allows wide scope for action, as does the failure to recognize how these ‘targeted’ killings (where we now have to put targeted as well into scare quotes, insofar as we’re coming to recognize the questions and uncertainties that it masks) might themselves be experienced as terror by civilians on the ground.  Pakistani journalist Madiha Tahir urges us, in considering who is a ‘militant,’ to ask: how does a person become one?  People join ‘militant’ groups largely in relation to internal divisions quite apart from actions aimed at the U.S., but now increasingly also because of U.S. attacks. “On what grounds,” she asked, “does it make sense to terrorize people in order to hunt terrorists?”</p>
<p>The second event of the past week was the <a title="US Counter-terrorism strategy - CSPAN" href="http://www.c-spanvideo.org/program/USCounterte">appearance of President Obama’s ‘top counterterrorism advisor’ John Brennan</a> at the Wilson Center, where he asserted that the growing use of armed unmanned aircraft in Pakistan, Yemen and Somalia has saved American lives, and that civilian casualties from U.S. drones were “exceedingly rare” and rigorously investigated.  As the <a title="Obama's Counterterrorism Advisor Defends Drone Strikes" href="http://articles.latimes.com/2012/apr/30/world/la-fg-brennan-drones-20120501">LA Times reports</a>, “Brennan emphasized throughout his speech that drone strikes are carried out against ‘individual terrorists.’ He did not mention so-called signature strikes, a type of attack the U.S. has used in Pakistan against facilities and suspected militants without knowing the target’s name. When asked later by a member of the audience whether the standards he outlined for drone attacks also applied to signature strikes, Brennan said he was not speaking of signature strikes but that all attacks launched by the U.S. are done in accordance with the rule of law. The White House this month approved the use of signature strikes in Yemen after U.S. officials previously insisted that it would target only people whose names are known. The new rules permit attacks against individuals suspected of militant activities, even when their names are unknown or only partially known, a U.S. official said.”</p>
<p>Contrasting war by remote control with traditional military operations, Brennan argued that “large, intrusive military deployments risk playing into Al Qaeda’s strategy of trying to draw us into long, costly wars that drain us financially, inflame anti-American resentment and inspire the next generation of terrorists.”  The implication is that death by remote control does not have the same consequences.</p>
<p>CodePink anti-warrior Medea Benjamin brought the contradictions of these two events together when she staged a <a title="Medea Benjamin intervenes in Brennan speech" href="http://www.youtube.com/watch?v=qZsfKJc4Tgg">courageous interruption</a> of Brennan’s speech, continuing her testimony on behalf of innocent victims of U.S. drone strikes even as she was literally lifted off of her feet and carried out of the room by a burly security guard.</p>
<p>For a compelling and carefully researched introduction to the drone industry and its problems, see Medea Benjamin’s new book <a title="Drone Warfare - Benjamin" href="http://www.codepinkalert.org/article.php?id=6064"><em>Drone Warfare: Killing by remote control</em></a>.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">450</post-id>
	</item>
	</channel>
</rss>
