<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Front Page &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/category/front-page/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:34:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>ICRAC Submission to the United Nations Secretary-General on Autonomous Weapon Systems</title>
		<link>https://www.icrac.net/icrac-submission-to-the-united-nations-secretary-general-on-autonomous-weapon-systems/</link>
		
		<dc:creator><![CDATA[Laura Nolan]]></dc:creator>
		<pubDate>Mon, 20 May 2024 18:45:00 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19903</guid>

					<description><![CDATA[The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p>The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit its perspectives and recommendations to be considered by the United Nations Secretary General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, robot ethics, international relations, international security, arms control, international humanitarian law, international human rights law and philosophy of technology. We are deeply concerned about the pressing dangers that military robotics and automation pose to international peace, international security and stability, and the rights and safety of civilians in war. Based on our expertise, we are particularly concerned that military robotic systems will lead to more frequent, less restrained, and less accountable armed conflict. In light of these risks, we call for an international treaty to prohibit and restrict autonomous weapon systems.</p>



<p>As has been discussed in detail at the CCW GGE over the past decade, autonomous weapon systems (AWS) raise serious concerns for international humanitarian law in regard to complying with the principles of distinction and proportionality. The risk of triggering the proliferation of arms is another stark reality posed by AWS, as is the accessibility of these types of weapon systems to non-state armed groups, among other actors. The use of AWS may further spill into the arena of national and transnational organized crime in addition to policing at the domestic level. All the while, several operational concerns remain as to the use of AWS from the perspective of accountability, bias and the use of machine-learning algorithms which may develop beyond the capacity of “the human in the loop.” There are also serious risks to regional and global stability posed by replacing human decision making with machine decision making, as it becomes more difficult for political and military leaders to anticipate and interpret the intentions, decisions and actions of their adversaries, and thus find ways to avoid or de-escalate conflicts.</p>



<p>We also note the threat that AWS pose to compliance with international human rights, particularly the right to life, the prohibition against torture, cruel and inhumane treatment, and above all the human right to dignity. We fear that an additional protocol to the CCW would fail to address these human rights concerns. We are concerned that the automated targeting and release of non-conventional weapons, including nuclear weapons, may also fall outside the scope of any legally binding CCW protocol. We thus advocate and support all calls for a legally binding instrument to prohibit and restrict the use of AWS, and urge the Secretary-General to encourage the initiation of a forum within the United Nations General Assembly that can include all States, cover autonomy and automation in the use of all weapons, and address international humanitarian law as well as human rights concerns.</p>



<p>This submission is informed by our comprehensive interdisciplinary expertise. We have published extensively on the ethical, legal, technical and security challenges of autonomous weapon systems, on the question of meaningful human control, and on the challenges of escalation at speed.</p>



<p><strong>Scope</strong></p>



<p>In accordance with the International Committee of the Red Cross, we understand an autonomous weapon system as one that, potentially after initial activation or launch by a human, selects targets based on sensor data and engages the targets without human intervention. We endorse the recommendations of the International Committee of the Red Cross for a two-tiered approach that prohibits unpredictable systems and systems that explicitly target humans, while strictly regulating the use of autonomy in all other systems for the command, control and engagement of lethal force. This includes restrictions on the time, space, scope and scale of operations of such systems, as well as the types of targets and situations in which they may be used. In particular, we strongly agree that the only permissible targets of such systems should be military objectives by nature, and never civilian or dual-use targets, which should always require human judgment. More discussion is needed on the appropriate forms and regulation of the human-machine interaction in complex command and control systems. In particular, as computers and artificial intelligence collect and automatically analyze more and more data, greater clarity is needed on what constitutes meaningful human control in the context of automated target generation and identification, and how to ensure respect and responsibility for international law when such systems are used.</p>



<p><strong><span style="text-decoration: underline;">Key Challenges to Global Peace and Security</span></strong></p>



<ul class="wp-block-list">
<li>Uncontrolled Escalation and Missed Opportunities for De-escalation and Diplomacy</li>
</ul>



<p>The technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation at speed. As the thresholds for applying military force are lowered, the likelihood of conflict will rise. Actions and reactions to the adversary will have to be programmed in advance. Two AWS swarms operating in close proximity to each other in international airspace, for example, might interact in ways that could not be mitigated or controlled by a human in an appropriate time window. In the case of an enemy attack, even a few seconds’ delay could mean the loss of one’s systems, so there will be strong pressure for fast counterattacks that preclude human consideration.</p>



<p>Escalation from crisis to war, or escalating a conflict to a higher level of violence, could come about by erroneous indications of attack or a simple sensor or computer error. Mutual interaction between the control programs could not be tested in advance. The outcome of the interaction of such complex systems would be intrinsically unpredictable, but fast escalation is possible and likely. In a severe crisis with fear of preemption this could greatly destabilize the military situation between potential enemies.</p>



<p>As political and military leaders become increasingly dependent on systems they cannot explain or predict, the traditional means of conflict resolution and de-escalation will become more difficult or impossible. Unpredictable systems will give leaders false impressions of their capabilities, leading to overconfidence or encouraging preemptive attacks. Moreover, automated attacks, responses, and escalations will make it more difficult for leaders to interpret the intentions, decisions and actions of their adversaries, and will also limit their options for response. Systems that automatically react or attack may miss opportunities to find other, less violent, ways to achieve military objectives, or preclude opportunities for diplomatic or political resolutions to a conflict. The overall effect of these systems will be to close off avenues and opportunities to avoid conflicts, to de-escalate conflicts, and to find means to end hostilities.</p>



<ul class="wp-block-list">
<li>Moral responsibility</li>
</ul>



<p>No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect humans as inherent bearers of rights and dignity. A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end, a human life. Decisions to end human life must be made by humans in order to be morally justifiable. These are responsibilities of unavoidable moral weight that cannot be delegated to machines or satisfied by the mere inclusion of humans in the writing of computer programs. While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient for moral responsibility. This also requires the recognition of the human, respect for the human right to life and dignity, and reflection upon the value of life and the justification for the use of violent force.</p>



<ul class="wp-block-list">
<li>Meaningful Human Control</li>
</ul>



<p>Much hinges on the degree to which AWS can be meaningfully controlled by humans. Robust scientific scholarship on human psychology suggests that humans face cognitive limitations when working with technological and computational systems. One such limitation, known as automation bias, hinders the human from having sufficient contextual understanding to intervene in systems that are fully autonomous and operate at speeds beyond human capabilities. In order to safeguard meaningful human control (not merely functional control) over AI-enabled AWS, those involved in operating or deciding to deploy AWS should have full contextual and situational awareness of the target area at the time of a specific attack. They must also be able to perceive and react to changes or unanticipated situations that arise; ensure active and deliberate participation in the action; have sufficient training and understanding of the system and its likely actions; have adequate time for meaningful control; and have the means and knowledge for a rapid suspension of an action. For many AWS this is not possible. Meaningful human control is fundamental to the edifice of the laws of war and the ethics of war.</p>



<p><strong><span style="text-decoration: underline;">Moving Forward: A Treaty to Prohibit and Regulate the Use of AWS</span></strong></p>



<p>We support calls from States, as well as the UN Secretary-General and the President of the ICRC, for an international legally binding treaty prohibiting and regulating the use of AWS.</p>



<p>What is needed is a legally binding instrument that obligates States to adhere to prohibitions and regulatory limitations for AWS. Codes of conduct and political declarations are not enough for systems that pose such grave risks to global peace and security. This legally binding instrument must apply to the automated control of all weapons, and require meaningful human control in compliance with substantive regulations for the use of force in all cases. Such a treaty should apply to all military uses of AWS and systems that generate or select targets, as well as to all police, border security and other civilian applications that automate the use of force.</p>



<p>The treaty should prohibit autonomous weapons systems that are ethically or legally unacceptable. This includes autonomous weapons systems for which the operation or effects cannot be sufficiently understood, predicted and explained; autonomous weapons systems that cannot be used with meaningful human control; and autonomous weapons systems designed to target human beings.</p>



<p>The treaty should include positive obligations for States to use permitted AWS only within the bounds of clearly stipulated regulations that ensure adherence to international human rights and the key principles of international humanitarian law. We believe that an emerging norm around meaningful human control can be articulated and codified through a treaty negotiation in a process that includes all States, civil society, and industry and technical experts. We urge the Secretary-General to advance the creation of such a forum within the General Assembly, and look forward to offering our expertise to those discussions.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img src="https://www.icrac.net/wp-content/uploads/2019/01/LauraNolan2.jpg" width="64" alt="Laura Nolan" /></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Laura Nolan</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19903</post-id>	</item>
		<item>
		<title>ICRAC statement at the March 2019 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-march-2019-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 26 Mar 2019 15:50:46 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=6170</guid>

					<description><![CDATA[As delivered by Prof. Peter Asaro, March 26, 2019. ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and move towards discussing restriction of their use with regards to how they should be controlled. Of course, by definition, if states wanted genuine meaningful human [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<p><img data-recalc-dims="1" loading="lazy" decoding="async" width="600" height="450" class="wp-image-6177" style="width: 600px;" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=600%2C450&#038;ssl=1" alt="" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=4608&amp;ssl=1 4608w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_20190326_163824.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>



<p><strong><em>As delivered by Prof. Peter Asaro</em>, March 26, 2019.</strong></p>



<p>ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and towards discussing restrictions on their use and how they should be controlled. Of course, by definition, if states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems. And (as an aside) we should not forget the scientifically recognized limitations of the technology or the foreseeable threats to global security such weapons pose.</p>



<p>We are also pleased with the statements and working papers beginning to examine the requirements for human control and planning in military systems. While this can be multifaceted, we must not let the complexity of military planning throw a smoke screen over the core issues of the meaningful human assessment of all targets, their legitimacy and the proportionate use of force.</p>



<p>We are glad to see the beginnings of a more nuanced approach to the control of weapons systems that cannot be captured by gross terms such as in-the-loop, on-the-loop, the broader loop, human oversight, and appropriate levels of human judgement. However, these terms continue to insinuate themselves into military, political and defence contractors’ narratives outside of the CCW. We welcome the suggestion of the IPRAW report to distinguish control-by-design and control-in-use—acknowledging that ultimate responsibility for the use of force lies in the specific context of its use.</p>



<figure class="wp-block-image"><img data-recalc-dims="1" loading="lazy" decoding="async" width="480" height="640" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=480%2C640&#038;ssl=1" alt="" class="wp-image-6174" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?w=480&amp;ssl=1 480w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2019/03/IMG_7726-e1553615341514.jpg?resize=225%2C300&amp;ssl=1 225w" sizes="auto, (max-width: 480px) 100vw, 480px" /></figure>



<p>As a scientific and scholarly group, our focus is on how we can make control effective and ensure that operators, commanders and planners are making clear judgements about the validity of every attack at the time of that attack.</p>



<p>To do this we need to move away from blanket terms and examine in detail how humans interact with automated machinery. As we have pointed out before, there have been more than 30 years of scientific research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. Ignoring the science for the sake of expediency could lead us down a path to a humanitarian disaster.</p>



<p>The scientific approach is not mutually exclusive to an examination of the military control of weapons and the many lessons to be learned from current methods. Indeed, we applaud the UK’s paper on human control in 2018 and that of the Netherlands and others this year. We may not agree with all of the detail, but it is what we have urged all of the high contracting parties to bring to the table.</p>



<p>This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>



<p>First, there should be a focus on what the human operator <strong>MUST</strong> do in the targeting cycle. This is control by use, which is governed by targeting rules under International Humanitarian Law and International Human Rights Law, which were well articulated by the ICRC in their statement this morning. Further, international law rules that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>



<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. This is control by design, which is governed by international weapons law. Under international weapons law, if a weapon system is, by its design, incapable of being sufficiently controlled as the law requires, then such a weapon should be prohibited.</p>



<p>We need further discussion of the details of human-machine interfaces, the distribution of responsibility in the targeting cycle, and how their design can ensure IHL and IHRL compliance. Such details need not be the substance of a treaty, and we must resist being caught up in the weeds of process. We support Germany’s goal of finding a shared understanding of the principles of human control that apply to all weapons systems now and in the future, regardless of context, planning or process. This is no different from the normal processes that operate in science. One of the goals of science is to reduce the complexity of the world to simple theories or principles that capture all of the experimental data. In other words, we create abstractions of the details that are firmly coupled with and informed by the details. As Einstein once said, explanations should be as simple as possible but no simpler. “Human in the loop” and its variants fall into the too-simple category. Detailed accounts of every weapon type and how it is controlled in every context are far too complex.</p>



<p>Let me give you an example of an abstraction with three conditions that could make a good starting point for discussions on the control of weapons systems. I have said this before, but clearly there is no prohibition on repeating yourself in this room.</p>



<ol class="wp-block-list">
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.</li>

<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack; and</li>

<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>



<p>These are general principles that could provide a starting point for discussion by states in the context of negotiating a legally binding treaty that clearly articulates the legal obligations of human control.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6170</post-id>	</item>
		<item>
		<title>ICRAC statement at the 2018 CCW States Parties Meeting</title>
		<link>https://www.icrac.net/icrac-statement-at-the-2018-ccw-states-parties-meeting/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Fri, 23 Nov 2018 08:34:47 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4341</guid>

					<description><![CDATA[As delivered by Prof. Roser Martínez Quirante (in Spanish) &#160; Mr. President, representatives of nations, members of civil society, During the past 5 years, at the Convention on Conventional Weapons, we have seen a greater understanding of the problems and challenges posed by autonomous weapon systems. The ICRAC is satisfied with the general consensus on [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>As delivered by Prof. Roser Martínez Quirante (in Spanish)</p>
<p>&nbsp;</p>
<p>Mr. President, representatives of nations, members of civil society,</p>
<p>During the past five years at the Convention on Certain Conventional Weapons, we have seen a growing understanding of the problems and challenges posed by autonomous weapon systems. ICRAC welcomes the general consensus on the need to retain human control over these systems, in particular over the critical functions of selecting and engaging targets. We therefore believe the time has come to establish binding legal mechanisms restricting the use of autonomous weaponry, underlining the importance of human judgement in critical decisions.</p>
<p>During our participation in this Convention, we have produced a large number of scientific articles, books, and reports emphasizing three main classes of risk.</p>
<p>First, these weapons cannot guarantee compliance with international humanitarian law. We should not give a blank check to future technology. With the large-scale commercialization of AI, we are indeed seeing great innovation in areas beneficial to humanity, but we are also witnessing the emergence of many problems of bias in decision-making and facial-recognition algorithms, problems that could prove dramatic if applied in a warlike context.</p>
<p>If nations invest on the basis of techno-scientific speculation, we believe it will be practically impossible to return to the starting position once the new types of conflict these weapons herald materialize. We urge states to consider the true state of current technology and its limitations in the critical function of selecting legitimate targets.</p>
<p>Second, considerable moral values are at risk. No machine, computer, or algorithm is capable of recognizing a human being as such, nor can it respect a human being as a being with rights and dignity. It sees the human only as a bit of information. A machine, without intuition, ethics, or morals, cannot even understand what it means to be in a state of war, much less what it means to end a human life.</p>
<p>Decisions to end human life must be made by humans, and in a non-arbitrary way, in order to be justified. Moreover, we must not mistake the fact that humans develop computer programs to mean that the calculated results of those programs constitute human decisions. While responsibility for the deployment of lethal force is a necessary condition for compliance with minimum ethical standards in armed conflict, responsibility alone is not enough: it also requires recognition of the human being, of human dignity, and reflection on the value of life and on the justification of the use of violent force.</p>
<p>Third, autonomous weapon systems represent a great danger to global security. The threshold for the application of military force will be lowered and the likelihood of conflict will increase. We are concerned that established human control mechanisms for double-checking and reconsideration, with humans functioning as fail-safes or circuit-breakers, could easily be disconnected. This, combined with unpredictable algorithmic interactions and their unpredictable outcomes, will increase crisis instability. In addition, the unilateral development and use of autonomous weapons by some States will provide strong incentives for their proliferation, including their use by actors not accountable to the legal frameworks governing the use of force. Do we really need this new competitive arms race?</p>
<p>On behalf of ICRAC, together with the other organizations involved in the Campaign to Stop Killer Robots, representing a large part of international civil society, we urge the Convention to lay the foundations for an international treaty whose main objective is the preventive prohibition of autonomous weapons, in clear application of the precautionary principle.</p>
<p>&nbsp;</p>
<p>***</p>
<p>&nbsp;</p>
<p>Señor presidente, representantes de las naciones, miembros de la sociedad civil,</p>
<p>&nbsp;</p>
<p>Durante los últimos 5 años en la Convención de armas Convencionales, hemos visto una mayor comprensión de los problemas y desafíos planteados por los sistemas de armamento autónomo. El ICRAC está satisfecho con el consenso general sobre la necesidad de retener el control humano sobre estos sistemas de armamento, en particular, sobre las funciones críticas de selección y eliminación de objetivos. Por ello consideramos que ha llegado el momento de establecer unos mecanismos legales vinculantes que restrinjan el uso de armamento autónomo subrayando la importancia del juicio humano en decisiones críticas.</p>
<p>&nbsp;</p>
<p>Durante nuestra participación en esta convención, hemos generado un gran número artículos científicos, libros e informes que enfatizan tres clases principales de riesgo.</p>
<p>&nbsp;</p>
<p>Primero, esta tipología de armas no puede garantizar el cumplimiento del derecho internacional humanitario. No debemos dar un cheque en blanco a la tecnología futura. Con la comercialización a gran escala de la IA es cierto que estamos observando una gran innovación en ámbitos benéficos para la humanidad, pero al mismo tiempo estamos comprobando la aparición de muchos problemas con sesgos en los algoritmos de decisión y de reconocimiento facial que pueden ser dramáticos si se aplican en un contexto bélico.</p>
<p>&nbsp;</p>
<p>Si las naciones invierten en base a especulaciones tecnocientíficas, creemos que será prácticamente imposible volver a la posición de partida cuando las nuevas tipologías de conflicto que anuncian estas armas, se materialicen. Instamos a los estados a considerar la veracidad de la tecnología actual y sus limitaciones en la selección crítica de objetivos legítimos.</p>
<p>&nbsp;</p>
<p>Segundo, hay considerables valores morales en riesgo. Ninguna máquina, computadora o algoritmo es capaz de reconocer a un ser humano como tal, ni puede respetarlo como un ser con derechos y dignidad humana. Solo lo observa como un bit de información. Una máquina, sin intuición, sin ética ni moral, ni siquiera puede entender lo que significa estar en estado de guerra, y mucho menos lo que significa terminar con una vida humana.</p>
<p>&nbsp;</p>
<p>Las decisiones para acabar con la vida humana deben ser tomadas por los humanos y de forma no arbitraria para ser justificadas. Además, no debemos confundir el hecho de que los humanos desarrollan programas informáticos con el objetivo que los resultados calculados de esos programas constituyan decisiones humanas. Si bien la responsabilidad por el despliegue de la fuerza letal es una condición necesaria para el cumplimiento de los estándares mínimos éticos en el conflicto armado, esa responsabilidad por sí sola no es suficiente, requiriendo también el reconocimiento de lo humano, de su dignidad, y la reflexión sobre el valor de la vida y la justificación del uso de la fuerza violenta.</p>
<p>&nbsp;</p>
<p>En tercer lugar, los sistemas de armas autónomos representan un gran peligro para la seguridad global. El umbral para la aplicación de la fuerza militar se reducirá y la probabilidad de conflicto aumentará. Nos preocupa que los mecanismos de control humano establecidos y controlados encaminados a una doble verificación y reconsideración, funcionen como cajas de seguridad o interruptores y puedan ser fácilmente desconectados. Esto, en combinación con interacciones de algoritmos imprevisibles y resultados impredecibles, aumentará la inestabilidad del conflicto. Además, el desarrollo y uso de armas autónomas por parte de algunos Estados proporcionará fuertes incentivos para su proliferación, incluido su uso por parte de actores que no son responsables ante los marcos legales que rigen el uso de la fuerza. ¿Realmente necesitamos esta nueva carrera competitiva de armamentos?</p>
<p>&nbsp;</p>
<p>Desde el ICRAC así como desde otras organizaciones involucradas en la Campaña Stop Killer Robots, en representación de una gran parte de la sociedad civil internacional, instamos a la Convención a sentar las bases para la elaboración de un tratado internacional que tenga como objetivo principal prohibir de manera preventiva las armas autónomas en aplicación clara del principio de precaución.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4341</post-id>	</item>
		<item>
		<title>ICRAC general statement at the August 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-general-statement-at-the-august-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Wed, 29 Aug 2018 15:28:10 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4263</guid>

					<description><![CDATA[As delivered by Prof. Noel Sharkey Mr Chairman, Over the last 5 years at the CCW we have seen an increased understanding of the issues and challenges posed by autonomous weapons systems. ICRAC is pleased with the general consensus that we must retain human control over weapons systems, in particular, over the critical functions of [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><em>As delivered by Prof. Noel Sharkey</em></p>
<p>Mr Chairman,</p>
<p>Over the last 5 years at the CCW we have seen an increased understanding of the issues and challenges posed by autonomous weapons systems. ICRAC is pleased with the general consensus that we must retain human control over weapons systems, in particular, over the critical functions of selecting and killing targets. During our time at the CCW, we have produced many scientific papers and reports emphasising three major classes of risk.</p>
<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="size-medium wp-image-4308 alignright" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?w=2000&amp;ssl=1 2000w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/IMG_20180829_172118314_BURST000_COVER_TOP.jpg?w=3000&amp;ssl=1 3000w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><strong>First</strong>, we do not believe that IHL compliance can be guaranteed with autonomous weapons systems. Some argue that the technology will be able to comply with IHL in the future. But there is absolutely no evidence for that. We must not rely on hopeware and speculations about future technology. With the mass scale commercialisation of AI we are seeing great innovation but we are also seeing the emergence of many problems with bias in decision algorithms and face recognition (see my new ICRC blog post for more on this). If nations invest heavily on the basis of technical speculations, we believe that it will be difficult to put the toothpaste back in the tube when the humanitarian crises begin to emerge. We urge states to look at the plausibility of the current technology and how it falls short in the critical function of selecting legitimate targets.</p>
<p><strong>Second,</strong> there are considerable moral values at risk. No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect the human as a being with human rights and human dignity.  A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end a human life. Decisions to end human life must be made by humans in order to be justified.  Further, we should not mistake the fact that humans write computer programs to imply that the calculated results of those programs constitute human decisions.  While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient for moral responsibility.  This also requires the recognition of the human, respect for the human right to life and dignity, and reflection upon the value of life and the justification for the use of violent force.</p>
<p><strong>Third</strong>, autonomous weapons systems pose great dangers to global security. The threshold for applying military force will be lowered and the likelihood of conflict will go up. We are concerned that tried and tested human control mechanisms for double checking and reconsidering, with humans functioning as fail-safes or circuit-breakers, would be discontinued. This, in combination with unforeseeable algorithm interactions and their unpredictable outcomes, increases crisis instability. In addition, the development and use of autonomous weapons by <em>some</em> States will provide strong incentives for their proliferation, including their use by actors not accountable to legal frameworks governing the use of force. Do we really need a new arms race?</p>
<p><strong>Finally,</strong> we urge that nations urgently move towards negotiations for a legally binding instrument in further deliberations next year. I am going off script here but look &#8211; come on – and no offence intended – but I am a scientist and not a diplomat so in plain speech there are those here who have an interest in slowing down the move towards a ban while they quickly continue to develop the weapons. Don’t be fooled or bullied by these tactics or the mudslide of refining definitions. We ask you &#8211; please &#8211; get on with ridding us of these morally reprehensible weapons before it is too late.</p>
<p>Thank you, Mr Chairman</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4263</post-id>	</item>
		<item>
		<title>ICRAC statement on the human control of weapons systems at the August 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-august-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Frank Sauer]]></dc:creator>
		<pubDate>Wed, 29 Aug 2018 08:40:49 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4246</guid>

					<description><![CDATA[As delivered by Dr. Elke Schwarz Thank you, Mr Chairperson, The International Committee for Robot Arms Control is pleased to see states move away from the use of broad, brush-stroke terms such as in-the-loop, on-the-loop, the wider loop, human oversight, and appropriate human judgement. We agree with the working paper from Estonia and Finland that [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4248" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/08/DlwQIN-XoAAcT7v.jpg-large.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" />As delivered by Dr. Elke Schwarz</p>
<p>Thank you, Mr Chairperson,</p>
<p>The International Committee for Robot Arms Control is pleased to see states move away from the use of broad, brush-stroke terms such as in-the-loop, on-the-loop, the wider loop, human oversight, and appropriate human judgement. We agree with the working paper from Estonia and Finland that complex definitions of autonomy and autonomous weapons systems are moving us in the wrong direction. As scientists we believe, following Einstein, that definitions should be as simple as possible but no simpler. We therefore applaud the approach of the ICRC that the focus should be on the <em>critical functions</em> of target selection and the application of violent force. This counters concerns that a prohibition of autonomous weapons systems (AWS) would impact innovation in other civilian and non-lethal military applications.</p>
<p>&nbsp;</p>
<p>ICRAC holds that the way forward is to focus on the meaningful human control of weapons systems. For human control to be <em>meaningful</em> we need to examine how humans interact with machines and understand the types of human-machine biases that can occur in the selection of legitimate targets. Lessons should be learned from 30 years of research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. A combination of this work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principles of humanity.</p>
<p>First, there should be a focus on what the human operator<strong> MUST</strong> <em>do</em> in the targeting cycle. This is <em>control in use</em> which is governed by targeting rules under International Humanitarian Law and International Human Rights Law. Further, international law rules that apply <em>after</em> the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>
<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating <em>without</em> meaningful human control.  This is <em>control by design</em>, which is governed by international weapons law. In terms of international weapons law, if the weapon system, by its design, is incapable of being sufficiently controlled, then such a weapon is illegal <em>per se. </em>Systems <strong>MUST</strong> be designed to ensure human responsibility and accountability.</p>
<p>Ideally the following three conditions should be followed for the control of weapons systems:</p>
<ol>
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and is able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.</li>
<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack and</li>
<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>
<p>For further details please see our guidelines for the human control of weapons systems from the April meeting this year.</p>
<p>While systems must be designed to ensure safety and responsibility, we should not mistake the review of weapons and good design as itself a form of human control. The responsibility to make decisions of life and death cannot be delegated to machines, nor to the review- or design process of those machines.</p>
<p>Thank you Mr Chairperson</p>
<p>&nbsp;</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Frank Sauer' src='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/7367abd54bcccab11252f513db7ac0ab9bd9b726dcc720fe7e55b9f594fdda9d?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.unibw.de/frank.sauer">Frank Sauer</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4246</post-id>	</item>
		<item>
		<title>ICRAC statement on the human control of weapons systems at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Wed, 11 Apr 2018 13:06:43 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=4006</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 11 April 2018 Mr Chairperson, We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-4007" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135-300x300.jpg?resize=300%2C300&#038;ssl=1" alt="" width="300" height="300" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?resize=300%2C300&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/IMG_5870-e1523451941135.jpg?w=640&amp;ssl=1 640w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 11 April 2018</p>
<p>Mr Chairperson,</p>
<p>We have been very pleased with this morning&#8217;s session as states begin to contemplate a move towards policies on the human control of weapons systems. On a pedantic note: we cannot talk about the meaningful human control of LAWS as that would make them no longer an autonomous weapon.</p>
<p>In the view of ICRAC, the control of weapons systems is more nuanced than can be captured by terms such as in-the-loop, on-the-loop, the broader loop, looping-the-loop, human oversight, and appropriate human judgement. In this way we agree strongly with the statement made by Brazil and several others in this session who believe that the devil is in the detail.</p>
<p>For human control to be meaningful we need to examine how humans interact with machines and understand the types of human-machine biases that can occur in the selection of legitimate targets. Lessons should be learned from 30 years of research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. A combination of this work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.</p>
<p>First, there should be a focus on what the human operator<strong> MUST</strong> do in the targeting cycle. This is control by use which is governed by targeting rules under International Humanitarian  Law and International Human Rights Law. Further, international law rules that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.</p>
<p>Second, the design of weapon systems must render them <strong>INCAPABLE</strong> of operating without meaningful human control. &nbsp;This is control by design, which is governed by international weapons law. In terms of international weapons law, if the weapon system, by its design, is incapable of being sufficiently controlled in terms of the law, then such a weapon is illegal <em>per se.</em></p>
<p>Ideally the following three conditions should be followed for the control of weapons systems:</p>
<ol>
<li>a human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack and is able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.</li>
<li>there will be active cognitive participation in every attack with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack and</li>
<li>there will be a means for the rapid suspension or abortion of every attack.</li>
</ol>
<p>Thank you Mr Chairperson</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel SharkeyPhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and  was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4006</post-id>	</item>
		<item>
		<title>Short ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/short-icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[nsharkey]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:06:41 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3993</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Prof. Noel Sharkey, on 10 April 2018 Mr. Chairperson, There have been very useful and interesting discussions this morning. I speak here as chair of an academic NGO: the International Committee for Robot Arms Control (ICRAC) and as a member [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel Sharkey PhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3994" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?resize=1024%2C768&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/DaaaODtW4AEjomI.jpg-large.jpg?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Prof. Noel Sharkey, on 10 April 2018</p>
<p>Mr. Chairperson,</p>
<p>There have been very useful and interesting discussions this morning.</p>
<p>I speak here as chair of an academic NGO, the International Committee for Robot Arms Control (ICRAC), and as a member of the scientific community in the field of Artificial Intelligence and Robotics, with a specialty in Machine Learning.</p>
<p>We stress again that it would be confusing to broaden the discussion of LAWS into issues about Artificial Intelligence or weapons with emerging intelligence. By chasing definitions of LAWS down the rabbit hole of AI, we remove ourselves from the key issues that need to be urgently discussed here. The definition extracted from the ICRC, and echoed by a number of states this morning, is concerned with weapons that have autonomy in the critical functions of target selection and the application of force. This is sufficient for our definitional purposes here: decisions about target selection and the application of force are delegated to a machine. <strong>Let me highlight that it does not matter what techniques or computing methods are used to create autonomy in these critical functions.</strong></p>
<p>What is important here are questions about the nature of human control required and acceptable to ensure compliance with international law. <strong>It is key that we get this right.</strong> You can read more about this in ICRAC’s new working paper <strong>Guidelines for the Human Control of Weapons Systems</strong> that will be delivered at Wednesday’s side event. We support those states who have stated that the focus of this meeting should be on human control of weapons systems and human-machine interaction. In this way we can make real progress this week and protect our future no matter what the technological developments.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='nsharkey' src='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/e6cd227594f64421151214d3d51a2a80df88e84aa4bd648da1116ba45dffc7e0?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">nsharkey</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Noel Sharkey PhD, DSc FIET, FBCS CITP FRIN FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield and was an EPSRC Senior Media Fellow (2004-2010).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3993</post-id>	</item>
		<item>
		<title>ICRAC Statement at the April 2018 CCW GGE</title>
		<link>https://www.icrac.net/icrac-statement-at-the-april-2018-ccw-gge/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Mon, 09 Apr 2018 13:56:04 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Statements]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3975</guid>

					<description><![CDATA[International Committee for Robot Arms Control Statement to the UN GGE Meeting 2018 Delivered by Dr Thompson Chengeta, on 9 April 2018 Mr. Chairperson, I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3979" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/Thompson2018.jpg?w=1024&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>International Committee for Robot Arms Control<br />
Statement to the UN GGE Meeting 2018<br />
Delivered by Dr Thompson Chengeta, on 9 April 2018</p>
<p><iframe loading="lazy" title="Dr Thompson Chengeta Statement on behalf of the International Committee for Robot Arms Control" width="500" height="281" src="https://www.youtube.com/embed/ALvbgCAfBW8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Mr. Chairperson,</p>
<p>I speak on behalf of the International Committee for Robot Arms Control [ICRAC], a founding member of the Campaign to Stop Killer Robots. Ambassador Gill, we thank you for your important work. Mr Chairperson, we are going to focus here on four points:</p>
<p>FIRST, a ban on LAWS will have no negative impact on the development of socially beneficial uses of autonomy, robotics or artificial intelligence. In fact, such a ban will direct more resources and specialists to work on humanitarian and beneficial applications.</p>
<p>SECOND, human control of weapon systems is a key component of the present discussions. It does not matter what name or term is used to describe human control; what is imperative is that we make sure that human control is consistent with applicable legal, ethical and moral standards.</p>
<p>THIRD, human input in the making of judgements to use violent force is at the centre of the legal, ethical and moral standards pertaining to human responsibility for the use of such force. No matter how attractive, if a proposed definition of human control does not resolve the accountability gap challenge, then such a proposal is legally inadequate. To that end, States should ask the question: What is the Legally Required Level of Human Control at each “touch point” in the human-machine interaction chain? At every step in the development, deployment, targeting and use of a weapon system, there is an obligation to ensure that the system is capable of being used in compliance with applicable legal norms.</p>
<p>FOURTH, the emphasis in the Polish and ICRC Working Papers on ethics, and their reassertion of the Principle of Non-Delegation of the Authority to Kill to non-human mechanisms, is worth noting. Dictates of public conscience must always take precedence over any short-term advantage that might be gained from autonomous technologies. Furthermore, respect for human rights and human dignity, even within armed conflict, is a moral imperative recognized by the UN and the CCW. ICRAC reiterates the spirit of the Martens Clause—that morality can provide a strong basis for new law.</p>
<p>Finally, human control over critical functions of weapon systems and a ban on fully autonomous weapon systems are two sides of the same coin. States are urged to focus on the requirement of human control rather than technical definitions of autonomy. Further, States must move towards negotiation of a legally binding instrument on this issue.</p>
<p>Mr Chairperson, I thank you.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3975</post-id>	</item>
		<item>
		<title>Unpriming the pump: Remystifications of AI at the UN’s Convention on Certain Conventional Weapons</title>
		<link>https://www.icrac.net/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/</link>
		
		<dc:creator><![CDATA[Lucy Suchman]]></dc:creator>
		<pubDate>Sun, 08 Apr 2018 22:43:38 +0000</pubDate>
				<category><![CDATA[Front Page]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opinion]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=3957</guid>

					<description><![CDATA[*Originally published on the &#8220;Robot Futures Blog&#8221; In the lead up to the next meeting of the CCW’s Group of Governmental Experts at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence. Designated a primer for CCW [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><img data-recalc-dims="1" loading="lazy" decoding="async" class="alignnone size-medium wp-image-3958" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&#038;ssl=1" alt="" width="300" height="225" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=300%2C225&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=160%2C120&amp;ssl=1 160w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=768%2C576&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2018/04/2015-04-14-10.16.53-1024x7681.jpg?resize=1024%2C7681&amp;ssl=1 1024w" sizes="auto, (max-width: 300px) 100vw, 300px" /><br />
*Originally published on the <a href="https://robotfutures.wordpress.com/2018/04/07/unpriming-the-pump-remystifications-of-ai-at-the-uns-convention-on-certain-conventional-weapons/">&#8220;Robot Futures Blog&#8221;</a></p>
<p>In the lead up to the next meeting of the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">CCW’s Group of Governmental Experts</a> at the United Nations April 9-13th in Geneva, the UN’s Institute for Disarmament Research has issued a briefing paper titled <a href="http://www.unidir.ch/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-artificial-intelligence-en-700.pdf">The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence</a>. Designated <em>a primer for CCW delegates</em>, the paper lists no authors, but a special acknowledgement to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, suggests that the viewpoints of the Washington, D.C.-based <a href="https://www.cnas.org/">CNAS </a>are well represented.</p>
<p>Surprisingly for a document positioning itself as “an introductory primer for non-technical audiences on the current state of AI and machine learning, designed to support the international discussions on the weaponization of increasingly autonomous technologies” (pp. 1-2), the paper opens with a series of assertions regarding “rapid advances” in the field of AI. The evidence offered is the case of Google/Alphabet affiliate Deep Mind’s AlphaGo Zero, announced in December 2017 (“only a few weeks after the November 2017 GGE”) as having achieved better-than-human competency at (simulations of) the game of Go:</p>
<p style="padding-left: 30px;">Although AlphaGo Zero does not have direct military applications, it suggests that current AI technology can be used to solve narrowly defined problems provided that there is a clear goal, the environment is sufficiently constrained, and interactions can be simulated so that computers can learn over time (p.1).</p>
<p>The requirements listed – a clear (read computationally specifiable) goal, within a constrained environment that can be effectively simulated – might be underscored as cautionary qualifications on claims for AI’s applicability to military operations. The tone of these opening paragraphs suggests, however, that these developments are game-changers for the GGE debate.</p>
<p>The paper’s first section, titled ‘What is artificial intelligence,’ opens with the tautological statement that “Artificial intelligence is the field of study devoted to making machines intelligent” (p. 2). A more demystifying description might say, for example, that AI is the field of study devoted to developing computational technologies that automate aspects of human activity conventionally understood to require intelligence. While the authors observe that as systems become more established they shift from characterizations of “intelligence” to more mundane designations like “automation” or “computation,” they suggest that, rather than being the result of demystification, this is itself somehow an effect of the field’s advancement. One implication of this logic is that the ever-receding horizon of machine intelligence should be understood not as a marker of the technology’s limits, but of its success.</p>
<p>We begin to get a more concrete sense of the field in the section titled ‘Machine learning,’ which outlines the latter’s various forms. Even here, however, issues central to the deliberations of the GGE are passed over. For example, in the statement that “[r]ather than follow a proscribed [sic] set of <em>if–then </em>rules for how to behave in a given situation, learning machines are given a goal to optimize – for example, winning at the game of chess” (p. 2) the example is not chosen at random, but rather is illustrative of the unstated requirement that the ‘goal’ be computationally specifiable. The authors do helpfully explain that “[s]upervised learning is a machine learning technique <em>that makes use of labelled training data</em>” (my emphasis, p. 3), but the contrast with “unsupervised learning,” or “learning from unlabelled data based on the identification of patterns” fails to emphasize the role of the human in assessing the relevance and significance of patterns identified. In the case of reinforcement learning “in which an agent learns by interacting with its environment,” the (unmarked) examples are again from strategy games in which, implicitly, the range of agent/environment interactions are sufficiently constrained. And finally, the section on ‘Deep learning’ helpfully emphasizes that so called neural networks rely either on very large data sets and extensive labours of human classification (for example, the labeling of images to enable their ‘recognition’), or on domains amenable to the generation of synthetic ‘data’ through simulation (for example, in the case of strategy games like Go). Progress in AI, in sum, has been tied to growth in the availability of large data sets and associated computational power, along with increasingly sophisticated algorithms within highly constrained domains of application.</p>
<p>Yet in spite of these qualifications, the concluding sections of the paper return to the prospects for increasing machine autonomy:</p>
<p style="padding-left: 30px;">Intelligence is a system’s ability to <em>determine the best course of action </em>to achieve its goals. Autonomy is the <em>freedom </em>a system has in accomplishing its goals. Greater autonomy means more freedom, either in the form of undertaking more tasks, with less supervision, for longer periods in space and time, or in more complex environments … Intelligence is related to autonomy in that more intelligent systems are capable of deciding the best course of action for more difficult tasks in more complex environments. This means that more intelligent systems <em>could </em>be granted more autonomy and would be capable of successfully accomplishing their goals (p. 5, original emphasis).</p>
<p>The logical leap exemplified in this passage’s closing sentence is at the crux of the debate regarding lethal autonomous weapon systems. The authors of the primer concede that “all AI systems in existence today fall under the broad category of “narrow AI”. This means that their intelligence is limited to a single task or domain of knowledge” (p. 5). They acknowledge as well that “many advance [sic] AI and machine learning methods suffer from problems of predictability, explainability, verifiability, and reliability” (p. 8). These are precisely the concerns that have been consistently voiced, over the past five meetings of the CCW, by those states and civil society organizations calling for a ban on autonomous weapon systems. And yet the primer takes us back, once again, to a starting point premised on general claims for the field of AI’s “rapid advance,” rather than careful articulation of its limits. Is it not the latter that are most relevant to the questions that the GGE is convened to consider?</p>
<p>The UNIDIR primer comes at the same time that the United States has issued a new position paper in advance of the CCW titled ‘Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems’ (<a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">CCW/GGE.1/2018/WP.4</a>). While the US has taken a cautionary position in relation to lethal autonomous weapon systems in past meetings, asserting the efficacy of already-existing weapons reviews to address the concerns raised by other member states and civil society groups, it now appears to be moving in the direction of active promotion of LAWS on the grounds of promised increases in precision and greater accuracy of targeting, with associated limits on unintended civilian casualties – promises that have been extensively critiqued at previous CCW meetings. Taken together, the UNIDIR primer and the US working paper suggest that, rather than moving forward from the debates of the past five years, the 2018 meetings of the CCW will require renewed efforts to articulate the limits of AI, and their relevance to the CCW’s charter to enact Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Lucy Suchman' src='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/570cdedb8f03c7ac5b3d673183a83551a9a680092a081298dab642fdc3fb15d1?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="https://www.lancaster.ac.uk/sociology/people/lucy-suchman">Lucy Suchman</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Lucy Suchman is a Professor of the Anthropology of Science and Technology at Lancaster University in the UK. Before taking up her present post she was a Principal Scientist at Xerox's Palo Alto Research Center (PARC), where she spent twenty years as a researcher. During this period she became widely recognized for her critical engagement with artificial intelligence (AI), as well as her contributions to a deeper understanding of both the essential connections and the profound differences between humans and machines.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3957</post-id>	</item>
		<item>
		<title>NYT warns of killer robot gap</title>
		<link>https://www.icrac.net/nyt-warns-killer-robot-gap/</link>
		
		<dc:creator><![CDATA[Mark Gubrud]]></dc:creator>
		<pubDate>Sun, 29 Sep 2013 15:08:44 +0000</pubDate>
				<category><![CDATA[Analysis]]></category>
		<category><![CDATA[Front Page]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2174</guid>

					<description><![CDATA[New York Times science writer John Markoff reported on Sept.23 that the US military “lags” in development of unmanned ground vehicles (UGVs), which is sort of true if you compare the status of UGVs with that of unmanned air vehicles (UAVs). The real reason, as Markoff acknowledges, has to do with the technical difficulty of locomotion on [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Mark Gubrud' src='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Mark Gubrud</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p>New York Times science writer John Markoff <a href="http://www.nytimes.com/2013/09/24/science/military-lags-in-push-for-robotic-ground-vehicles.html">reported on Sept. 23</a> that the US military “lags” in development of unmanned ground vehicles (UGVs), which is sort of true if you compare the status of UGVs with that of unmanned air vehicles (UAVs). The real reason, as Markoff acknowledges, has to do with the technical difficulty of locomotion over rough and varied terrain, around obstacles and through impediments, as compared with the relative ease of flying through unobstructed air. But that didn’t prevent <a href="http://gawker.com/the-army-desperately-needs-more-killer-robots-1370505757">Gawker</a>, <a href="http://digg.com/search?q=markoff">Digg</a>, and numerous tweeters from reading the article as a warning that the US military is falling behind someone in a race for “killer robots.”</p>
<p>Actually, Markoff does clearly say that the Pentagon is falling behind Google. In fact, the entire article seems to suggest that the military has neglected land robots, which is simply untrue, as I explain in <a href="http://gubrud.net/?p=49">my response</a>, posted on my own blog.</p>
<p>I felt compelled to respond because, just days before Markoff’s article was posted (it also appeared in the Sept. 24 New York print edition), I had published in the Bulletin of the Atomic Scientists an <a href="http://thebulletin.org/us-killer-robot-policy-full-speed-ahead">analysis of US policy for autonomous weapons</a> showing that it is in fact an aggressive, “full speed ahead” policy. I don’t know whether Markoff read my piece before writing his, but if you put the two side by side and step back until only the headlines are legible, his looks like a rebuttal of mine. He is far too good a writer to say something that is actually wrong, however, and he never directly contradicts anything I wrote.</p>
<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Mark Gubrud' src='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a0ed93015aa261386521e2fdb3b63ff65d79da29491562533b052108724bcdcc?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong>Mark Gubrud</strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em"></div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2174</post-id>	</item>
	</channel>
</rss>
