ICRAC statement at the March 2019 CCW GGE

Posted on 26 March 2019 by Peter Asaro

As delivered by Prof. Peter Asaro

ICRAC has been pleased to hear states shift their focus away from definitions of the technologies of autonomous weapons systems and towards discussing restrictions on their use and how they should be controlled. Of course, by definition, if states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems. And (as an aside) we should not forget the scientifically recognized limitations of the technology or the foreseeable threats to global security that such weapons pose.

We are also pleased with the statements and working papers beginning to examine the requirements for human control and planning in military systems. While this is multifaceted, we must not let the complexity of military planning throw a smokescreen over the core issues: meaningful human assessment of all targets, their legitimacy, and the proportionate use of force.

We are glad to see the beginnings of a more nuanced approach to the control of weapons systems, one that cannot be captured by gross terms such as in-the-loop, on-the-loop, the broader loop, human oversight, and appropriate levels of human judgement. However, these terms continue to insinuate themselves into military, political and defence contractors' narratives outside of the CCW. We welcome the suggestion of the IPRAW report to distinguish control-by-design from control-in-use, acknowledging that ultimate responsibility for the use of force lies in the specific context of its use.

As a scientific and scholarly group, our focus is on how we can make control effective and ensure that operators, commanders and planners are making clear judgements about the validity of every attack at the time of that attack.

To do this we need to move away from blanket terms and examine in detail how humans interact with automated machinery. As we have pointed out before, there are more than 30 years of scientific research on human supervisory control of machinery and more than 100 years of research on the psychology of human reasoning. Ignoring the science for the sake of expediency could lead us down a path to humanitarian disaster.

The scientific approach does not exclude an examination of the military control of weapons and the many lessons to be learned from current methods. Indeed, we applaud the UK's paper on human control in 2018 and those of the Netherlands and others this year. We may not agree with all of the detail, but it is what we have urged all of the high contracting parties to bring to the table.

This combination of work can help us to design human-machine interfaces that allow weapons to be controlled in a manner that is fully compliant with international law and the principle of humanity.

First, there should be a focus on what the human operator MUST do in the targeting cycle. This is control-in-use, which is governed by the targeting rules of International Humanitarian Law and International Human Rights Law, which were well articulated by the ICRC in their statement this morning. Further, the rules of international law that apply after the use of weapons – such as those that relate to human responsibility – must be satisfied.

Second, the design of weapon systems must render them INCAPABLE of operating without meaningful human control. This is control-by-design, which is governed by international weapons law. If a weapon system, by its design, is incapable of being sufficiently controlled to comply with that law, then the weapon should be prohibited.

We need further discussion of the details of human-machine interfaces, the distribution of responsibility in the targeting cycle, and how their design can ensure IHL and IHRL compliance. Such details need not be the substance of a treaty, and we must resist being caught up in the weeds of process. We support Germany's goal of finding a shared understanding of the principles of human control that apply to all weapons systems now and in the future, regardless of context, planning or process.

This is no different from the normal processes of science. One of the goals of science is to reduce the complexity of the world to simple theories or principles that capture all of the experimental data. In other words, we create abstractions of the details that are firmly coupled with and informed by the details. As Einstein once said, explanations should be as simple as possible but no simpler. "Human in the loop" and its variants fall into the too-simple category. Detailed accounts of every weapon type and how it is controlled in every context are far too complex.

Let me give you an example of an abstraction with three conditions that could make a good starting point for discussions on the control of weapons systems. I have said this before, but clearly there is no prohibition on repeating oneself in this room.

  1. A human commander (or operator) will have full contextual and situational awareness of the target area for each and every attack, and will be able to perceive and react to any change or unanticipated situation that may have arisen since the attack was planned.
  2. There will be active cognitive participation in every attack, with sufficient time for deliberation on the nature of any target, its significance in terms of the necessity and appropriateness of attack, and the likely incidental and possible accidental effects of the attack.
  3. There will be a means for the rapid suspension or aborting of every attack.

These are general principles that could provide a starting point for discussion by states in the context of negotiating a legally binding treaty that clearly articulates the legal obligations of human control.

Peter Asaro
Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media. His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues. Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research. He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities. Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.