Statement on Technical Considerations in Open Informal Meeting at UNGA 1st Committee

Posted on 13 May 2025 by Peter Asaro

UNGA LAWS Informals
ICRAC Statement on Technical Considerations
Delivered by Prof. Peter Asaro, 13 May 2025


Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a co-founding member of the Stop Killer Robots Campaign.

ICRAC has many concerns about the development and use of autonomous weapons and about the accelerated production and promotion of these systems by private technology companies. Far from being technically inevitable and practically necessary, autonomous weapons pose a considerable risk to global stability and security, and are likely to cause more civilian harm rather than less. As a group of scholars with expertise in relevant domains, including robotics, AI, and digital information systems, we strongly urge caution.

We are concerned that the technology that underpins the functionalities of AWS is dangerously unsuitable for the complex and dynamic contexts of conflict. Specifically, the AI element in AWS poses considerable risks. Testing such systems is difficult and time-consuming, and the tools and methods for the verification and validation of AI systems do not yet exist, if they are possible at all. The questionable reliability of predictions based on historical data, when applied to dynamically unfolding situations in conflict, raises further doubts regarding the validity and legality of using AI-supported AWS.

At best, AI-supported systems are only as good as the data on which they are trained, and appropriate, comprehensive and up-to-date data is hard to come by in contested conflict spaces. AI systems need frequent updates to remain relevant and functional, but with each substantial update, vital aspects of the system may become compromised, requiring further verification and validation.

As we heard from these presenters, it is well known in technology and industry circles that AI systems remain unproven in terms of reliability for safety-critical and complex situations such as armed conflict. They are known to give inaccurate outputs, and newer generative AI systems, which are likely to find their way into the wider AWS environment, are known to hallucinate – that is, they give false or misleading outputs that are difficult to distinguish from accurate results. In the case of generative AI, this behavior is guaranteed by its technical architecture, and these types of errors can only be managed, not eliminated. When AI experts and those who make the technologies used in AWS raise alarms about the inadequacies of AWS for conflict, we should listen.

We are concerned that the technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation and conflict at speed. Escalation from crisis to war, or escalation of a conflict to a higher level of violence, could come about due to erroneous indications of attack or a simple sensor or computer error. Unpredictable systems, and systems whose behavior operators cannot understand or explain, will give leaders false impressions of their capabilities, leading to overconfidence or encouraging pre-emptive attacks. This will lead to greater global instability and insecurity.

Finally, there are operational risks posed by AWS in that they create the illusion that such weapons are more precise and accurate, and will therefore inflict less harm. The extensive use of AI in current conflicts indicates that the contrary may be the case. This is particularly so for database-driven systems that generate target lists faster than humans can evaluate and verify the lawfulness of targets. The technical capacity for precision or accuracy is not a warrant for discrimination or proportionality in use. Unless we establish clear, legally binding limitations on AWS, there is no safeguard against systems that prioritize speed and scale being used in an indiscriminate and disproportionate manner, either intentionally or because humans have abdicated their judgement to a machine.

Thank you.

Peter Asaro
Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media. His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as they apply to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer-reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues. Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputing Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research. He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities. Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy and a Master of Computer Science from the Department of Computer Science.

