UNGA LAWS Informals
ICRAC Statement on Technical Considerations
Delivered by Prof. Peter Asaro, 13 May 2025
Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a co-founding member of the Stop Killer Robots Campaign.
ICRAC has many concerns about the development and use of autonomous weapons and the accelerated production and promotion of these systems by private technology companies. Far from being technically inevitable and practically necessary, autonomous weapons pose a considerable risk to global stability and security, and are likely to cause more civilian harm, not less. As a group of scholars with expertise in relevant domains, including robotics, AI, and digital information systems, we strongly urge caution.
We are concerned that the technology that underpins the functionalities of autonomous weapons systems (AWS) is dangerously unsuitable for the complex and dynamic contexts of conflict. Specifically, the AI element in AWS poses considerable risks. Testing such systems is difficult and time-consuming, and the tools and methods for the verification and validation of AI systems do not yet exist, if they are possible at all. The questionable reliability of predictions based on historical data, when applied to dynamically unfolding situations in conflict, raises further questions regarding the validity and legality of using AI-supported AWS.
At best, AI-supported systems are only as good as the data on which they are trained, and appropriate, comprehensive, and up-to-date data are hard to come by in contested conflict spaces. AI systems need frequent updates to remain relevant and functional, but each substantial update may compromise vital aspects of the system, requiring further verification and validation.
As we heard from the presenters, it is well known in technology and industry circles that AI systems remain unproven in their reliability for safety-critical applications and for complex situations such as armed conflict. They are known to give inaccurate outputs, and newer generative AI systems, which are likely to find their way into the wider AWS environment, are known to hallucinate: that is, they give false or misleading outputs that are difficult to distinguish from accurate results. In the case of generative AI, this behavior is inherent to the technical architecture, and these types of errors can only be managed, not eliminated. When AI experts and those who make the technologies used in AWS raise alarms about the inadequacies of AWS for conflict, we should listen.
We are concerned that the technical characteristics of AWS pose a considerable risk of enabling uncontrolled escalation and conflict at speed. Escalation from crisis to war, or of a conflict to a higher level of violence, could come about through erroneous indications of attack or a simple sensor or computer error. Unpredictable systems, and systems that operators cannot understand or explain, will give leaders false impressions of their capabilities, encouraging overconfidence or pre-emptive attacks. This will lead to greater global instability and insecurity.
Finally, AWS pose operational risks in that they give the illusion that such weapons are more precise and accurate, and will therefore inflict less harm. The extensive use of AI in current conflicts indicates that the contrary may be the case. This is particularly so for database-driven systems that generate target lists faster than humans can evaluate and verify the lawfulness of the targets. The technical capacity for precision or accuracy is not a warrant for discrimination or proportionality in use. Unless we establish clear, legally binding limitations on AWS, there is no safeguard against systems that prioritize speed and scale being used in an indiscriminate and disproportionate manner, whether intentionally or because humans have abdicated their judgement to a machine.
Thank you.