The International Committee for Robot Arms Control (ICRAC) welcomes the opportunity to submit its perspectives and recommendations for consideration by the United Nations Secretary-General with respect to Resolution 78/241 on Lethal Autonomous Weapon Systems (adopted in December 2023). Founded in 2009, ICRAC is an international committee of experts in robotics technology, artificial intelligence, robot ethics, international relations, international security, arms control, international humanitarian law, international human rights law and the philosophy of technology. We are deeply concerned about the pressing dangers that military robotics and automation pose to international peace, security and stability, and to the rights and safety of civilians in war. Based on our expertise, we are particularly concerned that military robotic systems will lead to more frequent, less restrained, and less accountable armed conflict. In light of these risks, we call for an international treaty to prohibit and restrict autonomous weapon systems.
As has been discussed in detail at the CCW GGE over the past decade, autonomous weapon systems (AWS) raise serious concerns for international humanitarian law with regard to compliance with the principles of distinction and proportionality. AWS also pose a stark risk of triggering arms proliferation, including the spread of these weapon systems to non-state armed groups, among other actors. The use of AWS may further spill over into national and transnational organized crime, as well as into domestic policing. At the same time, serious operational concerns remain regarding accountability, bias and the use of machine-learning algorithms that may develop beyond the capacity of “the human in the loop.” Replacing human decision making with machine decision making also poses serious risks to regional and global stability, as it becomes more difficult for political and military leaders to anticipate and interpret the intentions, decisions and actions of their adversaries, and thus to find ways to avoid or de-escalate conflicts.
We also note the threat that AWS pose to compliance with international human rights law, particularly the right to life, the prohibition of torture and cruel and inhuman treatment, and above all the human right to dignity. We fear that an additional protocol to the CCW would fail to address these human rights concerns. We are also concerned that the automated targeting and release of non-conventional weapons, including nuclear weapons, may fall outside the scope of any legally binding CCW protocol. We thus advocate for and support all calls for a legally binding instrument to prohibit and restrict the use of AWS, and urge the Secretary-General to encourage the initiation of a forum within the United Nations General Assembly that can include all States, cover autonomy and automation in the use of all weapons, and address international humanitarian law as well as human rights concerns.
This submission is informed by our comprehensive interdisciplinary expertise. We have published extensively on the ethical, legal, technical and security challenges of autonomous weapon systems, on the question of meaningful human control, and on the challenges of escalation at speed.
Scope
In accordance with the International Committee of the Red Cross (ICRC), we understand an autonomous weapon system as one that, potentially after initial activation or launch by a human, selects targets based on sensor data and engages those targets without human intervention. We endorse the ICRC's recommendation of a two-tiered approach that prohibits unpredictable systems and systems that explicitly target humans, while strictly regulating the use of autonomy in all other systems for the command, control and engagement of lethal force. This includes restrictions on the time, space, scope and scale of such systems' operations, as well as on the types of targets and the situations in which they may be used. In particular, we strongly agree that the only permissible targets of such systems should be military objectives by nature, and never civilian or dual-use targets, which should always require human judgment. More discussion is needed on the appropriate forms and regulation of human-machine interaction in complex command and control systems. In particular, as computers and artificial intelligence collect and automatically analyze ever more data, greater clarity is needed on what constitutes meaningful human control in the context of automated target generation and identification, and on how to ensure respect and responsibility for international law when such systems are used.
Key Challenges to Global Peace and Security
- Uncontrolled Escalation and Missed Opportunities for De-escalation and Diplomacy
The technical characteristics of AWS pose a considerable risk of uncontrolled escalation at speed. As thresholds for applying military force are lowered, the likelihood of conflict will rise. Actions and reactions to an adversary will have to be programmed in advance. Two AWS swarms operating in close proximity in international airspace, for example, might interact in ways that a human could not mitigate or control within an appropriate time window. In case of an enemy attack, even a few seconds' delay could mean the loss of one's own systems; there will thus be strong pressure for fast counterattacks that preclude human deliberation.
Escalation from crisis to war, or the escalation of a conflict to a higher level of violence, could come about through erroneous indications of attack or a simple sensor or computer error. The mutual interaction between adversaries' control programs cannot be tested in advance. The outcome of the interaction of such complex systems is intrinsically unpredictable, but fast escalation is both possible and likely. In a severe crisis marked by fear of preemption, this could greatly destabilize the military situation between potential enemies.
As political and military leaders become increasingly dependent on systems they cannot explain or predict, traditional means of conflict resolution and de-escalation will become more difficult or impossible. Unpredictable systems will give leaders false impressions of their capabilities, breeding overconfidence or encouraging preemptive attacks. Moreover, automated attacks, responses and escalations will make it more difficult for leaders to interpret the intentions, decisions and actions of their adversaries, and will limit their options for response. Systems that automatically react or attack may miss opportunities to achieve military objectives by other, less violent means, or may preclude diplomatic or political resolutions to a conflict. The overall effect of these systems will be to close off avenues for avoiding conflicts, de-escalating conflicts, and finding means to end hostilities.
- Moral Responsibility
No machine, computer or algorithm is capable of recognizing a human as a human being, nor can it respect humans as inherent bearers of rights and dignity. A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end, a human life. Decisions to end human life must be made by humans in order to be morally justifiable. These are responsibilities of unavoidable moral weight that cannot be delegated to machines or satisfied by the mere inclusion of humans in the writing of computer programs. While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, it is not a sufficient one. Moral responsibility also requires the recognition of the human, respect for the human rights to life and dignity, and reflection on the value of life and the justification for the use of violent force.
- Meaningful Human Control
Much hinges on the degree to which AWS can be meaningfully controlled by humans. Robust research in human psychology shows that humans face cognitive limitations when interacting with technological and computational systems. One such limitation, automation bias, hinders the human from maintaining sufficient contextual understanding to intervene in systems that operate fully autonomously at speeds beyond human capabilities. To safeguard meaningful human control (not merely functional control) over AI-enabled AWS, those involved in operating or deciding to deploy AWS should have full contextual and situational awareness of the target area at the time of a specific attack. They must also be able to perceive and react to changes or unanticipated situations that arise; participate actively and deliberately in the action; have sufficient training in and understanding of the system and its likely actions; have adequate time for meaningful control; and have the means and knowledge to rapidly suspend an action. For many AWS this is not possible. Meaningful human control is fundamental to the edifice of the laws of war and the ethics of war.
Moving Forward: A Treaty to Prohibit and Regulate the Use of AWS
We support calls from States, as well as from the UN Secretary-General and the President of the ICRC, for a legally binding international treaty prohibiting and regulating the use of AWS.
What is needed is a legally binding instrument that obligates States to adhere to prohibitions and regulatory limitations on AWS. Codes of conduct and political declarations are not enough for systems that pose such grave risks to global peace and security. This instrument must apply to the automated control of all weapons, and must require meaningful human control, in compliance with substantive regulations on the use of force, in all cases. Such a treaty should apply to all military uses of AWS and of systems that generate or select targets, as well as to all police, border security and other civilian applications that automate the use of force.
The treaty should prohibit autonomous weapon systems that are ethically or legally unacceptable. This includes systems whose operation or effects cannot be sufficiently understood, predicted and explained; systems that cannot be used under meaningful human control; and systems designed to target human beings.
The treaty should also include positive obligations, so that States may use permitted AWS only within the bounds of clearly stipulated regulations that ensure adherence to international human rights law and the key principles of international humanitarian law. We believe that an emerging norm of meaningful human control can be articulated and codified through a treaty negotiation in a process that includes all States, civil society, and industry and technical experts. We urge the Secretary-General to advance the creation of such a forum within the General Assembly, and we look forward to offering our expertise to those discussions.