As delivered by Prof. Noel Sharkey
Mr Chairman,
Over the last five years at the CCW, we have seen a growing understanding of the issues and challenges posed by autonomous weapons systems. ICRAC is pleased with the general consensus that we must retain human control over weapons systems, in particular over the critical functions of selecting and killing targets. During our time at the CCW, we have produced many scientific papers and reports emphasising three major classes of risk.
First, we do not believe that IHL compliance can be guaranteed with autonomous weapons systems. Some argue that the technology will be able to comply with IHL in the future, but there is absolutely no evidence for that. We must not rely on hopeware and speculation about future technology. With the mass-scale commercialisation of AI we are seeing great innovation, but we are also seeing the emergence of serious problems with bias in decision algorithms and face recognition (see my new ICRC blog post for more on this). If nations invest heavily on the basis of technological speculation, we believe it will be difficult to put the toothpaste back in the tube when the humanitarian crises begin to emerge. We urge states to examine what current technology can plausibly achieve, and how it falls short in the critical function of selecting legitimate targets.
Second, considerable moral values are at risk. No machine, computer or algorithm is capable of recognising a human as a human being, nor can it respect a human as a being with human rights and human dignity. A machine cannot even understand what it means to be in a state of war, much less what it means to have, or to end, a human life. Decisions to end human life must be made by humans in order to be justified. Further, the fact that humans write computer programs does not mean that the calculated results of those programs constitute human decisions. While accountability for the deployment of lethal force is a necessary condition for moral responsibility in war, accountability alone is not sufficient. Moral responsibility also requires the recognition of the human, respect for the human right to life and dignity, and reflection on the value of life and the justification for the use of violent force.
Third, autonomous weapons systems pose grave dangers to global security. The threshold for applying military force will be lowered and the likelihood of conflict will rise. We are concerned that tried and tested mechanisms of human control, double-checking and reconsidering decisions, with humans functioning as fail-safes or circuit-breakers, would be discontinued. This, combined with unforeseeable interactions between algorithms and their unpredictable outcomes, increases crisis instability. In addition, the development and use of autonomous weapons by some states will create strong incentives for proliferation, including their use by actors not accountable to the legal frameworks governing the use of force. Do we really need a new arms race?
Finally, we urge nations to move urgently towards negotiations for a legally binding instrument in further deliberations next year. I am going off script here, but, with no offence intended, I am a scientist and not a diplomat, so let me speak plainly: there are those here who have an interest in slowing down the move towards a ban while they quickly continue to develop these weapons. Don't be fooled or bullied by these tactics, or by the mudslide of endlessly refined definitions. We ask you, please: get on with ridding us of these morally reprehensible weapons before it is too late.
Thank you, Mr Chairman.