On April 14, ICRAC’s Noel Sharkey delivered the following statement to the informal “Meeting of Experts”, gathered to discuss questions related to “lethal autonomous weapons systems” from April 11 to April 15 at the United Nations in Geneva, Switzerland.
Thank you for allowing the International Committee for Robot Arms Control this opportunity to comment after such an excellent and insightful panel.
There has been a lot of talk at the CCW that autonomous weapons systems do not yet exist. This is partially true. But that statement hides the fact that a number of states are openly and publicly rushing to develop them. We will not name names here, but a quick internet search will reveal a great deal (although there’s much more that we don’t have access to). We are seeing the development of autonomous tanks, autonomous fighter planes, autonomous ships and autonomous submarines. And the intentions should be absolutely clear to everyone.
It would be foolhardy to imagine that conflict will consist of single attacks by individual autonomous weapons systems. ICRAC is particularly concerned with the notion of swarms of autonomous weapons: tanks, fighter planes and ships, as well as all kinds of miniature armed vehicles. This is a very frightening thought. But we are not scaremongers; we are a group of scientists, and we are tracking the development of swarm technology very carefully.
Swarms pose a number of worrying threats to global security and raise undeniable problems for compliance with international humanitarian law (IHL). I will mention only two of these here.
First, swarms are seen as force multipliers. That means that one person is in control of the entire swarm. That person can only observe and perhaps direct the collective, but cannot examine individual attacks. Whether or not you believe that the term Meaningful Human Control is vague, it would be extremely difficult to argue that there was any meaning in this kind of control. This is particularly worrying given the increasing speed of aerial autonomous systems, where decisions would have to be made in thousandths of a second.
Second, we would like to place very heavy emphasis on one aspect of AWS that cannot be IHL-compliant. We have already argued that, by their very definition, the predictability of AWS cannot be guaranteed.
But scientifically there is a much worse problem. If we do not stop autonomous weapons systems now, there will be rapid proliferation and perhaps a new arms race. This means that we face the prospect of swarms of AWS confronting enemy swarms (autonomous tanks, planes and ships, or more likely a combination of combat vehicles of all sizes).
Here lies a problem of the greatest order. Each swarm of autonomous weapons will be controlled by its own combat computer programs. These will have to be kept secret, because to reveal them would make them vulnerable.
So what happens when unknown computer programs come into competition with one another? The answer is unpredictable and unanticipated behavior. Ask any computer scientist on your delegations: this is a fact, not a theory. In these circumstances of greatest uncertainty, it is absolutely impossible to guarantee compliance with IHL. We have said this before, but we just cannot stress it enough. Please listen to a group of scientists who are telling you an indisputable fact: IHL compliance cannot be guaranteed for the interaction of autonomous weapons systems. Nor can any weapons review test for it. And that is to say nothing of the prospect of a severe crisis escalating into war. If we truly wanted to destabilize global security, this would be one of the best ways to go about it.
There is no way around this problem without an international legally binding treaty with a strong focus on ensuring that human control is meaningfully involved in every application of violent force.