ICRAC statement on technical issues to the 2014 UN CCW Expert Meeting

Posted on 14 May 2014 by Frank Sauer

On May 14, ICRAC’s Denise Garcia delivered the following statement on technical issues to the informal “Meeting of Experts”, gathered to discuss questions related to “lethal autonomous weapons systems” from May 13 to May 16 at the United Nations in Geneva, Switzerland.

Technical statement by the International Committee for Robot Arms Control

Convention on Conventional Weapons Meeting of Experts on lethal autonomous weapons systems
United Nations Geneva
14 May 2014

I am speaking on behalf of the International Committee for Robot Arms Control – ICRAC.

Thank you again, Ambassador Simon-Michel, and to the states present, for providing us with this opportunity to make a statement about our technical concerns with autonomous weapons systems.

Those who support the development of autonomous weapons often believe that they will provide an advantage that no one else will have. However, new weapons technology proliferates rapidly. Once some states have autonomous weapons, there will be rapid development of counter-weapons and counter-counter-weapons. Given the nature of computer control, we cannot know how such weapons will interact with one another, except that the interaction will be unpredictable.

A number of states have been developing precursors for autonomous weapons for more than a decade. Although there has been considerable testing of fully autonomous combat platforms, none have yet been fielded.

However, there are a number of weapons systems that Sense and React to Military Objects (SARMO weapons) for protection against fast incoming munitions such as mortar shells and missiles; examples include Mantis, Phalanx and C-RAM.

These are not fully autonomous in that they are programmed to perform a small set of defined actions automatically and repeatedly. They are used in highly structured and predictable environments that are relatively uncluttered, with a very low risk of civilian harm. They are fixed-base, even on naval vessels, and are under constant vigilant human evaluation and monitoring for rapid shutdown. They are sometimes said to operate under ‘supervised autonomy’.

SARMO weapons may be acceptable when used defensively against military objects, but caution must be exercised about expanding their role, and this too needs careful international discussion.

Fully autonomous weapons pose considerable challenges to international humanitarian law; in particular they are unable to:

– distinguish between military and non-military persons and objects
– determine the legitimacy of targets
– make proportionality decisions
– adapt to changing circumstances
– handle unanticipated actions of an adaptive enemy
– deal with other autonomous systems controlled by unknown combat algorithms

The state of the art in computing machinery is unlikely to meet these requirements within the foreseeable future. Computers are better than humans at some tasks, such as responding quickly in control tasks and carrying out dull, repetitive routines. But humans are needed for tasks involving deliberative reasoning and the exercise of judgement.

Distinguishing between combatants and civilians and others who are hors de combat is particularly difficult. Sensing and vision processors will improve in the longer-term future, but methods to determine the legitimacy of targets have not yet been credibly proposed.

Many targeting decisions – particularly a justified decision not to fire – are subjective in nature. Decisions such as the proportionate use of force require the deliberative reasoning of an experienced human commander who must balance civilian lives and property against direct military advantage. These judgements are not reducible to calculation.

A large number of novel and unanticipated circumstances occur in conflicts that require last-minute changes of plan. An autonomous weapons system is not capable of adapting predictably, and an adaptive enemy could trick the technology into firing on unintended targets.

Some states already understand the unpredictability of autonomous weapons and are proposing to keep a human in the control loop, at least for the time being. But the question remains as to what constitutes appropriate human control. Humans need to exercise meaningful control over weapons systems to counter the limitations of automation.

ICRAC holds that the minimum necessary conditions for meaningful control are:

First, a human commander (or operator) must have full contextual and situational awareness of the target area and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.

Second, there must be active cognitive participation in the attack and sufficient time for deliberation on the nature of the target, its significance in terms of the necessity and appropriateness of attack, and likely incidental and possible accidental effects of the attack.

Third, there must be a means for the rapid suspension or abortion of the attack.

If we get the delicate balance between computers and humans right, computerised weapons systems can be predictably controlled and discriminate. The power of human reason can ensure legitimate target selection and proportionate response.

In Conclusion:

We can expect considerable improvements in the technology over the long term, but compliance of autonomous weapons systems with IHL cannot be guaranteed for the foreseeable future.

It cannot be guaranteed that fully autonomous weapons systems will perform their mission requirements predictably. Testing such systems for unanticipated circumstances is not viable.

The unpredictability of autonomous weapons in unanticipated circumstances makes weapons reviews extremely difficult or even impossible.

The combined strengths of humans and computers operating together, with the human in charge of targeting decisions, make better military sense and are necessary in order to meet the requirements of international law.
