Can an autonomous weapons ban be verified?

Posted on 14 April 2014 by Mark Gubrud

At the ongoing CCW experts’ meeting on Lethal Autonomous Weapons Systems in Geneva, questions have begun to be raised about the verifiability of a ban on autonomous weapon systems. We would like to highlight our working paper outlining compliance measures for such a ban, including a framework proposal for how compliance could be verified.

A common remark is that verification would require inspection of software as well as hardware, and that nations will never permit such intrusive inspections. Moreover, even clearly written and well-documented software can be very difficult to read and interpret, let alone software that has been deliberately obfuscated or encrypted. The physical form of systems capable of operating autonomously may also not always be readily definable or discernible. Where both a cockpit and a communications link to a remote operator are lacking, one may reasonably infer that a system is intended to operate autonomously, but their presence does not ensure that the system is incapable of operating autonomously.

The solution to this conundrum is already at hand, however, in the increasing emphasis in these discussions on the need for meaningful human control. This approach is increasingly recognized as a conceptual reframing of the problem of banning autonomous weapons, one already proposed in 2010 in the Berlin Statement (originally titled “The Principle of Human Control of Weapons and All Technology”). That statement asserts positively “That it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war. In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision and its foreseeable consequences.” The emphasis on personal responsibility and legal accountability for the decision to use violent force has come to be recognized as one element of the concept of meaningful human control, which also emphasizes the role of adequate information and deliberation by the decision maker.

Thus, while it may indeed be impractical to verify compliance with a ban on “autonomous weapons” as such, it may very well be possible to verify compliance with a requirement for accountable and meaningful human control and decision in each use of violent force.

This is not to say that we should not also declaratively ban autonomous weapons, subject to a list of exceptions for systems that operate autonomously but do not make significant lethal decisions autonomously, that are purely defensive and protect human life against immediate threats from incoming munitions, or that are to be allowed for other, pragmatic reasons. Certainly, we should ban fully autonomous weapons. But the way to implement and verify such a ban may be better framed in terms of human control.

Two years ago, ICRAC members took part in an effort to consider measures for promoting compliance with an autonomous weapons ban. The result was a working paper, “Compliance Measures for an Autonomous Weapons Convention,” which is posted here. The work has not received wide recognition, but given that the question has begun to arise, it seems appropriate to highlight it now, rather than witness the emergence of “a ban on killer robots would be nice, but it’s unverifiable” as a persistent canard.

The paper highlights many aspects of ensuring compliance with an autonomous weapons convention: the enunciation of strong, simple, intuitive principles as the moral foundation for such an agreement, framing in terms of clear definitions, articulation of allowed exceptions, declaration of pre-existing autonomous weapon systems, national implementing legislation, and the creation of an international treaty implementing organization (TIO). The role of the TIO in verification is detailed, particularly its support for cryptographic validation of records tying them to particular weapon systems and uses of force at particular times (and potentially places). These records, it is proposed, would be held by the compliant States Parties themselves and not released to the TIO, nor subjected to any other possible compromise of military secrets, except in the case of an orderly inquiry into particular suspicious events, and possibly some quota of routine, random inspections to verify continuous compliance. The cryptographic principles by which such records can be tamper-proofed and time-stamped are simple and well understood, and the full encrypted records need not be exposed to any possibility of decryption if only “digital signatures” of the records are supplied to the TIO for archiving.
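The working paper describes these cryptographic principles at the level of digests and time stamps; it does not prescribe an implementation. As a minimal sketch of the idea only, the hypothetical Python fragment below chains SHA-256 digests of time-stamped use-of-force records. The `EngagementLog` class, the record fields, and the genesis value are invented for illustration; a deployed scheme would use proper digital signatures under a State Party’s signing key, not bare hashes.

```python
# Illustrative sketch (not from the working paper): a hash-chained,
# time-stamped log of use-of-force records. The State Party retains the
# full records; only fixed-size digests are archived with the TIO, so no
# military secrets are disclosed, yet any later alteration of a record
# breaks the chain and is detectable during an inquiry.

import hashlib
import json
import time

def digest(data: bytes) -> str:
    """SHA-256 digest, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

class EngagementLog:
    def __init__(self):
        self.records = []                   # full records: kept by the State Party
        self.chain = [digest(b"genesis")]   # digests: archived with the TIO

    def append(self, record: dict) -> str:
        """Time-stamp a record and chain its digest to the previous one."""
        record = dict(record, timestamp=time.time(), prev=self.chain[-1])
        blob = json.dumps(record, sort_keys=True).encode()
        self.records.append(blob)
        self.chain.append(digest(blob))
        return self.chain[-1]

    def verify(self) -> bool:
        """Recompute every digest; a tampered record breaks the chain."""
        return all(digest(blob) == d
                   for blob, d in zip(self.records, self.chain[1:]))

log = EngagementLog()
log.append({"system": "UCAV-7", "operator": "Lt. X", "order": "engage"})
assert log.verify()
log.records[0] = log.records[0].replace(b"engage", b"abort")  # tampering
assert not log.verify()
```

The key property is the one the paper relies on: the digests held by the TIO reveal nothing about the records’ contents, yet because each record embeds the previous digest, any alteration, reordering, or deletion of the retained records becomes detectable when they are produced for an inquiry.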

We believe that a scheme of this type can support rigorous verification of compliance in cases where the use of a fully autonomous weapon system is suspected, which should be a sufficient deterrent to such use. It should be coupled with other transparency and confidence-building measures, including routine on-site inspections of facilities in which remotely operated or nearly autonomous systems are developed, tested, manufactured, stockpiled, deployed, or used, and with national means of intelligence, which should suffice to reveal any prohibited activities large enough in scale and scope to pose a significant strategic security threat. Together, these measures should ensure that no State Party will find the risks of non-compliance to be outweighed by uncertain and hypothetical military benefits.

by Mark Gubrud (@mgubrud) and ICRAC’s Juergen Altmann
