Banning Lethal Autonomous Weapon Systems (LAWS): The way forward

Posted on 13 June 2014 by Frank Sauer

With ICRAC’s 2009 mission statement fulfilled and the issue of fully autonomous weapon systems picked up by the international community at the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva, ICRAC and the Campaign to Stop Killer Robots can celebrate a first success (see the ICRAC statements delivered in Geneva).

But we are only at the beginning. There is a lot of work left to do with regard to raising awareness, clarifying the issue, fleshing out the details of the concepts involved, and moving the debate forward, both at the CCW and in other international fora.

Some background: LARS or LAWS or Killer Robots?

PHALANX – Automatic defensive systems to protect human life are not the problem. Source: Wikimedia Commons

Obviously, militaries all over the world already deploy systems which operate on their own, but these systems are currently confined to defensive functions such as the interception of rockets, artillery fire and mortars, either ship-based or stationary on land. The most prominent ones – PHALANX, PATRIOT, IRON DOME and MANTIS (see also Human Rights Watch 2012) – are designed for use against such inanimate targets without human intervention if necessary, the rationale being that there may not be enough time for a human to react. However, these defensive systems operate automatically rather than autonomously, simply performing repeated pre-programmed actions.

To distinguish them from these automatic precursors, weapon systems are described as autonomous if they operate without human control or supervision, potentially over longer periods of time, in dynamic, unstructured, open environments. In other words, these are mobile (assault) weapons platforms equipped with on-board sensors and decision-making algorithms that enable them to guide themselves. As they could potentially have the autonomous capability to identify, track and attack human or other living targets, they are known as lethal autonomous robots (LARS) or, to use the CCW’s current terminology, lethal autonomous weapons systems (LAWS). Of course, one might just call them Killer Robots for short – because that is essentially what they are.

Why “autonomy” in mobile weapon systems and what are the problems with LAWS?

As of today, the drive towards more autonomy is most apparent in applications underwater or in the air – in other words, in less complex but less accessible environments. In a nutshell, three driving factors can be identified, all of which are put forward by proponents of LAWS.

  1. Transferring all the decision-making to the weapons system offers various benefits from a military perspective. After all, there is no longer any need for a control and communication link, which is vulnerable to disruption or capture and may well reveal the system’s location, and in which there is invariably some delay between the issuing of the command by the responsible person and the execution of the command. The time benefits already afforded by defensive systems are also valuable from a tactical perspective during military assaults. In the drone sector, a number of research and technology demonstrator programs have therefore been launched to develop (more) autonomous systems; examples are the X-47B in the US, Taranis in the UK, and the French nEUROn project.
  2. As autonomous systems are immune to fear, stress and overreactions, some observers believe that they offer the prospect of more humane warfare (see Ron Arkin’s arguments in his debate with ICRAC’s co-founder Rob Sparrow). Proponents further argue that not only are machines devoid of negative human emotions; they also lack a self-preservation instinct, so they could well delay returning fire in extreme cases. All this, it is argued, could prevent some of the atrocities of war.
  3. Some observers draw attention to the superior efficiency of LAWS and their cost-cutting potential, especially due to the reduced need for personnel.

But the problems raised by LAWS are manifold.

The US X-47B technology demonstrator. Source: Wikimedia Commons

From a military perspective, there is a certain amount of tension between autonomous systems and military leadership structures. In fact, it was noticeable how muted the support from the military was at the Expert Meeting in Geneva. For them, it was mostly about toying with ideas and looking at ways of deploying these systems in strictly controlled scenarios – for example as anti-materiel weapons. But these scenarios remain highly artificial: it is entirely unclear how the use of force could be restricted to other military hardware alone in the chaos of battle. Also, with much of the robotics revolution in the military driven by commercial off-the-shelf technology (see below), the risks of proliferation and potentially destabilizing effects on peace and security quickly come to mind in the wider politico-military context (Altmann 2013). And here another downside of unmanned systems in general – the lowering of the threshold to the use of military force (Sauer/Schoernig 2012) – might become an even more vexing problem with LAWS in the future.

From an international law perspective, there is considerable doubt that LAWS would be capable of distinguishing between civilians and combatants and of ensuring that the military use of force is proportionate. Numerous international law and robotics experts doubt that it is possible, in the foreseeable future, to pre-programme machines to abide by international law in the notoriously grey area of decision-making in times of war (Sharkey 2012; Wagner 2012). A further objection against LAWS is that the body of international law is based on the premise of human agency; it is therefore unclear who would be legally responsible and accountable if people – particularly civilians – were unlawfully injured or killed by LAWS (Sparrow 2007). Lastly, the Martens Clause, which forms part of customary international law, holds that in cases not (yet) covered by the regulations adopted in international law, the principles of the laws of humanity and the dictates of the public conscience apply. And in fact, the general public has serious concerns about LAWS: the findings of a representative survey, unfortunately available only for the US at the moment, show that a majority (55%) of Americans are opposed to autonomous weapons on humanitarian grounds, with 40% “strongly opposed” (Carpenter 2014).

What follows from this is that the ethical dimension may well pose the greatest problem regarding LAWS. In short, it stands to reason that giving machines the power to decide on the use of force against people violates basic principles of humanity and is, per se, unacceptable (Gubrud 2012; Asaro 2012). In fact, the report of the CCW Expert Meeting emphasised this very point, stressing that LAWS could end up undermining human dignity, as these systems cannot understand or respect the value of human life, yet would have the power to determine when to take it away.

LAWS on the international agenda: The challenges involved

No one likes a Killer Robot! Source: Charli Carpenter’s survey on “How Do Americans Feel About Fully Autonomous Weapons?”

With LAWS, the CCW has identified a new topic which – according to seasoned CCW participants – has been placed on the agenda with unprecedented speed and is attracting lively interest from the international community. Exactly what is behind this is not entirely clear, though. On the one hand, it seems plausible that countries have discovered a genuine interest in a development which is deemed to require regulation and, after the process on cluster munitions failed to produce a new protocol in 2011, are keen to demonstrate the CCW’s capacity to act. On the other hand, the CCW has a fearsome reputation as a place where good ideas go to die a slow and silent death. So it is also possible that some countries which might have an interest in developing and deploying LAWS (from a purely military-technological perspective, this applies primarily to the US, Israel, China, Russia and the United Kingdom) will use the CCW process to stall for time and smother the anti-LAWS campaign over the coming years.

But the Campaign to Stop Killer Robots did not just bring the issue of LAWS to the CCW’s attention in record time. The Campaign – a coalition of 52 NGOs from 24 countries which provides a platform for coordinated civil society and academic activities – is, of course, also aware of the time sensitivity of the issue and of the other hurdles and intricacies involved.

Hence the first goal must be to work towards a CCW protocol banning LAWS as swiftly as possible – a preemptive ban, that is, which would come into effect before countries and the arms industry invest so much in LAWS that the window of opportunity for a preemptive solution closes.

The dual-use issue makes this especially pressing. Research on autonomous robots is already underway in countless university laboratories and companies, and there is massive commercial interest in robotics. The problem lies in the fact that the integration of commercial off-the-shelf technology has long been a driver of developments in the field of military technology.

So, to be clear: the Campaign is all for research and innovation in all fields of autonomous systems and robotics. We at ICRAC in particular like to say: we love robots! However, we want to see them used for peaceful purposes, i.e. we love autonomous robots that neither have a “kill function” nor are deployed to coerce or terrorize human beings. The speed of technological progress makes drawing this line a challenging endeavor – especially since the potential military relevance of LAWS is, obviously, much greater than that of, say, blinding lasers (CCW Protocol IV), to which a comparison is sometimes drawn.

Where automation can serve to protect human life – as in the aforementioned defensive systems – it is not necessarily a problem, but when machines make life-and-death decisions without human intervention and responsibility, a line is arguably being crossed. Where and how to draw that line to preserve basic human dignity in the practice of warfighting will be the subject of upcoming debates. At this early stage it is still unclear whether the CCW is the right place to sort these things out and ensure a preemptive ban on lethal robots. The issue will thus have to be discussed in other fora as well, such as the Human Rights Council and the UN General Assembly First Committee.

Nevertheless, the CCW process has already produced two tangible results. First, five CCW parties (Cuba, Ecuador, Egypt, Pakistan and the Holy See) were already calling for a ban on LAWS at the informal CCW Expert Meeting, and no country vigorously defended or argued for the development and deployment of LAWS, although the Czech Republic and Israel underlined in their statements in Geneva that autonomous weapons systems may offer certain benefits; the US pursued a similar line of argument. Second, many countries (including Germany, Austria, France, Norway, the Netherlands, Switzerland and the United Kingdom) made one thing very clear: they want to see guarantees of what is now called meaningful human control over the use of armed force.

“(Meaningful?) human control” as the way forward?

The concept of “meaningful human control”, which was introduced into the CCW debate by Campaign NGOs (Article 36) and has now been taken up by governments, is the counter-concept to the standard of “appropriate” human involvement in the operation of (semi-)autonomous weapon systems specified by the US in its Directive on Autonomy in Weapon Systems, issued in November 2012. It essentially argues that appropriate human involvement does not go far enough – for there may be certain circumstances in which zero human involvement may be deemed appropriate.

The idea is that human control over life and death decisions must always be significant – in other words, it must be considerably more than none at all and, putting it bluntly, it must also involve more than the mindless pressing of a button in response to machine-processed information. According to current practice, a human operator of weapons must have sufficient information about the target and sufficient control of the weapon, and must be able to assess its effects, in order to be able to make decisions in accordance with international law. But how much human judgment can be transferred into a technical system and exercised by algorithms before human control ceases to be meaningful – in other words, before warfare is quite literally dehumanized?

One thing seems clear: in the future, minimum time requirements for human deliberation would have to apply if LAWS are not to become a reality across a broad front. The fact is that the human brain needs time for complex evaluation and decision-making processes – time which must not be denied to it in the interaction between human and machine if the human role is to remain relevant; in other words, if the decision-making process is to be merely supported, not dominated, by the machine.

Some of ICRAC’s members in discussion at the UN in Geneva

The concept of meaningful human control is not yet fully fleshed out, and in the further course of the CCW process there will undoubtedly be considerable wrangling over precisely how it should be defined. In that process, the Campaign will be pressing for the greatest possible role for the exercise of human judgment – not only in relation to killing but also in other decisions on the use of violence or non-lethal force.

Against this background, members of ICRAC have for some time been thinking in more depth about what (meaningful) human control can and should entail, both in terms of differentiating discrete levels of supervisory control from an analytical perspective (Sharkey 2014) and in terms of a normative reminder to seek a definition that is as clear-cut and simple as possible, with as few degrees of meaning as possible (Gubrud 2014). In its working paper series, ICRAC has already been thinking even further ahead, pondering the design of legally binding instruments and suggesting verification and compliance measures for a possible future convention on autonomous weapons (Gubrud and Altmann 2013).
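To make the idea of discrete levels of supervisory control more concrete, here is a minimal sketch in Python, paraphrasing the five levels Sharkey (2014) distinguishes; the identifier names and the cut-off in the helper function are illustrative assumptions for this post, not Sharkey’s own wording or any agreed standard.

```python
from enum import IntEnum


class SupervisoryControl(IntEnum):
    """Discrete levels of human supervisory control over a weapon system,
    paraphrasing the five-level taxonomy discussed by Sharkey (2014)."""

    HUMAN_SELECTS_AND_INITIATES = 1     # a human deliberates, selects the target and initiates the attack
    PROGRAM_SUGGESTS_HUMAN_CHOOSES = 2  # software suggests candidate targets; a human chooses among them
    PROGRAM_SELECTS_HUMAN_APPROVES = 3  # software selects the target; a human must approve the attack
    PROGRAM_SELECTS_HUMAN_MAY_VETO = 4  # software selects and attacks unless a human vetoes in time
    FULL_AUTONOMY = 5                   # software selects targets and attacks without human involvement


def control_is_meaningful(level: SupervisoryControl) -> bool:
    """One possible (and contested) cut-off: control stops being meaningful
    once the human is reduced to a timed veto or removed entirely."""
    return level <= SupervisoryControl.PROGRAM_SELECTS_HUMAN_APPROVES


# Example: under this particular cut-off, a timed-veto arrangement fails the test.
assert not control_is_meaningful(SupervisoryControl.PROGRAM_SELECTS_HUMAN_MAY_VETO)
```

Where exactly such a cut-off belongs, and whether a timed veto can ever count as meaningful control, is precisely the definitional question the debate described above will have to settle.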

There is lots of work to do, but considerable progress has been made. As a founding member of the Campaign to Stop Killer Robots, ICRAC will keep working towards a preemptive ban on lethal autonomous weapon systems to ensure that the future of robotics is a peaceful one.

This post is based on a policy paper titled Autonomous Weapons Systems – Humanising or Dehumanising Warfare?, published by the German “Development and Peace Foundation” as “Global Governance Spotlight 4|2014”.
