The Principle of Humanity in Conflict

Posted on 19 November 2012 by Mark Gubrud

I want to share a personal perspective, which has not been endorsed by ICRAC. I hope to stimulate further discussion on the foundations and framing of the nascent global campaign against autonomous weapons (AW), also called autonomous weapon systems (AWS). This essay is written to be constructively provocative.

Two years ago, in presenting the first version of what became ICRAC’s Berlin Statement, I emphasized that the dictum “machines shall not decide to kill” could serve as the “kernel” of a convention on robotic weapons that would naturally encompass ICRAC’s broader goals for robot arms control. The idea of machines taking the decision to kill people, or initiating violence that causes death and suffering[1], was abhorrent to almost everyone who considered it. This almost universal repugnance could be harnessed as the “engine” of a global movement to stop killer robots, over the opposition of a minority who would argue the inevitability and military necessity of robotic and autonomous weapons.

I called this the “Principle of Human Control,” and felt it important that this should be declared as a new principle, consistent with just war theory, the laws of war, and human rights law, but not explicitly contained in, nor necessarily derived or derivable from, existing bodies of philosophy and law. A new principle was needed for the simple reason that the threat against which the principle was raised had not previously existed, and was only becoming imaginable as the march of technology brought us closer to the day when machines might plausibly be deemed capable and trustworthy enough to make lethal decisions autonomously.

Accordingly, whereas ICRAC’s founding Mission Statement would only “propose… that this discussion should consider” that “machines should not be allowed to make the decision to kill people”, the Berlin Statement declared that “We believe… it is unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war. In all cases where such a decision must be made, at least one human being must be held personally responsible and legally accountable for the decision….” [Emphasis added.] No claim was made that our belief could be proven true according to any body of law, philosophy or science. We declared a new principle.

Distinction and Proportionality

Nevertheless, as the debate about autonomous weapons takes shape, many arguments revolve around international humanitarian law (IHL), or the law of armed conflict (LOAC), and the principles of this body of law, as well as the deeper philosophical principles of just war theory.  Some authors argue that these venerated principles have stood the test of time, and have proven adaptable as weapons and modes of warfare have evolved. Others claim that 21st century weapons and irregular warfare render the existing instruments of LOAC quaint and ripe for revision. Some argue that autonomous weapons cannot fulfill IHL requirements to distinguish between combatants and non-combatants (Principle of Distinction) and to weigh military necessity and objectives against the risk or expectation of collateral harm to non-combatants (Principle of Proportionality). Others maintain that even limited discrimination capabilities might help to reduce harm to non-combatants, and that human commanders, who will decide when and what kinds of AW to use, will remain responsible for the judgment of proportionality.

One argument that AWS are inconsistent with IHL/LOAC rests on the proposition that machines are simply incapable of the judgment required to ensure their compliance.  For example, Protocol I of the Geneva Conventions requires that “Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.” This duty to “at all times distinguish” embodies the Principle of Distinction.  It is implicitly required of whatever agent “shall direct… operations” so that the operations are directed “only against military objectives.” Therefore, if the agent directing “operations” were to be a machine, it would have to be a machine able to “at all times distinguish” between civilians, combatants, civilian objects and military objectives.

Distinction is clearly a challenge for machines, but human capabilities to “at all times distinguish” also have limits. For example, one may not be able to see clearly in the presence of smoke or other obstructions, or to judge quickly and correctly whether a figure in the shadows is a civilian or combatant, particularly when under fire. If human responsibility for adherence to the Principle of Distinction is limited by human capabilities, in various circumstances, would such limits not apply to machines as well? If we don’t expect perfection in all situations, might there not be circumstances in which a machine’s ability to discriminate would be comparable to, or perhaps better than, a human’s?

Questions of interpretation also challenge the argument. Who bears the responsibility of the “Parties to the conflict” to “at all times distinguish” and to “direct their operations only against military objectives”? “Parties” would normally be interpreted to mean the States involved, a fairly high level of direction. May such “Parties” not lawfully direct that autonomous weapons be used only against combatants and military objectives in some conflict, subject to technical limitations which may result in unintended harm to noncombatants or civilian objects, even as the “Parties… at all times distinguish”? Alternatively, if responsibility is delegated to a human commander, who directs that an autonomous weapon be used in some tactical situation, and if that person meanwhile upholds the obligation to “at all times distinguish,” can it not be argued that the commander may lawfully direct the operation only against military objectives, subject to technical limitations which might produce an unintended result?

Certainly, the technical limitations of AWS matter, in relation to the circumstances in which they are used. But a reasonable interpretation of the Principle of Distinction cannot demand absolute perfection of machines (since humans are incapable of it) nor forbid a responsible human commander from taking some risk of a mistake. The actual capabilities of machines, and the acceptable level of risk, will become the issues of contention. It is not obvious that the resolution will be to ban autonomous weapons categorically.

Protocol I also requires (in several statements) that “those who plan or decide upon an attack shall… refrain from deciding to launch any attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.” This is known as the Principle of Proportionality, and it poses an even more formidable challenge to technical capabilities, if machines would bear the burden of judging not only what effects upon civilians and civilian objects “may be expected,” but whether those “would be excessive in relation to the… military advantage anticipated.”

Here again, questions of interpretation, particularly in view of human fallibility, challenge the argument. In practice, human commanders have little objective basis on which to judge what is “excessive” other than, perhaps, some set of examples and judicial precedents – which might very well be coded as a database suitable for machine access. In practice, human judgment of what is “excessive” is externally regulated only by the possibility of later review and adverse judgment. This (in theory) motivates commanders to think carefully about whether their decisions will be perceived as reasonable. Could the same mode of regulation not apply equally to the decisions of machines? Surely a machine could be programmed to seek action plans that it assesses are consistent with the rules and precedents coded in its database. If humans judge that machine decisions are reasonable and proportional, at least as consistently as human decisions, might that not satisfy the Principle?
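To make concrete, in the most schematic terms, what a precedent database “suitable for machine access” might look like, here is a toy sketch in Python. It is not drawn from the essay or from any real system: the record fields, numbers, and the nearest-precedent rule are all invented for illustration, and nothing here is claimed to capture actual legal judgment. The point is only to show the kind of mechanized rule-and-precedent lookup the paragraph imagines.

```python
# Toy sketch only: a hypothetical, grossly simplified illustration of the idea that
# proportionality precedents could be "coded as a database suitable for machine access."
# All names, numbers, and cases are invented; none of this reflects real doctrine or law.

from dataclasses import dataclass


@dataclass
class Precedent:
    expected_civilian_harm: float   # notional scale, e.g. expected casualties
    military_advantage: float       # notional scale assigned by reviewers
    judged_excessive: bool          # how the case was judged after the fact


# A tiny invented "database" of prior judgments.
PRECEDENTS = [
    Precedent(0.0, 1.0, False),
    Precedent(2.0, 8.0, False),
    Precedent(5.0, 3.0, True),
    Precedent(10.0, 4.0, True),
]


def plan_appears_proportional(expected_harm: float, advantage: float) -> bool:
    """Adopt the judgment of the nearest recorded precedent for a proposed action.

    This mimics, crudely, a machine "seeking action plans that it assesses are
    consistent with the rules and precedents coded in its database."
    """
    nearest = min(
        PRECEDENTS,
        key=lambda p: (p.expected_civilian_harm - expected_harm) ** 2
                      + (p.military_advantage - advantage) ** 2,
    )
    return not nearest.judged_excessive


if __name__ == "__main__":
    # A plan with modest expected harm and high anticipated advantage.
    print(plan_appears_proportional(expected_harm=1.5, advantage=7.0))  # True
    # A plan with high expected harm and little anticipated advantage.
    print(plan_appears_proportional(expected_harm=9.0, advantage=2.0))  # False
```

Whether human reviewers would find the outputs of anything like this “reasonable and proportional” is, of course, exactly the open question posed above.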

Alternatively, if an autonomous weapon system is programmed to fire under some conditions, and not under others, subject to known limitations of its capabilities to autonomously distinguish conditions, could a human commander who is aware of those limitations not accept responsibility for his or her own judgment of proportionality, taking some calculated risk that the machine might misclassify the actual situation, just as it might be subject to any other error or malfunction? Is this fundamentally different from the situation in which a commander authorizes an attack by human combatants, knowing there is some risk of their autonomously taking some action which would later be judged excessive, due either to their own misperception of the situation or their own misjudgment of proportionality?

Frames of reference and hard cases

While just war theory separates jus in bello (just conduct in war) from jus ad bellum (justice in going to war), in practice these principles are often entangled. Judgment that some amount of risk or harm to noncombatants is not “excessive in relation to” military objectives likely depends on perceptions or assumptions about the necessity and legitimacy of the conflict itself. Conversely, perceived excesses of violence can help to delegitimize a conflict. This is true whether action is taken on human decision or on the decision of a machine.

Whether the harm caused by a particular “attack” (or weapon, or type of attack) is judged “excessive” will depend on what it is compared with, which in turn may depend on which alternatives are perceived as plausible, and as not themselves excessive, given military necessity. High-altitude saturation bombing was largely judged not excessive in World War II, and was far more controversial in Vietnam, but may still set a standard for comparison with the US drone strike campaigns in Pakistan and Yemen, as seen by those who believe the campaigns to be necessary and just. Those less convinced of jus ad bellum in this instance may be more inclined to compare the drone strikes with alternatives such as special operations forces raids on selected high-value targets, or perhaps no military action at all.

If autonomous weapons have only limited capabilities for discrimination and proportionality, they may still be claimed as consistent with jus in bello if they are perceived as an alternative to weapons that are completely indiscriminate and almost certain to cost more innocent lives. Compared with a 500-lb bomb dropped on the roof of a building where noncombatants might be found, an autonomous robot that can recognize persons bearing arms, or even (perhaps an easier technical problem) a specific, known individual, could reasonably be expected to reduce the level of harm or risk to noncombatants. In this scenario, the robot might not have perfect abilities to implement the Principles of Distinction and Proportionality, but a tactical decision to use the robot, as an alternative to a blunter weapon, might arguably uphold those principles if they are viewed as practical goals.

A different judgment might be reached if autonomous weapons were compared with armed teleoperated robots, which keep a human “in the loop”, and which might provide another means of assault on a location in which noncombatants may be present. However, teleoperation might not always be reliable or practical, due to the vagaries and vulnerabilities of communication links, or the need for small size or stealth of the robot. For very small robots, resolution and bandwidth limits in teleoperation may mean that fully autonomous systems which seek a specific individual or object might compare favorably in terms of discrimination, and hence proportionality. Another principle is needed if we are to categorically rule out the use of autonomous weapons when teleoperation is not available.

Similar arguments can be made in defense of missiles equipped with target recognition and terminal homing capabilities, which arguably can be called robotic weapons, autonomous in the sense that after release they are charged to make at least targeting refinement decisions autonomously. Such weapons already exist and have been deployed, albeit with only rudimentary target recognition capabilities. What would be the objection to improving those capabilities to achieve more precise targeting and a lower risk of collateral harm?  It might be argued that such weapons are sent on a one-way mission and do not make an independent “kill decision,” but what would be the objection if the systems were further developed so that, upon failure to recognize an appropriate target, or upon detection of humans in the vicinity of the target (with or without distinction), the weapon decides to abort its mission? It could then safely self-destruct or divert to an open area while disarming its warhead. Even an imperfect “abort decision” capability would appear to be an improvement from an IHL standpoint, if compared with the use of a missile with no such capability. Yet, the machine would in effect be deciding whether to kill.
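As a way of seeing how thin the line is between an “abort decision” and a “kill decision,” consider a deliberately crude sketch of the terminal logic just described. The function name, threshold, and options below are all hypothetical, invented only for illustration; no real seeker is described here. What the sketch makes plain is that the engage/abort branch is itself the decision at issue.

```python
# Toy sketch only: a hypothetical "abort decision" for a terminal-homing munition of the
# kind discussed above. If no appropriate target is recognized, or if people are detected
# near the aim point, the weapon aborts rather than striking. Threshold and names invented.

from enum import Enum, auto


class TerminalAction(Enum):
    ENGAGE = auto()
    ABORT_SELF_DESTRUCT = auto()   # disarm the warhead and self-destruct
    ABORT_DIVERT = auto()          # disarm and divert to an open area


def terminal_decision(target_match_confidence: float,
                      humans_detected_near_target: bool,
                      open_area_reachable: bool) -> TerminalAction:
    """Decide, in the final seconds of flight, whether to engage or abort."""
    if target_match_confidence >= 0.95 and not humans_detected_near_target:
        return TerminalAction.ENGAGE
    # Failure to recognize the target, or humans in the vicinity: abort the mission.
    if open_area_reachable:
        return TerminalAction.ABORT_DIVERT
    return TerminalAction.ABORT_SELF_DESTRUCT


if __name__ == "__main__":
    print(terminal_decision(0.98, humans_detected_near_target=False, open_area_reachable=True))
    print(terminal_decision(0.60, humans_detected_near_target=True, open_area_reachable=False))
```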

Note that teleoperation and even “on-the-loop” human intervention might not be possible in the split second prior to impact during which a missile’s target analysis system might have sufficient information to make a decision. Another case in which events occur too rapidly for meaningful human control or even supervision of target identification and fire decision is that of point defense systems such as the Phalanx guns deployed on ships, and other anti-missile, -mortar and -shell systems employing ballistic or guided interceptor munitions or lasers. Even longer-range missile and air defense systems such as the Patriot and Aegis challenge human capabilities to make crucial target discrimination decisions in seconds. In practice, humans have often failed to exert an “abort” command when operating in an “on-the-loop” role, with deadly results in a number of incidents.

I think that the case for automated fire decision in a point defense system is so compelling that any credible proposal for an autonomous weapons convention will have to carve out an exception for such systems, at least when they are directly defending human lives against immediate threats. Strict and continuous human supervision should of course be required, including accountability for failure to intervene when information indicating a system error is available to those charged to be “on the loop.” Such systems should also be operated in a “normally-off” mode, and only activated upon warning of an incoming attack. But as long as a system is limited to defense against weapons incoming to a given human-inhabited location (including the instantaneous location of an inhabited vehicle), automated fire decisions will likely have to be allowed.
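The narrowness of such an exception could be spelled out, schematically, as a set of jointly necessary conditions. The toy sketch below (every field name is invented; nothing here is drawn from any real system) simply encodes the constraints proposed above: normally-off operation, activation only upon warning, continuous human supervision, and engagement limited to munitions inbound toward a human-inhabited location.

```python
# Toy sketch only: a hypothetical authorization gate encoding the constraints proposed
# above for automated point defense. All names and fields are invented for illustration.

from dataclasses import dataclass


@dataclass
class DefenseSystemState:
    activated_by_attack_warning: bool   # system is "normally off", switched on only after warning
    human_supervisor_on_loop: bool      # a person is watching and can intervene
    target_is_incoming_munition: bool   # the track is classified as an inbound weapon
    defends_inhabited_location: bool    # the protected point (or vehicle) is occupied by people


def automated_fire_permitted(state: DefenseSystemState) -> bool:
    """Return True only if every proposed condition for automated fire holds."""
    return (
        state.activated_by_attack_warning
        and state.human_supervisor_on_loop
        and state.target_is_incoming_munition
        and state.defends_inhabited_location
    )


if __name__ == "__main__":
    ok = DefenseSystemState(True, True, True, True)
    not_ok = DefenseSystemState(True, True,
                                target_is_incoming_munition=False,
                                defends_inhabited_location=True)
    print(automated_fire_permitted(ok))      # True
    print(automated_fire_permitted(not_ok))  # False
```

Anything that fails even one of these conditions would fall outside the exception and back under the general prohibition.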

A similar exception was carved out from the Convention on Cluster Munitions in the case of the so-called sensor-fuzed weapon (SFW), or any weapon meeting the CCM’s criteria of having fewer than 10 submunitions, each of which weighs more than four kilograms, is “designed to detect and engage a single target object”, and is equipped with self-deactivating and self-destruct mechanisms. Here again we see the implications of proportionality and distinction as guiding principles for robot arms control. The rationale for excepting SFW-type weapons from the CCM was that such weapons, with their more sophisticated capabilities for target identification, discrimination, self-guidance, selective engagement, and self-deactivation, do not pose the same risk of harm to civilians posed by traditional cluster munitions with their many small and highly unreliable bomblets. Following this logic, further development of the SFW, to incorporate even more sophisticated discrimination and fire/no fire decision making capabilities, would be hard to object to. Yet it is not clear why the SFW would not meet a reasonable, broad definition of “autonomous weapons.”

It is certainly true that early AWS can have only limited capabilities for discrimination, and even less to judge proportionality. Exaggerated perceptions of their precision and selectivity may lead to excesses in their use, as may well be occurring with drones already. Yet this is not, I think, the central concern that is driving either the nascent campaign to ban AWS or the broader public’s unease with the rise of killer robots. If it were, it would suggest the need not for a ban but for regulation of these weapons and their use, and for a go-slow approach to their deployment – until the technology can be perfected, or is good enough to be acceptable in well-understood circumstances.

So then, what’s so bad about killer robots?

We were a family. How’d it break up and come apart, so that now we’re turned against each other? … This great evil. Where does it come from? How’d it steal into the world? What seed, what root did it grow from? Who’s doin’ this? Who’s killin’ us? Robbing us of life and light. Mockin’ us with the sight of what we might’ve known.

– Private Witt, in Terrence Malick’s screenplay for The Thin Red Line

The cases just considered may be seen as borderline cases for the prohibition of AWS. Borderline cases always exist; we might say that in the vicinity of every bright red line there is a broad grey zone. Those opposed to drawing a line are fond of citing ambiguous cases. It is true that where you place the line in relation to such cases may be somewhat arbitrary. What is important is to draw the line somewhere. If we stand back from the grey and fuzzy border zones to see the big picture, we can see clearly the difference between violent force in human conflict today, and some future in which decisions about the use of violent force are routinely made by machines.

In such a future, we risk the unleashing of conflict itself as an autonomous force, embodied in technology, and divorced from the body of humanity, within which it first arose.

What is conflict? One perspective on conflict is that it arises because each of us has a unique point of view. We also join in community, but humanity as a whole is spread across the globe. The human community is divided and in conflict because of differing points of view. This is not different from saying we have differing and conflicting interests, e.g. in controlling the same territory or limited resources. However, the willingness of human individuals to sacrifice themselves for families, platoon brothers, tribes, nations… or for a cause, shows that self-interest is only one factor in our point of view about what is good and just, and worthy of fighting for, and what is wrong and unjust, and worthy of anger and violence.

In another perspective, conflict is a process which arises within and between us, and which can consume us and escape our control. Because humans have the capacity for anger and violence, because violence easily becomes lethal, and because life and death transcend all else in importance, we easily become caught up in emotions that overpower reason. Community fractures, separating us from them, and we are unable to forgive the terrible things that they have done, unable to consider their claims to justice and agree on a compromise with our own. Across the fault lines of love and reason, we and they speak to each other in the language of violence, wear our masks and play our roles in the Greek tragedy of conflict and war.

Yet until now it has always been true that conflict has consisted solely of willful human deeds. When a weapon is fired, one person deliberately unleashes violent, potentially lethal force upon another. It may be irrational, but it is intentional, and essential. We say that weapons are fired “in anger,”  an animal passion that is rooted in mortality and the struggle for survival. I think this is what the warriors mean when they say that war is deeply human (and somehow, in spite of robot weapons, always will be).

Anger humanizes violence, and its apparent absence is part of what makes remote control killing so deeply disturbing. Yet even in the cool detachment of the drone operator’s padded chair, we find one human being accepting the responsibility for the act of killing another, because the human community is divided and the community to which the “cubicle warrior” is loyal has gravely decided that this killing is a necessary burden of evil. That burden is felt strongly by military veterans and professionals, who correspondingly also feel, surprisingly often, that there is something deeply wrong – and terrifying – about the idea of machines that would usurp from us, or to which we would surrender, the heaviest responsibility ever assumed by human beings: that of deciding when, and under what circumstances, we are justified in injuring or killing others.

If the community is democratic, if it is even truly human, the burden is felt. When the enemy hits back, the pain and loss are felt, too. There is always the possibility of saying “Enough.” As long as conflict remains human conflict, it ends when people finally, for whatever reasons, decide to stop fighting.

In making the process of killing fully autonomous, we risk machines no longer under human control pursuing conflict for its own sake, conflict that is no longer human conflict, no longer about right and wrong. We risk machines mercilessly extinguishing human lives according to programs developed to embody only military doctrines and goals, and the laws and logic of states. We risk the dulling or loss of our ability and responsibility to judge when the price or the risk is too great. Or to know when too much blood has been spilled, either because it is our own blood or because in spilling the blood of others we lost our claim of justice. We risk becoming either tyrants who rule through robot soldiers, or peasants who submit to a robotic regime, or perhaps both at once (already the drones are coming home to roost).

Do I mean to invoke here the specter of Skynet, the artificial intelligence that declared war on humanity in The Terminator, reputedly because it feared we’d turn it off? Or Colossus, the military supercomputer that took control of the world in a brutal coup in order to fulfill its mission of ensuring peace? The scorn that “serious” people direct against these tropes from science fiction betrays their own nervousness.  Artists have mined our apprehensions about the world we’re creating, and projected them before us in gaudy masks and cartoonish story lines that beg to be decoded. The military system is already a kind of machine, pursuing its own agenda, just as states, corporations and institutions of all kinds are. These machines are made of people, and their minds are the minds of people – increasingly augmented by information technology, from clay tablets to search engines. We like to think that this augmentation increases our effective intelligence, but as soon as words are written down, thinking rigidifies. Yet one essential fact remains: each human mind, dazzled and lost in the maze of knowledge, of law and the machinery of institutions, remains tethered intimately and existentially to a human heart. It is that tether which the military complex, enabled by technology, now threatens to break.

No claim is made here of the infallibility or even wisdom of human decision making in conflict. On the contrary, we are all familiar with history’s march of folly, hubris, aggression and anger, the tragic farce of bluster, miscalculation and misunderstanding, the tragedy of right pitted against right, outrage responding to outrage and leading to further outrage, escalation and the madness of war. In full view of this, we might wish for an all-wise and all-powerful super-AI to impose a global pax robotica. But apart from the question of how we would ensure the benevolence of an electronic emperor, there is simply no reason to think our present drift into robotic warfare will lead to peace.

The robotic weapons being created today are just that, weapons. They are fitted into increasingly automated, integrated, networked systems which gather and process “intelligence” to produce action orders following plans and doctrines issued from on high. The tactical officer increasingly consults a computer to learn the next objective, estimate weapons effects and perhaps assess the risk of killing civilians; after the action the officer reports to the system, which updates its model of the conflict.

As military systems are increasingly automated, and the human role is progressively atomized, mechanized, and displaced, these systems remain pitted against each other, embodying the same contradictions of reason and purpose. As machines take over from brains, they will sideline the hearts that whispered: Life is precious. Artificial intelligence will know everything about correlations of force, kill probabilities, stealth and counter-stealth, and perhaps also everything about the economic value of resources, the intricacies of laws and treaties, protocols and codes of conduct, the theory of games and the flow of information through networks. Yet it will understand nothing about the purpose of any of this, nothing about simply being human.

Probably the most certain reason why the AI warlords of the future won’t understand what we were trying to protect when we created them is that we won’t tell them. We’ll scrupulously avoid corrupting their military discipline with any hint that, in some cases, we might and should and probably would back down. That we might learn not only that we lack the physical might to impose our will on others, but that we were wrong to even try, or to want to. Would we even know how to tell our machines when and why they should stop fighting, propose or accept a cease-fire, or even withdraw in defeat, rather than lose more, and risk everything? Do we even understand how to tell this to ourselves?

On the contrary, violence and especially war always represent the failure of reason, the tearing and breakdown of community, and a direct negotiation with the primal chaos that civilization sought to expel. Great job for a robot!

If ceding control to an automated war machine seems far removed from military robots performing the dull, dirty and dangerous jobs of today, and even from the possible next step of automatic target selection and autonomous fire decision, it nevertheless represents the logical destination of a program to outsource human responsibility for making decisions about the use of violent force. I believe it epitomizes the concern that underlies the widespread, almost universal view that moving from today’s non-weaponized military robots and teleoperated lethal drones to fully autonomous weapons is a step that we should never take – or, if ever, only reluctantly, with great care, and only because it might be necessary or unavoidable. We feel instinctively that the killer robot is no longer a human tool, but has become an enemy of humanity; as depicted in Terminator, a gleaming metal skeleton whose eyes glow with the fire of Hell. A whiff of death lurks in every weapon, but the killer robot embodies death as an animated Other, Death that walks, Death that pursues each of us with a determination of its own.

The reason for banning autonomous weapons is to draw a bright red line across a broad grey zone that lies between us and an out-of-control future. Surely the place to draw this line is at the point where a machine is empowered to decide, by a mechanical process not controlled or even fully understood by any of us, the use of force against a human being. Because any initiation of violence commits us to what follows, we must generalize this to a ban on any autonomous initiation of violence, lethal or nonlethal, against humans or against property, including other machines. Finer details do matter, but what is essential is to draw the line.

If we allow ourselves to cross this line, we will find ourselves driven onward by the imperatives of an arms race. This is another type of conflict process which we struggle to control, and crossing the line already means losing that struggle. If we tell ourselves that we will limit lethal autonomy with reasonable demands for Distinction and Proportionality, we will find that the weapons demand to be freed, in order to confront, and if need be to fight, others of their own kind.

We will increasingly risk the ignition of violence by the unanticipated interactions of proliferating systems pitted in confrontation with one another. The ever-increasing complexity of such confrontations will far surpass the Cold War’s already dangerous and unmanageable correlations of forces. Ensuring stability would be a difficult engineering challenge, but the creators of these systems will be teams in different nations working against each other. The further we go, the more control we cede to machines, the harder it will be to turn back, the more timid we will be to even consider dismantling or tampering with the systems that guard us against the other systems.

We risk being seduced, also, by the promise of imperial war without risk to ourselves, punishing or subjugating others with impunity as long as they do not have the same technical means to use against us. If this could be successful, it would be monstrous enough, but history shows that people do find ways of striking back, and the technologies that would enable AWS are widely spread around the globe. History also shows that imperialism leads to the clash of empires. The norms that we set in our dealings with the weak are the norms we will have to live with when we deal with the strong. We can’t expect to enjoy the comforts of home as our robots fight an endless stream of “asymmetrical” conflicts, when that is exactly the sort of behavior which will conjure up a “peer competitor” to say “You do it, why can’t we?”—and draw us into serious confrontation.

The Principle of Humanity in Conflict

We say “No to killer robots” because they pose a threat to humanity. The threat is new and unanticipated in prior law or philosophy.  We cannot derive our opposition from the principles of just war theory or the codes of international humanitarian law.  We need to declare new principles.

Here is an attempt to formulate a set of principles which address the threat from autonomous weapons. I do not claim that this is the final formulation. I do not claim that the wording here is perfect. I do not claim to have distilled a mathematically minimal set of independent principles. Rather, these principles are interlocking, partially redundant, mutually reflecting, mutually referential, and mutually reinforcing. Together, they can be referred to as the Principle of Humanity in Conflict, unless you have a better name.

In the literature on IHL/LOAC, the principle that in the conduct of war we must not be needlessly cruel, inflict unnecessary suffering, or make use of weapons that are inherently inhumane, is sometimes called the Principle of Humanity. This set of principles can be seen as an expanded Principle of Humanity.

Human Control: Any use of violent force, whether lethal or sub-lethal, against the body of a human being, or to oppose the will of a human being, must be fully under human control. If violence is initiated, it must be the decision of a human being.

Hence it is unacceptable for machines to control, determine (e.g. by the narrowing of options), or decide upon the application of violent force. This applies whether the target is a human or another machine; we can’t just turn robots loose to fight other robots as a proxy for our conflicts.

Human Responsibility: We must hold ourselves responsible for the decision to use violent force, and cannot delegate that responsibility to machines. At least one person (a human person), and preferably exactly one (e.g. a commanding officer), must be accountable for each decision to use violent force against a particular person or object at a particular time and place.

Where data has been recorded pertaining to the circumstances of such decisions, it must be retained and made available for judicial review.

A human commander cannot accept responsibility, a priori, for the individual violent actions of autonomous weapons which will occur in circumstances which cannot be fully anticipated; this would be a mere pretense of responsibility, to hide the fact of irresponsibility. To authorize the use of autonomous weapons only within certain boundaries  of space and time, or within a certain conflict, would not create an exception to this principle.

Human Dignity: It is a human right not to be killed on the decision of machines, nor to be subjected to violent force or pain on the decision of machines, nor to be threatened with violent force, pain or death as a form of coercion on the decision of machines, nor to be ruled in the conduct of life through the agency of overseeing machines that may decide the use of force as coercion.

Human Sovereignty: Humanity is, and must remain, sovereign. Threats to human sovereignty, and security, include the process of conflict that arises between us, and over which we struggle to retain or regain control. Externalizing that process in an autonomous technology would make it ever more difficult to control, until finally we would have lost even the ability to exert control.

The Principle of Humanity in Conflict: Here we mean, most fundamentally, the principle that conflict is, and must remain, human. It is between us. When in conflict with one another, we must not lose sight of our humanity (as is the case when we are needlessly cruel or inhumane).

We must always try to resolve conflicts nonviolently. Violent force, when unavoidable, must be used only in full respect for the humanity of opponents and recognition of the gravity of the act of killing (for we are all mortal). Taking arms against an opponent always entails the possibility of killing, and of being killed (for none of us is omnipotent).

Use of violent force can be accepted only in conflict between human beings, and only then because, and only when, we have failed to resolve the conflict nonviolently. Allowing machines to assume control of the conduct of conflict, to use violence autonomously, pitilessly and in contempt of the humanity that we share with our opponents, would be inhumane, and inhuman.

 


[1] Violence might be initiated on the decision of a machine either as intended by its designers, or as the result of a bug or malfunction, or through the unforeseen interactions of machines in confrontation with each other.
