Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee

Posted on 13 May 2025 by Peter Asaro

Peter Asaro delivering ICRAC Statement on Ethics

UNGA Informals on LAWS
ICRAC Statement on Ethical Considerations
Delivered by Prof. Peter Asaro on May 13, 2025


Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, scholars, and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy, and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.

We appreciate that the organizers of this Informal Meeting included a session on Ethical Considerations. It has been many years since ethics was the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and they are precisely where we must ground new laws to prohibit and regulate AWS in the near future: in our shared humanity, and in principles which transcend human laws, particularly human dignity in a deep sense, as discussed by Prof. Chengeta, and ethical decisions, as discussed by the Representative of the Holy See.

Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons for the use of force, and its potential consequences, is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and they do not begin to address the requirements of the proportionate use of force.

Under the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously only in limited cases, specifically where the target is a military object by nature, such as military vehicles and installations. Even in these cases, automated targeting must be carefully regulated to ensure that humans can safely supervise those systems.

But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions: how much force is needed, how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.

These decisions are inherently about values: the value of a target to a military objective; the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and, above all, the value of human lives and the cost of taking those lives. They are also about duties: our duties to protect, and our duties to each other.

These values are not intrinsically numerical or quantitative in nature, and assigning them numerical values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and to which we assign meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility for establishing the values represented in such systems, including the value of human life and the necessary conditions of human flourishing.

These are moral values, understood only through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily.

ICRAC appreciates the work of the CCW GGE, and in particular this section of the latest draft of the Chair’s Rolling Text:

States should ensure context-appropriate human judgement and control in the use of LAWS, through the following measures … [which] … includes ensuring assessment of legal obligations and ethical considerations by a human, in particular, with regard to the effects of the selection and engagement functions.

The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping them. Human dignity requires that we consider a human as human; no machine can do this for us.

Similarly, in order to design anti-personnel AWS that autonomously target people, it would be necessary to create digital representations of people, or target profiles. The same moral logic applies here.

From a legal perspective, it could be argued that unmounted infantry are military objects by nature and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly and targeting a tank while accepting that the people inside it may be killed. People are not to be treated as objects, but always as moral subjects.

The aim of war, and the moral justification of killing in war, depend critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy; that is only a means to an end, namely the end of hostilities. To target a human directly is to make the destruction of a human a goal in itself, rather than a means to the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans, rather than the elimination of military threats, the goal of a machine, we stand to vastly undermine human dignity.

By designing systems to target people directly, we effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.

When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children: these atrocities undermine our collective dignity as human beings and offend our moral sensibility.

While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents and are not capable of moral judgements, only of thin and arbitrary approximations. We must not delegate our morality to machines; doing so threatens the very essence of our human dignity.

To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”

Campaign to Stop Killer Robots takes significant step forward at UN

Posted on 15 November 2013 by mbolton

ICRAC welcomes the historic decision taken by nations to begin international discussions on how to address the challenges posed by fully autonomous weapons. The agreement marks the beginning of a process that the campaign believes should lead to an international ban on these weapons to ensure there will always be meaningful human control over targeting decisions and the use of violent force.

At 16:48 on Friday, 15 November 2013, at the United Nations in Geneva, states parties to the Convention on Conventional Weapons adopted a report containing a decision to convene their first meeting on 13-16 May 2014 to discuss questions related to “lethal autonomous weapons systems,” also known as fully autonomous weapons or “killer robots.” These weapons are only at the beginning of their development, but technology is moving rapidly toward increasing autonomy.

“This is a very significant step forward for the International Committee for Robot Arms Control (ICRAC),” said Professor Noel Sharkey, Chairman of ICRAC. “We are now on the first rung of the international ladder to fulfill our goal of stopping these morally obnoxious weapons from ever being deployed.”

ICRAC was formed in 2009 to initiate international discussion on autonomous weapons systems. It is made up of experts in robotic technology, artificial intelligence, computer science, international security and arms control, ethics and international law. It is a co-founder of the Campaign to Stop Killer Robots.

The Campaign to Stop Killer Robots believes that robotic weapons systems should not be making life-and-death decisions on the battlefield; that would be inherently wrong, morally and ethically. The campaign also believes that fully autonomous weapons are likely to run afoul of international humanitarian law, and that serious technical, proliferation, societal, and other concerns make a preemptive ban necessary.

“Law follows technology. With robotic weapons, we have a rare opportunity to regulate a category of dangerous weapons before they are fully realized, and the CCW is our best opportunity for regulation,” said Dave Akerson, an ICRAC legal expert.

A total of 117 states are party to the Convention on Conventional Weapons, including nations known to be advanced in developing autonomous weapons systems: the United States, China, Israel, Russia, South Korea, and the United Kingdom. Adopted in 1980, this framework convention contains five protocols, including Protocol I, which prohibits weapons with non-detectable fragments; Protocol III, which prohibits the use of air-dropped incendiary weapons in populated areas; and Protocol IV, which preemptively banned blinding laser weapons.

“This is a momentous opportunity to get states on the record and behind a ban on fully autonomous offensive weapons,” said Heather Roff, an ICRAC philosopher. “If we can gain enough support, we might succeed in banning a technology before it actually harms innocent civilians.”

The agreement to begin work in the Convention on Conventional Weapons could lead to a future CCW Protocol VI prohibiting fully autonomous weapons.

ICRAC, with the Campaign to Stop Killer Robots, supports any action to urgently address fully autonomous weapons in any forum. The decision to begin work within the Convention on Conventional Weapons does not prevent work elsewhere, such as in the Human Rights Council.

Since the topic was first discussed at the Human Rights Council on 30 May 2013, a total of 44 nations have spoken publicly on fully autonomous weapons: Algeria, Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, China, Costa Rica, Cuba, Ecuador, Egypt, France, Germany, Ghana, Greece, Holy See, India, Indonesia, Iran, Ireland, Israel, Italy, Japan, Lithuania, Madagascar, Mexico, Morocco, Netherlands, New Zealand, Pakistan, Russia, Sierra Leone, Spain, South Africa, South Korea, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, and United States. All of the nations that have spoken have expressed interest in, and concern about, the challenges and dangers posed by fully autonomous weapons.

Together with the Campaign to Stop Killer Robots, ICRAC urges nations to prepare for extensive and intensive work next year, both within and outside the CCW context. We urge states to develop national policies and to respond to the UN Special Rapporteur on Extrajudicial Executions’ call for national moratoria on fully autonomous weapons. We urge states to come back one year from now and agree to a new mandate to begin negotiations. The new process must be underscored by a sense of urgency.

Peter Asaro, vice-chairman of ICRAC, said: “The actions of the CCW this week are a hopeful first step towards an international ban on autonomous weapons systems.”

Mathew Bolton delivered a statement on behalf of ICRAC at the UN CCW meeting yesterday. As a group of experts, we are prepared to assist any nation with expert discussions of autonomous weapons systems and to help develop clear definitions for the language to be used in a treaty to ban them. Video footage of the statement, ICRAC’s first-ever statement in an official diplomatic forum, is available here.

ICRAC recently coordinated the circulation of a “Scientists Call” to ban fully autonomous weapons systems, signed by more than 270 computer scientists, engineers, artificial intelligence experts, roboticists, and professionals from related disciplines in 37 countries, stating: “given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.”
