<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Peter Asaro &#8211; ICRAC</title>
	<atom:link href="https://www.icrac.net/tag/peter-asaro/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.icrac.net</link>
	<description>International Committee for Robot Arms Control</description>
	<lastBuildDate>Mon, 23 Jun 2025 12:49:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">128339352</site>	<item>
		<title>Statement on Ethical Considerations in Open Informal Meeting at UNGA 1st Committee</title>
		<link>https://www.icrac.net/statement-on-ethical-considerations-in-open-informal-meeting-at-unga-1st-committee/</link>
		
		<dc:creator><![CDATA[Peter Asaro]]></dc:creator>
		<pubDate>Tue, 13 May 2025 20:45:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Slider]]></category>
		<category><![CDATA[Statements]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<guid isPermaLink="false">https://www.icrac.net/?p=19944</guid>

					<description><![CDATA[UNGA Informals on LAWS ICRAC Statement on Ethical Considerations Delivered by Prof. Peter Asaro on May 13, 2025 Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='Peter Asaro' src='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/730c6c6178743fb0e7fdfc64686309f4701c6a1cfb57d66242717d43b57b746b?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://www.peterasaro.org/">Peter Asaro</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Dr. Peter Asaro is a philosopher of science, technology and media. His work examines the interfaces between social relations, human minds and bodies, artificial intelligence and robotics, and digital media.

His current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, from a perspective that combines media theory with science and technology studies. He has written widely-cited papers on lethal robotics from the perspective of just war theory and human rights. Dr. Asaro's research also examines agency and autonomy, liability and punishment, and privacy and surveillance as it applies to consumer robots, industrial automation, smart buildings, and autonomous vehicles. His research has been published in international peer reviewed journals and edited volumes, and he is currently writing a book that interrogates the intersections between military robotics, interface design practices, and social and ethical issues.

Dr. Asaro has held research positions at the Center for Cultural Analysis at Rutgers University, the HUMlab of Umeå University in Sweden, and the Austrian Academy of Sciences in Vienna. He has also developed technologies in the areas of virtual reality, data visualization and sonification, human-computer interaction, computer-supported cooperative work, artificial intelligence, machine learning, robot vision, and neuromorphic robotics at the National Center for Supercomputer Applications (NCSA), the Beckman Institute for Advanced Science and Technology, and Iguana Robotics, Inc., and was involved in the design of the natural language interface for the Wolfram|Alpha computational knowledge engine (winner of the 2010 SXSW Web Interactive Award for Technical Achievement), for Wolfram Research.

He is currently working on an Oral History of Robotics project that is funded by the IEEE Robotics and Automation Society and the National Endowment for the Humanities Office of Digital Humanities.

Dr. Asaro received his PhD in the History, Philosophy and Sociology of Science from the University of Illinois at Urbana-Champaign, where he also earned a Master of Arts from the Department of Philosophy, and a Master of Computer Science from the Department of Computer Science.</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="1024" height="800" src="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799-1024x800.jpg?resize=1024%2C800&#038;ssl=1" alt="Peter Asaro delivering ICRAC Statement on Ethics" class="wp-image-19940" srcset="https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=1024%2C800&amp;ssl=1 1024w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=300%2C234&amp;ssl=1 300w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?resize=768%2C600&amp;ssl=1 768w, https://i0.wp.com/www.icrac.net/wp-content/uploads/2025/06/IMG-20250513-WA00251-e1750619884799.jpg?w=1536&amp;ssl=1 1536w" sizes="(max-width: 1000px) 100vw, 1000px" /></figure>



<p><strong>UNGA Informals on LAWS <br>ICRAC Statement on Ethical Considerations <br>Delivered by Prof. Peter Asaro on May 13, 2025</strong> </p>



<p><br>Thank you, Chair. I speak on behalf of the International Committee for Robot Arms Control, or ICRAC, a group of academics, experts, scholars and researchers in computer science, artificial intelligence, robotics, international law, political science, philosophy and ethics. ICRAC is a co-founding member of the Stop Killer Robots Campaign.</p>



<p>We appreciate the organizers of this Informal Meeting including a Session on Ethical Considerations. It has been many years since ethics has been the primary focus of substantive discussion within the CCW GGE meetings. Yet ethics and morality have provided a valuable basis for international law in the past, and are precisely where we must ground new laws to prohibit and regulate AWS in the near future. That is, in our common shared humanity, and in principles which transcend human laws, particularly human dignity in a deep sense as discussed by Prof. Chengeta, and ethical decisions as discussed by the Representative of the Holy See.</p>



<p>Whenever violent force is used, there are risks involved. But merely managing those risks is not sufficient to meet the requirements for morally justifiable killing. Understanding the reasons and the potential consequences for the use of force is required for its justification. It has been argued that AWS may be highly accurate and precise in their use of force, but accuracy and precision are not sufficient to meet the requirements for the ethically discriminate use of force, and do not begin to address the requirements of the proportionate use of force.</p>



<p>Following the outlines of the two-tiered approach advanced by the ICRC, regulated AWS would be permitted to target autonomously. In these limited cases, more specifically cases where the target is a military object by nature, such as military vehicles and installations, automated targeting must still be carefully regulated to ensure that humans can safely supervise those systems.</p>



<p>But as soon as we start considering civilian objects, even those which might be used for military purposes and might be lawfully targeted under IHL, we must not permit their targeting by automated processes. The moral argument that leads to this conclusion is clear. It may be tempting to think that we can automate proportionality decisions–how much force is needed, or how much risk is acceptable, or how much collateral harm to civilians might be acceptable relative to a military objective. But the nature of proportionality judgments is fundamentally moral.</p>



<p>These decisions are inherently about values–the value of a target to a military objective, the value of a military objective to an operation and an overall strategy; the value of civilian infrastructure to a family, a community, a country; the value of a natural environment; and above all the value of human lives and the cost of taking those lives. They are also about duties, our duties to protect, our duties to each other.</p>



<p>These values are not intrinsically numerical or quantitative in nature, and assigning them such values in a computer program is arbitrary at best. Computers do not “understand” in any meaningful sense. They represent the world through mathematical abstractions that we design and understand, and from which we assign and seek meaning. Worse, training an algorithm to “learn” these values from a dataset is to abdicate any human responsibility in establishing the values represented in the systems, including the value of human life and the necessary conditions of human flourishing.</p>



<p>These are moral values, only understood through the lived experience of human life, moral reflection, and ethical development. In those limited cases where the decision to end a human life can be morally justified, it must be made by a moral agent who truly understands these values. Any life lost by the decision of an algorithm is, by definition, taken arbitrarily. ICRAC appreciates the work of the CCW GGE and this section of the latest draft of the Chair’s Rolling Text:</p>



<p><em>States should ensure context-appropriate human judgement and control in the use of<br>LAWS, through the following measures &#8230; [which] &#8230; includes ensuring assessment of legal<br>obligations and ethical considerations by a human, in particular, with regard to the effects<br>of the selection and engagement functions.</em></p>



<p>The ethical considerations of the use of force must remain a matter of human judgement. We must not eliminate ethical considerations altogether by delegating them to machines wholly incapable of grasping such considerations. Human dignity requires that we consider a human as human–no machine can do this for us.</p>



<p>The same moral logic applies to anti-personnel AWS: in order to design systems to autonomously target people, it would be necessary to create digital representations of people, or target profiles.</p>



<p>From a legal perspective, it could be argued that unmounted infantry are military objects by nature, and can pose a threat just as a tank does. But there is an important moral difference between targeting people directly, versus targeting a tank and accepting that people inside it may be killed. People are not to be treated as objects, but always as moral subjects.</p>



<p>The aim of war, and the moral justification of killing in war, depends critically on using force to diminish the ability of your adversary to use force against you. The ultimate aim is not to harm or kill the enemy directly; this is only a means to an end, namely the end of hostilities. Targeting a human directly is to make the destruction of a human a goal in itself, rather than the true goal of eliminating the threat they pose. This might sound like a minor distinction, but by making the targeting and killing of humans the goal of a machine, rather than the elimination of military threats, we stand to vastly undermine human dignity.</p>



<p>By designing systems to target people directly, we essentially and effectively “pre-authorize” the moral judgement to take their lives. By pre-authorizing the killing of humans, and making personnel the targets of autonomous weapons, we would fundamentally violate and diminish human dignity. If we accept that a soldier on the battlefield can be directly targeted, without a human moral judgement or moral justification, then we make it more acceptable to do so in other contexts as well.</p>



<p>When we violate human dignity, it is not just the immediate victim who loses their dignity. All of humanity suffers from this loss. This is why we feel such moral disgust at the injustices of slavery, and torture, and the dropping of bombs on children–these atrocities undermine our collective dignity as human beings and offend our moral sensibility.</p>



<p>While the use of violent force against unjust aggression is sometimes necessary, it is our moral responsibility to ensure that force is used justly. The only way to ensure that force is used justly is through moral judgement, and this requires a moral agent. Machines and automated algorithms, however sophisticated they may appear, are not moral agents, and are not capable of moral judgements–only thin and arbitrary approximations. We must not delegate our morality to machines, as doing so threatens the very essence of our human dignity.</p>



<p>To quote the wise words of Christof Heyns, “War without reflection is mechanical slaughter.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">19944</post-id>	</item>
		<item>
		<title>Campaign to Stop Killer Robots takes significant step forward at UN</title>
		<link>https://www.icrac.net/campaign-to-stop-killer-robots-takes-significant-step-forward-at-un/</link>
		
		<dc:creator><![CDATA[mbolton]]></dc:creator>
		<pubDate>Fri, 15 Nov 2013 03:23:27 +0000</pubDate>
				<category><![CDATA[ICRAC in the media]]></category>
		<category><![CDATA[ICRAC News]]></category>
		<category><![CDATA[Campaign to Stop Killer Robots]]></category>
		<category><![CDATA[CCW]]></category>
		<category><![CDATA[Convention on Conventional Weapons]]></category>
		<category><![CDATA[Dave Akerson]]></category>
		<category><![CDATA[Geneva]]></category>
		<category><![CDATA[Heather Roff]]></category>
		<category><![CDATA[ICRAC]]></category>
		<category><![CDATA[International Committee for Robot Arms Control]]></category>
		<category><![CDATA[Killer Robots]]></category>
		<category><![CDATA[Matthew Bolton]]></category>
		<category><![CDATA[Noel Sharkey]]></category>
		<category><![CDATA[Peter Asaro]]></category>
		<category><![CDATA[UN]]></category>
		<category><![CDATA[United Nations]]></category>
		<guid isPermaLink="false">http://www.icrac.net.php53-3.dfw1-2.websitetestlink.com/?p=2433</guid>

					<description><![CDATA[ICRAC welcomes the historic decision taken by nations to begin international discussions on how to address the challenges posed by fully autonomous weapons. The agreement marks the beginning of a process that the campaign believes should lead to an international ban on these weapons to ensure there will always be meaningful human control over targeting decisions [&#8230;]<h3>Author information</h3><div class="ts-fab-wrapper" style="overflow:hidden"><div class="ts-fab-photo" style="float:left;width:64px"><img alt='mbolton' src='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=64&#038;d=retro&#038;r=g' srcset='https://secure.gravatar.com/avatar/a830bf59e0364ba33f24fb19a6c29ea5bb8c95259e3ffa70dcbad0d35df1b295?s=128&#038;d=retro&#038;r=g 2x' class='avatar avatar-64 photo' height='64' width='64' loading='lazy' decoding='async'/></div><!-- /.ts-fab-photo --><div class="ts-fab-text" style="margin-left:74px"><div class="ts-fab-header"><div style="font-size: 1.25em;margin-bottom:0"><strong><a href="http://matthewbreaybolton.com">mbolton</a></strong></div></div><!-- /.ts-fab-header --><div class="ts-fab-content" style="margin-bottom:0.5em">Matthew Bolton is professor of political science at Pace University in New York City. He is an expert on global peace and security policy, focusing on multilateral disarmament and arms control policymaking processes. He has a PhD in Government and Master's in Development Studies from the London School of Economics and a Master's from SUNY Environmental Science and Forestry. Since 2014, Bolton has worked on the UN and New York City advocacy of the International Campaign to Abolish Nuclear Weapons (ICAN), recipient of the 2017 Nobel Peace Prize. Bolton has published six books, including Political Minefields (I.B. 
Tauris) and Imagining Disarmament, Enchanting International Relations (Palgrave Pivot).</div><div class="ts-fab-footer"></div><!-- /.ts-fab-footer --></div><!-- /.ts-fab-text --></div><!-- /.ts-fab-wrapper -->]]></description>
										<content:encoded><![CDATA[<p><iframe loading="lazy" src="https://player.vimeo.com/video/79472645" width="500" height="281" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p style="text-align: left;" align="center">ICRAC welcomes the historic decision taken by nations to begin international discussions on how to address the challenges posed by fully autonomous weapons. The agreement marks the beginning of a process that the campaign believes should lead to an international ban on these weapons to ensure there will always be meaningful human control over targeting decisions and the use of violent force.</p>
<p>At 16:48 on Friday, 15 November 2013, at the United Nations in Geneva, states parties to the Convention on Conventional Weapons adopted a report containing a decision to convene on May 13-16, 2014 for their first meeting to discuss questions related to “lethal autonomous weapons systems,” also known as fully autonomous weapons or “killer robots.” These weapons are at the beginning of their development, but technology is moving rapidly toward increasing autonomy.</p>
<p>“This is a very significant step forward for the International Committee for Robot Arms Control (ICRAC),” said Professor Noel Sharkey, Chairman of ICRAC. “We are now on the first rung of the international ladder to fulfill our goal of stopping these morally obnoxious weapons from ever being deployed.”</p>
<p>ICRAC was formed in 2009 to initiate international discussion on autonomous weapons systems. It is made up of experts in robotic technology, artificial intelligence, computer science, international security and arms control, ethics and international law. It is a co-founder of the Campaign to Stop Killer Robots.</p>
<p>The Campaign to Stop Killer Robots believes that robotic weapons systems should not be making life and death decisions on the battlefield. That would be inherently wrong, morally and ethically. Fully autonomous weapons are likely to run afoul of international humanitarian law, and there are serious technical, proliferation, societal, and other concerns that make a preemptive ban necessary.</p>
<p>“Law follows technology. With robotic weapons, we have a rare opportunity to regulate a category of dangerous weapons before they are fully realized, and the CCW is our best opportunity for regulation,” said Dave Akerson, an ICRAC legal expert.</p>
<p>A total of 117 states are party to the Convention on Conventional Weapons, including nations known to be advanced in developing autonomous weapons systems: the United States, China, Israel, Russia, South Korea, and the United Kingdom. Adopted in 1980, this framework convention contains five protocols, including Protocol I prohibiting non-detectable fragments, Protocol III prohibiting the use of air-dropped incendiary weapons in populated areas, and Protocol IV, which preemptively banned blinding lasers.</p>
<p>“This is a momentous opportunity to get states on the record and behind a ban on fully autonomous offensive weapons,” said Heather Roff, an ICRAC philosopher. “If we can gain enough support, we might succeed in banning a technology before it actually harms innocent civilians.”</p>
<p>The agreement to begin work in the Convention on Conventional Weapons could lead to a future CCW Protocol VI prohibiting fully autonomous weapons.</p>
<p>ICRAC, with the Campaign to Stop Killer Robots, supports any action to urgently address fully autonomous weapons in any forum. The decision to begin work in the Convention on Conventional Weapons does not prevent work elsewhere, such as the Human Rights Council.</p>
<p>Since the topic was first discussed at the Human Rights Council on 30 May 2013, a total of 44 nations have spoken publicly on fully autonomous weapons: Algeria, Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, China, Costa Rica, Cuba, Ecuador, Egypt, France, Germany, Ghana, Greece, Holy See, India, Indonesia, Iran, Ireland, Israel, Italy, Japan, Lithuania, Madagascar, Mexico, Morocco, Netherlands, New Zealand, Pakistan, Russia, Sierra Leone, Spain, South Africa, South Korea, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, and United States. All nations that have spoken out have expressed interest and concern at the challenges and dangers posed by fully autonomous weapons.</p>
<p>Together with the Campaign to Stop Killer Robots, ICRAC urges nations to prepare for extensive and intensive work next year, both within the CCW and outside the CCW context. We urge states to develop national policies, and to respond to the UN Special Rapporteur on Extrajudicial Executions’ call for national moratoria on fully autonomous weapons. We urge states to come back one year from now and agree to a new mandate to begin negotiations. The new process must be underscored by a sense of urgency.</p>
<p>Peter Asaro, vice-chairman of ICRAC, said: “The actions of the CCW this week are a hopeful first step towards an international ban on autonomous weapons systems.”</p>
<p>Matthew Bolton delivered a <a href="http://icrac.net/2013/11/icrac-delivers-statement-to-states-parties-to-the-convention-on-conventional-weapons-at-the-un-in-geneva-in/">statement</a> on behalf of ICRAC at the UN CCW meeting yesterday. As a group of experts, we are prepared to help any nations with expert discussions of autonomous weapons systems and to help develop clear definitions for the language to be used in a treaty to ban them. Video footage of the statement, ICRAC’s first ever statement in an official diplomatic forum, is <a href="http://vimeo.com/79472645" target="_blank">available here</a>.</p>
<p>ICRAC recently coordinated the circulation of a “<a href="http://icrac.net/2013/10/computing-experts-from-37-countries-call-for-ban-on-killer-robots/">Scientists Call</a>” to ban fully autonomous weapons systems, signed by more than 270 Computer Scientists, Engineers, Artificial Intelligence experts, Roboticists and professionals from related disciplines in 37 countries, saying: “given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.”</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2433</post-id>	</item>
	</channel>
</rss>
