Nobel Peace Laureates Call for Preemptive Ban on Autonomous Weapons

Statement by Nobel Peace Laureates

In April 2013 in London, a group of nongovernmental organizations – most associated with the successful efforts to ban landmines and cluster munitions – publicly launched the “Campaign to Stop Killer Robots.” Their efforts have helped bring the issue of fully autonomous weapons to a broader audience and spur governments to begin discussions on these weapons this May in Geneva.

We, the undersigned Nobel Peace Prize Laureates, applaud this new global effort and whole-heartedly embrace its goal of a preemptive ban on fully autonomous weapons that would be able to select and attack targets on their own. It is unconscionable that human beings are expanding research and development of lethal machines that would be able to kill people without human intervention.

Not all that long ago such weapons were considered the subject of science fiction, Hollywood and video games. But some machines are already taking the place of soldiers on the battlefield. Some experts in the field predict that fully autonomous weapons could be developed within 20 to 30 years; others contend it could be even sooner. With the rapid development of drones and the expansion of their use in the wars in Afghanistan and Iraq – and beyond – billions of dollars are already being spent to research new systems for the air, land, and sea that one day would make drones seem as quaint as the Model T Ford does today.

Too many applaud the so-called success of drone warfare and extol the virtues of the weapons. While these unmanned aircraft can fly thousands of miles from home base on their own, they still require individuals watching computer screens to fire their weapons and attack a target. Already over 70 countries have drones, and many are looking to develop methods to make them ever more autonomous and create new lethal robots that will, in fact, kill human beings on their own.

Those who favor the development of autonomous lethal robots make many arguments on their behalf. They note that such machines do not put soldiers’ lives at risk nor do they tire or become frightened. Emotion would not cloud their decision-making. They also say that ultimately lethal autonomous robots will be cheaper than manned systems and laud that feature in these times of cutting government budgets.

But not everyone accepts these arguments. In its aptly entitled report, “Losing Humanity: The Case Against Killer Robots,” Human Rights Watch outlined legal and other arguments against the development of such weapons. The report says that such robots would face serious challenges meeting tests of military necessity, proportionality and distinction, which are fundamental to the laws of war. Lethal autonomous weapons would also threaten essential non-legal safeguards for civilians. They would not be constrained by the capacity for compassion, which can provide a key check on the killing of civilians. These arguments were also brought to the fore in the report of the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, presented to the UN Human Rights Council in May 2013.

Of course a key argument for robotic weapons is that using them could reduce military casualties. On the flip side, many fear that leaving the killing to machines might make going to war easier and shift the burden of armed conflict onto civilians. The use of fully autonomous weapons raises serious questions of accountability. Who should be held responsible for any unlawful actions they commit? The military commander? The company that makes the robot? The company that produces the software? The obstacles to holding anyone accountable are huge and would significantly weaken the power of the law to deter future violations.

While there has been some heated debate about the dangers and possible virtues of such weapons, until now it had occurred almost exclusively among scientists, ethicists, lawyers and the military. Even as killer robots loom over our future, there had been virtually no public discussion about the ethics and morality of fully autonomous weapons, let alone the implications and impact of their potential use.

But the work of the campaign is changing that, and even in the lead-up to the April 23rd launch of the Campaign to Stop Killer Robots, interest and public awareness had begun to grow. The press has increasingly begun to report on killer robots, with both the New York Times and the Wall Street Journal running opinion pieces outlining the moral and legal perils of creating killer robots and calling for public discourse before it is too late.

Lethal robots would completely and forever change the face of war and likely spawn a new arms race. Can humanity afford to follow such a path? We applaud and support the efforts of civil society’s Campaign to Stop Killer Robots to help move us away from a possible future of robotic warfare.



Mairead Maguire (1976)

Adolfo Pérez Esquivel (1980)

President Lech Walesa (1983)

Archbishop Desmond Tutu (1984)

President Oscar Arias Sánchez (1987)

Rigoberta Menchú Tum (1992)

President F.W. de Klerk (1993)

President José Ramos-Horta (1996)

Jody Williams (1997)

John Hume (1998)

Shirin Ebadi (2003)

Muhammad Yunus (2006)

Leymah Gbowee (2011)

Tawakkol Karman (2011)



American Friends Service Committee (The Quakers) (1947) – Shan Cretin, General Secretary

Amnesty International (1978) – Salil Shetty, Secretary-General

International Campaign to Ban Landmines (1997) – Sylvie Brigot-Vilain, Executive Director

International Peace Bureau (1910) – Colin Archer, Secretary-General

International Physicians for the Prevention of Nuclear War (1985) – Michael Christ, Executive Director

Pugwash Conferences on Science and World Affairs (1995) – Jayantha Dhanapala, President
