News

Autonomous weapons systems that require no meaningful human control should be prohibited
by Campaign to Stop Killer Robots, agencies
2:16pm 28th Aug, 2018
 
August 21, 2018
  
Basic humanity and the public conscience support a ban on fully autonomous weapons, Human Rights Watch said in a report released today. Countries participating in an upcoming international meeting on such “killer robots” should agree to negotiate a prohibition on the weapons systems’ development, production, and use.
  
The 46-page report, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, finds that fully autonomous weapons would violate what is known as the Martens Clause. This long-standing provision of international humanitarian law requires emerging technologies to be judged by the “principles of humanity” and the “dictates of public conscience” when they are not already covered by other treaty provisions.
  
“Permitting the development and use of killer robots would undermine established moral and legal standards,” said Bonnie Docherty, senior arms researcher at Human Rights Watch, which coordinates the Campaign to Stop Killer Robots. “Countries should work together to preemptively ban these weapons systems before they proliferate around the world.”
  
The 1995 preemptive ban on blinding lasers, which was motivated in large part by concerns under the Martens Clause, provides precedent for prohibiting fully autonomous weapons as they come closer to becoming reality.
  
The report was co-published with the Harvard Law School International Human Rights Clinic, for which Docherty is associate director of armed conflict and civilian protection.
  
More than 70 governments will convene at the United Nations in Geneva from August 27 to 31, 2018, for their sixth meeting since 2014 on the challenges raised by fully autonomous weapons, also called lethal autonomous weapons systems. The talks under the Convention on Conventional Weapons, a major disarmament treaty, were formalized in 2017, but they are not yet directed toward a specific goal.
  
Human Rights Watch and the Campaign to Stop Killer Robots urge states party to the convention to agree to begin negotiations in 2019 for a new treaty that would require meaningful human control over weapons systems and the use of force. Fully autonomous weapons would select and engage targets without meaningful human control.
  
To date, 26 countries have explicitly supported a prohibition on fully autonomous weapons. Thousands of scientists and artificial intelligence experts, more than 20 Nobel Peace Laureates, and more than 160 religious leaders and organizations of various denominations have also demanded a ban. In June, Google released a set of ethical principles that includes a pledge not to develop artificial intelligence for use in weapons.
  
At the Convention on Conventional Weapons meetings, almost all countries have called for retaining some form of human control over the use of force. The emerging consensus for preserving meaningful human control, which is effectively equivalent to a ban on weapons that lack such control, reflects the widespread opposition to fully autonomous weapons.
  
Human Rights Watch and the Harvard clinic assessed fully autonomous weapons under the core elements of the Martens Clause. The clause, which appears in the Geneva Conventions and is referenced by several disarmament treaties, is triggered by the absence of specific international treaty provisions on a topic. It sets a moral baseline for judging emerging weapons.
  
The groups found that fully autonomous weapons would undermine the principles of humanity, because they would be unable to apply either compassion or nuanced legal and ethical judgment to decisions to use lethal force. Without these human qualities, the weapons would face significant obstacles in ensuring the humane treatment of others and showing respect for human life and dignity.
  
Fully autonomous weapons would also run contrary to the dictates of public conscience. Governments, experts, and the broader public have widely condemned the loss of human control over the use of force.
  
Partial measures, such as regulations or political declarations short of a legally binding prohibition, would fail to eliminate the many dangers posed by fully autonomous weapons. In addition to violating the Martens Clause, the weapons raise other legal, accountability, security, and technological concerns.
  
In previous publications, Human Rights Watch and the Harvard clinic have elaborated on the challenges that fully autonomous weapons would present for compliance with international humanitarian law and international human rights law, analyzed the gap in accountability for the unlawful harm caused by such weapons, and responded to critics of a preemptive ban.
  
The 26 countries that have called for the ban are: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China (use only), Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, the Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, the State of Palestine, Uganda, Venezuela, and Zimbabwe.
  
The Campaign to Stop Killer Robots, which began in 2013, is a coalition of 75 nongovernmental organizations in 32 countries that is working to preemptively ban the development, production, and use of fully autonomous weapons. Docherty will present the report at a Campaign to Stop Killer Robots briefing for CCW delegates scheduled for August 28 at the United Nations in Geneva.
  
“The groundswell of opposition among scientists, faith leaders, tech companies, nongovernmental groups, and ordinary citizens shows that the public understands that killer robots cross a moral threshold,” Docherty said. “Their concerns, shared by many governments, deserve an immediate response.”
  
http://www.hrw.org/news/2018/08/21/killer-robots-fail-key-moral-legal-test
  
Aug. 2018 (Campaign to Stop Killer Robots)
  
The Secretary-General of the United Nations, António Guterres, is offering to support states to elaborate new measures such as “legally binding arrangements” to ensure that “humans remain at all times in control over the use of force.” The offer is contained in the UN head’s 74-page disarmament agenda launched in Geneva on 24 May 2018, which describes how “roboticists, technology entrepreneurs, humanitarian actors, civil society and many Governments have raised alarms over the implications posed by the development of lethal autonomous weapon systems.”
  
In the agenda, the Secretary-General notes that “a growing number of States, including some with advanced military capabilities, have called for a preventative prohibition on lethal autonomous weapon systems” and finds that “all sides appear to be in agreement that, at a minimum, human oversight over the use of force is necessary.”
  
Some previous UN secretaries-general have actively encouraged efforts to create new international law to address humanitarian and other concerns raised by certain weapons. For example, Boutros Boutros-Ghali was an early endorser of the call to ban antipersonnel landmines, while Kofi Annan continued that support by participating in both the negotiations and signing ceremony of the 1997 Mine Ban Treaty.
  
Previously, in September 2017, UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology adopted a consultative report on “Robotics Ethics” that “strongly” recommends that, “for legal, ethical and military-operational reasons, human control over weapon systems and the use of force must be retained.”
  
The Secretary-General’s proposal is yet another example of the growing support for states to move from talk to action to prevent the development of weapons systems that, once activated, would select and attack targets without human intervention.
  
Currently, 26 states are calling for a new treaty to ban fully autonomous weapons. More than 100 countries now support moving to establish new international law aimed at retaining some form of human control over weapons systems and the use of force. For the Campaign to Stop Killer Robots, if the human control is truly meaningful, both of these paths achieve the same outcome of preventing a future of unchecked robotic warfare.
  
On 27-31 August 2018, more than 70 countries will convene at the Convention on Conventional Weapons (CCW) at the UN in Geneva for their sixth meeting on lethal autonomous weapons systems since 2014.
  
At the previous CCW meeting in April 2018, there was significant convergence on the need to retain human control over weapons systems and start negotiating new international law to achieve this. Most of the states participating proposed that the CCW begin negotiations on a legally-binding instrument (i.e. protocol or treaty) to address multiple, serious concerns raised by lethal autonomous weapons systems. Austria, China, Colombia, and Djibouti for the first time called for a ban on fully autonomous weapons, although China qualified its support to banning use only and not development or production.
  
France, Israel, Russia, the United Kingdom, and the United States explicitly rejected moving to negotiate new international law on fully autonomous weapons. The CCW operates by consensus, so any single state can oppose and potentially block a proposal to start negotiations.
  
The Campaign to Stop Killer Robots encourages all states to:
  
Work towards a legally binding instrument that prohibits fully autonomous weapons: States should engage substantively in the CCW GGE meeting in August 2018 and recommend moving the talks to a negotiating mandate aimed at creating new international law. At the CCW meeting of States Parties in November 2018, states should agree to begin negotiations on a legally-binding instrument.
  
Adopt national policy and laws to prevent the development of fully autonomous weapons. Work in coordination with civil society and other national stakeholders to support the negotiation of a new treaty.
  
What are killer robots?
  
Armed drones and other autonomous weapons systems with decreasing levels of human control are currently in use and development by high-tech militaries including the US, China, Israel, South Korea, Russia, and the UK. The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control. If the trend towards autonomy continues, humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role, or simply setting broad mission parameters.
  
Last November, the Stockholm International Peace Research Institute (SIPRI) released its first report on the development of autonomy in weapons systems and identified at least 381 autonomous systems developed for defense purposes, including 175 in weapon systems, most of them remote-controlled drones. Earlier this summer in Paris at the international Eurosatory arms fair, defense contractors from around the world displayed an array of hi-tech weapons systems incorporating artificial intelligence and autonomous features, from remote-controlled tanks to miniature drones to loitering munitions.
  
While the serious challenges raised by fully autonomous weapons have gained widespread attention over the past five years, progress by states to address these concerns has been slow. States have identified and explored key legal, operational, moral, technical, and other concerns raised by allowing machines to select and attack targets without further human intervention. There is now widespread agreement on the need to retain some form of human control over future weapons systems and the use of force.
  
http://www.stopkillerrobots.org/2018/08/unsg/
  
Apr. 2018
  
Artificial intelligence researchers from nearly 30 countries are boycotting a South Korean university over concerns a new lab in partnership with a leading defence company could lead to “killer robots”.
  
More than 50 leading academics signed the letter calling for a boycott of Korea Advanced Institute of Science and Technology (KAIST) and its partner, defence manufacturer Hanwha Systems. The researchers said they would not collaborate with the university or host visitors from KAIST over fears it sought to “accelerate the arms race to develop” autonomous weapons.
  
“There are plenty of great things you can do with AI that will save lives, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern,” said Toby Walsh, the organiser of the boycott and a professor at the University of New South Wales. “This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”
  
The boycott comes ahead of a United Nations meeting in Geneva next week on autonomous weapons, and more than 20 countries have already called for a total ban on killer robots. The use of AI in militaries around the world has sparked fears of a Terminator-like situation and questions have been raised about the accuracy of such weapons and their ability to distinguish friend from foe.
  
Hanwha is one of South Korea’s largest weapons manufacturers, and it makes cluster munitions, which are banned in 120 countries under an international treaty. South Korea, along with the US, Russia, and China, is not a signatory to the convention.
  
Walsh was concerned when a Korea Times article described KAIST as “joining the global competition to develop autonomous arms” and promptly wrote to the university asking questions but did not receive a response.
  
Subsequently, KAIST’s president, Sung-Chul Shin, said in a statement that he “would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots.”
  
“As an academic institution, we value human rights and ethical standards to a very high degree. I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.”
  
Toby Walsh said the response “sets a very clear precedent for the future. I hope now that KAIST will lobby the Republic of Korea to call for meaningful human control at next week’s UN meeting on the topic of killer robots. Such small steps we hope will eventually lead to an outright ban.”
  
http://bit.ly/2GC1A9l http://bit.ly/2yV0Nwv http://www.hrw.org/news/2018/09/11/were-running-out-time-stop-killer-robot-weapons http://futureoflife.org/lethal-autonomous-weapons-pledge/
  
Apr. 2018
  
April 2018 marks five years since the launch of Campaign to Stop Killer Robots. It is also the fifth time since 2014 that governments are convening at the Convention on Conventional Weapons (CCW) in Geneva to discuss concerns over lethal autonomous weapons systems, also known as fully autonomous weapons or “killer robots.”
  
The campaign urges states to participate in the CCW Group of Governmental Experts meeting, which opens at the United Nations (UN) on Monday, 9 April, and to commit to retain meaningful human control of weapons systems and over individual attacks.
  
Why the concern about killer robots?
  
Armed drones and other autonomous weapons systems with decreasing levels of human control are currently in use and development by high-tech militaries including the US, China, Israel, South Korea, Russia, and the UK. The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control.
  
If the trend towards autonomy continues, humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role, or simply setting broad mission parameters.
  
Several states, the Campaign to Stop Killer Robots, artificial intelligence experts, faith leaders, and Nobel Peace Laureates, among others, fundamentally object to permitting machines to determine who or what to target on the battlefield or in policing, border control, and other circumstances. Such a far-reaching development raises an array of profound ethical, human rights, legal, operational, proliferation, technical, and other concerns.
  
While the capabilities of future technology are uncertain, there are strong reasons to believe that devolving more decision making over targeting to weapons systems themselves will erode the fundamental obligation that rules of international humanitarian law (IHL) and international human rights law be applied by people, and with sufficient specificity to make them meaningful.
  
Furthermore, with an erosion of human responsibility to apply legal rules at an appropriate level of detail there will likely come an erosion of human accountability for the specific outcomes of such attacks. Taken together, such developments would produce a stark dehumanization of military or policing processes.
  
What is a “killer robot”?
  
A weapons system that identifies, selects and employs force against targets without meaningful human control should be considered a lethal autonomous weapons system. It would have no human in the decision-making loop when the system selects and engages the target of an attack. Applying human control only as a function of design and in an initial deployment stage would fail to fulfill the IHL obligations that apply to commanders in relation to each “attack.”
  
Why the need for “human control”?
  
Sufficient human control over the use of weapons, and of their effects, is essential to ensuring that the use of a weapon is morally justifiable and can be legal. Such control is also required as a basis for accountability over the consequences of the use of force. To demonstrate that such control can be exercised, states must show that they understand the process by which specific systems identify individual target objects and understand the context, in space and time, where the application of force may take place.
  
Given the development of greater autonomy in weapon systems, states should make it explicit that meaningful human control is required over individual attacks and that weapon systems that operate without meaningful human control should be prohibited. For human control to be meaningful, the technology must be predictable, the user must have relevant information, and there must be the potential for timely human judgement and intervention.
  
States should come to the CCW meeting prepared to provide their views on the key “touchpoints” of human/machine interaction in weapons systems. These include design aspects, such as how certain features may be encoded as target objects; how the area or boundary of operation may be fixed; the time period over which a system may operate; and any possibility of human intervention to terminate the operation and recall the weapon system.
  
Based on these touchpoints, states should be prepared to explain how control is applied over existing weapons systems, especially those with certain autonomous or automatic functions.
  
What does the Human Rights Council say about killer robots?
  
The first multilateral debate on killer robots took place at the Human Rights Council in May 2013, but states have not considered this topic at the Council since then. Countries such as Austria, Brazil, Ireland, Sierra Leone, and South Africa affirm the relevance of human rights considerations and the Council in the international debate over fully autonomous weapons.
  
In February 2016, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the Special Rapporteur on the rights to freedom of peaceful assembly and of association issued a report recommending that “autonomous weapons systems that require no meaningful human control should be prohibited.”
  
The UN Special Rapporteur on extrajudicial, summary or arbitrary executions last addressed a CCW meeting on lethal autonomous weapons in April 2016. Human rights are no longer considered relevant in the CCW talks, which raises the question of how to address human rights concerns with these weapons, particularly their use in law enforcement, border control and other circumstances outside of armed conflict.
  
http://www.stopkillerrobots.org/2018/03/fiveyears/
  
Aug. 2017
  
UN Secretary-General’s High Representative for Disarmament Affairs, Izumi Nakamitsu: “There are currently no multilateral standards or regulations covering military Artificial Intelligence (AI) applications.”
  
The United Nations says it is “closely following developments related to the prospect of weapons systems that can autonomously select and engage targets, with concern that technological developments may outpace normative deliberations.” It expresses hope that UN member states “make meaningful progress toward a shared understanding on how to ensure the core values of the international community are safeguarded in this context.”
  
That’s according to a 22 May 2017 letter sent to the Campaign to Stop Killer Robots by the new Under Secretary-General High Representative for Disarmament Affairs, Izumi Nakamitsu, on behalf of the new UN Secretary-General, António Guterres. Guterres began his term on 1 January 2017, while Nakamitsu became the new UN disarmament chief on 1 May after working for the UN in humanitarian assistance and peacekeeping.
  
Nakamitsu first elaborated the UN’s “fundamental concerns” over killer robots in an address to the high-level “Artificial Intelligence for Good” summit convened by the International Telecommunication Union (ITU) in Geneva on 7 June. The six-page statement finds that fully autonomous weapon systems raise serious questions, including over their potential impact on international peace and security, the implications for global norms and mechanisms governing warfare, likely proliferation, and the possibility that they will be “sought after by unscrupulous actors with malicious intent.”
  
Under the heading of “What can we do?” the UN disarmament chief finds “there are currently no multilateral standards or regulations covering military AI applications.” She expresses the UN’s support for the process to discuss lethal autonomous weapons at the Convention on Conventional Weapons (CCW) in Geneva.
  
Nakamitsu says that states should decide “what they consider to be the acceptable degree of human control over the lethal functions of a weapon system, and whether a specific international treaty or instrument is required to ensure that control is maintained.”
  
However, the CCW process on killer robots is faltering and will not convene until November 2017 at the earliest, more than a year after the last substantive talks on the topic. A crucial week of formal discussions on killer robots that was due to take place in Geneva in April 2017 and then rescheduled to August has been cancelled because several states, most notably Brazil, failed to pay their dues for the convention’s meetings.
  
The Campaign to Stop Killer Robots strongly regrets this development and is working with Brazil and others to help resolve it so that the CCW process can continue. At this time, the campaign is intensifying its outreach in national capitals to check on the status of policy development and encourage legislative initiatives to ban fully autonomous weapons.
  
The campaign aims to engage at the regional level to build awareness and support for a collective response and it continues to explore other avenues that could lead states to adopt a new international instrument to retain meaningful human control over the critical functions of weapons systems.
  
Both the UN’s letter and statement call for “inclusive and comprehensive dialogue” on the concerns posed by lethal autonomous weapons systems. Nakamitsu recommends a “multi-sectoral and multi-stakeholder exchange” between governments and “civil society activists, the scientific community and the private sector.”
  
She views “human dignity and human security” as “essential” elements or principles to guide discussion, including the development of “human-centred norms.”
  
On 29 June 2017, Nakamitsu met with Campaign to Stop Killer Robots representatives Nobel Peace Laureate Jody Williams and Mary Wareham of Human Rights Watch in New York, where they discussed the revolutionary nature of fully autonomous weapons and the paradigm shift they constitute for the conduct of warfare in the future.
  
The UN letter was in response to March correspondence from the campaign. Since its launch in 2013, the campaign has engaged in regular dialogue with the UN disarmament chief, including previous representatives Angela Kane (until 2015) and Kim Won-soo (until 2017).
  
In his remarks to the AI for Good summit, the Secretary-General of campaign co-founder Amnesty International, Salil Shetty, reiterated the urgent need for a pre-emptive ban on fully autonomous weapons. http://bit.ly/2x6PA6I http://www.stopkillerrobots.org/
  
Dec. 2016
  
Formalize ‘Killer Robots’ Talks; Aim for Ban: Fully Autonomous Weapons on Disarmament Agenda
  
Governments should agree at the upcoming multilateral disarmament meeting in Geneva to formalize their talks on fully autonomous weapons, with an eye toward negotiating a preemptive ban, Human Rights Watch said in a report released today.
  
The 49-page report, Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban, rebuts 16 key arguments against a ban on fully autonomous weapons.
  
Fully autonomous weapons, also known as lethal autonomous weapons systems and ‘killer robots,’ would be able to select and attack targets without meaningful human control.
  
These weapons and others will be the subject of the five-year Review Conference of the Convention on Conventional Weapons (CCW) from December 12-16, 2016.
  
“It’s time for countries to move beyond the talk shop phase and pursue a preemptive ban,” said Bonnie Docherty, senior arms researcher at Human Rights Watch, a co-founder of the Campaign to Stop Killer Robots. “Governments should ensure that humans retain control over whom to target with their weapons and when to fire.”
  
The report is co-published with Harvard Law School’s International Human Rights Clinic, for which Docherty is also a lecturer.
  
Human Rights Watch and the Harvard clinic examined the legal, moral, security, and other dangers of killer robots. They concluded that a ban is the only option for addressing all of the concerns. Other more incremental measures, such as adopting limited regulations on their use or codifying best practices for the development and acquisition of new weapons systems, have numerous shortcomings.
  
Countries participating in the Fifth Convention on Conventional Weapons Review Conference must decide by consensus on December 16 whether to continue deliberations on lethal autonomous weapons systems in 2017 and what shape the deliberations should take. Countries should establish a formal Group of Governmental Experts to delve more deeply into the problems of the weapons and to work toward new international law prohibiting them, said Human Rights Watch, which coordinates the Campaign to Stop Killer Robots.
  
Spurred to act by the efforts of the Campaign to Stop Killer Robots, countries that have joined the international treaty on conventional weapons have held three week-long informal meetings on lethal autonomous weapons systems since 2014. The formation of a Group of Governmental Experts at the review conference would compel countries to move beyond talk by formalizing the deliberations and creating the expectation of an outcome.
  
In past publications, Human Rights Watch has elaborated on the challenges that fully autonomous weapons would present for compliance with international humanitarian law and international human rights law and analyzed the lack of accountability that would exist for the unlawful harm caused by such weapons. The weapons would also cross a moral threshold, and their humanitarian and security risks would outweigh possible military benefits.
  
Several of the 121 countries that have joined the Convention on Conventional Weapons – including the United States, the United Kingdom, China, Israel, Russia, and South Korea – are developing weapons systems with increasing levels of autonomy. Critics who dismiss concerns about fully autonomous weapons depend on speculative arguments about the future of technology and the false presumption that technological developments could address all of the dangers posed by the weapons, Human Rights Watch and the Harvard clinic said.
  
Docherty will present the report at a Campaign to Stop Killer Robots briefing on December 14 at the United Nations in Geneva.
  
“The success of past disarmament treaties shows that an absolute prohibition on fully autonomous weapons would be achievable and effective,” Docherty said.
  
http://www.hrw.org/news/2017/01/15/banning-killer-robots-2017 http://www.hrw.org/news/2016/12/09/formalize-killer-robots-talks-aim-ban http://www.stopkillerrobots.org/2016/10/unga71/ http://www.buzzfeed.com/sarahatopol/how-to-save-mankind-from-the-new-breed-of-killer-robots http://www.stopkillerrobots.org/
