Autonomous weapons that kill must be banned, insists UN chief
by UN News, ICRC, Campaign to Stop Killer Robots
3:08pm 27th Mar, 2019
25 Mar. 2019
Autonomous weapons that kill must be banned, insists UN chief. (UN News)
UN Secretary-General Antonio Guterres has called on artificial intelligence (AI) experts meeting in Geneva on Monday to push ahead with their work to restrict the development of lethal autonomous weapons systems, or LAWS, as they are also known.
In a message to the Group of Governmental Experts, the UN chief said that machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.
No country or armed force is in favour of such fully autonomous weapon systems that can take human life, Mr Guterres insisted, before welcoming the panel's statement last year that human responsibility for decisions on the use of weapons systems must be retained, since accountability cannot be transferred to machines.
Although this 2018 announcement was an important line in the sand by the Group of Governmental Experts - which meets under the auspices of the Convention on Certain Conventional Weapons (CCW) - the UN chief noted in his statement that some Member States believe new legislation is required, while others would prefer less stringent political measures and guidelines that could be agreed on.
Nonetheless, it is time for the panel to deliver on LAWS, the UN chief said, adding that it is your task now to narrow these differences and find the most effective way forward. The world is watching, the clock is ticking and others are less sanguine. I hope you prove them wrong.
The LAWS meeting is one of two planned for this year, which follow earlier Governmental Expert meetings in 2017 and 2018 at the UN in Geneva.
The Group's agenda covers technical issues related to the use of lethal autonomous weapons systems, including the challenges the technology poses to international humanitarian law, as well as human interaction in the development, deployment and use of emerging tech in LAWS.
In addition to the Governmental Experts, participation is expected from a wide array of international organizations, civil society, academia, and industry.
The CCW's full name is the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, which entered into force on 2 December 1983.
The Convention currently has 125 States Parties. Its purpose is to prohibit or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.
In previous comments on AI, the Secretary-General likened the technology to a new frontier with advances moving at warp speed.
Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people, he said at the AI for Good Global Summit in 2017, adding that there are also serious challenges and ethical issues which must be taken into account - including cybersecurity, human rights and privacy. http://bit.ly/2TxM5RB
24 Mar. 2019
Resistance to killer robots growing. (DW)
Activists from 35 countries met in Berlin this week to call for a ban on lethal autonomous weapons, ahead of new talks on such weapons in Geneva. They say that if Germany took the lead, other countries would follow.
"I can build you a killer robot in just two weeks," says Noel Sharkey as he leans forward with a warning gaze. The white-haired English professor is a renowned specialist for robotics and artificial intelligence (AI). He was in Berlin to participate in an international meeting of the Campaign to Stop Killer Robots that ended on Friday.
Sharkey objects to talking about "lethal autonomous weapons systems" (LAWs) as if they were something out of a science-fiction novel. Fully autonomous weapons systems are in fact a well-established reality, he says, adding that there is no need to argue about the definition thereof: These are weapons that seek, select and attack targets on their own.
That is also how the International Committee of the Red Cross (ICRC) defines them. Soldiers no longer push the firing button with such weapons; instead, the weapons themselves use built-in software to find and strike targets. Such weapons can come in the form of missiles, unmanned ground vehicles, submarines, or swarms of mini-drones.
The reality of fully autonomous weapons systems was on full display this February at IDEX in Abu Dhabi, the largest arms fair in the Middle East, where German arms manufacturers also enthusiastically hawked their new weapons with autonomous functions.
Violation of international law
The ICRC says that the use of such weapons is a clear breach of international law. "We all know that a machine is incapable of making moral decisions," emphasizes Sharkey, one of the leading figures in the Campaign to Stop Killer Robots.
He notes that a machine cannot differentiate between combatants and civilians as stipulated by international law, referring to failed attempts at facial recognition in which innocent civilians were identified as supposed criminals.
Facial recognition depends on artificial intelligence (AI) to autonomously find a person of interest. Once the machine has identified that person, it can attack on its own. A growing number of critics are horrified by such a scenario.
Meanwhile, some 100 non-governmental organizations (NGOs) have joined the global Campaign to Stop Killer Robots. At their Berlin meeting, those groups called on Germany to demand that autonomous weapons systems that violate international law be banned. The current German government affirmed such intentions in its coalition negotiations in 2018. Nevertheless, it has meekly pushed only for non-binding political declarations at the UN in Geneva.
UN Secretary-General Antonio Guterres and the European Parliament are also in favor of a ban. Recently, the German Informatics Society (GI), an organization of computer researchers, as well as the influential Federal Association of German Industry (BDI), also called for a legally binding ban on LAWs.
Although countries such as the USA and China are leading the world in AI use, much of the research that such systems depend on comes from Europe. That lends great weight to European voices in the ongoing debate.
Noel Sharkey is convinced: "If Germany takes the lead, others will follow." Sharkey also warns that non-binding political declarations, like those the German government is currently championing, provide "perfect cover" for countries opposed to a ban. Such countries include Russia, Israel and the USA.
The German government has argued that it is essentially in favor of a ban, but that it has pushed the notably weaker political declaration for tactical reasons. The logic behind that approach is that it allows Germany to maintain a dialogue with countries such as the USA, rather than alienating them altogether.
Nobel Peace Laureate Jody Williams is wholly unconvinced by that argument and has called on German Foreign Minister Heiko Maas to reconsider his position. Williams argued that anyone waiting for the USA to come out in favor of a ban will be waiting forever.
International talks on how to regulate LAWs will be held in Geneva, Switzerland, from March 25 to March 29.
Public opposition to killer robots grows while states continue to drag their feet.
More than three in five people across 26 countries oppose the development of autonomous weapons that could select and kill targets without human intervention, according to a new poll commissioned by the Campaign to Stop Killer Robots.
The poll, which was carried out by Ipsos MORI, found that:
In the 26 countries surveyed in 2018, more than three in every five people (61%) oppose the development of lethal autonomous weapons systems.
Two-thirds (66%) of those opposed to lethal autonomous weapons systems were most concerned that they would cross a moral line because machines should not be allowed to kill.
More than half (54%) of those who opposed said they were concerned that the weapons would be unaccountable.
A near-identical survey in 23 countries in January 2017 found that 56% of respondents were opposed to lethal autonomous weapons systems.
More than half of respondents opposed killer robots in China (60%); Russia (59%); the UK (54%); France (59%), and the USA (52%).
The Campaign to Stop Killer Robots is a growing global coalition of NGOs, including Amnesty International, that is working to ban fully autonomous weapons.
This poll shows that the states blocking a ban on killer robots are totally out of step with public opinion. Governments should be protecting people from the myriad risks that killer robots pose, not rushing into a new arms race which could have terrifying consequences, said Rasha Abdul Rahim, Acting Deputy Director of Amnesty Tech.
We still have time to halt the development and proliferation of fully autonomous weapons, but we won't have that luxury for long. Governments should take note of this poll and urgently begin negotiating a new treaty to prohibit these horrifying weapons. Only this can help ensure respect for international law and address ethical and security concerns regarding delegating the power to make life-and-death decisions to machines.
Amnesty International is calling for a total ban on the development, production and use of fully autonomous weapon systems, in light of the serious human rights, humanitarian and security risks they pose. The use of autonomous weapons without meaningful and effective human control would undermine the right to life and other human rights and create an accountability gap if, once deployed, they are able to make their own determinations about the use of lethal force.
However, a minority of states at the November 2018 annual meeting of the Convention on Conventional Weapons used consensus rules to thwart meaningful diplomatic progress. Russia, Israel, South Korea, and the USA indicated at the meeting that they would not support negotiations for a new treaty, but the poll results show that more than half of respondents in Russia (59%) and the USA (52%) oppose autonomous weapons.
More than half of respondents opposed autonomous weapons in China (60%), South Korea (74%) and the UK (54%), which are among the leading states developing this technology.
Autonomous weapons: States must agree on what human control means in practice. (ICRC)
Should a weapon system be able to make its own 'decision' about who to kill?
The International Committee of the Red Cross (ICRC) believes that the answer is no, and today is calling on States to agree to strong, practical and future-proof limits on autonomy in weapon systems.
During the annual meeting of the States party to the Convention on Certain Conventional Weapons, held in Geneva from 21 to 23 November, the ICRC will urge that the new mandate of the Group of Governmental Experts focus on determining the type and degree of human control that would be necessary to comply with international humanitarian law and satisfy ethical concerns. Several questions need to be answered:
What is the level of human supervision, including the ability to intervene and deactivate, that would be required during the operation of a weapon that can autonomously select and attack targets? What is the level of predictability and reliability that would be required, also taking into account the weapon's tasks and the environment of use?
What other operational constraints would be required, notably on the weapon system's tasks, its targets, the environment in which it operates (e.g. populated or unpopulated area), the duration of its operation, and the scope of its movement?
"It is now widely accepted that human control must be maintained over weapon systems and the use of force, which means we need limits on autonomy," said ICRC President Peter Maurer. "Now is the moment for States to determine the level of human control that is needed to satisfy ethical and legal considerations."
Only humans can make context-specific judgements of distinction, proportionality and precautions in combat. Only humans can behave ethically, uphold moral responsibility and show mercy and compassion. Machines cannot exercise the complex and uniquely human judgements required on battlefields in order to comply with international humanitarian law. As inanimate objects, they will never be capable of embodying human conscience or ethical values.
Given militaries' significant interest in increasingly autonomous weapons, there is a growing risk that humans will become so far removed from the choice to use force that life-and-death decision-making will effectively be left to sensors and software.
Humans cannot delegate the decision to use force and violence to machines. Decisions to kill, injure and destroy must remain with humans. It is humans who apply the law and are obliged to respect it, said Kathleen Lawand, the head of the ICRC's arms unit.
Basic humanity and the public conscience support a ban on fully autonomous weapons, Human Rights Watch said in a report released today. Countries participating in an upcoming international meeting on such 'killer robots' should agree to negotiate a prohibition on the weapons systems' development, production, and use.
The 46-page report, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, finds that fully autonomous weapons would violate what is known as the Martens Clause. This long-standing provision of international humanitarian law requires emerging technologies to be judged by the principles of humanity and the dictates of public conscience when they are not already covered by other treaty provisions.
Permitting the development and use of killer robots would undermine established moral and legal standards, said Bonnie Docherty, senior arms researcher at Human Rights Watch, which coordinates the Campaign to Stop Killer Robots. Countries should work together to preemptively ban these weapons systems before they proliferate around the world.
The 1995 preemptive ban on blinding lasers, which was motivated in large part by concerns under the Martens Clause, provides precedent for prohibiting fully autonomous weapons as they come closer to becoming reality.
The report was co-published with the Harvard Law School International Human Rights Clinic, for which Docherty is associate director of armed conflict and civilian protection.
More than 70 governments will convene at the United Nations in Geneva from August 27 to 31, 2018, for their sixth meeting since 2014 on the challenges raised by fully autonomous weapons, also called lethal autonomous weapons systems. The talks under the Convention on Conventional Weapons, a major disarmament treaty, were formalized in 2017, but they are not yet directed toward a specific goal.
Human Rights Watch and the Campaign to Stop Killer Robots urge states party to the convention to agree to begin negotiations in 2019 for a new treaty that would require meaningful human control over weapons systems and the use of force. Fully autonomous weapons would select and engage targets without meaningful human control.
To date, 26 countries have explicitly supported a prohibition on fully autonomous weapons. Thousands of scientists and artificial intelligence experts, more than 20 Nobel Peace Laureates, and more than 160 religious leaders and organizations of various denominations have also demanded a ban. In June, Google released a set of ethical principles that includes a pledge not to develop artificial intelligence for use in weapons.
At the Convention on Conventional Weapons meetings, almost all countries have called for retaining some form of human control over the use of force. The emerging consensus for preserving meaningful human control, which is effectively equivalent to a ban on weapons that lack such control, reflects the widespread opposition to fully autonomous weapons.
Human Rights Watch and the Harvard clinic assessed fully autonomous weapons under the core elements of the Martens Clause. The clause, which appears in the Geneva Conventions and is referenced by several disarmament treaties, is triggered by the absence of specific international treaty provisions on a topic. It sets a moral baseline for judging emerging weapons.
The groups found that fully autonomous weapons would undermine the principles of humanity, because they would be unable to apply either compassion or nuanced legal and ethical judgment to decisions to use lethal force. Without these human qualities, the weapons would face significant obstacles in ensuring the humane treatment of others and showing respect for human life and dignity.
Fully autonomous weapons would also run contrary to the dictates of public conscience. Governments, experts, and the broader public have widely condemned the loss of human control over the use of force.
Partial measures, such as regulations or political declarations short of a legally binding prohibition, would fail to eliminate the many dangers posed by fully autonomous weapons. In addition to violating the Martens Clause, the weapons raise other legal, accountability, security, and technological concerns.
In previous publications, Human Rights Watch and the Harvard clinic have elaborated on the challenges that fully autonomous weapons would present for compliance with international humanitarian law and international human rights law, analyzed the gap in accountability for the unlawful harm caused by such weapons, and responded to critics of a preemptive ban.
The 26 countries that have called for the ban are: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China (use only), Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, the Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, the State of Palestine, Uganda, Venezuela, and Zimbabwe.
The Campaign to Stop Killer Robots, which began in 2013, is a coalition of 75 nongovernmental organizations in 32 countries that is working to preemptively ban the development, production, and use of fully autonomous weapons. Docherty will present the report at a Campaign to Stop Killer Robots briefing for CCW delegates scheduled on August 28 at the United Nations in Geneva.
The groundswell of opposition among scientists, faith leaders, tech companies, nongovernmental groups, and ordinary citizens shows that the public understands that killer robots cross a moral threshold, Docherty said. Their concerns, shared by many governments, deserve an immediate response.
April 2018 marks five years since the launch of the Campaign to Stop Killer Robots. It is also the fifth time since 2014 that governments are convening at the Convention on Conventional Weapons (CCW) in Geneva to discuss concerns over lethal autonomous weapons systems, also known as fully autonomous weapons or 'killer robots'.
The campaign urges states to participate in the CCW Group of Governmental Experts meeting, which opens at the United Nations (UN) on Monday, 9 April, and to commit to retain meaningful human control of weapons systems and over individual attacks.
Why the concern about killer robots?
Armed drones and other autonomous weapons systems with decreasing levels of human control are currently in use and development by high-tech militaries including the US, China, Israel, South Korea, Russia, and the UK. The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control.
If the trend towards autonomy continues, humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role, or simply setting broad mission parameters.
Several states, the Campaign to Stop Killer Robots, artificial intelligence experts, faith leaders, and Nobel Peace Laureates, among others, fundamentally object to permitting machines to determine who or what to target on the battlefield or in policing, border control, and other circumstances. Such a far-reaching development raises an array of profound ethical, human rights, legal, operational, proliferation, technical, and other concerns.
While the capabilities of future technology are uncertain, there are strong reasons to believe that devolving more decision making over targeting to weapons systems themselves will erode the fundamental obligation that rules of international humanitarian law (IHL) and international human rights law be applied by people, and with sufficient specificity to make them meaningful.
Furthermore, with an erosion of human responsibility to apply legal rules at an appropriate level of detail there will likely come an erosion of human accountability for the specific outcomes of such attacks. Taken together, such developments would produce a stark dehumanization of military or policing processes.
What is a 'killer robot'?
A weapons system that selects and attacks targets without meaningful human control should be considered a lethal autonomous weapons system. It would have no human in the decision-making loop when the system selects and engages the target of an attack. Applying human control only as a function of design and in an initial deployment stage would fail to fulfill the IHL obligations that apply to commanders in relation to each attack.
Why the need for human control?
Sufficient human control over the use of weapons, and of their effects, is essential to ensuring that the use of a weapon is morally justifiable and can be legal. Such control is also required as a basis for accountability over the consequences of the use of force. To demonstrate that such control can be exercised, states must show that they understand the process by which specific systems identify individual target objects and understand the context, in space and time, where the application of force may take place.
Given the development of greater autonomy in weapon systems, states should make it explicit that meaningful human control is required over individual attacks and that weapon systems that operate without meaningful human control should be prohibited. For human control to be meaningful, the technology must be predictable, the user must have relevant information, and there must be the potential for timely human judgement and intervention.
States should come to the CCW meeting prepared to provide their views on the key touchpoints of human/machine interaction in weapons systems. These include design aspects, such as how certain features may be encoded as target objects; how the area or boundary of operation may be fixed; the time period over which a system may operate; and any possibility of human intervention to terminate the operation and recall the weapon system.
Based on these touchpoints, states should be prepared to explain how control is applied over existing weapons systems, especially those with certain autonomous or automatic functions.
What does the Human Rights Council say about killer robots?
The first multilateral debate on killer robots took place at the Human Rights Council in May 2013, but states have not considered this topic at the Council since then. Countries such as Austria, Brazil, Ireland, Sierra Leone, and South Africa affirm the relevance of human rights considerations and the Council in the international debate over fully autonomous weapons.
In February 2016, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the Special Rapporteur on the rights to freedom of peaceful assembly and of association issued a report recommending that autonomous weapons systems that require no meaningful human control should be prohibited.
http://www.stopkillerrobots.org/2018/03/fiveyears/
http://www.hrw.org/news/2018/08/21/killer-robots-fail-key-moral-legal-test
http://www.stopkillerrobots.org/2018/08/unsg/
http://www.amnesty.org/en/latest/news/2019/01/public-opposition-to-killer-robots-grows-while-states-continue-to-drag-their-feet/
http://www.stopkillerrobots.org/2019/01/global-poll-61-oppose-killer-robots/
http://www.politico.eu/article/killer-robots-overran-united-nations-lethal-autonomous-weapons-systems/
http://www.dw.com/en/resistance-to-killer-robots-growing/a-48040866