Who Decides When to Fire? A Legal Debate over “Killer Robots”



The combatants of war are human and, for decades, humans bore sole responsibility for decisions about the use of lethal force in combat. Not anymore. Countries around the world now design and manufacture lethal autonomous weapons systems (LAWS), such as self-guided submarines and unmanned drones, that can make battlefield decisions without direct human input, but with direct human consequences.

Autonomous systems differ from remote-controlled or automated weapon systems primarily in the degree of their independence. This distinction, often overlooked in public debate, is of fundamental importance in legal and policy terms, especially for questions of liability and individual responsibility. While autonomizing weapons of war may seem an efficient and effective option for combat operations, using artificial intelligence to determine what to target and attack, and leaving life-and-death decisions to algorithms, is deeply problematic.

From the perspective of international law, the main question regarding LAWS is whether the decision to use weapons without human intervention is permissible. The U.N. Convention on Certain Conventional Weapons (CCW), which entered into force in 1983, regulates, bans, or restricts the use of weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” Moreover, the CCW stresses that international law takes precedence over all weapon systems and that humans must “take responsibility at all times” for their operation.

According to international law, life-and-death decisions should be made by humans, not algorithms. The central role of the human, as the link between a grave decision and the consequences of that decision, must be maintained. If this link is broken, the situation becomes risky in terms of both ethics and human rights. Many countries represented at the U.N., however, see things differently, as do some experts in international law. Supporters argue that LAWS would

(1) make war more “humane,”
(2) increase the accuracy of target selection,
(3) eliminate the influence of human emotions such as fear or revenge, and
(4) limit civilian casualties.

Despite these arguments, killer robots are increasingly perceived as an acute threat, and rightly so. In May 2021, the U.N. reported that killer robots, including the Turkish-made KARGU-2 (a portable, rotary-wing attack drone designed to provide tactical intelligence, surveillance, reconnaissance, and precision-strike capabilities for ground troops), had been used in the civil war in Libya. The U.N. report notes that the KARGU-2’s machine learning allows it to attack targets without a human in the loop and finds that “the deployment of this system to Libya by Turkey is in non-compliance with paragraph 9 of resolution 1970,” the U.N. arms embargo on Libya. Turkey, the source of the drone, has declined to be drawn on the matter. In a similar vein, the United States continued to develop its Golden Horde program, an effort “to develop artificial intelligence-driven systems that could allow the networking together of various types of precision munitions into an autonomous swarm,” even after being criticized for its use of Predator drone technology in Afghanistan and Pakistan.

Autonomizing weapons of war may seem desirable; however, the Geneva Conventions hold that the final decision over the destruction of human life must remain with a human in the loop. At the 2013 meeting of the U.N. Convention on Certain Conventional Weapons in Geneva, the delegates were divided on whether LAWS pose a threat to human rights, and no agreement on outlawing the weapons could be reached. The stalemate continues to this day. Activists who sought to persuade countries at the meeting to ban killer robots were cautiously optimistic. They outlined the legal and political challenges posed by LAWS and proposed initial steps toward regulating the weapons under international law. The international community, however, agreed only to “consider proposals” and called for further consideration of ethical, legal, operational, security, and technical concerns.

Because a decision to ban LAWS would bind all states, supporters of a ban are disappointed that the convention delegates did not take more definitive action. They worry that if LAWS are not banned, killer robots could one day be used in war and the U.N. could lose the power to regulate the development of any new weapons technology. In other words, the failure to regulate lethal autonomous weapons systems would lead to a permissive and unregulated environment for the new technology.

An agreement to ban LAWS may be difficult to reach. Countries that have already developed advanced autonomous weapons systems (e.g., the United States, Russia, China, India, Great Britain, and Israel) have little interest in new regulations or a more restrictive environment. These countries have a self-interest in blocking real progress on a weapons ban and appear willing to prolong the discussions by rejecting any proposed ban on LAWS.

The rapid technological advancement of LAWS presents significant challenges to international law, including the further dehumanization of war and the inability to hold any individual responsible for a misguided attack by a killer robot or some other autonomous weapon. Countries that invest in and are eager to use LAWS technology should be bound by the requirements of international law. Autonomous weapons systems that require human assistance to operate are far less controversial than killer robots that can function independently. Use of the latter should be limited to specific, clearly defined tasks, and the systems should be programmed to comply with international rules.

The U.N. Convention on Certain Conventional Weapons should not be the only body responsible for regulating LAWS. It has had roughly eight years to agree on regulations and has failed to do so. Other diplomatic options should be explored. One is an independent process outside the U.N., an approach used successfully in the Ottawa process that banned anti-personnel landmines and the Oslo process that banned cluster munitions. A neutral country such as Switzerland would be a good choice to lead such a process and help the international community reach an agreement on how to regulate LAWS.

An initial agreement among the countries that support a ban on LAWS is conceivable and, if reached, could prompt other countries to sign on. International law experts emphasize that countries engaged in developing LAWS should be part of the negotiations for the initial agreement; their participation makes it more likely that the talks will produce meaningful limits on what should be restricted. The negotiators will need to remember that artificial intelligence is used not only by the military but also by the civilian sector, and that caution and deep debate are necessary for the development of clear and enforceable regulations.

______________________________________________________

Orion Policy Institute (OPI) is an independent, non-profit, tax-exempt think tank focusing on a broad range of issues at the local, national, and global levels. OPI does not take institutional policy positions. Accordingly, all views, positions, and conclusions represented herein should be understood to be solely those of the author(s) and do not necessarily reflect the views of OPI.
