What are lethal autonomous weapons?
Although this seems a simple question, what constitutes a lethal autonomous weapon (LAW) has been the subject of extensive international debate, and there is currently no agreed international definition. The term can cover a wide range of weapons that can select, detect and engage targets with little to no human intervention. These range from fully autonomous weapons, which can operate without any human involvement, to semi-autonomous weapons, which require human action to launch an attack.
While fully autonomous weapons do not yet exist, several nations, including the UK, US and Russia, are said to be “investing heavily” in developing autonomous weapons systems. However, weapons referred to as “precursors”, which use artificial intelligence (AI) to select, detect and engage targets, are in use today. For example, the Israel Aerospace Industries’ Harpy is an armed drone that can survey large areas of land until it detects an enemy radar signal. It then crashes into the source of the radar, destroying both itself and the target.
In 2018, a House of Lords Artificial Intelligence Committee report highlighted the lack of international consensus by setting out the definitions used in France, Italy and the US, among others. It also outlined the Ministry of Defence’s (MoD) definition of an autonomous system, as set out in official guidance on unmanned aircraft systems published in 2017:
An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.
However, the committee highlighted that the MoD had made a “relatively unusual distinction between automated and autonomous systems”. It labelled the definition of autonomous as “clearly out of step with the definitions used by most other governments”. It recommended that the UK’s definition be realigned to match, or be similar to, those used by the rest of the world. Responding, the Government said that the MoD had “no plans to change the definition”.
Should there be a ban on developing and using such weapons?
Opponents of LAWs generally focus on the ethics of developing fully autonomous weapons. Critics have questioned how such weapons, sometimes labelled ‘killer robots’, could respect human life and comply with international humanitarian law if they were developed. However, proponents of the weapons have argued that their increased accuracy and efficiency could help states meet their responsibilities under international law.
Support for a ban
There has been widespread support for a ban on LAWs. In a speech to the Paris Peace Forum in November 2019, UN Secretary General António Guterres called for a new international treaty to ban them, stating that “machines that have the power and discretion to kill without human intervention are politically unacceptable and morally despicable”. Supporting this view, 30 countries, including Austria, Argentina and Brazil, have called for a ban (the UK is not part of this group).
Non-governmental organisations have also argued that such weapons should be prohibited. Both Human Rights Watch (HRW) and the Campaign to Stop Killer Robots—a coalition of non-governmental organisations—have called for pre-emptive laws to ban such weapons. In addition, the private sector has acted against LAWs. In July 2018, over 200 organisations and 3,000 individuals pledged to not “participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”. The group also called on governments to “create a future with strong international norms, regulations and laws against lethal autonomous weapons”.
Opposed to a ban
However, calls to ban LAWs are not universally supported. A group of states, including the UK, Australia, Israel, Russia and the US, are opposed to regulating LAWs. They have argued that it is not possible to regulate weapons that do not yet exist. In addition, during UN talks on the issue, which took place under the Convention on Certain Conventional Weapons, journalist Damien Gayle reported that the UK had argued against a pre-emptive ban because it could jeopardise the UK’s ability to exploit any potential military advantages offered by weapons with AI. The US Government has also argued that LAWs’ automated targeting features might increase states’ ability to meet international humanitarian law requirements through increased accuracy and efficiency. It has suggested that reducing manual human control of a weapon might increase its accuracy and help avoid unintended harm to civilians.
What international action has been taken?
Despite calls for an international ban on developing LAWs, movement towards one has stalled. Both the US and Russia have blocked moves to form legally binding agreements on autonomous weaponry. In addition, at UN talks in Geneva in November 2019, which took place under the Convention on Certain Conventional Weapons, diplomats could not agree on a binding common approach to the issues and decided that talks on regulating lethal autonomous weapons systems, or fully autonomous weapons, should continue for the next two years. Some participants were disappointed with this decision, arguing that Russia was responsible for the lack of progress. The Campaign to Stop Killer Robots was also dissatisfied with the progress made. It has called on countries to bypass the convention and negotiate a separate treaty.
What is the UK’s stance on the issue?
The UK Government has opposed the proposed international ban on the development and use of LAWs. It has argued that existing international humanitarian law is adequate and that the UN Convention on Certain Conventional Weapons allows for adequate scrutiny of autonomous weapons under its mechanisms for a legal weapons review. In addition, the MoD has said:
The UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.
However, it is important to note that the UK unusually distinguishes between ‘autonomous’ and ‘automated’ military systems. Explaining why these definitions are important, Noel Sharkey, professor of robotics and artificial intelligence at the University of Sheffield, said that the requirement that an autonomous weapons system be “aware and show intention”, as stated in the MoD guidance, “set the bar so high that it was effectively meaningless”.
In addition, the MoD has announced new funding, stating that “thanks to the cash injection, other technologies including autonomous vehicles, swarm drones, and cutting-edge battlefield awareness systems will be developed for military use”.
In August 2020, HRW published a report outlining how 97 countries, including the UK, have responded to the idea of banning LAWs.
Read more
- Human Rights Watch, ‘Stopping killer robots: Country positions on banning fully autonomous weapons and retaining human control’, 10 August 2020
Cover image from Pixabay.