The House of Lords is scheduled to debate the following motion on 19 April 2024:

Lord Lisvane (Crossbench) to move that this House takes note of the report from the Artificial Intelligence in Weapon Systems Committee, ‘Proceed with caution: Artificial intelligence in weapon systems’.

1. Committee inquiry and report

1.1 Background

In December 2022 the House of Lords Liaison Committee endorsed a proposal from Lord Clement-Jones (Liberal Democrat), the former chair of the House of Lords Artificial Intelligence Committee which operated in the 2017–19 parliamentary session, that the House should appoint a special inquiry committee to consider the use of artificial intelligence (AI) in weapon systems.[1] It added that the new committee should, if appointed, report by the end of November 2023. The House agreed to the Liaison Committee’s recommendation later the same month.[2] In January 2023 the House agreed the new committee’s membership, including that Lord Lisvane would chair the committee.[3]

1.2 Inquiry and report

The new committee launched a two-month call for evidence in March 2023. In an article to promote the call for evidence, the committee set out the background to its inquiry as follows:

The committee is considering the use of artificial intelligence (AI) in weapon systems. Advances in robotics and digital technologies, including AI, have led to step changes in many sectors, defence included. One such area of advancement and heightened interest is the creation of autonomous weapon systems (AWS).

AWS have been defined as systems that can select and attack a target without human intervention. These systems could revolutionise warfare, with some suggesting that they would be faster, more accurate and more resilient than existing weapons systems and could limit the casualties of war.

However, concerns have arisen about the ethics of these systems, how they can be used safely and reliably, whether they risk escalating wars more quickly, and their compliance with international humanitarian law. Much of the international policymaking surrounding AWS has been focused on restricting their use, either through limitations or outright bans.[4]

The committee added that it would be examining the following issues:

  • the challenges, risks and benefits associated with AWS
  • the technical, legal and ethical safeguards that are necessary to ensure that they are used safely, reliably and accountably
  • the sufficiency of current UK policy and the state of international policymaking on AWS

The committee held 15 oral evidence sessions and received 42 pieces of written evidence during its inquiry.[5] It also conducted two visits, the first to Cambridge in June 2023 and the second to Glasgow and Edinburgh in September 2023.[6]

The committee published its report, ‘Proceed with caution: Artificial intelligence in weapon systems’, on 1 December 2023. The report included 36 conclusions and recommendations over five chapters. In an article published to accompany the report, the committee summarised the substance of its findings:

While the government aims to be “ambitious, safe, responsible” in its application of AI in defence, aspiration has not lived up to reality.

Bringing AI into the realm of warfare through the use of AI-enabled AWS could revolutionise defence technology, but the government must approach the development and use of AI in AWS in a way that is ethical and legal, while providing key strategic and battlefield benefits. “Ambitious, safe and responsible” must be translated into practical implementation.

As part of this, the government must seek, establish and retain public confidence and democratic endorsement in the development and use of AI generally, and especially in respect of AWS. This will include increasing public understanding of AI and autonomous weapons, enhancing the role of Parliament in decision making on autonomous weapons, and retaining public confidence in the development and use of autonomous weapons.[7]

The committee listed the following key recommendations:

  • The government should lead by example in international engagement on the regulation of AWS. Outcomes from international debate on the regulation of AWS could include a legally binding treaty or non-binding measures clarifying the application of international humanitarian law. International engagement should also include leading efforts to prohibit the use of AI in nuclear command, control and communications.
  • The government should adopt an operational definition of AWS. The committee was surprised the government does not currently have one and believes it is possible to create a future-proofed definition which would aid the UK’s ability to make meaningful policy on AWS and engage fully in international discussions.
  • The government should ensure human control at all stages of an AWS’s lifecycle. It is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.
  • The government should ensure that its procurement processes are appropriately designed for the world of AI. The committee heard that the Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic. It further heard that the Ministry of Defence lacks capability in relation to software and data, both of which are central to the development of AI. This may require revolutionary change. The committee warns, “if so, so be it; but time is short”.[8]

Commenting on the report’s publication, Lord Lisvane, chair of the committee, said:

Artificial intelligence has spread into many areas of life and defence is no exception. How it could revolutionise defence technology is one of the most controversial uses of AI today.

There is a growing sense that AI will have a major influence on the future of warfare, and there has been particular debate about how autonomous weapons can comply with international humanitarian law.

In our report ‘Proceed with caution: Artificial intelligence in weapon systems’, we welcome the fact that the government has recognised the role of responsible AI in its future defence capability. AI has the potential to provide key battlefield and strategic benefits. However, we make proposals that in doing so, the government must approach the development and use of AI in AWS cautiously.

It must embed ethical and legal principles at all stages of design, development and deployment, while achieving public understanding and democratic endorsement.

Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.

2. Government response to the report

The government published its response to the committee’s report on 19 February 2024.[9] It noted that AI technologies had the “potential to transform every aspect of defence fundamentally”. Because of this, the government argued it was “essential” that the UK’s armed forces were “able to embrace these technologies to maintain our technological edge within a competitive, volatile and challenging international security environment”. At the same time, however, the government said it recognised that the “adoption of these general-purpose enabling technologies poses significant challenges in a high-impact defence context”.

Overall, the government set out its position concerning AI in AWS as follows:

The Ministry of Defence (MOD) published the ‘Defence artificial intelligence strategy’ in June 2022 alongside the ‘Ambitious, safe, responsible’ policy statement, which includes our defence AI ethical principles. These documents set out our overall approach to the development and adoption of these transformative technologies in line with our AI ethical principles and the values and standards of the society we protect.

The MOD is actively engaging with a very wide range of experts (including technologists, ethicists, legal advisers and civil society stakeholders) to understand the issues and concerns associated with the use of AI in weapons, and to develop appropriate policies and control frameworks. This is not a new challenge for defence—we have extensive experience of adapting to embrace new technologies and capabilities. We are currently assessing the appropriate ways in which our existing, robust and effective legal, safety and regulatory compliance regimes may need to evolve to tackle new challenges posed by AI technologies.

We are committed to safe and responsible use of AI in the military domain. We are clear that we will use AI to augment the capabilities of our service personnel and derive military advantage through effective human-machine teaming; that accountability for military effects can never be delegated to a machine; and that we will always comply with our national and international legal obligations. At the same time, we know some adversaries may seek to misuse advanced AI technologies, deploying them in a manner which is malign, unsafe and unethical. We are working with allies and partners through international forums to develop norms and standards for military AI and to ensure that any illegal, unsafe or unethical use of these technologies is identified, attributed and held to account.[10]

The government then responded to the committee’s conclusions and recommendations in four sections, covering:

  • general principles
  • ethics, legal and governance
  • safeguarding against risks
  • enablers

On the committee’s key recommendations, the government said that it:

  • intended to “remain an active and influential participant” in international dialogues to regulate AWS, and considered the Group of Governmental Experts to the Convention on Certain Conventional Weapons (LAWS GGE) to be the “most appropriate international forum to advance negotiations on these issues”
  • had no plans to adopt an “official or operative definition of AWS at this time”
  • was committed to “ensuring meaningful human control (and therefore human accountability) through context-appropriate human involvement throughout the lifecycle of AI-enabled military systems”, and that it opposed the creation and use of AWS that would operate in any other way
  • recognised that “significant change” was required to transform the MOD into an ‘AI ready’ organisation, including “accelerating ongoing work” to streamline procurement processes and utilise agile approaches within procurement programmes

3. Read more



References

  1. House of Lords Liaison Committee, ‘New committee activity in 2023’, 7 December 2022, HL Paper 104 of session 2022–23, pp 12–14.
  2. HL Hansard, 15 December 2022, cols 768–71.
  3. HL Hansard, 31 January 2023, cols 550–6.
  4. House of Lords AI in Weapon Systems Committee, ‘How should autonomous weapons be developed, used and regulated?’, 6 March 2023.
  5. House of Lords AI in Weapon Systems Committee, ‘Oral evidence transcripts’; and ‘Written evidence’, accessed 9 April 2024.
  6. House of Lords AI in Weapon Systems Committee, ‘Engagement documents’, accessed 9 April 2024. In Cambridge, the committee visited RAND Europe Cambridge, the Leverhulme Centre for the Future of Intelligence (hosted by Emmanuel College, University of Cambridge), and the Information Engineering Division of the University of Cambridge Engineering Department. In Glasgow and Edinburgh, the committee visited the University of Strathclyde, the Autonomous Systems and Connectivity Research Division of the School of Engineering at the University of Glasgow, and the School of Informatics at the University of Edinburgh.
  7. House of Lords AI in Weapon Systems Committee, ‘Government warned to proceed with caution on AI in autonomous weapons’, 1 December 2023. The committee also published a ‘shorthand story’ to mark the report’s publication: ‘Aspiration vs reality: The use of AI in autonomous weapon systems’, 1 December 2023.
  8. House of Lords AI in Weapon Systems Committee, ‘Government warned to proceed with caution on AI in autonomous weapons’, 1 December 2023.
  9. Ministry of Defence, ‘Government response to the House of Lords AI in Weapon Systems Committee report’, 19 February 2024.
  10. As above, p 2.