Approximate read time: 15 minutes

On 21 November 2024 the House of Lords is scheduled to debate the House of Lords Communications and Digital Committee report ‘Large language models and generative AI’, published on 2 February 2024. The previous, Conservative government responded to the report in May 2024.

This briefing focuses on the committee’s report, the government response and recent political developments. For more detailed information on the development of AI technology, see IBM’s ‘What is artificial intelligence (AI)?’ (16 August 2024). The Financial Times’s ‘Generative AI exists because of the transformer’ (12 September 2023) provides a visual demonstration of how generative AI large language models (LLMs) work.

1. Key definitions

The committee’s report uses the following key technical terms:

  • Artificial intelligence (AI) is technology that enables computers and machines to reason, learn and act in ways that would normally require human intelligence.[1]
  • Generative AI is AI technology that creates new text, images or other outputs.[2]
  • A large language model (LLM) is a generative AI tool that focuses on creating human-like text. Examples include ChatGPT by OpenAI and Google’s Gemini.
  • Open access models are LLMs for which the developer makes much of the underlying code accessible.
  • Closed access models are LLMs for which the developer publishes minimal or no information about how the model has been developed and the data it was trained on.
  • Compute is the computational power required to develop and use AI-based systems.[3] This includes hardware, software and infrastructure.

The Parliamentary Office of Science and Technology has produced a more extensive guide to terms associated with AI in its ‘Artificial intelligence glossary’ (23 January 2024).

2. House of Lords Communications and Digital Committee’s report

The purpose of the House of Lords Communications and Digital Committee’s inquiry was to “examine likely trajectories for LLMs over the next three years and the actions required to ensure the UK can respond to opportunities and risks in time”.[4] It made 61 recommendations in the following areas:[5]

  • market competition
  • open and closed models
  • regulatory capture (regulatory agencies acting in the interests of companies instead of the public)
  • conflicts of interest
  • labour market disruption
  • improving technological understanding in government
  • investment in, and access to, compute
  • supporting innovation
  • developing sovereign LLM capability
  • understanding and mitigating risks
  • the international context
  • data protection
  • regulation
  • legal liability
  • testing
  • standards and auditing practices
  • copyright

The committee drew particular attention to the following recommendations. It said the government should:

  • prepare for “a period of protracted international competition and technological turbulence”
  • make market competition an explicit AI policy objective
  • make sure businesses do not unduly influence regulation
  • be aware of the advantages and disadvantages of open and closed models, including security risks from open models and market concentration risks from closed models
  • avoid narrowly focusing on risks and safety and support innovation and development
  • increase opportunities by improving computing power and infrastructure, skills, and support for academic spinouts
  • explore developing a sovereign LLM
  • address copyright issues by introducing legislation, empowering rightsholders to check if their data has been used, and encouraging developers to use licensed material
  • address immediate risks of LLMs enabling malicious activities, such as cyber attacks, terrorism, disinformation and activities relating to child sexual abuse material
  • introduce protections concerning discrimination, bias and data protection
  • review catastrophic risks (defined as more than 1,000 UK deaths and tens of billions in financial damages) and implement mandatory safety tests for high-risk, high-impact models
  • empower existing regulators with investigatory and sanctioning powers, cross-sector guidelines and a review of legal liability, as well as provide central support to regulators
  • develop accredited standards and common auditing methods, with a view to establishing proportionate regulation

3. Previous government’s response to the report

The previous, Conservative government responded to the committee’s report in May 2024.[6] References in this section to ‘the government’ are to that administration.

The government agreed with the committee that advances in LLMs and generative AI presented both opportunities and risks.[7] It highlighted factors that it said enabled the UK to take advantage of the market in generative AI, including its “thriving AI sector, world-leading academic institutions, and well-established expert regulators”. It described the activities it had undertaken to support the development of AI in the UK, including overall investment of over £3.5bn since 2014, and highlighted that it had set up the AI Safety Institute and the Central AI Risk Function, both within the Department for Science, Innovation and Technology (DSIT), to address potential risks.[8]

The government agreed with the committee that open-source AI would promote competition and innovation, but also that it would need to mitigate the potential risk of these models being used maliciously.[9] It said it would work with the open-source community to ensure any future policies had minimal impact on open-source activity while also addressing threats.

On regulation, the government said it had chosen a “pro-innovation regulatory approach”, as set out in its 2023 AI regulation white paper and subsequent response in February 2024.[10] It said this approach would involve existing regulators and would be adapted for different sectors of the economy.[11] It also said it was considering technology-specific regulation. The response added that the Central AI Risk Function in DSIT would coordinate regulatory action in partnership with industry.[12]

The government agreed with the committee that the UK should develop its own regulatory approach to AI, rather than follow either the EU or US.[13] However, it also argued that it was important to “recognise the inherently global nature of AI” and that therefore “effective international coordination will be critical in some areas”. This included digital technical standards and the evaluation of LLMs. The government said it would not wait for international consensus before acting if necessary, but that it would not bring forward new legislation on LLMs at that point because it still did not “fully understand the risks or the effectiveness of potential mitigations”.[14]

The government said it believed new laws would eventually be needed to address the challenges posed by AI, “once understanding of risk has matured”.[15] Future legislation would be likely to address transparency and accountability. The government said it had been exploring how liability was currently distributed between organisations in the AI value chain, and it believed the current system did not effectively mitigate risks.

On data protection, the government pointed to the existing laws that constituted the statutory framework for the treatment of personal information.[16] It recognised that trust in technology and the use of data was particularly important in the health sector, and said it would work with the NHS and other public authorities to ensure people’s data was “subject to the highest standards of data protection”.[17]

Addressing the copyright issues the committee had raised, the government said it was “committed to ensuring the continuation of a robust copyright framework that rewards human creativity”.[18] It said the Intellectual Property Office had worked with stakeholders on the issue of copyright and AI but they had been unable to reach a consensus. The government highlighted that there were ongoing court cases concerning AI and copyright that would contribute to the development of law in this area, and that it would not be appropriate for it to comment on these.[19] It added that AI developers should be more transparent about the data, including copyrighted data, used to train their models, because this had implications for biases in AI outputs.

On compute, the government said it had committed to investing in compute infrastructure, including building an “exascale supercomputer” in Edinburgh and establishing the AI Research Resource.[20] Both of these commitments were reversed by the current government (see section 4 below). The Conservative government said it had invested in skills and training to meet future needs resulting from AI. To support start-ups and university spinout companies, the government had accepted the conclusions of the Independent Review of University Spinout Companies and was working with universities on licensing and equity share, as well as gathering information about existing support for spinouts.[21]

The government disagreed with the committee that it should investigate developing a sovereign LLM, arguing that the market for LLM tools was immature and evolving very rapidly.[22] However, it said it was encouraging the public sector to develop the skills needed to use generative AI effectively.

4. Developments since the previous government’s response

4.1 Labour Party manifesto and King’s Speech

In its manifesto ahead of the 2024 general election, the Labour Party said it would “ensure our industrial strategy supports the development of the artificial intelligence (AI) sector”.[23] In its King’s Speech, the incoming Labour government said it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.[24] However, the list of bills in the background briefing notes to the King’s Speech did not include an AI bill.[25]

4.2 AI action plan

In a written statement in July 2024, the secretary of state for science, innovation and technology, Peter Kyle, announced the government had asked entrepreneur Matt Clifford to develop an ‘AI action plan’.[26] Mr Kyle said this would “identify ways to accelerate the use of AI to improve people’s lives by making services better and developing new products”. The terms of reference for the plan stated that it would consider how the UK can:

  • build an artificial intelligence sector that can scale and be competitive globally
  • adopt artificial intelligence to enhance growth and productivity, and support the delivery of the government’s five stated missions across government[27]
  • use artificial intelligence in government to transform citizens’ experiences of interacting with the state and to boost take-up in all parts of the public sector and the wider economy
  • strengthen the enablers of artificial intelligence adoption, such as data, infrastructure, public procurement processes and policy, and regulatory reforms[28]

Mr Kyle said Mr Clifford would deliver his recommendations by September, and that an ‘AI opportunities unit’ would be set up in DSIT to support their implementation.[29] On 30 October 2024 the Financial Times reported that Mr Clifford’s recommendations would include reducing the cost and complexity of visas for people with expertise in artificial intelligence and creating special zones for data centres.[30]

4.3 Investments

In October 2023 the then government announced that it would fund an ‘exascale’ computer in Edinburgh.[31] This would be 50 times more powerful than the UK’s current top-end system. In addition, the then government announced funding for a national AI Research Resource that would “maximise the potential of AI and support critical work around the safe development and use of technology”. These two investments totalled £1.3bn.[32]

In August 2024 it was reported that the new government would not proceed with this funding.[33] When questioned about this in a debate in the House of Commons, Peter Kyle said that while the previous government had announced these schemes it had not allocated money from the budget to them.[34] The BBC reported comments from technology entrepreneurs criticising the decision. Conversely, the Financial Times quoted supercomputing experts supporting the decision, saying “it was not the most ‘strategic’ move to invest almost exclusively in a single type of supercomputing hardware, and the new government should instead focus resources on a range of hardware, software and skills”.[35]

4.4 Legislation

In his July 2024 written statement, Mr Kyle also said the government would introduce legislation to regulate some AI companies, as stated in the party’s manifesto. He said it would put the AI Safety Institute on a statutory footing, “providing it with a permanent remit to enhance the safety of AI over the longer term”. He said the proposed legislation would:

[…] be highly targeted and will support growth and innovation by ending regulatory uncertainty for AI developers, strengthening public trust, and boosting business confidence. [It] will avoid creating new rules for those using AI and will instead apply to the small number of developers of the most powerful AI models with a focus on the AI systems of tomorrow and not today.[36]

Mr Kyle said the government would launch a consultation on its proposed legislation soon. In remarks to the Financial Times’s ‘Future of AI’ summit on 6 November 2024, he said legislation would be introduced in the next year.[37]

In August 2024, Mr Kyle and Chancellor Rachel Reeves told executives from large technology firms and investors that the forthcoming legislation would focus on the most advanced LLMs and would not seek to regulate the entire industry.[38] Mr Kyle said the bill would have two key elements: making existing agreements between technology companies and the government legally binding and turning the AI Safety Institute from a directorate of DSIT into an arm’s-length body. Both these measures would build on outcomes of the international AI safety summit the previous government hosted in November 2023.

The November 2023 AI safety summit was a meeting of governments, AI companies and others to consider the risks of AI and discuss how they could be mitigated by joint action.[39] In addition to signing the Bletchley Declaration, which set out a mutual understanding of ‘frontier AI’, some countries and companies at the summit agreed that governments would test new frontier AI models for safety before they were released.[40] The governments of Australia, Canada, the EU, France, Germany, Italy, Japan, the Republic of Korea, Singapore and the US signed up to this agreement alongside AI developers including Amazon Web Services, Anthropic, Google DeepMind, Meta, Microsoft and OpenAI.[41] This agreement is voluntary and not legally binding. At the summit the UK launched the AI Safety Institute, which would research AI safety and build public sector capability to conduct safety tests on AI models.

4.5 AI assurance

On 6 November 2024 the government published ‘Assuring a responsible future for AI: Accelerating the growth of the UK’s AI assurance market’, a research and analysis paper on ‘AI assurance’, which the government defines as tools and techniques to evaluate, measure and communicate the trustworthiness of AI systems.[42] The paper examined the current state and future potential of the UK’s AI assurance market. It found:

  • There are currently an estimated 524 firms supplying AI assurance goods and services in the UK, including 84 specialised AI assurance companies.
  • Altogether, these 524 firms generate an estimated £1.01bn and employ an estimated 12,572 people, making the UK’s AI assurance market bigger relative to its economic activity than those in the US, Germany and France.
  • Despite evidence that both demand and supply are currently below their potential, there are strong indications that the market is set to continue growing, with the potential to exceed £6.53bn by 2035 if opportunities to drive future growth are realised.[43]

The report then set out three areas in which the government said it would take action to grow the market:[44]

  • developing an ‘AI assurance platform’ to provide a “one-stop-shop” for information on AI assurance and to host new DSIT resources supporting start-ups and SMEs to engage with AI assurance
  • increasing the supply of third-party assurance by working with industry to develop a ‘Roadmap to trusted third-party assurance’ and collaborating with the AI Safety Institute to advance AI assurance research, development and diffusion
  • enabling the interoperability of AI assurance by developing a ‘Terminology tool for responsible AI’ to help industry and assurance service providers to navigate the international governance ecosystem

5. Read more


Cover image by Freepik.

References

  1. Google Cloud, ‘What is artificial intelligence?’, 7 November 2024.
  2. House of Lords Communications and Digital Committee, ‘Large language models and generative AI’, 2 February 2024, HL Paper 54 of session 2023–24, p 9.
  3. Carnegie Endowment, ‘A primer on compute’, 30 April 2024.
  4. House of Lords Communications and Digital Committee, ‘Large language models and generative AI’, 2 February 2024, HL Paper 54 of session 2023–24, p 7.
  5. As above, pp 73–9.
  6. House of Lords Communications and Digital Committee, ‘Government response to the House of Lords Communications and Digital Committee’s ‘Large language models and generative AI’ report’, 2 May 2024.
  7. As above, p 3.
  8. As above, p 4.
  9. As above, p 3.
  10. As above, p 1. See: Department for Science, Innovation and Technology and Office for Artificial Intelligence, ‘A pro-innovation approach to AI regulation’, updated 3 August 2023; and ‘A pro-innovation approach to AI regulation: Government response’, updated 6 February 2024.
  11. As above, p 4.
  12. As above, pp 7–8.
  13. As above, p 9.
  14. As above, p 10.
  15. As above, p 11.
  16. As above, p 8.
  17. As above, p 9.
  18. As above, p 12.
  19. As above, p 13.
  20. As above, p 5.
  21. As above, p 6.
  22. As above, p 7.
  23. Labour Party, ‘Labour Party manifesto 2024’, June 2024, p 35.
  24. Prime Minister’s Office and His Majesty King Charles III, ‘King’s Speech 2024’, 17 July 2024.
  25. Prime Minister’s Office, ‘King’s Speech 2024: Background briefing notes’, 17 July 2024.
  26. House of Commons, ‘Written statement: AI opportunities action plan (HCWS24)’, 26 July 2024.
  27. These missions are to “kickstart economic growth, make Britain a clean energy superpower, take back our streets, break down barriers to opportunity and build an NHS fit for the future” (Labour Party, ‘Mission-driven government’, accessed 14 November 2024).
  28. Department for Science, Innovation and Technology, ‘Artificial intelligence (AI) opportunities action plan: Terms of reference’, 26 July 2024.
  29. House of Commons, ‘Written statement: AI opportunities action plan (HCWS24)’, 26 July 2024.
  30. Anna Gross, ‘UK visa process for AI experts should be streamlined, says government adviser’, Financial Times (£), 30 October 2024.
  31. Department for Science, Innovation and Technology et al, ‘Game-changing exascale computer planned for Edinburgh’, 9 October 2023.
  32. BBC News, ‘Government shelves £1.3bn UK tech and AI plans’, 2 August 2024.
  33. As above.
  34. HC Hansard, 2 September 2024, col 94.
  35. BBC News, ‘Government shelves £1.3bn UK tech and AI plans’, 2 August 2024; and Anna Gross et al, ‘UK government plans fresh investment in supercomputing despite axing aid’, Financial Times (£), 15 August 2024.
  36. House of Commons, ‘Written statement: AI opportunities action plan (HCWS24)’, 26 July 2024.
  37. Anna Gross and Stephanie Stacey, ‘UK will legislate against AI risks in next year, pledges Kyle’, Financial Times (£), 6 November 2024.
  38. Anna Gross and George Parker, ‘UK’s AI bill to focus on ChatGPT-style models’, Financial Times (£), 1 August 2024.
  39. Foreign, Commonwealth and Development Office et al, ‘AI safety summit 2023’, accessed 11 November 2024.
  40. Prime Minister’s Office, ‘Chair’s summary of the AI safety summit 2023, Bletchley Park’, 2 November 2023.
  41. Prime Minister’s Office, ‘Safety testing: Chair’s statement of session outcomes’, 2 November 2023.
  42. Department for Science, Innovation and Technology, ‘Assuring a responsible future for AI: Accelerating the growth of the UK’s AI assurance market’, 6 November 2024, p 3.
  43. As above, p 4.
  44. As above, p 5.