The House of Lords Liaison Committee published its report ‘AI in the UK: No room for complacency’ on 18 December 2020. The report examined the progress made by the UK government in the implementation of the recommendations made by the Select Committee on Artificial Intelligence in its 2018 report, ‘AI in the UK: Ready, willing and able?’
1. Select Committee on Artificial Intelligence: What did it recommend?
The select committee was appointed by the House of Lords on 29 June 2017. The committee’s remit was “to consider the economic, ethical and social implications of advances in artificial intelligence (AI)”. It was chaired by Lord Clement-Jones (Liberal Democrat). The committee’s report, published on 16 April 2018, made a “large number of recommendations, mainly addressed to the government”. The key recommendations included:
- The government, with the Competition and Markets Authority (CMA), should review “proactively the use and potential monopolisation” of data by big technology companies operating in the UK.
- The government should incentivise the development of new approaches to the auditing of datasets used in AI.
- The government should encourage greater diversity in the training and recruitment of AI specialists.
- The government should create an AI growth fund for UK small and medium enterprises (SMEs) with a substantive AI component. The fund should be targeted at helping such companies to scale up.
- The number of visas for those with “valuable” skills in AI-related areas should be increased.
- The Government Office for Artificial Intelligence (Office for AI) and the Centre for Data Ethics and Innovation (CDEI) should identify the gaps where existing regulation may not be adequate. The committee stated that blanket AI-specific regulation at the point of the report’s publication would have been inappropriate.
- The CDEI, in consultation with other expert bodies, should produce guidance on the requirement for AI systems to be intelligible. The committee highlighted the importance of making AI understandable to developers, users and regulators.
- The CDEI, with input from the AI Council and the Alan Turing Institute, should develop a cross-sector ethical code of conduct, or ‘AI code’. If necessary, the AI code could provide the basis for statutory regulation. The committee said the code should be drawn up with a “degree of urgency”.
- Industry, through the AI Council, should establish a voluntary mechanism to inform consumers when artificial intelligence is being used to make “significant or sensitive decisions”.
- At earlier stages of education, children should be adequately prepared for using AI. In particular, the committee recommended the “ethical design and use of technology” become an “integral” part of the curriculum.
The committee stated that its recommendations were “designed to support” the government and the UK in “realising the potential of AI for our society and our economy, and to protect society from potential threats and risks”.
The government responded to these recommendations in June 2018. It stated that the aims of its key policies on AI were to:
[…] spur funding and investment; support necessary skills; establish essential infrastructure; and support businesses and places across the UK to develop and adopt AI and data technologies. Missions will drive forward innovation across sectors and will target key opportunities with both domestic and global impact.
However, the government said that it also recognised the risks posed by data aggregation and sharing:
To address these, the Office for Artificial Intelligence, the future Centre for Data Ethics and Innovation and the AI Council will work together to create data trusts. Data trusts will ensure that the infrastructure is in place, that data governance is implemented ethically, and in such a way that prioritises the safety and security of data and the public.
The House of Lords debated the report and the government’s response on 19 November 2018. Lord Stevenson of Balmacara (Labour) called for clear leadership on the issue and questioned who was “driv[ing] this policy”:
The issue is the confusion of bodies that seem to be being set up. There is an AI council, an AI department, the Centre for Data Ethics and Innovation, the GovTech Catalyst team and the new Alan Turing Institute. I could not make out from the government response where they all sit.
Lord Stevenson welcomed the recommendations on an AI code. He asked the government for further details on how it would act on this recommendation.
2. Liaison Committee report: what progress has the government made?
On 11 February 2020, Lord Clement-Jones requested that the House of Lords Liaison Committee hold follow-up evidence sessions on the Select Committee on AI’s report. This was accepted by the Liaison Committee and on 14 October 2020, five members of the former select committee joined the Liaison Committee to hear evidence from nine witnesses over three sessions. The Liaison Committee had previously corresponded with the then minister of state for universities, science, research and innovation, Chris Skidmore, on 30 January 2020. The committee received a reply on 14 August 2020.
The Liaison Committee published its report ‘AI in the UK: No room for complacency’ on 18 December 2020. It found that:
- Since the publication of the Select Committee on AI’s report in April 2018, investment in AI had “grown significantly”. The committee reported that in 2015, the UK saw £245mn invested in AI. By 2018, this had increased to over £760mn. In 2019 this was £1.3bn.
- AI had been deployed in the UK across a range of fields, from agriculture and healthcare to financial services, customer service, retail and logistics.
- It remained “essential” that the public was “well-versed in the opportunities and risks” involved in AI.
- There was a “clear consensus” that ethical AI was the only “sustainable way forward”.
- There was “inertia” over ensuring the labour market was appropriately skilled, which was a “concern” to the committee. Furthermore, problems remained with the UK’s general digital skills base.
- The challenges posed by the development and deployment of AI could not at that time be tackled by cross-cutting regulation. Sector-specific regulators were better placed to identify gaps in regulation, and to learn about AI and apply it to their sectors.
- The committee commended the government for its work in establishing a “considered range of bodies” to advise it on AI over the long term. However, it found that more needed to be done to improve coordination between the “wide variety of bodies”.
The Liaison Committee made several further recommendations, building on those from the select committee’s 2018 report. They included:
- The government must take an active part in educating the public about how their personal data is used by AI. The committee argued the government could no longer take a “passive” role.
- The development of policy to safeguard the use of data, such as data trusts, “must pick up pace”.
- The CDEI should establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.
- The AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
- The Information Commissioner’s Office should develop a training course for use by regulators to ensure that their staff have a “grounding in the ethical and appropriate use” of public data and AI systems. The uptake by regulators should be monitored by the Office for AI.
- The government should set up a cabinet committee responsible for the strategic direction of government AI policy.
- The first task of the cabinet committee should be to commission and approve a five-year strategy for AI.
- The government must appoint a chief data officer to act as a champion for the opportunities presented by AI in the public service and to ensure that “understanding and use of AI, and the safe and principled use of public data, are embedded across the public service”.
- The government must ensure changes to the immigration rules “promote rather than obstruct” the study, research, and development of AI.
On 22 February 2021, the government published its response to the Liaison Committee report. It said the government took seriously the message that there was “no room for complacency”. It highlighted that it was working with the AI Council to ensure the UK retains a global leadership position in AI. The government agreed that its approach needed to focus on “establishing the right arrangements between institutions: across government and the public sector, between regulators, and with academia and industry”.
3. What policies has the government recently announced?
3.1 National AI strategy
On 22 September 2021, the government published its ‘National AI strategy’, setting out its ten-year plan on AI. The strategy set out three high-level aims:
- invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower
- support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions
- ensure the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values
It stated the Office for AI would develop the UK’s national position on governing and regulating AI, which would be set out in a white paper in early 2022.
The Office for AI is a joint Department for Business, Energy and Industrial Strategy (BEIS)–Department for Digital, Culture, Media and Sport (DCMS) unit, responsible for overseeing implementation of the national AI strategy.
3.2 AI assurance
On 8 December 2021, the CDEI published its ‘Roadmap to an effective AI assurance ecosystem’. The purpose of the roadmap is to help those involved in the development and deployment of AI to assess the trustworthiness of AI systems, and to communicate this information to others. It identified six priority areas for action:
- generate demand for reliable and effective assurance across the AI supply chain
- build a dynamic, competitive AI assurance market, that provides a range of effective services and tools
- develop standards that provide a common language and scalable assessment techniques for AI assurance
- build an accountable AI assurance profession
- set out regulatory requirements that can be assured against
- improve links between industry and independent researchers, so that researchers can help develop assurance techniques and identify AI risks
The CDEI is part of DCMS.
3.3 AI roadmap
The AI Council is an independent expert committee, established to provide advice to the UK government. It published a roadmap in January 2021 that set out four pillars on which the UK should build its future in AI:
- research, development and innovation
- skills and diversity
- data, infrastructure and public trust
- national, cross-sector adoption
3.4 AI standards hub
On 12 January 2022, the government announced the creation of a new AI standards hub. The government commissioned the Alan Turing Institute, the British Standards Institution (BSI) and the National Physical Laboratory (NPL) to pilot the initiative. The purpose of the hub is:
[…] to create practical tools for businesses, bring the UK’s AI community together through a new online platform, and develop educational materials to help organisations develop and benefit from global standards.
The hub is part of the government’s national AI strategy.
4. Read more
- Department for Digital, Culture, Media and Sport and Department for Business, Energy and Industrial Strategy, ‘Independent report: Growing the artificial intelligence industry in the UK’, 15 October 2017
- HM Government, ‘Industrial strategy: Building a Britain fit for the future’, 27 November 2017
- Department for Digital, Culture, Media and Sport and Office for Artificial Intelligence, ‘AI activity in UK businesses: Executive summary’, 12 January 2022
- Department for Digital, Culture, Media and Sport and Office for Artificial Intelligence, ‘£23 million to boost skills and diversity in AI jobs’, 10 February 2022
- House of Lords, ‘Written statement: Standard for algorithmic transparency’, 29 November 2021, HLWS413
- House of Lords Justice and Home Affairs Committee, ‘Technology rules? The advent of new technologies in the justice system’, 30 March 2022, HL Paper 180 of session 2021–22
- House of Commons Science and Technology Committee, ‘The right to privacy: Digital data inquiry’, accessed 11 May 2022
- Tech Nation, ‘The UK and artificial intelligence: What’s next’, 9 June 2021
- House of Commons Library, ‘Integrated review 2021: Emerging defence technologies’, 25 March 2021
- House of Lords Library, ‘Predictive and decision-making algorithms in public policy’, 3 February 2020
- House of Commons Library, ‘General debate on the involvement of patients in the use of artificial intelligence in healthcare’, 30 August 2019
- House of Commons Library, ‘Artificial intelligence and automation in the UK’, 21 December 2017