The Artificial Intelligence (Regulation) Bill [HL] would establish a new body, the AI Authority, with various functions designed to help address artificial intelligence (AI) regulation in the UK. These would include requirements for the AI Authority to ensure relevant existing regulators were taking account of AI; to ensure alignment in approach between these regulators; and to undertake a gap analysis of regulatory responsibilities with respect to AI. The AI Authority would also have other functions, including monitoring economic risks arising from AI, conducting horizon-scanning of developing technologies, facilitating sandbox initiatives to allow the testing of new AI models, and accrediting AI auditors. In addition, the bill would introduce a set of regulatory principles governing the development and usage of AI.
The bill would represent a departure from the UK government’s current approach to the regulation of AI. The government has said primary legislation will be necessary to regulate AI at some future point. However, it contends that it is too soon in these technologies’ evolution to legislate effectively and that doing so now may be counterproductive. Ministers argue that existing sectoral regulators are best placed to regulate AI, with support from a central function currently in development within the Department for Science, Innovation and Technology. In February 2024, the government announced a range of measures in support of its approach—building upon the white paper it issued on the regulation of AI in 2023—including the first iteration of guidance to regulators, which includes voluntary regulatory principles. This approach has been welcomed by many, including prominent technology companies such as Google and Microsoft. However, others, such as the Ada Lovelace Institute, have voiced concerns that relying on voluntary commitments from key AI developers, rather than binding legal requirements, is insufficient.
The UK’s approach contrasts with that taken by the EU, which is in the process of finalising wide-ranging legislation that will regulate the development and usage of AI across all member states. In the US, the regulation of AI is currently being examined at both federal and state level, with some states having already introduced legislation aimed at regulating AI, particularly around privacy and accountability. The US has not legislated at a federal level. Instead, the White House issued an executive order in October 2023 setting out key principles and actions aimed at ensuring the safe development and usage of AI.