• Specifically, the growth of digital data and the advent of new algorithms using ‘machine learning’ and artificial intelligence (AI) have increased the potential for algorithmic decision-making.
• There is no universally accepted definition of AI, a term that covers a broad range of concepts and systems. It can, however, be regarded as a group of algorithms that can modify themselves and create new algorithms in response to learned inputs and data, rather than relying solely on the inputs they were designed to recognise. This ability to change, adapt and grow based on new data is described as ‘intelligence’.
• Machine learning is commonly regarded as a subset of AI that allows computers to learn directly from examples, data and experience, finding rules or patterns independently rather than following instructions programmed in advance.
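The idea of a machine deriving a rule from data, rather than being given one, can be illustrated with a deliberately minimal sketch. The learner below is a toy “one-rule” classifier: given labelled numeric examples (all invented for illustration), it searches for the threshold that best separates the two classes.

```python
def learn_threshold(examples):
    """examples: list of (value, label) pairs, labels 0 or 1.
    Returns the threshold t that maximises the accuracy of the
    rule 'predict 1 if value >= t'."""
    candidates = sorted({v for v, _ in examples})
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        # Count how many examples this candidate rule gets right.
        correct = sum((v >= t) == bool(label) for v, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Toy training data: exam scores labelled pass (1) or fail (0).
data = [(35, 0), (42, 0), (48, 0), (55, 1), (61, 1), (70, 1)]
print(learn_threshold(data))  # → 55: the rule was found from the data, not hand-coded
```

Real machine-learning systems fit far richer models to far larger datasets, but the principle is the same: the rule is an output of the data, not an input from the designer.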
• These technologies have produced systems capable of simulating human behaviours such as learning, reasoning and classification. Such systems can make predictions and decisions on which public policy can be based, or through which it can be enacted.
• For example, the Harm Assessment Risk Tool (HART), designed by Durham Constabulary and academics at the University of Cambridge, is a decision support system that takes predictors such as past criminal behaviour, age, gender and postcode and uses them to predict an individual’s risk of reoffending.
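The general shape of such a decision-support tool can be sketched as a function from predictors to a risk band. This is an illustration only: HART is reported to be a random-forest model built over several dozen predictors, whereas the sketch below uses invented features and invented weights, and bears no relation to the real model’s logic or outputs.

```python
def risk_band(prior_offences, age, months_since_last_offence):
    """Map illustrative predictors to a 'high'/'moderate'/'low' band.
    Every weight and cut-off here is invented for illustration."""
    score = 0
    score += 2 * prior_offences                      # invented weight
    score += 3 if age < 25 else 0                    # invented weight
    score += 2 if months_since_last_offence < 12 else 0  # invented weight
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

print(risk_band(prior_offences=4, age=22, months_since_last_offence=6))   # → high
print(risk_band(prior_offences=0, age=40, months_since_last_offence=60))  # → low
```

Even this toy version makes the policy concerns concrete: the output depends entirely on which predictors are chosen and how they are weighted, which is why the inclusion of variables such as postcode has attracted criticism.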
• The potential uses of these technologies have proved controversial. As the Government notes in its published guidance on AI in the public sector, there are a number of potential benefits: AI can provide more accurate information, forecasts and predictions, leading to better outcomes (for example, more accurate medical diagnoses); it can enable more personalised services; and it may offer solutions to some of the most complex and challenging policy problems.
• However, no AI system can perform well without large quantities of relevant, high-quality data, which raises questions about how such data should be collected, stored and shared, and under what restrictions. Further, welfare and civil rights campaigners have argued that algorithmic decision-making can disadvantage people from particular areas and social groups, and have raised concerns about the lack of explicit standards, openness and transparency in the use of algorithmic systems in policy making.