
• The increase in digital data and the advent of new algorithms using ‘machine learning’ and artificial intelligence (AI) have increased the potential for algorithmic decision-making.
• There is no universally accepted definition of AI, which applies to a broad range of concepts and systems. However, it can be regarded as a group of algorithms that can modify themselves and create new algorithms in response to learned inputs and data, rather than relying solely on the inputs they were designed to recognise. This ability to change, adapt and grow based on new data is described as ‘intelligence’.
• Machine learning is commonly regarded as a subset of AI which allows computers to learn directly from examples, data and experience, finding rules or patterns independently.
• The result of these technologies is the development of systems capable of simulating human behaviours such as learning, reasoning and classification. These systems can make predictions and decisions on which public policy can be based or through which it can be enacted.
• For example, the Harm Assessment Risk Tool (HART), designed by Durham Constabulary and academics at the University of Cambridge, is a decision support system. It takes predictors such as past criminal behaviour, age, gender and postcode and uses these to predict the risk of an individual reoffending.
• The potential uses of these technologies have proved controversial. There are a number of potential benefits to using AI, as the Government notes in its published guidance on AI in the public sector. These include that AI can potentially provide more accurate information, forecasts and predictions, leading to better outcomes (for example, more accurate medical diagnoses); deliver more personalised services; and offer solutions to some of the most complex and challenging policy problems.
• However, no AI system can perform well without a large quantity of relevant high-quality data, raising questions about how this data should be collected, stored and shared, and under what restrictions. Further, welfare and civil rights campaigners have argued that algorithmic decision-making can disadvantage people from particular areas and social groups. They have also raised concerns about a lack of explicit standards, openness and transparency in the use of algorithmic systems in policy making.
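To illustrate in the abstract how a decision support tool of the kind described above turns predictors into a risk prediction, the sketch below scores a hypothetical individual with a simple logistic model. The weights, predictors and risk thresholds are invented for illustration only; they do not reflect the actual HART model, which is substantially more complex.

```python
import math

# Invented weights for two illustrative predictors: the number of prior
# offences raises the score, while greater age lowers it.
WEIGHTS = {"prior_offences": 0.6, "age": -0.05}
BIAS = -0.5

def risk_score(prior_offences: int, age: int) -> float:
    """Combine the predictors linearly, then squash to a 0-1 score."""
    z = BIAS + WEIGHTS["prior_offences"] * prior_offences + WEIGHTS["age"] * age
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

def risk_band(score: float) -> str:
    """Map a continuous score onto the kind of banded output a
    decision support system might present to an officer."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "moderate"
    return "high"

# Example: a 22-year-old with five prior offences scores 'high';
# a 50-year-old with none scores 'low'.
print(risk_band(risk_score(prior_offences=5, age=22)))   # high
print(risk_band(risk_score(prior_offences=0, age=50)))   # low
```

The example also makes concrete why such tools attract scrutiny: the choice of predictors, weights and band thresholds directly determines who is labelled high risk, yet none of these choices is visible in the tool's output.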

Related posts

  • Forensic science and the criminal justice system

In May 2019, the House of Lords Science and Technology Committee published a report warning that the quality and delivery of forensic science services in England and Wales were inadequate. It recommended several reforms intended to halt the damage this was causing to public trust in the criminal justice system. The House of Lords is scheduled to debate this report on 26 April 2021. This article summarises the committee’s recommendations, the Government’s response and subsequent developments.
  • Facial recognition technology: police powers and the protection of privacy

    Facial recognition technology is used to identify individuals or to verify someone’s identity. Live facial recognition has been used by several police forces in England and Wales in collaboration with the private sector. There have been calls for increased scrutiny and oversight of the powers of the police to use the technology, including in the House of Lords. This article summarises the debates about the use of this technology.