Automated decision making (ADM)—which includes algorithmic decision-making—refers to systems and processes which make decisions without human involvement (full ADM), or which inform or recommend decisions to human operators (partial ADM). The trialling and use of such technologies is becoming increasingly widespread throughout the public sector. ADM has the potential to deliver significant rewards, particularly in terms of productivity and financial efficiencies. However, such technologies also come with significant risks, especially around transparency, bias, and unfair treatment. Several organisations have called for measures to boost accountability and transparency mechanisms, so that individuals know when such technologies have affected them and understand their rights of challenge and appeal.
The Labour government has committed to boosting the productivity of the public sector through the use of innovative technologies such as artificial intelligence (AI) and ADM. However, ministers have also said that they are committed to increasing transparency and fairness around the use of such technologies, and provisions currently before Parliament in the Data (Use and Access) Bill [HL] aim to clarify the UK's existing data protection regime in relation to such systems.
Several observers and campaigners, including Lord Clement-Jones (Liberal Democrat), have called on the government to go further, arguing that additional regulation is necessary to ensure these technologies do not have an adverse impact on those interacting with public services.