THE LEGAL SYSTEMS SELECTED FOR COMPARISON: UNITED KINGDOM

Gordon Anthony

I. Is there a national act containing a legal definition of Automated Administrative Decisions?

There is not – as yet – any national Act in the UK1 that contains a legal definition of automated administrative decisions. While the need for legislation has been acknowledged – draft legislation was introduced into the UK Parliament in Spring 2025 but has not been enacted – the approach to automated decision-making has so far tended to rely upon soft-law means of regulation and the use of guiding principles rather than formal rules. Working definitions have nonetheless emerged in that context. For instance, a government White Paper published in 2023 defined AI with reference to “2 characteristics … the ‘adaptivity’ of AI [which] can make it difficult to explain the intent or logic of the system’s outcomes” and “the ‘autonomy’ of AI [which] can make it difficult to assign responsibility for outcomes”2. While the White Paper acknowledged that there are different ways of defining AI, it said that defining “AI with reference to these functional capabilities and designing our approach to address the challenges created by these characteristics [would] future-proof our framework against unanticipated new technologies that are autonomous and adaptive … We will, however, retain the ability to adapt our approach to defining AI if necessary, alongside the ongoing monitoring and iteration of the wider regulatory framework” (see paragraphs 39–41). That framework – which is intended to be “pro-innovation” – centres upon five principles: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress (these are further analysed under “IV” below). Initial Guidance on the principles was published in February 2024.

It remains to be seen whether any legislation that might be introduced in the UK Parliament will incorporate the above definition or whether it might draw upon one that has been used elsewhere. For instance, an earlier Private Member’s Bill in the UK Parliament defined AI as: “technology enabling the programming or training of a device or software to (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective” (AI was for these purposes said to include “generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained”)4. This definition focused primarily upon the outputs that AI generates, whereas other definitions, such as that in Article 3(1) of the EU’s AI Act, include an express mention of autonomy. Article 3 thus defines an AI system as “ … a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. Either of these definitions, or a mix of both, may well influence any future definition in UK statute law.

IJPL Vol. 18, No. 1