Giacinto della Cananea
Abstract
This chapter investigates whether a common substantive basis exists for governing the use of artificial intelligence in administrative action across diverse legal systems. Building on two hypothetical cases—access to the algorithm’s source code and the reasonableness of AI-driven decisions—it examines how courts and legislatures respond to questions of transparency, human oversight, and judicial review in both EU and non-EU jurisdictions. The analysis reveals significant divergences in regulatory approaches, particularly where intellectual property protections and “black box” systems limit access and accountability. Yet it also uncovers shared structural principles: legality, due process, transparency, proportionality, and human supervision remain essential constraints on automated administrative power. Through this functional comparative method, the chapter shows that although administrative cultures and institutional designs vary widely, the foundational principles of administrative law continue to anchor and shape the governance of AI, suggesting the emergence—albeit uneven—of a common core of safeguards in algorithmic public decision-making.
Table of Contents
Introduction
A best-case or worst-case scenario?
Key regulatory and institutional data
The ‘functional’ analysis: the two hypothetical cases
Access to the algorithm’s source code
The principle of reasonableness and black boxes
Common and distinctive features: reasons and implications
