Maurizia De Bellis
Abstract
This chapter conducts a comparative analysis of procedural guarantees applicable to automated administrative decisions that adversely affect individuals. Focusing on three scenarios—AI-driven tax assessments, algorithmic fraud detection in welfare administration, and predictive-policing systems—the chapter examines how different jurisdictions address the requirements to give reasons, ensure transparency, provide access to algorithmic information, and guarantee effective administrative and judicial remedies. It identifies a general trend toward reinforcing human oversight and expanding the duty to give reasons in response to increasing automation, while also highlighting significant divergences in transparency regimes, access to source code and training data, and the availability of collective standing. The chapter further shows how prohibited AI practices (such as predictive policing under the EU AI Act) reshape the legal framework of accountability. The concluding comparative assessment distils the convergences and persistent variations across the examined legal systems and reflects on their implications for the evolving architecture of algorithmic administrative law.
Table of Contents
1 Introduction
2 An erroneous determination by an AI-led tax program
2.1 The requirement to give reasons and provide evidence
2.2 Administrative review and human oversight
2.3 Judicial remedies
3 A distorted fraud algorithm
3.1 Discrimination and proof
3.2 Access and transparency
3.3 The standing of an association
4 A discriminatory predictive policing AI
4.1 The unlawfulness of a restrictive measure based on discriminatory predictive policing AI
4.2 Access and transparency
4.3 Remedies
4.4 Damages
5 Concluding Remarks: Commonalities and Differences
