AUTOMATED ADMINISTRATIVE RESTRICTIVE DECISIONS: GIVING REASONS, TRANSPARENCY, AND REMEDIES

Maurizia De Bellis

Abstract

This chapter conducts a comparative analysis of procedural guarantees applicable to automated administrative decisions that adversely affect individuals. Focusing on three scenarios—AI-driven tax assessments, algorithmic fraud detection in welfare administration, and predictive-policing systems—the chapter examines how different jurisdictions address the requirements to give reasons, ensure transparency, provide access to algorithmic information, and guarantee effective administrative and judicial remedies. It identifies a general trend toward reinforcing human oversight and expanding the duty to give reasons in response to increasing automation, while also highlighting significant divergences in transparency regimes, access to source code and training data, and the availability of collective standing. The chapter further shows how prohibited AI practices (such as predictive policing under the EU AI Act) reshape the legal framework of accountability. The concluding comparative assessment distils the convergences and persistent variations across the examined legal systems and reflects on their implications for the evolving architecture of algorithmic administrative law.

Table of Contents

  1. Introduction

  2. An erroneous determination by an AI-led tax program
    2.1 The requirement to give reasons and provide evidence
    2.2 Administrative review and human oversight
    2.3 Judicial remedies

  3. A distorted fraud algorithm
    3.1 Discrimination and proof
    3.2 Access and transparency
    3.3 The standing of an association

  4. A discriminatory predictive policing AI
    4.1 The unlawfulness of a restrictive measure resulting from discriminatory predictive policing AI
    4.2 Access and transparency
    4.3 Remedies
    4.4 Damages

  5. Concluding Remarks: Commonalities and Differences


IJPL Vol. 18, No. 1