Angela Ferrari Zumbini
Abstract
The chapter analyses the growing use of automated administrative decisions and highlights the procedural risks that arise when public authorities rely on algorithms and AI. It argues that, despite the efficiency gains associated with automation, essential guarantees such as transparency, participation, access to reasons and human oversight become more difficult to ensure.
The introductory section situates these issues within ongoing national and supranational regulatory responses and recent judicial disputes in Europe, China and the United States. The chapter then sets out the research objectives: to determine whether the problems caused by automation are shared across legal systems and to identify how different jurisdictions protect the rights of individuals affected by automated decisions.
It explains that these questions are investigated through the Common Core methodology, adapted from private law and applied here to administrative law through a factual questionnaire and hypothetical cases. The methodological section clarifies how legal formants, national reports and case-based comparisons reveal the practical operation of procedural safeguards in automated settings.
Finally, Section 4 describes the multinational research team, the drafting and revision of the questionnaire, and the structure of the volume, which includes national reports, case analyses and comparative chapters. Together, these sections provide the conceptual and methodological foundation for a systematic comparative study of automated administrative decision-making.
Table of Contents
1. Introduction
2. Research objectives
3. Methodology
3.1 The Common Core method
3.2 The application of the Common Core method to administrative law
3.3 Applying the Common Core method to automated administrative decisions
4. Research team and structure of the volume
