Catherine M. Sharkey & Caterina Barrena Hyneman
I. Is there a national act containing a legal definition of Automated Administrative Decisions?
No. At present there is no national Act in the United States that contains a legal definition of automated administrative decisions. Instead, the U.S. federal government has defined automated systems and artificial intelligence (AI) in a number of administrative policy directives and executive instruments.
The Trump administration issued Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, emphasizing global competition as a driver of AI development. That Order revoked certain Biden-era executive-branch guidance (including Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) but retained a working definition of AI as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
Other important federal policy instruments include the National Institute of Standards and Technology (NIST) AI Risk Management Framework (NIST Report) and Office of Management and Budget (OMB) memoranda such as Accelerating Federal Use of AI through Innovation, Governance, and Public Trust and Driving Efficient Acquisition of Artificial Intelligence in Government. The Executive Orders and OMB memoranda carry binding authority within the Executive Branch; the NIST Report is advisory and non-binding.
The OMB memoranda have in some cases adopted definitions drawn from federal statutes (for example, the John S. McCain National Defense Authorization Act). One widely used working definition (reflected in OMB and other documents) describes AI to include systems that:
- Perform tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets;
- Solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action;
- Are designed to think or act like a human (including cognitive architectures and neural networks);
- Use techniques such as machine learning to approximate cognitive tasks; or
- Are designed to act rationally, for example intelligent software agents or embodied robots that achieve goals using perception, planning, reasoning, learning, communication, decision-making, and action.
These working definitions have persisted across administrations and influenced subsequent guidance (including the now-revoked Biden OMB memorandum Advancing Government Innovation and Risk Management for Agency Use of Artificial Intelligence). In addition, the Administrative Conference of the United States (ACUS) has produced influential recommendations identifying core areas for federal AI development and governance — including transparency, bias mitigation, technical capacity, procurement, data, privacy, security, decisional authority, and oversight.
Congress has not yet enacted a comprehensive national AI statute, although it has considered legislative packages (including proposals that would pause state AI laws to allow a national framework). In practice, therefore, the regulatory landscape in the United States presently relies on a mixture of executive orders, agency guidance, advisory frameworks, and proposed legislation rather than a single, binding national act that defines “automated administrative decisions.”
