The Enforcement Problem — Orange Pill Wiki
CONCEPT

The Enforcement Problem

North's brutal formulation — rules without enforcement are suggestions — applied to the novel challenge of monitoring and sanctioning the behavior of human-machine systems whose reasoning is opaque and whose outputs are joint products no existing framework was designed to evaluate.

North was fond of a formulation that had the quality of an aphorism but functioned as an analytical proposition: rules without enforcement are suggestions. Compliance requires enforcement, and enforcement requires mechanisms — institutions dedicated to monitoring behavior, detecting violations, imposing sanctions, and creating the expectation that violations will be detected and sanctioned reliably enough to deter the rational actor.

The AI transition creates enforcement problems of a qualitatively new character. Previous challenges involved monitoring human behavior. The AI transition introduces a new object of enforcement: the human-machine system, whose joint output must be evaluated against quality, safety, or ethical standards. Enforcing rules against this composite actor requires conceptual and institutional innovations the existing apparatus does not possess. Bar associations cannot evaluate AI-assisted legal work with tools designed for human professional conduct. Medical boards cannot assess AI-assisted diagnosis with frameworks built for human decision-makers. The gap between the need for new enforcement mechanisms and the capacity to build them is an institutional void with measurable costs.

In the AI Story

Consider professional standards. A lawyer is bound by rules requiring competence, diligence, and candor. When the lawyer drafts personally, enforcement is conceptually straightforward: the work product is evaluated against the standards. When the lawyer uses AI, the problem changes. The brief may cite cases that do not exist — a well-documented failure mode of large language models. The analysis may contain errors invisible to someone reviewing machine output without independent verification. The question enforcement must answer — did the lawyer exercise competence and diligence? — now requires evaluating not just the output but the process by which it was generated, a process involving an interaction between human judgment and a machine whose reasoning is opaque.
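The verification step this kind of enforcement presumes can be made concrete. The sketch below is a hypothetical illustration, not any bar association's actual tooling: it flags every citation in a draft that fails to resolve against an index of known cases. The regex, the hard-coded index, and the simplified citation format are all assumptions; a real verifier would query a case-law database.

```python
import re

# Hypothetical index of citations known to resolve to real cases.
# A real check would query a case-law database, not a hard-coded set.
KNOWN_CITATIONS = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

# Simplified pattern: "Party v. Party, volume Reporter page (year)".
# Party names are capitalized words, optionally joined by short connectors.
PARTY = r"[A-Z][\w.'&\-]*(?:\s(?:[A-Z][\w.'&\-]*|of|the|for|and))*"
CITATION_PATTERN = re.compile(
    PARTY + r" v\. " + PARTY + r", \d+ [A-Za-z0-9.]+ \d+ \(\d{4}\)"
)

def unverified_citations(brief_text: str) -> list[str]:
    """Return every citation in the draft that fails to resolve."""
    return [c for c in CITATION_PATTERN.findall(brief_text)
            if c not in KNOWN_CITATIONS]

brief = (
    "As held in Brown v. Board of Education, 347 U.S. 483 (1954), the rule "
    "applies; see also Varghese v. China Southern Airlines, 925 F.3d 1339 (2019)."
)
flagged = unverified_citations(brief)
print(flagged)  # flags only the citation absent from the index
```

Even this toy gate illustrates the structural point: checking that a cited case exists is mechanical, but the enforcement question — whether the lawyer actually performed such a check before filing — is about process, and leaves no trace in the output itself.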

The Schwartz incident — the May 2023 federal court case in which a New York attorney filed a brief containing six fabricated judicial citations generated by ChatGPT — is the visible edge of this problem. Bar associations have begun issuing guidance on AI use: disclosure requirements, mandatory human review, cautions against unverified citations. But guidance is not enforcement. The disciplinary mechanisms — investigation of complaints, hearings, sanctions — are staffed by people trained to evaluate human conduct, not human-machine collaboration.

The problem extends to every domain where AI-generated output must meet standards. In education, the standard is that student work reflects student learning. When AI can produce work indistinguishable from a student's own, enforcement requires detecting AI-generated content — a detection problem that may be fundamentally insoluble as models improve. In healthcare, the requirement is the applicable standard of care. When AI assists diagnosis, enforcement requires evaluating whether the physician appropriately incorporated, modified, or overrode the machine's recommendations. In financial markets, AI-generated trading strategies operate in a regulatory gray zone the SEC and FINRA are struggling to address.

In each domain, the problem has the same structure. The standards were designed for human actors. The enforcement mechanisms were designed to monitor human behavior. The AI transition has introduced a new kind of actor — the human-machine system — whose behavior does not map cleanly onto the categories the existing apparatus was built to address. The machine's reasoning is opaque. The human's contribution is variable. The output is a joint product whose quality depends on the interaction rather than on either contributor alone.

Origin

The emphasis on enforcement emerged from North's critique of first-generation institutional economics, which he argued had overemphasized formal rules while neglecting the mechanisms making rules effective. His 1990 book gave enforcement equal theoretical status with formal rules and informal norms as one of three constituent elements of institutional frameworks.

The specific application to AI-era enforcement has been developed by scholars including Jack Balkin and Frank Pasquale, and by the emerging literature on algorithmic accountability. The New York bar's 2023 guidance on AI-assisted legal practice and the FDA's 2024 draft guidance on AI in medical devices represent first-generation formal responses to the enforcement challenge.

Key Ideas

Rules without enforcement are suggestions. The aphorism compresses a structural proposition: formal elegance is irrelevant without mechanisms that make compliance expected.

The object of enforcement has changed. Previous challenges involved monitoring humans. AI requires monitoring human-machine systems whose joint output cannot be reduced to either contributor alone.

Opacity defeats traditional evaluation. The machine's reasoning is not inspectable in the form reviewers would need to evaluate it — creating structural enforcement gaps.

Guidance is not enforcement. Professional bodies issuing AI use guidelines mistake the articulation of standards for the capacity to apply them.

New mechanisms are required. Audit trails, new competence standards, reconstructed liability frameworks, and rebuilt institutional capacity — enforcement infrastructure designed for human-machine systems rather than humans alone.
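The first mechanism on that list can be sketched concretely. The record below is a hypothetical illustration of what an audit-trail entry for AI-assisted work product might capture — an accountable human, an identified model, a fingerprint of the exact machine output, and an explicit review attestation bound together. The field names and the `attest` helper are assumptions for illustration, not any regulator's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class AIAssistanceRecord:
    """One audit-trail entry for a piece of AI-assisted work product."""
    professional_id: str   # the accountable human (e.g. a bar number)
    model_identifier: str  # which system produced the draft
    output_sha256: str     # fingerprint of the exact output reviewed
    human_verified: bool   # attestation that independent review occurred
    timestamp: str         # when the attestation was made (UTC, ISO 8601)

def attest(professional_id: str, model: str,
           output: str, verified: bool) -> AIAssistanceRecord:
    """Build a record binding the reviewer to the exact text reviewed."""
    return AIAssistanceRecord(
        professional_id=professional_id,
        model_identifier=model,
        output_sha256=hashlib.sha256(output.encode("utf-8")).hexdigest(),
        human_verified=verified,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = attest("NY-12345", "example-llm-v1", "Draft brief text ...", verified=True)
print(rec.human_verified, len(rec.output_sha256))
```

Because the record hashes the exact output, a later discrepancy between the filed document and the attested text is detectable: enforcement can ask not only whether review was claimed but whether it covered the text actually submitted.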

Debates & Critiques

Major debates concern whether existing enforcement frameworks can be adapted to AI contexts through marginal reform, or whether fundamentally new mechanisms are required. Proponents of adaptation point to successful extensions of professional regulation to previous technological transitions. Critics argue the AI case is qualitatively different because of the opacity and joint-production features. The Schwartz case and analogous incidents across professional domains provide empirical tests: adaptation has produced guidance but few successful enforcement actions against AI-induced failures.

Further reading

  1. Douglass North, Institutions, Institutional Change and Economic Performance, ch. 7 (Cambridge University Press, 1990)
  2. Frank Pasquale, New Laws of Robotics (Harvard University Press, 2020)
  3. Jack Balkin, 'The Three Laws of Robotics in the Age of Big Data' (Ohio State Law Journal, 2017)
  4. Ryan Calo, 'Robotics and the Lessons of Cyberlaw' (California Law Review, 2015)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.