In a 1987 talk, Terry Winograd drew a striking structural parallel between artificial intelligence and bureaucracy. Bureaucracies achieve efficiency by formalizing processes—replacing situated judgment with rules, protocols, and standardized procedures. They work well within defined parameters. They fail at boundaries—where formal rules meet situations designers did not anticipate, where the human inside the bureaucratic structure knows the rule does not apply but cannot override it because the system does not recognize exceptions. AI operates identically: statistical patterns of linguistic appropriateness produce contextually competent outputs within the territory those patterns cover, but fail when gaps between rhetorical pattern and substantive truth become consequential. The analogy illuminates the specific risk: not that machines produce obviously wrong outputs (those are easy to catch), but that they produce outputs wrong in ways that matter—substantively, conceptually, structurally—while being right in every easily assessed dimension (grammatically, rhetorically, stylistically).
The bureaucracy analogy connects to Max Weber's analysis of rationalization—the progressive substitution of calculable procedure for intuitive practice. Bureaucracies eliminate the need for individual judgment by converting every situation into a case falling under a rule. They succeed by standardizing, by treating the messy particularity of human situations as instances of general categories. The efficiency is genuine—bureaucracies process millions of cases reliably, predictably, impartially. The failure mode is also genuine: novel situations falling between categories, cases where the right answer requires recognizing that the rules do not apply, that the situation demands attention the bureaucratic structure was designed to make unnecessary. Language models are bureaucracies of meaning—processing linguistic inputs through statistical rules that produce contextually appropriate outputs, competent within pattern-covered territory, failing where gaps between surface appropriateness and deep correctness matter.
Winograd's analogy predated large language models by decades but anticipated their characteristic failure mode with precision. The Deleuze incident Edo Segal recounts—Claude producing a philosophical connection that was eloquent, well-structured, and substantively incorrect—is the bureaucracy analogy realized. The system generated output conforming to the pattern of 'insightful philosophical connection' without possessing the understanding required to verify the specific connection's validity. The conformity to rhetorical pattern concealed the gap. A human reader who had not read Deleuze could not detect the error because the error was not in the prose but in the substance beneath it. This is precisely the failure mode bureaucracies produce: formally correct processing generating outcomes that miss what matters, with the correctness concealing the miss.
Winograd delivered the analogy in a 1987 lecture as his Heideggerian critique was maturing into a design philosophy. The timing was significant: expert systems were at their commercial peak, promising to capture and deploy human expertise at scale. Winograd, watching the expert systems fail in ways that mirrored SHRDLU's closed-world limitations, reached for an analogy his audience would understand. Bureaucracy was not a technical term but a lived reality—everyone had experienced the frustration of a system whose rules did not accommodate their particular situation. The analogy made visceral what the Heideggerian arguments made abstract: formal systems optimize for the average case at the cost of the exceptional one, and intelligence—real intelligence—lives in handling the exceptional. The analogy's structure distills into four elements.
Formalization's double edge. Rules and standardization produce efficiency and predictability; they also produce brittleness at boundaries where situations exceed categories.
Failure at novel situations. Bureaucracies and AI systems both fail when confronting the unprecedented—cases falling between rules, which call for judgment about whether the rules apply rather than straightforward application of them.
Surface correctness concealing substantive error. The characteristic risk: outputs conforming to formal requirements, satisfying procedural checks, appearing competent—while missing what matters in ways only deep domain knowledge detects.
The exceptional case. Intelligence—whether human or organizational—reveals itself not in routine processing but in handling exceptions where rules run out and judgment is required.