In Rules: A Short History of What We Live By (2022), Lorraine Daston distinguishes between two fundamental types of rules that have operated throughout human history. Thick rules are accompanied by extensive contextual guidance: examples, exceptions, commentary on application, instruction in the judgment required to apply them wisely. They recognize that no rule can anticipate every situation, and they explicitly invite the discretion that real cases demand. Thin rules, by contrast, are rigid, predictive, and designed to be executed without discretion. The history of rules, Daston shows, is a history of the gradual thinning of thick rules into thin ones, driven by desires for predictability, efficiency, and the elimination of human variability. But behind every thin rule, she observes, is a thick rule cleaning up after it — a human exercising judgment about the cases the thin rule fails to accommodate.
The distinction illuminates an asymmetry that Daston's historical research documents across fields as different as monastic rules, legal codes, algorithmic trading systems, and scientific protocols. Thin rules promise predictability and mechanical execution; thick rules deliver adequate performance in actual contexts. A thin rule stipulates its conditions of application so narrowly that they rarely match the situations it must actually govern. When the conditions do match, the thin rule executes cleanly. When they do not — which happens continuously, because the world never quite resembles the specification — someone must exercise the judgment the thin rule was supposed to eliminate.
Daston's canonical example is the eighteenth-century attempt to codify legal judgment into algorithmic decision procedures. The attempt failed not because the codifiers were unskilled but because the world of cases kept producing situations the codes had not anticipated. The codes were thinned in response to particular problems; the thinning created new unanticipated problems; the cycle continued until it became clear that no finite codification could eliminate the need for discretion. The solution was not to produce perfectly thin rules but to acknowledge the necessary role of thick judgment and to build institutional structures — training, apprenticeship, professional accountability — that could sustain it.
The relevance to AI is structural. AI-generated knowledge is thin knowledge: it is fluent, comprehensive, confident, produced by thin rules (statistical pattern-matching across vast corpora) that require no discretion, no contextual judgment, no understanding of the domain in which they operate. Evaluating that knowledge requires thick knowledge — the kind of contextual, domain-specific, judgment-dependent understanding that only sustained engagement with a field can produce. The thin rule generates the output. The thick rule determines whether the output is worth trusting.
The decorrelation of fluency and authority means that thin knowledge is being produced at a rate that far exceeds the production of the thick knowledge needed to evaluate it. The ratio is worsening, because the same technology that accelerates thin knowledge simultaneously reduces the incentive to invest in the thick knowledge that keeps it honest. This is not a temporary imbalance to be resolved by technological improvement; it is a structural feature of the technology, and its correction requires institutional effort on the thick-rule side of the equation.
The thick/thin distinction is developed most fully in Daston's Rules: A Short History of What We Live By (2022), though it draws on earlier research on the history of algorithms, legal codes, and the relationship between rule-following and judgment. The book traces the history of rules from ancient Greek recipes through medieval monastic codes to contemporary machine-learning systems, showing that every era has confronted the tension between the desire for predictability and the irreducible need for judgment in application.
The distinction extends and refines arguments made by philosophers including Ludwig Wittgenstein on rule-following and Hans-Georg Gadamer on phronesis. Daston's specific contribution is the historical documentation: she shows that the thick/thin distinction is not merely a conceptual division but a pattern observable in practice across centuries and across domains.
Thick rules include guidance for application. Examples, exceptions, and cultivated judgment are integral to the rule, not supplements to it.
Thin rules aspire to mechanical execution. Rigidity, predictability, and elimination of discretion are the goals — and the sources of the rules' characteristic failures.
Thinning is a historical tendency. The pressure toward thin rules comes from desires for efficiency, scale, and accountability that real institutions cannot easily resist.
Thick rules clean up after thin rules. The judgment the thin rule was supposed to eliminate reappears in the humans who handle the cases the thin rule cannot accommodate.
AI produces thin knowledge. Statistical fluency without contextual judgment generates outputs that require thick human evaluation — and generates them far faster than that evaluative capacity can be built.
Critics have questioned whether the thick/thin distinction is binary or whether rules fall along a continuum. Daston's position is that the distinction is analytical rather than ontological — it identifies two poles of a spectrum that individual rules occupy in varying degrees. A more substantive debate concerns whether the thick/thin pattern applies cleanly to AI systems or whether the statistical inference at the heart of machine learning represents a genuinely new kind of rule that escapes the old categories. The position this volume takes is that the pattern applies with adjustment: AI implements very thin rules at the level of individual predictions, but the overall system requires thick rules for deployment, monitoring, and correction — and the thick rules have not yet been adequately developed.