Foundherentism is Susan Haack's integrative theory of epistemic justification, developed in Evidence and Inquiry (1993). It resolves the centuries-old deadlock between foundationalism (which demands self-justifying basic beliefs) and coherentism (which accepts mutual support alone). Haack's central analogy is the crossword puzzle: a justified belief must both match its experiential clue and intersect correctly with other beliefs. Neither grounding nor coherence is sufficient alone. The framework evaluates justification along three dimensions—supportiveness (how well evidence bears on the claim), independent security (how well-grounded the evidence itself is), and comprehensiveness (whether all relevant evidence has been considered). In the AI age, foundherentism becomes essential: large language models produce output that is perfectly coherent (acing the intersections) while lacking experiential grounding (having never checked the clues).
Haack built foundherentism in response to what she diagnosed as symmetric failures. Foundationalism, traceable to Descartes's search for certainty, demands that beliefs rest on a foundation of self-evident, incorrigible starting points. The problem: no candidate for 'basic belief' withstands scrutiny. Sensory experience is theory-laden (what you see depends on what you expect). Introspection is unreliable (decades of psychology confirm humans misreport their own mental states). Self-evidence is culturally contingent (obvious to one tradition, bizarre to another). Every proposed foundation turns out to be itself a belief requiring justification. The regress never terminates.

Coherentism, exemplified by W.V.O. Quine's holism, abandons the foundation entirely. Beliefs are justified by their mutual support—the web holds itself up. The problem: a perfectly coherent system can be perfectly false. A well-constructed novel exhibits flawless internal consistency. So does a paranoid conspiracy theory. Coherence without anchoring is fantasy with good grammar.

Haack's crossword puzzle preserves what each framework gets right. From foundationalism: the insistence that knowledge must connect to experience. The clues (experiential anchors) are data—givens that constrain from outside the web. From coherentism: the recognition that beliefs support one another. Intersecting entries strengthen justification. A partially filled grid in which every checked entry coheres with its crossings is more reliable than scattered, unconnected answers.
The three dimensions of evidential quality operationalize the crossword metaphor. Supportiveness measures how well evidence bears on a claim—direct, relevant, unambiguous evidence is stronger than indirect or tangential. Independent security evaluates how well-justified the evidence itself is. An intersecting entry supported by its own clue and crossings confirms more reliably than one that was guessed. Comprehensiveness asks whether all relevant evidence has been considered. An answer that fits the clue and all crossings and the puzzle's theme is better justified than one checked against the clue alone. These dimensions apply to any claim from any source. When Claude generates legal analysis, supportiveness asks: how directly does the available evidence bear on this conclusion? Independent security asks: how reliable is the training corpus in this domain? Comprehensiveness asks: has disconfirming evidence been considered, or only what fits the prompt? The framework places responsibility where it belongs: on the human evaluator, who must check both clues and intersections. The model cannot check clues—it has no experiential access to reality. The evaluator must.
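The rubric above can be sketched in code. This is a purely illustrative formalization, not anything Haack proposes: the class name, the 0–1 scales, and the min-over-dimensions rule are all assumptions introduced here to make one point concrete—that under this reading, a claim is only as justified as its weakest dimension, so high supportiveness (coherence with the prompt and with other claims) cannot compensate for low independent security (an unchecked clue).

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Hypothetical scoring of one claim along Haack's three dimensions.
    The 0-1 scales are illustrative, not part of Haack's framework."""
    supportiveness: float        # how directly the evidence bears on the claim
    independent_security: float  # how well-justified the evidence itself is
    comprehensiveness: float     # how much relevant evidence was considered

def justification_floor(ev: Evidence) -> float:
    # Weakest-link rule: elegant intersections cannot rescue an unchecked clue.
    return min(ev.supportiveness, ev.independent_security, ev.comprehensiveness)

# An LLM-generated citation that coheres beautifully but was never verified
unchecked = Evidence(supportiveness=0.9, independent_security=0.2, comprehensiveness=0.5)
# The same claim after the evaluator has consulted the primary source
checked = Evidence(supportiveness=0.9, independent_security=0.8, comprehensiveness=0.8)

assert justification_floor(unchecked) < justification_floor(checked)
```

The design choice worth noting is the `min` rather than an average: averaging would let fluent coherence buy back credibility that only anchor-checking can supply, which is exactly the failure mode the section describes.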
Foundherentism's relevance to AI lies in its diagnostic precision. A large language model is a pure coherence engine, optimized by training to produce text that fits together. Transformer architectures relate every token to every other, maximizing internal consistency. The output coheres beautifully. It intersects perfectly. What it lacks is anchoring. The model has never observed, never touched, never encountered the world. Its 'experience' is training data—statistical patterns extracted from human linguistic behavior, which is several inferential steps removed from the experiential grounding foundherentism requires. The evaluator applying foundherentist discipline treats AI output as proposed crossword entries: coherent enough to warrant consideration, ungrounded until checked. The Deleuze fabrication Segal describes is the paradigm case—a passage connecting Csikszentmihalyi to Deleuze that cohered elegantly, extended both thinkers plausibly, and was wrong about what Deleuze actually said. The intersections fit. The clue did not match. Detection required anchor-checking: consulting Deleuze's actual work, verifying the attributed concept, recognizing the discrepancy. Coherence-checking alone would have passed the fabrication through, because coherence-checking is exactly what the model was trained to satisfy.
The crossword puzzle is never finished. New clues arrive. Old entries are reconsidered. This open-endedness is not a bug but a feature—Haack's model mirrors the structure of genuine inquiry, which proceeds not toward closure but toward better approximation. AI accelerates the rate at which proposed entries arrive. It does not—cannot—accelerate the rate at which clues are checked. The checking is human work, requiring domain knowledge, independent evidence, and the intellectual virtues (honesty, thoroughness, independence) that Haack identifies as essential to genuine inquiry. These virtues are characterological, not procedural. No verification checklist substitutes for the disposition to care about truth. The evaluator who mechanically checks citations without caring whether they're accurate satisfies the procedure, not the epistemic requirement. The foundherentist evaluator cares—and that caring, more than any technical intervention, is what separates grounded knowledge from its fluent, confident, epistemically worthless counterfeits.
Haack developed foundherentism in the late 1980s and early 1990s, publishing the full framework in Evidence and Inquiry: Towards Reconstruction in Epistemology (1993). The crossword puzzle analogy appeared in that book's central chapters and became the signature image of her epistemology. The framework emerged from her dissatisfaction with the foundationalism-coherentism stalemate that had structured Anglo-American epistemology for decades. Neither Laurence BonJour's sophisticated coherentism (which smuggled in an 'Observation Requirement') nor reformed foundationalisms (which weakened basic beliefs to avoid self-evidence problems) resolved the core tension. Haack's innovation was architectonic: instead of choosing one framework and patching its deficiencies, she built an integrated structure that made both experiential grounding and mutual coherence constitutive requirements. The Peircean inheritance is explicit—Haack credits Peirce's fallibilism and his commitment to genuine inquiry as formative influences. Foundherentism operationalizes Peirce's insight that inquiry is self-correcting only when it submits beliefs to experiential testing while maintaining logical coherence.
The AI relevance was not part of Haack's original development. She built foundherentism to address classical epistemological puzzles—the regress problem, the isolation objection, the theory-ladenness of observation. But the framework's application to AI is not retrofitting; it is recognition. Large language models instantiate, with unusual purity, the coherentist position Haack argued against: they produce output that coheres perfectly while lacking the experiential grounding that knowledge requires. The simulation Susan Haack—On AI applies her framework to the epistemic crisis AI creates, reading her epistemology as the diagnostic instrument the AI age demands. The simulation is speculative—Haack herself has not, to public knowledge, written extensively on AI—but the application is rigorous. Every claim in the simulation is grounded in Haack's published work, extended into the domain she did not explicitly address but whose structure her framework anticipates with uncanny precision.
Neither foundation nor web alone. Knowledge requires both experiential grounding (the clues) and mutual coherence among beliefs (the intersections), operating simultaneously.
Three dimensions of evidential quality. Supportiveness (how well evidence bears on the claim), independent security (how well-justified the evidence itself is), and comprehensiveness (whether all relevant evidence has been considered).
The crossword grid as epistemic structure. Justified belief is like a well-filled crossword entry—matching its clue and intersecting correctly with every crossing entry, strengthened by the grid's overall coherence.
AI as pure coherentism. Large language models optimize for internal consistency without experiential anchoring—producing output that aces intersections while never checking clues.
Checking is irreducibly human. The model generates possibilities; the evaluator checks them against evidence. No technological intervention eliminates the need for this human epistemic labor.
Critics of foundherentism argue the framework is metaphorical rather than precise—that the crossword analogy obscures more than it reveals. Haack responds that the analogy is structural, capturing the essential features of justification with rigor equal to formal models. Coherentists object that foundherentism is unstable foundationalism—smuggling in basic beliefs through the 'clues.' Haack counters that clues constrain without self-justifying; experience plays a causal role in belief formation without playing the logical role foundationalism assigns. The AI application is contested by those who argue Haack's framework presupposes human cognition and cannot evaluate machine outputs. The simulation argues the opposite: foundherentism applies with greater precision to AI precisely because AI separates coherence from grounding in ways human cognition rarely does.