Coherentism holds that beliefs are justified not by connection to foundational bedrock but by their coherence with the total web of beliefs. No belief is basic; every belief derives justification from its relationships to others. The metaphor is a web that holds itself up through mutual support. W.V.O. Quine's holism is the canonical version: the unit of empirical significance is not the individual statement but the totality of science. When experience conflicts with the web, any belief can be revised—the choice is pragmatic (simplicity, conservatism, explanatory power), not dictated by logic. The appeal is elegance and honesty—coherentism acknowledges that knowledge is social, historical, evolving, not a building erected once on secure foundations. The fatal vulnerability: a perfectly coherent system can be perfectly false. A novel coheres. So does a conspiracy theory. Coherence is a real epistemic virtue—but insufficient. Haack diagnosed this in the 1990s. AI demonstrates it at scale in the 2020s, producing outputs that are internally consistent, mutually reinforcing, contextually appropriate—and ungrounded.
The coherentist's central move is to dissolve the foundationalist's regress problem by denying its premise. If no belief is foundational (justified independently of all others), the regress never starts. Beliefs are justified holistically—by the web's overall coherence. Quine's 'Two Dogmas of Empiricism' (1951) demolished two pillars of logical positivism (the analytic-synthetic distinction and reductionism) and proposed that our statements about the external world face the tribunal of sense experience not individually but as a corporate body. Revising any belief is possible in principle; which beliefs we revise is determined pragmatically. The framework felt liberating—no more hunt for incorrigible foundations, no more pretense that sensory 'givens' are theory-neutral. Just the honest recognition that knowledge is a web we weave and continuously repair. Laurence BonJour's The Structure of Empirical Knowledge (1985) developed the most rigorous coherentist epistemology. He specified coherence through five criteria: logical consistency, probabilistic consistency (beliefs cohere when they are mutually probabilistically supportive), inferential connections, explanatory relations, and the system's resistance to anomalies. A highly coherent system satisfies all five.
But BonJour faced the isolation objection—the challenge that his framework could not distinguish a coherent web of genuine knowledge from a coherent web of fantasy. His response was the Observation Requirement: a coherent system must include observationally caused beliefs (beliefs formed in response to perceptual experience). Haack's retort was surgical: the Observation Requirement is a foundationalist element smuggled into a coherentist framework. If observational causation matters for justification, then observation is playing a role coherence alone cannot account for. BonJour had implicitly conceded that coherence is insufficient—that the web needs anchors. But he provided no coherentist mechanism for explaining how anchors work. He had abandoned pure coherentism without building an alternative. Haack built the alternative: foundherentism acknowledges that observation matters (against pure coherentism) while denying that observational beliefs are self-justifying (against foundationalism). Observation plays a causal role—it causes certain beliefs to form. Those beliefs are then justified by their fit with the total evidential web, including other observations and the coherence of the whole.
AI is, architecturally, a coherentist dream and a foundherentist nightmare. The transformer model—attention mechanisms relating every token to every other, optimization toward internal consistency, training objective of next-token prediction (statistical coherence)—produces outputs that satisfy BonJour's five criteria. Logical consistency: the model avoids contradiction. Probabilistic consistency: claims are mutually supportive. Inferential connections: arguments proceed logically. Explanatory relations: the model provides reasons. Anomaly resistance: the output absorbs challenges smoothly (often too smoothly—generating additional confabulated support when questioned). The model is a pure coherence engine. From a purely coherentist perspective, the output is well-justified. From a foundherentist perspective, the output is epistemically weightless—a web floating free of experiential anchoring. The isolation objection, a theoretical worry for BonJour, is an operational reality for AI. The novel and the model's confabulation are structurally identical: both cohere internally, both lack grounding. The reader knows the novel is fiction. The user may not know the model's output is ungrounded, because surface features (fluency, structure, confidence) are indistinguishable.
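A toy sketch can make the structural point concrete. The bigram generator below is not a transformer (it is deliberately minimal), but it exhibits the same property in miniature: it produces locally coherent text by sampling continuations from observed statistics, and nothing in it ever consults the world. The corpus string and function names are illustrative inventions, not anything from the literature.

```python
import random

# Toy "coherence engine": a bigram model trained on a tiny corpus.
# It optimizes only for local statistical coherence (which token tends
# to follow which), never for correspondence with any external fact.
corpus = ("the web holds itself up the web supports every belief "
          "every belief supports the web").split()

# Build the bigram table: token -> list of observed next tokens.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Emit up to n tokens, each chosen only for fit with its predecessor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nexts = bigrams.get(out[-1])
        if not nexts:
            break
        out.append(rng.choice(nexts))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent pair in the output was seen in training, so the text is coherent by construction; whether any sentence it forms is *true* is a question the mechanism cannot even represent.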
The temptation coherentism presents to AI users is the temptation to accept output because it fits—because it coheres with prior beliefs, extends familiar arguments plausibly, sounds right. The temptation is magnified by the fluency heuristic (cognitive ease is mistaken for truth) and by the model's optimization for exactly the features the heuristic responds to. The evaluator thinks: This fits what I know. The logic is sound. The vocabulary is appropriate. Therefore it is probably true. The inference is not absurd—coherence is evidence of truth under most conditions. But 'most conditions' assumes the producer of coherent output has experiential grounding. The assumption fails for AI. The model produces coherence without grounding as its default operation. Haack's framework makes the failure visible: coherence (intersections) is necessary but insufficient. The evaluator must also check grounding (clues)—must trace claims to evidence, verify sources, assess whether the coherent output corresponds to reality. The checking is the human contribution. It is the work the model cannot do. And it is the work the model's fluency most effectively discourages, because fluent coherence feels like it has already been grounded.
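Haack's two-part standard can be pictured as a pair of checks: a claim earns acceptance only if it both fits the web (coherence, the intersections) and traces to evidence (grounding, the clues). The sketch below is purely illustrative; the `web` and `sources` structures are hypothetical stand-ins for real evidential relations, not an implementation of foundherentism.

```python
# Illustrative foundherentist check: justification requires both
# coherence (fit with other beliefs) and grounding (traceable evidence).

def justified(claim, web, sources):
    coheres = claim in web and not web[claim].get("contradicted", False)
    grounded = bool(sources.get(claim))  # any traceable evidence at all?
    return coheres and grounded

web = {
    "model output X": {"contradicted": False},   # fits the web smoothly
    "checked fact Y": {"contradicted": False},
}
sources = {
    "checked fact Y": ["primary document"],
    # "model output X" has no entry: coherent but ungrounded
}

print(justified("model output X", web, sources))  # → False
print(justified("checked fact Y", web, sources))  # → True
```

The asymmetry is the point: the fluent, well-fitting claim fails the second check, and that second check is precisely the work the model's fluency discourages.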
Coherentism as a systematic epistemological position emerged in the early twentieth century, developed by idealists (F.H. Bradley, Brand Blanshard) and later by analytic philosophers (Quine, Wilfrid Sellars, BonJour). The position responded to foundationalism's persistent failure to identify genuinely basic beliefs. If the foundation cannot hold, abandon the foundation—shift justification from vertical support (beliefs resting on more basic beliefs) to horizontal support (beliefs supporting one another). Quine's holism made coherentism respectable in analytic philosophy. BonJour's Structure of Empirical Knowledge made it rigorous. Haack's critique in Evidence and Inquiry made its insufficiency undeniable: the isolation objection cannot be answered with coherentist resources alone.
The AI-coherentism connection was recognized by epistemologists and AI researchers almost simultaneously as large language models reached public deployment (2022–2023). Researchers noted that training objectives (next-token prediction, coherence maximization) produce outputs optimized for internal consistency. Philosophers noted that this makes AI a test case for coherentism's adequacy: a system exhibiting high coherence while lacking experiential grounding. If coherence were sufficient for justification, AI outputs would be well-justified. Empirically, they are not (confabulation rates, grounding failures). This is a vindication of Haack's three-decade-old diagnosis: coherence without anchoring is fantasy, however smooth the prose.
Web with no foundation. Coherentism denies basic beliefs—justification is mutual support among beliefs, the web holding itself up without requiring bedrock.
The isolation objection is fatal. A perfectly coherent system (a novel, a conspiracy theory) can be entirely false—coherence does not guarantee correspondence to reality.
AI as coherentist ideal. Language models optimize for internal consistency, logical flow, mutual support—producing outputs that satisfy coherence criteria comprehensively while lacking experiential grounding entirely.
BonJour's implicit concession. His Observation Requirement (the web must include observationally caused beliefs) acknowledges coherence is insufficient—but provides no coherentist mechanism for how observation works.
The fluency trap. Coherent output feels justified—evaluators mistake smooth processing (cognitive ease) for truth, accepting claims that fit the web without checking whether the web is anchored.