The epistemic commons is the collective informational environment on which democratic deliberation, scientific progress, and cultural coherence depend—including shared facts, dispute-resolution methods, and the institutions (journalism, science, education) that maintain evidential standards. Haack's framework reveals that the commons requires both anchoring (connection to experiential reality) and coherence (internal consistency), sustained through ongoing institutional labor. AI degrades the commons by flooding it with outputs that are coherent without being grounded. Before AI, producing epistemic pollution (false claims, sham reasoning) required effort—a natural bottleneck that limited volume. AI removes the bottleneck. A single user can generate dozens of fluent, well-cited, internally consistent analyses in hours—each exhibiting knowledge's surface features, each potentially ungrounded. As the ratio of ungrounded to grounded claims increases, the commons' self-correction mechanisms (fact-checking, peer review, verification) are overwhelmed. Standards erode from 'independently verified' to 'not obviously wrong' to 'someone said it.' The degradation is cumulative and invisible from the inside.
Haack's work on the social epistemology of inquiry—developed across Manifesto of a Passionate Moderate, Defending Science—Within Reason, and essays in Putting Philosophy to Work—argues that genuine inquiry is a social practice requiring institutional support: norms rewarding truth-seeking over confirmation, incentives protecting inquirers' independence from institutional pressure, standards of evidential rigor maintained against convenience, ideology, and self-interest. These institutions were strained before AI—the replication crisis in social science, the decline of local journalism, social media's engagement optimization. AI amplifies pre-existing vulnerabilities. The epistemic commons was already polluted. AI increased the pollutant flow rate. The mechanism: producing convincing fake scientific abstracts, persuasive sham legal analyses, and authoritative-sounding policy briefs once required skill, and that requirement limited output volume. AI eliminates the skill barrier. Confabulation becomes cheap. The commons fills with claims that look like knowledge—logical structure, appropriate vocabulary, confident presentation, internal coherence. The coherence is real. The grounding is absent. The absence is invisible to surface evaluation.
The degradation produces second-order effects more dangerous than any individual false claim. Self-correction depends on distinguishing reliable from unreliable claims. When the commons contains manageable volumes of unreliable content, the correction mechanisms (fact-checking, replication, peer review) keep pace. They identify false claims, flag them, correct the record. When the volume of unreliable content exceeds verification capacity, the commons degrades—not through any single catastrophic failure but through cumulative erosion of standards. The evaluator, overwhelmed, stops checking. The institution, unable to verify everything, verifies nothing. The standard drops incrementally. Each individual lowering (a lawyer checking one fewer citation, a journalist verifying one fewer source, a student accepting one more AI claim unchecked) seems reasonable. The aggregate effect across millions of actors is a progressive weakening of the evidential infrastructure on which collective deliberation depends. The erosion is invisible because each step is small enough to rationalize. 'Good enough' becomes the standard. 'Good enough' is the enemy of knowledge.
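The threshold dynamic can be made concrete with a minimal sketch. Everything in it is an illustrative assumption (the rates, the period length, the names 'inflow' and 'capacity'), but it shows the structural point: below verification capacity the commons self-corrects; above it, the unchecked backlog grows without bound.

```python
# Toy model: verification backlog when unreliable content outpaces checking.
# All numbers are illustrative assumptions, not empirical estimates.

def simulate(periods: int, inflow: float, capacity: float) -> None:
    """Track the stock of unchecked claims over time.

    inflow   -- unreliable claims entering the commons per period
    capacity -- claims human verifiers can check per period
    """
    backlog = 0.0
    for t in range(1, periods + 1):
        backlog += inflow                 # new unverified claims arrive
        checked = min(backlog, capacity)  # verifiers clear what they can
        backlog -= checked
        fraction = checked / inflow
        print(f"t={t:2d}  unchecked backlog={backlog:8.0f}  "
              f"checked this period={fraction:.0%} of inflow")

simulate(periods=5, inflow=90, capacity=100)    # self-correction keeps pace
simulate(periods=5, inflow=1000, capacity=100)  # backlog grows without bound
```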
Haack's foundherentist framework provides both a diagnosis and a prescription. The diagnosis: the commons requires both grounding and coherence, and AI provides only the latter. The prescription: the grounding must come from human inquirers exercising intellectual virtues—checking claims against evidence, tracking provenance, maintaining the distinction between coherent and true. The prescription is individual (addressed to the person holding the pen, checking the clues). The problem is collective (the commons is shared infrastructure). This asymmetry is the deepest challenge. Individual epistemic discipline maintains personal knowledge quality but cannot, alone, reverse commons degradation. What is needed is institutional epistemic infrastructure—structures that support, incentivize, and require the practices foundherentism identifies as essential. Journals that reward replication over novelty. News organizations that invest in verification over speed. Universities that teach evidential reasoning, not just information retrieval. Legal systems that hold practitioners accountable for unverified AI citations. Cultures that value caring about truth enough to do the unglamorous work of checking clues.
The commons is not an individual possession to be protected in isolation. It is a collective achievement to be maintained through distributed effort. The tragedy-of-the-commons structure applies: each actor benefits from a high-quality shared informational environment while individually incentivized to pollute (by generating unverified content cheaply rather than verifying expensively). The solution, as with all commons problems, is institutional—rules, norms, monitoring, and sanctions that align individual incentives with collective welfare. Haack's epistemology provides the standards. Building institutions that enforce those standards is political work—work that philosophy can inform but cannot perform alone. The foundherentist framework tells the evaluator what to check (both clues and intersections). It does not build the culture that rewards checking over generating, that values slow accuracy over fast plausibility, that treats epistemic discipline as a civic virtue rather than a productivity cost. Building that culture is the urgent task the AI age demands.
The concept of an 'epistemic commons' draws on Elinor Ostrom's commons-governance framework and applies it to the shared informational environment. The term entered AI discourse in the early 2020s as researchers, journalists, and educators observed that AI-generated content—often unverified, sometimes fabricated—was entering the shared information pool at volumes that overwhelmed existing verification capacity. The degradation mechanism parallels environmental pollution: individual actors discharge low-cost pollutants (unverified claims) into a shared resource (the informational environment), and the cumulative effect degrades the resource for all users. The epistemic commons tragedy is that verification is expensive (time, expertise, effort) while generation is cheap (prompt, wait, paste). Rational actors generate more than they verify. The commons fills with ungrounded claims.
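The incentive structure can be sketched as a toy game. The cost figures and the payoff function below are hypothetical assumptions, not a model drawn from Haack or Ostrom; the sketch only illustrates why generating dominates verifying for each individual actor even though universal verification is collectively better.

```python
# Toy commons game: why rational actors pollute the epistemic commons.
# Costs and benefit scale are illustrative assumptions.

COST_GENERATE = 1    # effort to produce an unverified claim (cheap)
COST_VERIFY = 20     # effort to check a claim against evidence (expensive)
BENEFIT_SCALE = 30   # value each actor gets from a fully verified commons

def payoff(my_choice: str, others_verifying: float) -> float:
    """Payoff = benefit from commons quality minus private cost.

    others_verifying -- fraction of the other actors who verify.
    """
    my_cost = COST_VERIFY if my_choice == "verify" else COST_GENERATE
    # Commons quality is the overall fraction of verified content; one
    # actor's own choice moves it only marginally (here, by 1% weight).
    quality = 0.99 * others_verifying + 0.01 * (my_choice == "verify")
    return BENEFIT_SCALE * quality - my_cost

for others in (0.0, 0.5, 1.0):
    gen = payoff("generate", others)
    ver = payoff("verify", others)
    print(f"others verifying: {others:.0%}  "
          f"generate={gen:6.2f}  verify={ver:6.2f}  "
          f"-> {'generate' if gen > ver else 'verify'} dominates")
```

At every level of others' cooperation, generating pays better than verifying, yet the all-verify outcome (payoff 10) beats the all-generate outcome (payoff -1): the commons tragedy in miniature.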
Haack's contribution is the diagnostic precision her foundherentist framework provides. The commons does not degrade because AI-generated content is uniformly false (much of it is true). It degrades because the ratio of grounded to ungrounded claims shifts, and the mechanisms that maintain commons quality (distinguishing reliable from unreliable, flagging fabrications, correcting the record) are rate-limited by human cognitive capacity. The model generates at computational speed. Verification proceeds at human speed. The asymmetry is structural, and it means that in any environment where AI-generated content is prevalent, unverified claims accumulate faster than they can be checked. The accumulation is the degradation. The Susan Haack—On AI simulation extends her social epistemology into a domain she did not explicitly address, reading the AI-driven informational crisis as the predictable consequence of deploying pure coherence engines into a commons whose health requires both coherence and grounding.
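One more toy calculation, with rates that are purely illustrative assumptions, makes the corollary visible: even when grounded claims keep being added and nothing true is ever removed, the grounded share of the commons collapses.

```python
# Toy ratio model: grounded vs. ungrounded claims in the commons.
# Rates are illustrative assumptions, not measurements.

HUMAN_RATE = 10      # grounded, verified claims added per period
MACHINE_RATE = 1000  # coherent-but-unverified claims added per period

grounded, ungrounded = 1000.0, 0.0  # start from a healthy commons
for t in range(1, 6):
    grounded += HUMAN_RATE
    ungrounded += MACHINE_RATE
    share = grounded / (grounded + ungrounded)
    print(f"t={t}  grounded share of commons = {share:.1%}")

# The grounded stock never shrinks; only its share collapses.
# That shift in ratio, not any single false claim, is the degradation.
```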
Commons as shared evidential infrastructure. The informational environment on which collective deliberation depends—facts, methods, standards, and institutions maintaining them.
AI pollution mechanism. Ungrounded but coherent claims flood the commons at computational speed, overwhelming human-speed verification capacity, shifting the ratio of reliable to unreliable content.
Second-order degradation. As unverified content accumulates, the commons loses self-correcting capacity—standards erode, verification is skipped, 'good enough' replaces 'independently confirmed.'
Individual virtue is insufficient. Personal epistemic discipline maintains individual knowledge quality but cannot reverse collective degradation—institutional infrastructure is required.
Tragedy-of-the-commons structure. Each actor benefits from high-quality shared information while individually incentivized to generate (cheap) rather than verify (expensive)—requiring governance that aligns individual incentives with collective welfare.