This page lists every Orange Pill Wiki entry hyperlinked from Susan Haack — On AI. 23 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; in each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The epistemological position that beliefs are justified by mutual support alone—a web with no foundation—whose fatal vulnerability (coherent systems can be entirely false) AI exploits by producing perfect coherence without grounding.
The erosion of the shared informational environment's self-correcting capacity when AI floods it with ungrounded but coherent claims—increasing the ratio of fabrication to fact, overwhelming verification infrastructure, lowering evidential…

The foundherentist requirement that beliefs must connect to observation, experiment, or direct encounter—the 'clues' constraining answers from outside the web, distinct from the self-justifying 'basic beliefs' foundationalism demands.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
Susan Haack's epistemological framework combining foundationalism's experiential grounding with coherentism's mutual support—knowledge as a crossword puzzle where beliefs must both match their clues and intersect correctly.
Haack's taxonomic distinction: genuine inquiry follows evidence wherever it leads; sham inquiry adopts inquiry's procedures while serving predetermined conclusions—a difference invisible to AI, which serves whichever the user brings.
Tversky and Kahneman's finding that people assign probabilities to their judgments that systematically exceed actual accuracy — a calibration failure that AI's smooth output makes worse by decoupling surface cues from underlying accuracy.
The surface signature of the megamachine's organizational logic — the polished, featureless, hand-mark-erasing aesthetic that reveals, through what it conceals, the values of a civilization optimizing for elimination of friction.
AI's production of internally coherent, contextually plausible, confidently delivered fabrications—clinically distinct from hallucination, harder to detect than simple error, and requiring the anchor-checking the model cannot perform.
Haack's governing analogy: knowledge as a puzzle where each entry must match its clue (experiential anchor) and intersect correctly with crossing entries (coherence)—neither alone is sufficient, both are required.
The cognitive shortcut by which System 1 treats ease of processing as a proxy for truth, familiarity, and quality — the specific mechanism that makes AI's polished output feel reliable whether or not it is.

The structural absence of experiential anchoring in AI outputs—systems produce claims statistically derived from training text, not observationally connected to reality, violating foundherentism's clue-matching requirement.
The only one of Peirce's four methods of belief-fixation that is self-correcting — accepting beliefs because they have survived the test of experience, and revising them when the test fails.
The governance regime change in which the accumulated textual, visual, and computational output of millions of individuals was appropriated for AI training under terms their original contribution did not contemplate — the paradigmatic case …
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
American philosopher and logician (1839–1914), founder of pragmatism, whose fallibilism and self-correcting inquiry influenced Haack's foundherentism—'genuine inquiry' as the method of science, distinguished from tenacity, authority, and a…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.
Italian philosopher of information (b. 1964), Haack's doctoral student at Cambridge, whose 'divorce between agency and intelligence' thesis and philosophy of information extend foundherentist epistemology into the digital and AI age.