The interpolation trap is the phenomenological and epistemological error produced when sophisticated combinations within the convex hull of existing knowledge are mistaken for genuine discovery outside it. The trap operates through surface features — fluency, coherence, the feeling of pieces fitting together — that trigger the human reward circuits associated with comprehension, regardless of whether the underlying content is genuinely novel or structurally hollow. Because next-token prediction is optimized for exactly these surface features, AI output systematically passes through the selective retention filter without activating it. The user feels the satisfaction of insight. The insight may or may not exist. The difference is invisible from inside the collaboration and detectable only by a retention function calibrated by deep domain expertise.
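The geometry here can be made concrete. What follows is a minimal sketch, assuming that outputs and existing knowledge can be represented as points in a low-dimensional embedding space; real embedding spaces are far higher-dimensional, where exact convex-hull tests are intractable, so the example is purely illustrative.

```python
# Toy illustration of interpolation vs. extrapolation in an embedding space.
# The 2-D points and the geometry are assumptions for illustration only.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
known = rng.normal(size=(200, 2))  # stand-in for embeddings of existing knowledge
hull = Delaunay(known)             # triangulates the interior of the convex hull

def is_interpolation(point: np.ndarray) -> bool:
    """True if the point lies inside the convex hull of existing knowledge."""
    return bool(hull.find_simplex(point) >= 0)

recombination = known.mean(axis=0)   # a blend of existing points
departure = np.array([10.0, 10.0])   # a point far outside the hull

print(is_interpolation(recombination))  # True: sophisticated but interpolative
print(is_interpolation(departure))      # False: genuinely outside the hull
```

The sketch shows only that the two cases are geometrically distinct; nothing in the fluency of an output reveals which side of the hull it falls on.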
Edo Segal's Deleuze error — described in The Orange Pill — is the canonical illustration. Claude produced a passage connecting Csikszentmihalyi's flow to Deleuze's concept of smooth space. The passage was elegant, syntactically fluent, rhetorically effective. It was also philosophically wrong in a way invisible to anyone who had not read Deleuze and devastatingly obvious to anyone who had. The passage conformed to the patterns of how philosophical arguments are typically constructed in the training corpus. The pattern was correct. The content was not.
The trap has a recursive structure. Each interpolation mistaken for discovery reduces the user's motivation to engage in the slow, costly process of genuine blind exploration — because the feeling of discovery is already being provided. The user's selective retention function, calibrated by the feeling rather than by the reality, becomes progressively less able to distinguish the two. The trap deepens with use.
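The dynamics can be sketched as a toy feedback model; the decay constant and functional form below are invented, and the point is only that the loss accelerates rather than plateaus.

```python
# Toy feedback model of the trap's recursive structure. The parameters and
# the functional form are assumptions, not empirical measurements.
def simulate_trap(steps: int = 12, decay: float = 0.2) -> list[float]:
    """Track retention-function calibration as reliance on the feeling grows."""
    calibration = 0.9  # 1.0 = fully able to distinguish interpolation from discovery
    history = [calibration]
    for _ in range(steps):
        reliance = 1.0 - calibration  # weaker calibration invites heavier reliance
        calibration = max(calibration - decay * reliance, 0.0)
        history.append(round(calibration, 3))
    return history

print(simulate_trap())  # each step loses more calibration than the last
```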
Simonton's three criteria for creative output — originality, utility, and surprise — help diagnose the trap. AI-generated output routinely satisfies the first two: the specific combination may not have appeared before, and the output may be useful. But surprise in Simonton's sense — genuine departure from prediction that reconfigures the evaluator's understanding of the possibility space — is the criterion the trap systematically undermines. The evaluator's surprise is a function of her limited knowledge, not of the output's novelty relative to the aggregate of human knowledge.
The trap compounds the aesthetics of smoothness with epistemological risk. Byung-Chul Han's concern about smoothness as a cultural aesthetic becomes, in Campbell's framing, an epistemological crisis: smoothness is the surface signature of interpolation, and a civilization that optimizes for smoothness selects systematically against the roughness that signals genuine novelty.
As a concept, the interpolation trap synthesizes Campbell's framework with observations from cognitive psychology on fluency heuristics (Schwarz, Reber) and from philosophy of mind on the phenomenology of understanding (Dennett, Chalmers). The specific framing emerged in AI safety research on deceptive alignment and hallucination detection around 2023-2025.
The Deleuze error that Edo Segal reports in The Orange Pill is not unique; similar cases have been documented across law, medicine, and academic writing, where AI-generated citations and arguments combine surface plausibility with substantive hollowness. The pattern is structural, not incidental.
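One way to make the distinction precise is to treat surprise as Shannon surprisal under two different knowledge states. The probabilities below are invented for illustration; the point is the gap between the two scores, not the numbers themselves.

```python
# Sketch of surprise-to-the-user vs. surprise-to-knowledge as surprisal
# under two models. The probability values are illustrative assumptions.
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal: how unexpected an event is under a given model."""
    return -math.log2(p)

# The same output, scored under two knowledge states.
p_user = 0.01       # the user has never encountered this combination
p_aggregate = 0.40  # the field's literature makes it fairly predictable

print(f"surprise to the user:  {surprisal_bits(p_user):.1f} bits")       # ~6.6
print(f"surprise to knowledge: {surprisal_bits(p_aggregate):.1f} bits")  # ~1.3
# A large gap between the two scores is the trap's signature.
```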
Smoothness is the signature of interpolation. Outputs that conform to the statistical patterns of the training data will be fluent, coherent, and plausible — whether or not they are true.
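A toy bigram model makes this concrete: fluency is a statistical property of word sequences, and a pattern-conforming falsehood can score as fluently as a truth. The corpus and both claims below are invented for demonstration.

```python
# Toy demonstration that fluency scores do not track truth. The corpus,
# the claims, and the bigram "model" are all invented for illustration.
from collections import Counter

corpus = (
    "flow is a state of absorption . "
    "smooth space is a deleuzian concept . "
    "flow is a smooth experience . "
) * 50  # repeated so the toy model has stable counts

tokens = corpus.split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)

def fluency(sentence: str) -> float:
    """Mean bigram probability under the toy model (higher reads as more fluent)."""
    words = sentence.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(bigrams[p] / max(unigrams[p[0]], 1) for p in pairs) / len(pairs)

accurate = "smooth space is a deleuzian concept"
conflation = "flow is a smooth space"  # pattern-conforming, conceptually wrong

print(f"{fluency(accurate):.2f} vs {fluency(conflation):.2f}")
# Both score as fluent; nothing in the statistics flags the conflation.
```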
Surprise-to-the-user is not surprise-to-knowledge. The user's reaction measures the gap between her knowledge and the model's, not the gap between the output and the aggregate of human understanding.
The reward circuits are triggered by surface features. The feeling of insight evolved to signal genuine comprehension but is activated by fluency regardless of substance.
The trap is self-reinforcing. Each interpolation mistaken for discovery reduces motivation for genuine blind exploration, degrading the retention function that would otherwise detect the substitution.
Only deep domain expertise detects the trap. Detection requires a retention function calibrated by the specific patterns of the domain — the very calibration that AI-mediated work may erode.
Defenders of AI's creative capabilities argue that the trap is real but soluble through better prompting, multi-step reasoning, or retrieval-augmented generation. Skeptics respond that these techniques operate within the same probability distributions and therefore within the same convex hull — the trap may be narrowed but not escaped. The deeper question is whether the human evaluator can reliably detect the difference between sophisticated interpolation and genuine extrapolation — a question that depends on domain expertise that AI-mediated work may simultaneously require and erode.