Shannon's coding theory quantifies the probability that an error goes undetected for any given code, as a function of the noise pattern and the code's structure. In AI, the analog has not yet been rigorously quantified, but the qualitative behavior is well documented: confident errors tend to cluster in domains where the model has been trained on patterns that look like the target domain without containing authoritative content.
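On the classical side of the analogy, the quantity is fully concrete. A minimal sketch, assuming a binary symmetric channel and a single-parity-check code (both chosen for simplicity, not taken from the text above): for any linear code, an error pattern slips through detection exactly when it turns one valid codeword into another, so the undetected-error probability follows directly from the code's structure and the noise level.

```python
from math import comb

def undetected_error_prob(n: int, p: float) -> float:
    """P(undetected error) for a length-n single-parity-check code
    on a binary symmetric channel with crossover probability p.

    An error pattern goes undetected exactly when it equals a nonzero
    codeword; for the parity-check code those are the even-weight
    patterns, so we sum the binomial probabilities of flipping an
    even, nonzero number of bits.
    """
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(2, n + 1, 2)  # even-weight, nonzero patterns
    )

if __name__ == "__main__":
    # The residual failure mode shrinks with the noise but never reaches zero:
    for p in (0.001, 0.01, 0.1):
        print(f"n=32, p={p}: P(undetected) = {undetected_error_prob(32, p):.3e}")
```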
The Deleuze error from Segal's experience is the canonical case: a fluent passage connecting Csikszentmihalyi's flow state to the Deleuzian concept of 'smooth space,' structurally plausible but philosophically wrong, caught only because the author happened to have the domain knowledge to check the reference.
The mathematical defense against undetectable errors is redundancy across diverse, independent decoders. In the organizational pipeline, multiple reviewers with different expertise provided that redundancy. In the AI pipeline, it must be constructed deliberately through structured verification practices, and the construction is expensive in throughput.
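A toy model of why independent decoders help, under the strong assumption that each decoder's chance of catching a given error is independent of the others; the detection rates below are illustrative, not measured:

```python
def miss_probability(detect_rates: list[float]) -> float:
    """Chance that an error slips past every decoder, assuming each
    decoder detects it independently with its own probability."""
    miss = 1.0
    for d in detect_rates:
        miss *= 1.0 - d
    return miss

# Three reviewers with genuinely different expertise:
print(f"{miss_probability([0.6, 0.5, 0.4]):.2f}")  # 0.12
# A single reviewer, as in the compressed AI pipeline:
print(f"{miss_probability([0.6]):.2f}")            # 0.40
```

Each added decoder costs review time linearly, and the gain evaporates if the decoders share blind spots, which is why the diversity matters as much as the count.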
The phenomenon explains why AI errors differ qualitatively from human errors. Human errors tend to be obviously wrong (typos, logical slips) or obviously uncertain (hedged claims, acknowledged guesses). AI errors are disproportionately confident, fluent, and structurally sound while being factually wrong — a distribution of failure modes that human readers are not culturally trained to detect.
The concept emerges from Shannon's 1948 analysis of channel coding, where undetectable errors are identified as the residual failure mode of any code that does not achieve perfect error correction. The application to AI outputs dates from the mid-2020s, when fluent hallucinations became the most consequential failure mode of deployed language models.
Fluent corruption. AI errors tend to be presented with the same fluency and confidence as genuine insight, providing no surface indicator of the corruption.
Single-reviewer vulnerability. The compressed AI pipeline has fewer independent decoders than the multi-stage organizational pipeline it replaces.
Detection requires external information. Undetectable errors can only be caught with information from outside the channel, typically the user's domain expertise; see the sketch after this list.
Verification is expensive. Structured detection practices consume throughput and require the very expertise the tool was supposed to supplement.
Culturally invisible. Human readers are trained to treat fluent, structured prose as reliable, a heuristic that model output defeats because fluency is precisely what it is optimized to produce.
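A minimal sketch of the external-information requirement noted above, with everything hypothetical: `trusted_index` stands in for whatever out-of-channel resource exists, whether a curated reference database or a domain expert's memory, and the labels are illustrative.

```python
def verify_attribution(claim: str, source: str,
                       trusted_index: dict[str, set[str]]) -> str:
    """Classify a generated attribution using information from outside
    the generation channel. `trusted_index` is a hypothetical map from
    source names to claims independently known to appear in them."""
    known_claims = trusted_index.get(source)
    if known_claims is None:
        return "UNVERIFIABLE: source outside the index; route to an expert"
    if claim in known_claims:
        return "CONFIRMED: matched against external information"
    return "SUSPECT: fluent and well-attributed, but unsupported"

# The channel itself offers no signal; every branch depends on the index.
index = {"A Thousand Plateaus": {"smooth space opposes striated space"}}
print(verify_attribution("flow states occur in smooth space",
                         "A Thousand Plateaus", index))  # SUSPECT
```

The string matching is deliberately crude; the point is only that no branch of the function can be decided from properties of the claim itself.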
Whether undetectable errors in AI output are a solvable engineering problem or an inherent consequence of language modeling remains contested. Some researchers argue that retrieval-augmented generation, constitutional AI, and similar techniques can reduce the rate substantially; others respond that these techniques address symptoms rather than the underlying mechanism.