Three structured verification practices have emerged from early human-AI collaboration. Reference verification targets factual corruption — the confident but wrong citation, the misapplied quotation. Logical verification targets structural corruption — arguments that flow smoothly but conceal gaps beneath the prose. Output comparison targets systematic error — asking the model to solve the same problem two ways and treating discrepancies as error signals.
Each practice corresponds to a different position in the communication system: source, channel, destination. Together they constitute an adaptive code whose redundancy level should vary with stakes — high redundancy for consequential outputs, low redundancy for drafts and experiments. A culture that treats all AI output with uniform trust, or with uniform suspicion, is operating below optimum in both directions: errors slip through where verification matters most, and effort is wasted where it matters least.
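As a rough illustration of what stakes-adaptive redundancy could look like in an individual workflow, the sketch below maps an output's stakes to a verification plan. It is a minimal sketch, not a prescription from the studies discussed here: the tier names, pass counts, and check categories are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class VerificationPlan:
    """How much redundancy to spend on one piece of AI output."""
    independent_generations: int  # repetition coding: separate solutions to request
    check_references: bool        # parity-style check of citations and quotations
    audit_logic: bool             # read as argument-auditor, not content-consumer


def plan_for(stakes: str) -> VerificationPlan:
    """Map output stakes to verification intensity (illustrative tiers only)."""
    if stakes == "draft":
        # Low redundancy: drafts and experiments tolerate undetected errors.
        return VerificationPlan(1, check_references=False, audit_logic=False)
    if stakes == "internal":
        # Moderate redundancy: shared work warrants a reference pass.
        return VerificationPlan(2, check_references=True, audit_logic=False)
    # High redundancy: anything published, cited, or acted upon.
    return VerificationPlan(3, check_references=True, audit_logic=True)
```

The point of the sketch is the shape of the decision, not the particular thresholds: redundancy is a dial, and the dial should move with consequences.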
The Berkeley study's proposed 'AI Practice' framework — structured pauses, sequenced workflows, protected deliberation — can be read as an organizational error-correcting code. The pauses create verification space. The sequencing prevents error compounding. The protected time enables the auditor's reading mode that catches logical errors the consumer's reading mode misses.
The Deleuze error from Segal's account is the paradigmatic case: smooth, confident, philosophically wrong, caught only because the author had domain expertise and a verification habit. In Shannon's terms, it was an undetectable error from the channel's perspective — corrupted output encoded to mimic valid output — and its detection required information external to the channel itself.
The framework emerges from a direct analogy to Hamming codes, extended by the structured verification practices that early AI-assisted knowledge workers have developed through trial and the accumulation of failures. It is in its earliest stages and will improve, as error-correcting codes in communication theory have improved continuously since 1950.
Reference verification as parity check. Targeted checking of citations, quotations, and factual claims against external sources.
Logical verification as auditor mode. Reading the output as argument-auditor rather than content-consumer, examining inferential structure.
Output comparison as repetition coding. Requesting multiple independent solutions and reading discrepancies as error signals; a minimal sketch follows this list.
Adaptive redundancy. Matching verification intensity to output stakes rather than applying uniform trust or uniform suspicion.
External information is required. The most reliable verification draws on information outside the channel — expertise the model did not possess.
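The repetition-coding practice is the most mechanical of the four, and a short sketch makes its structure visible. This is a minimal illustration, assuming a generic generate(prompt) callable that stands in for whatever model interface is in use; the exact-match vote is deliberately crude and would need a domain-appropriate comparison in real work.

```python
from collections import Counter
from typing import Callable


def compare_outputs(generate: Callable[[str], str], prompt: str, n: int = 3) -> dict:
    """Request n independent solutions and read disagreement as an error signal.

    `generate` is a placeholder for any model call; exact-match voting is a
    deliberately crude comparison, used only to make the structure visible.
    """
    answers = [generate(prompt) for _ in range(n)]
    counts = Counter(answers)
    majority, majority_count = counts.most_common(1)[0]
    return {
        "answers": answers,
        "majority": majority,
        "agreement": majority_count / n,
        # Any disagreement is a prompt to verify, not a result to discard silently.
        "needs_verification": majority_count < n,
    }
```

The output to act on is not the majority answer but the agreement figure: anything short of full agreement marks where verification effort should go.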