Error-Correcting Codes for Human-AI Collaboration — Orange Pill Wiki
CONCEPT

Error-Correcting Codes for Human-AI Collaboration

The emerging class of verification practices — reference checking, logical auditing, output comparison, structured pauses — that function as Hamming-style structured redundancy against the high-confidence errors that smooth AI interfaces conceal.

Every noisy channel requires redundancy to transmit reliably. Shannon's channel coding theorem proves such codes exist; Richard Hamming's 1950 paper demonstrated the first practical ones. The human-AI channel is noisy — the model produces confident, fluent, sometimes wrong output — and requires its own error-correcting codes. Unstructured verification (reading and deciding whether the output 'looks right') catches gross errors but misses subtle ones; the fluent surface conceals them. Structured verification — reference checks, logical audits, repetition coding via multiple independent solutions — functions as Hamming-style parity: targeted redundancy that detects errors the smooth interface would hide. The practice is expensive in throughput and indispensable at high stakes.
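The analogy can be made concrete. Below is a minimal sketch of the Hamming(7,4) code the paragraph refers to: three parity bits protect four data bits, and recomputing the parity groups at the receiver yields a syndrome that names the corrupted position.

```python
# Hamming(7,4) sketch: 3 parity bits protect 4 data bits,
# locating and correcting any single-bit error.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    c = list(c)
    # each syndrome bit re-checks one parity group
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1              # flip the corrupted bit back
    return c, pos

word = encode([1, 0, 1, 1])          # -> [0, 1, 1, 0, 0, 1, 1]
corrupted = list(word)
corrupted[4] ^= 1                    # one bit flipped in transit
fixed, pos = correct(corrupted)      # syndrome names position 5; codeword restored
```

The point of the structure is that the redundancy is targeted, not bulk: three extra bits suffice because each parity group overlaps the others in a pattern that makes every single-bit error produce a unique syndrome.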

In the AI Story


Three structured verification practices have emerged from early human-AI collaboration. Reference verification targets factual corruption — the confident but wrong citation, the misapplied quotation. Logical verification targets structural corruption — arguments that flow smoothly but contain gaps the prose conceals. Output comparison targets systematic error — asking the model to solve the same problem two ways and examining discrepancies as error signals.

Each practice corresponds to a different position in the communication system: source, channel, destination. Together they constitute an adaptive code whose redundancy level should vary with stakes — high redundancy for consequential outputs, low redundancy for drafts and experiments. A culture that applies uniform trust to all AI output is miscalibrated in both directions: it under-verifies the consequential and over-verifies the disposable.
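One way to picture the adaptive code is as a policy that maps stakes to verification intensity. The tier names and run counts here are illustrative assumptions, not a standard from the source:

```python
# Hypothetical policy sketch: verification intensity scales with stakes.
def verification_plan(stakes):
    plans = {
        # drafts and experiments: low redundancy, high throughput
        "draft":       {"reference_check": False, "logic_audit": False, "independent_runs": 1},
        # internal use: catch factual corruption, skip the full audit
        "internal":    {"reference_check": True,  "logic_audit": False, "independent_runs": 1},
        # consequential output: full redundancy on every axis
        "publication": {"reference_check": True,  "logic_audit": True,  "independent_runs": 3},
    }
    return plans[stakes]
```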

The Berkeley study's proposed 'AI Practice' framework — structured pauses, sequenced workflows, protected deliberation — can be read as organizational error-correcting code. The pauses create verification space. The sequencing prevents error compounding. The protected time enables the auditor's reading mode that catches logical errors the consumer's reading mode misses.

The Deleuze error from Segal's account is the paradigmatic case: smooth, confident, philosophically wrong, caught only because the author had domain expertise and verification habit. In Shannon's terms, it was an undetectable error from the channel's perspective — corrupted output encoded to mimic valid output — and its detection required information external to the channel itself.

Origin

The framework emerges from a direct analogy to Hamming codes, extended by the structured verification practices that early AI-assisted knowledge workers have developed through trial and the accumulation of failures. It is in its earliest stages and will improve, as error-correcting codes in communication theory have improved continuously since 1950.

Key Ideas

Reference verification as parity check. Targeted checking of citations, quotations, and factual claims against external sources.

Logical verification as auditor mode. Reading the output as argument-auditor rather than content-consumer, examining inferential structure.

Output comparison as repetition coding. Requesting multiple independent solutions and reading discrepancies as error signals.

Adaptive redundancy. Matching verification intensity to output stakes rather than applying uniform trust or uniform suspicion.

External information is required. The most reliable verification draws on information outside the channel — expertise the model did not possess.
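The repetition-coding idea above can be sketched as a small harness. Here `solve` is a hypothetical stand-in for any routine that asks the model to reach the answer by an independent route:

```python
from collections import Counter

def compare_outputs(solve, problem, runs=3):
    # repetition coding: request the same answer several times
    answers = [solve(problem) for _ in range(runs)]
    best, votes = Counter(answers).most_common(1)[0]
    # unanimity is weak evidence of correctness; any discrepancy
    # is a strong error signal worth escalating to manual audit
    return best, votes == runs, answers

# toy stand-in: a channel that is wrong on one of three routes
routes = iter([42, 42, 41])
best, unanimous, answers = compare_outputs(lambda p: next(routes), "2 * 21")
# unanimous is False -> the discrepancy flags this output for verification
```

Note what the harness cannot do: if every route shares the same systematic error, the vote is unanimous and wrong, which is exactly why the last key idea insists on information external to the channel.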

Appears in the Orange Pill Cycle

Further reading

  1. Richard Hamming, 'Error Detecting and Error Correcting Codes' (Bell System Technical Journal, 1950)
  2. Shu Lin and Daniel Costello, Error Control Coding (Prentice Hall, 2004)
  3. Todd Rogers and Jessica Lasky-Fink, Writing for Busy Readers (Avid Reader Press, 2023)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.