Interpretation Error — Orange Pill Wiki
CONCEPT

Interpretation Error

A new category of AI-era error — the person's prompt is clear and the system's execution is correct, but the system's interpretation of the prompt diverges from the person's meaning, producing an output that is technically right and practically wrong.

Norman's classical error taxonomy distinguished slips (right plan, wrong execution) from mistakes (wrong plan, correct execution). Both categories assumed deterministic systems where error originated in the user. The AI era introduces errors that originate neither in the user nor in the system but in the semantic space between them — what Chapter 4 of the Norman volume calls interpretation errors. The user asks for an authentication system; she gets one that uses password-based authentication when she intended OAuth. The word "authentication" was interpreted in a way the user never specified, because she assumed specification was unnecessary — in a conversation with a human colleague, shared context would have resolved the ambiguity without explicit negotiation.

In the AI Story

[Hedcut illustration: Interpretation Error]

Interpretation errors are insidious because they are invisible on the surface. The output compiles. The tests pass. The artifact exists. The error is semantic — a gap between what was meant and what was understood — and semantic errors are not detected by any syntactic verification mechanism. The user must catch them through evaluation, and evaluation requires her to know what she meant precisely enough to recognize when the system understood something different.

The challenge is deepened by the fact that natural language conversation normally relies on ambiguity, implication, and shared context that the AI system does not fully share. When the user says "make it fast," she may mean fast to develop, fast at runtime, fast to load, or fast for the user experience. A human colleague would ask or infer from context; an AI system may pick one meaning and execute it confidently, producing a result the user will later need to undo.
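The "make it fast" problem can be sketched as a small disambiguation step: a system that, instead of confidently committing to one reading, enumerates candidate interpretations and asks. The candidate table and the `clarify` helper below are hypothetical illustrations, not part of any real assistant API — a minimal sketch of the alternative behavior.

```python
# Sketch of intent disambiguation for an ambiguous instruction.
# The candidate table and helper name are hypothetical illustrations.

AMBIGUOUS_TERMS = {
    "fast": [
        "fast to develop (minimal code, quick to ship)",
        "fast at runtime (low latency, optimized hot paths)",
        "fast to load (small bundle, lazy initialization)",
        "fast for the user experience (perceived responsiveness)",
    ],
}

def clarify(prompt: str) -> list[str]:
    """Return candidate readings of ambiguous words instead of
    silently committing to one interpretation."""
    questions = []
    for term, readings in AMBIGUOUS_TERMS.items():
        if term in prompt.lower():
            options = "; ".join(readings)
            questions.append(f"'{term}' could mean: {options}. Which?")
    return questions

# A system that asks rather than guesses avoids the confident
# wrong execution described above.
for question in clarify("make it fast"):
    print(question)
```

The design choice here is that ambiguity detection happens before execution, mirroring what a human colleague does by asking or inferring from context.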

Interpretation errors are not slips — the user executed correctly, typing the right words with the right meaning. They are not mistakes in Norman's sense — her plan was sound, her intention accurately articulated given normal conversational conventions. They are errors of translation between two fundamentally different kinds of understanding: the user's contextual, assumption-rich natural language and the system's literal, statistical, pattern-matching interpretation of that language.

The design response Norman's framework suggests is interpretation preview — a mechanism by which the system surfaces what it understood, what it inferred, and what it defaulted before it produces the output. This is a bridge display for the interpretive moment: externalizing the system's internal state at precisely the point where divergence could be corrected cheaply, before it has cascaded through downstream work.
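An interpretation preview might be modeled as a structure with three fields — what was understood, what was inferred, and what was defaulted — rendered to the user before any output is produced. This is a minimal sketch under assumed names; the class, fields, and render format are illustrative, not drawn from any real system.

```python
from dataclasses import dataclass, field

# Hypothetical structure for an interpretation preview: the system
# externalizes what it understood, inferred, and defaulted before
# producing output, so divergence can be corrected cheaply.

@dataclass
class InterpretationPreview:
    understood: dict[str, str]   # explicit requirements, as parsed
    inferred: dict[str, str]     # choices derived from context
    defaulted: dict[str, str] = field(default_factory=dict)  # unstated gaps filled

    def render(self) -> str:
        lines = ["Before I build this, here is my reading:"]
        for label, items in [("Understood", self.understood),
                             ("Inferred", self.inferred),
                             ("Defaulted", self.defaulted)]:
            for key, value in items.items():
                lines.append(f"  [{label}] {key}: {value}")
        lines.append("Reply to correct anything before I proceed.")
        return "\n".join(lines)

# The authentication example from above: the defaulted mechanism is
# exactly the divergence a preview would surface for cheap correction.
preview = InterpretationPreview(
    understood={"goal": "an authentication system"},
    inferred={"scope": "login and session management"},
    defaulted={"mechanism": "password-based (OAuth not specified)"},
)
print(preview.render())
```

The point of the sketch is the ordering: the preview is rendered, and can be corrected, before the system commits to an output that would cascade through downstream work.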

Origin

The interpretation error concept emerged through empirical observation of AI-assisted development workflows in 2024–2025 and received its formal treatment in Chapter 4 of the Norman volume as a necessary extension of Norman's classical error taxonomy.

Related categories appear in the linguistic literature on pragmatic failure and in HCI research on intent disambiguation, though the Norman volume's grounding in the slip/mistake framework gives interpretation error its distinctive analytical location.

Key Ideas

Error in the space between. Interpretation errors originate neither in user nor system but in the translation across different kinds of understanding.

Invisible on the surface. The output looks correct because it is correct given the system's interpretation. The error is semantic, not syntactic.

Natural language ambiguity as feature, not defect. The same properties that make natural language powerful (context-dependence, implication, shared assumption) make it systematically vulnerable to interpretation divergence.

Interpretation preview as design response. Systems should surface interpretive choices before commitment — externalizing what was understood, inferred, and defaulted while correction remains cheap.

Appears in the Orange Pill Cycle

Further reading

  1. Donald A. Norman, The Design of Everyday Things, rev. ed. (Basic Books, 2013), chapter 5.
  2. James Reason, Human Error (Cambridge University Press, 1990).
  3. Lucy Suchman, Human-Machine Reconfigurations, 2nd ed. (Cambridge University Press, 2007).
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.