Where interpretation errors arise from ambiguity in what was said, specification errors arise from what was never said at all. The user asks for a sorting algorithm. The system provides one: correct, efficient, well-documented. But the user needed a stable sort (one that preserves the relative order of equal elements) and did not specify this requirement because she did not know it was relevant, did not think of it, or assumed the system would infer it. The system did exactly what was asked. The output is correct given the specification; the specification was incomplete. Chapter 4 of the Norman volume argues that this constitutes a third error class, one that Norman's classical slip/mistake taxonomy cannot accommodate.
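The distinction the user missed is easy to show concretely. A minimal Python sketch (the record names are hypothetical; Python's built-in sort happens to be stable, which makes the property visible):

```python
# Records of (name, score). Alice and Carol are tied at 90.
records = [("alice", 90), ("bob", 85), ("carol", 90)]

# Python's sorted() is guaranteed stable: among equal scores, the
# original relative order (alice before carol) is preserved, even
# with reverse=True.
by_score = sorted(records, key=lambda r: r[1], reverse=True)
# → [("alice", 90), ("carol", 90), ("bob", 85)]
```

An unstable sort would be free to emit Carol before Alice. Nothing in the request "sort these by score" distinguishes the two outcomes, which is exactly the gap the specification never closed.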
Specification errors are not the system's fault. Nor are they, in the traditional sense, the user's fault: she did not form an incorrect plan, only an incomplete one, omitting a requirement she did not know to include. In Norman's terms, the knowledge she needed to specify the requirement was not in her head, and the system did not put it in the world.
The natural language interface, unlike the formal programming interface, imposes no constraints that would have forced her to address the question. The compiler demands type declarations. The form demands required fields. The conversational interface demands nothing, and the freedom it provides comes at the cost of increased specification risk. The user arrives without the expert knowledge that would have prompted her to specify stability, thread safety, error handling, or edge case behavior — and the system, unprompted, does not ask.
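The constraint a formal interface imposes can be sketched in a few lines. In the hypothetical function below, a keyword-only parameter with no default plays the role of a required form field: the call fails unless the caller takes a position on stability. (The function name and the algorithm mapping are illustrative only.)

```python
def generate_sort(*, stable):
    # 'stable' is keyword-only with no default value: like a compiler's
    # mandatory type declaration, the interface refuses to proceed
    # until the question has been answered.
    return "merge sort" if stable else "quicksort"

generate_sort(stable=True)   # the caller was forced to decide
# generate_sort()            # TypeError: missing required argument
```

A conversational interface has no equivalent of this mechanism unless the designer builds one in; the request "write me a sorting algorithm" parses perfectly well with the question unanswered.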
Norman's framework prescribes forcing functions for errors of this kind — design constraints that prevent the user from taking the next step without completing a necessary prior one. The PIN before phone unlock. The safety switch before machine operation. The AI-era equivalent would be structured specification support: when the user requests a sorting algorithm, the system asks whether stability matters before producing an implementation. When she requests an authentication system, the system presents the major architectural options before committing. The forcing function does not restrict freedom; it protects her from the consequences of specifications she did not know she needed to make.
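One way such structured specification support might be organized is as a checklist of critical dimensions that must be resolved before the system commits to an implementation. The sketch below is a hypothetical illustration, not a design from the Norman volume; the dimensions and their phrasing are assumptions:

```python
from typing import Optional

# Hypothetical critical dimensions for a sorting request, each paired
# with the clarifying question the system would ask.
CRITICAL_DIMENSIONS = {
    "stable": "Should equal elements keep their original relative order?",
    "in_place": "May the input be modified, or do you need a new list?",
    "key": "Sort by which field or comparison?",
}

def next_question(spec: dict) -> Optional[str]:
    """Return the clarifying question for the first unresolved
    dimension, or None once the specification is complete enough
    to commit to an implementation."""
    for dim, question in CRITICAL_DIMENSIONS.items():
        if dim not in spec:
            return question
    return None
```

Called with an empty specification, the function surfaces the stability question first; only when every dimension is addressed does it return None and allow generation to proceed. The forcing function makes the specification gap visible before it can become an error.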
Specification errors are particularly common in cross-domain AI use, where the user is leveraging capabilities outside her expertise. A product manager asking for a database schema does not know which questions she should be asking. A lawyer asking for a data analysis does not know the statistical assumptions she should specify. The AI expands the range of domains in which users can operate — and simultaneously expands the surface area across which specification errors can occur.
The specification error concept is introduced in Chapter 4 of the Norman volume as a companion to interpretation error, extending Norman's classical taxonomy to account for new failure modes in conversational AI.
The pattern has precedent in Barry Boehm's work on requirements engineering and in software-engineering literature on underspecification, but the Norman volume's reframing locates it within a human-centered design framework rather than a technical-process one.
Error in what was not said. Unlike interpretation errors, specification errors arise from omission. The user did not specify because she did not know specification was required.
Neither user nor system is at fault. The user's plan was sound given her knowledge. The system executed correctly given her specification. The failure is in the design that did not scaffold the specification.
Expertise gap amplifies the problem. AI lets users operate across domains where they lack the expert knowledge to specify fully. The specification surface grows faster than the expertise does.
Structured specification support as forcing function. Systems should ask clarifying questions about critical dimensions before committing to implementations — making specification gaps visible before they become errors.