Norman placed the conceptual model at the center of his design philosophy. Unlike the engineering model (how the system actually works) or the design model (what the designer intended to communicate), the conceptual model is the user's internalized story of how the system behaves. When this model is accurate, the user can predict what the system will do, diagnose what has gone wrong, and recover from errors. When it is inaccurate, she is lost — acting on expectations the system does not meet, misinterpreting feedback, unable to learn from experience. The designer's task, in Norman's framework, was to shape the system's visible behavior into a coherent, comprehensible story that supported accurate model-building. AI systems, as Chapter 3 of the Norman volume argues, systematically resist this process.
The traditional conceptual model-building process worked because systems were deterministic and observable. The user pressed a button, observed the response, updated her mental model, and repeated. Over weeks and months, the model became reliable, and reliability was the foundation of effective use. The thermostat either heated the room or it did not. The word processor either saved or reported an error. Predictions could be tested. Models could be refined.
AI systems defeat this process at multiple levels. Their behavior is probabilistic: the same input produces different outputs on different occasions. Their behavior is context-sensitive in ways the user cannot fully observe; responses depend on training data, context window contents, and sampling parameters that remain invisible. Their behavior changes over time as models are updated: the system the user learned last month may behave differently this month. The user cannot build a stable conceptual model of something that is not stable.
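The first of these levels, nondeterminism from sampling alone, can be seen in miniature. The sketch below is illustrative, not any real model's API: a toy `sample_token` helper draws from a temperature-scaled softmax over fixed scores, showing why a user observing only the outputs sees varying behavior from an identical input.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from logits via temperature-scaled softmax.

    temperature=0 degenerates to greedy argmax, which is deterministic;
    any positive temperature makes repeated calls diverge.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

# Same "input" (the logits), twenty repeated trials.
logits = [2.0, 1.8, 0.5]
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(20)]
sampled = [sample_token(logits, 1.0, random.Random(i)) for i in range(20)]

print(greedy)   # always index 0: a deterministic system supports prediction
print(sampled)  # a mix of indices: the same input no longer predicts the output
```

The deterministic run is the world Norman's original framework assumed: press the button, observe the response, refine the model. The sampled run is the world the user of a generative system actually inhabits.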
The discourse has offered three inadequate substitute models. The tool model treats AI as an instrument, preserving the user's sense of control but failing to account for the system's generative agency. The collaborator model captures the interactive dynamics but risks anthropomorphism, extending social trust to a system that cannot reciprocate. The oracle model treats AI as an authoritative source, ignoring (as Norman himself said) that "it doesn't understand what it is doing; it's a pattern matching device." Each model captures something true and misses something essential.
The design response Norman's framework suggests is not to impose a single correct model but to support the user in building adequate working models — representations that predict well enough for effective interaction, even if they do not capture every mechanism. Such models require systems that make their reasoning partially visible, their confidence calibrated, their limitations discoverable. The current generation of AI systems provides almost none of this support, leaving users to build conceptual models from the output alone — a process Norman identified as producing systematic misunderstanding.
Norman developed the conceptual-model framework in User Centered System Design (1986) and elaborated it in The Design of Everyday Things (1988). The distinction between the designer's model, the user's model, and the engineering model became foundational to human-computer interaction education through the 1990s and 2000s.
Norman's later work, particularly Living with Complexity (2010), acknowledged that conceptual models for complex adaptive systems could not be as stable as those for fixed artifacts. The AI era has pushed this acknowledgment to its limit: the question is no longer how to support stable models but whether stable models are possible at all for probabilistic systems.
Three models, one target. The designer's model, the system's actual mechanics, and the user's model must align enough for effective interaction — but only the user's model directly drives behavior.
Adequacy over accuracy. A conceptual model does not need to capture every mechanism. It needs to support correct predictions, remain comprehensible, and help the user figure out what to do when things go wrong.
The instability problem. AI systems' probabilistic, context-dependent, temporally shifting behavior resists the stable model-building that traditional design supported. New design approaches must account for this instability rather than wish it away.
Competing inadequate models. Tool, collaborator, and oracle models each capture part of the truth and miss the rest. The working model users actually need combines elements of all three while avoiding the limitations of any one of them.
Some researchers argue that AI's resistance to conceptual modeling is a transient property of current systems that will resolve as explainability research matures. Others, including Norman in his later writing, argue that probabilistic generative systems are fundamentally different from the deterministic artifacts his original framework addressed, and that a new design vocabulary — not just better versions of the old one — is required.