Norman's knowledge distinction is one of his most practically useful. Knowledge in the world is built into the environment: the stove whose spatial arrangement of knobs maps to its burners puts the mapping in the world, where it is available without memorization. Knowledge in the head must be learned and remembered: the stove whose mapping is arbitrary requires the user to memorize which knob controls which burner. Good design puts as much knowledge as possible in the world, reducing the user's cognitive burden. Chapter 8 of the Norman volume observes that AI systems are, by this criterion, catastrophically ill-designed: what the system can do, how to prompt it effectively, when to trust and when to verify, what kinds of errors it is prone to — all of this must be learned by the user through experience, trial and error, or external documentation. None of it is in the world of the interaction.
The distinction matters for equity as well as usability. Knowledge in the head is distributed unevenly — those with time to learn, resources to experiment, and communities to learn from accumulate it faster. Knowledge in the world is democratizing — the stove with the visible mapping works equally well for anyone who can see it. Norman's lifelong advocacy for putting knowledge in the world was simultaneously an advocacy for reducing the cognitive burden on users and for narrowing the gap between novices and experts.
The AI interface inverts both aims. The blank prompt contains no knowledge about the system's capabilities, limits, or effective use. Everything must be acquired through experience the user pays for herself — in time, in errors, in the slow accumulation of prompt-engineering lore shared unevenly across communities. The user who spent years in AI research knows tricks the newcomer does not. The user with a strong network learns them faster than the isolated user. The technology that was supposed to democratize capability has created a new axis of inequality based on who has accumulated the undocumented, tacit knowledge of how to use it effectively.
The design response, as Chapter 8 argues, is to embed knowledge about the system's nature in the interaction itself — not in help documentation the user will never read, but in the texture of the conversation. Systems should communicate confidence levels, interpretive choices, and limitations through the outputs themselves. Systems should surface examples of effective prompts, patterns of successful interaction, and common pitfalls at the moment when they are relevant. The designer's task is to move what is currently undocumented tacit knowledge into the world of the interaction, making it available to every user rather than only those who know to look for it.
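The move Chapter 8 argues for can be sketched concretely. The sketch below is a minimal, hypothetical illustration, not any real system's API: the class name, fields, and rendering format are all assumptions. The idea it demonstrates is simply that each reply carries its own knowledge-in-the-world, with the confidence level, the interpretation the system acted on, and known caveats riding along in the output itself rather than living in documentation or in the user's head.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedReply:
    """A model reply that carries its own 'knowledge in the world':
    a confidence level, the interpretation it acted on, and known caveats."""
    text: str
    confidence: str                       # e.g. "high" | "medium" | "low"
    interpretation: str                   # how the system read the prompt
    caveats: list = field(default_factory=list)

    def render(self) -> str:
        # Surface the annotations in the texture of the reply itself,
        # so the user never has to go looking for them.
        lines = [
            self.text,
            f"[confidence: {self.confidence}]",
            f"[read your request as: {self.interpretation}]",
        ]
        lines += [f"[caveat: {c}]" for c in self.caveats]
        return "\n".join(lines)

reply = AnnotatedReply(
    text="Paris is the capital of France.",
    confidence="high",
    interpretation="a factual geography question",
    caveats=["training data has a cutoff date"],
)
print(reply.render())
```

Even in so small a sketch, the design choice is visible: the annotations are fields of the reply, not entries in a help page, so the knowledge travels with every interaction instead of having to be learned once and remembered.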
Norman introduced the knowledge distinction in The Design of Everyday Things (1988), drawing on cognitive psychology research into external cognition and environmental support for memory.
The concept connects to Andy Clark's extended mind thesis and to the distributed cognition research Norman helped develop with Hutchins and others at UCSD. The AI-era reformulation treats the interaction itself as a potential locus for knowledge-in-the-world, a framing that requires significant design innovation to realize.
World vs. head as design axis. Every piece of knowledge a system requires is either in the world (embedded, available) or in the head (learned, remembered). Good design shifts the balance toward the world.
Equity implications. Knowledge in the head is unequally distributed; knowledge in the world is democratizing. Design choices about this axis have distributional consequences.
AI as head-heavy system. The natural language interface places almost all relevant knowledge in the head, creating a new inequality based on tacit know-how.
Conversation as locus for world-knowledge. The design response is to embed knowledge about the system in the interaction itself — through confidence signals, interpretation previews, and contextual examples.