This page lists every Orange Pill Wiki entry hyperlinked from Don Norman — On AI. 21 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The actionable properties of an object as perceived by a user — what the thing permits you to do — borrowed from Gibson, refined by Norman, and rendered newly problematic by an AI interface whose action space is unbounded and invisible.
The quiet risk of comprehensive automation: not that machines dominate us, but that we lose the capabilities they replace. Asimov's Solarians are the founding fiction; contemporary work on cognitive offloading is the empirical counterpart.
The propagation of initial interpretation or specification errors through complex bodies of AI-generated work, compounding silently because the speed of production outpaces the speed of evaluation.
The user's mental representation of how a system works — accurate enough to predict, diagnose, and recover from the system's behavior — and the specific cognitive architecture that AI's probabilistic, context-dependent outputs systematically undermine.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
Norman's design pattern for preventing error by making the wrong action impossible rather than merely unlikely — and the conceptual template for AI-era design interventions that pause production before errors cascade.
A new category of AI-era error — the person's prompt is clear and the system's execution is correct, but the system's interpretation of the prompt diverges from the person's meaning, producing an output that is technically right and practically wrong.
Norman's foundational distinction — between knowledge embedded in the environment (available without memorization) and knowledge carried in memory (requiring learning and recall) — and the observation that AI systems place nearly all relevant knowledge in the head, leaving almost none in the world.
The interface paradigm — inaugurated at scale by large language models in 2022–2025 — in which the user addresses the machine in unmodified human language and the machine responds in kind; the paradigm that, read through Gibson's framework, affords nearly everything while signifying almost nothing.
The Norman volume's proposed design pattern for conversational AI — revealing capabilities dynamically through dialogue rather than statically through fixed interface layouts, extending Norman's progressive disclosure into the conversational medium.
Tversky's diagnostic term for the gap between the spatial structure of a thinker's understanding and the spatial structure a tool demands — the hidden tax levied by every pre-AI interface.
Norman's framework for designing systems that maintain human capability even as technology changes the conditions under which capability is exercised — and the design orientation Chapter 6 of the Norman volume proposes as the antidote to capability atrophy.
The perceivable cues that tell a person what an object affords — separate from the affordance itself, and in the AI era almost entirely absent, misleading, or replaced by accidental signals the system never intended to send.
The gradual, invisible atrophy of cognitive skills that occurs when capabilities distributed across a human-AI coupling cease to be exercised by the human component — a design consequence Norman's framework predicts but current AI systems do nothing to prevent.
A second new AI-era error category — the user's specification is incomplete in ways she did not know were possible, and the system produces a technically correct output that omits a requirement she never articulated because she did not know it needed stating.
The empty text field of the conversational AI interface — read through Norman's framework as the worst-designed primary interface element in the history of computing, communicating less about its capabilities than the average door handle.
The framework for analyzing human-AI interaction as a single integrated system rather than a human using a tool — where the behavior of the whole cannot be understood by analyzing components in isolation, and design must address the coupling itself.
The distance between what a system has done and what the person can perceive, interpret, and judge about what it did — the gulf that has blown open in the AI era precisely because the Gulf of Execution collapsed.
Norman's name for the distance between what a person wants to do and what a system allows her to do — the chasm the AI interface has, for the first time in tool history, crossed from the machine's side.
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
Norman's framework for the visceral, behavioral, and reflective levels of emotional processing that every designed artifact engages — and the lens through which Chapter 5 of the Norman volume diagnoses the emotional architecture of AI-as…