In Emotional Design (2004), Norman argued that the emotional dimension of the user's experience with technology was not a secondary effect of good or bad usability but a primary determinant of how she used technology, how she evaluated its outputs, and whether she developed a sustained, productive relationship with it. He identified three distinct levels of emotional processing: the visceral (immediate, pre-conscious response to appearance and feel), the behavioral (satisfaction or frustration arising from use), and the reflective (conscious, retrospective evaluation of what the experience means). The three levels interact, reinforce, and sometimes contradict one another. Chapter 5 of the Norman volume applies this framework to AI-assisted work and finds that the visceral and behavioral levels are powerfully engaged while the reflective level is structurally suppressed.
There is a parallel reading that begins not with the user's emotional response but with the material conditions that produce it. The visceral thrill of AI-assisted creation that Norman's framework diagnoses depends on massive computational infrastructure burning through electricity at data centers whose environmental costs remain invisible to the user. The awe response is real, but it emerges from a substrate of extraction — rare earth mining for GPUs, water consumption for cooling, carbon emissions from power generation. The emotional design succeeds precisely because it obscures its own material basis.
The behavioral satisfaction of AI flow states similarly rests on accumulated human labor rendered invisible. The models that produce the collapsing imagination-to-artifact ratio were trained on decades of human creative output, scraped without consent or compensation. Every moment of flow depends on this unpaid substrate of prior human effort. The reflective level fails not merely because current systems don't prompt reflection, but because honest reflection would reveal the user's complicity in a system of value extraction. The person experiencing productive addiction is not just failing to develop skills — she is participating in the devaluation of the very skills she admires. Norman's framework correctly identifies the emotional architecture but misses how that architecture functions ideologically: the visceral and behavioral levels don't just suppress reflection, they make the material conditions of AI production emotionally unthinkable. The design interventions proposed — friction, prompts, tracking — address symptoms while leaving the underlying political economy intact.
The visceral response to AI interaction is well-documented and physiologically intense. The Orange Pill recounts the experience of watching a functioning application emerge from a conversation — seeing an idea take material form in minutes rather than months. The response is unmistakable: awe, acceleration, a surge of creative energy that changes posture and breathing and sense of what is possible. The collapsing imagination-to-artifact ratio produces sensory impact no previous tool has delivered.
Norman would note that this visceral response, however genuine, is dangerous in a specific way. Positive affect broadens attention, increases tolerance for ambiguity, promotes creative association, and reduces critical scrutiny. These are valuable cognitive states in many contexts. They are precisely the wrong states for the careful evaluation that AI-generated outputs require. The awe suppresses the skepticism. The excitement undermines the caution.
At the behavioral level, AI interaction produces a satisfaction that maps closely onto flow. The iterative cycle of articulation, production, evaluation, refinement has all the structural features flow research identifies. But the flow may be decoupled from skill development in ways traditional flow was not. The person experiences the pleasure of productive engagement without exercising the skills that generated the pleasure. This is what The Orange Pill calls productive addiction, and Norman's framework gives it diagnostic precision: a behavioral-level emotional response decoupled from the growth the response is supposed to signal.
The reflective level — where the person steps outside the experience and asks what she is learning, what kind of practitioner she is becoming, whether the pleasure signals growth or numbness — is where the design challenge is most acute and least addressed. Current AI systems do nothing to support reflective processing. They are designed for engagement, not interruption. They produce output continuously, without pause, without invitation to assess. The feeling crowds out the reflection. The design response the Norman volume proposes includes periodic interaction summaries, transition-point prompts, and dependency tracking — friction at the reflective level that preserves what the behavioral level erodes.
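The interventions named above (periodic summaries, transition-point prompts, dependency tracking) can be made concrete with a small sketch. The following is a hypothetical illustration, not an existing system or API: every name here (`ReflectiveSession`, `record`, `summary_every`) is invented for the example. It tracks who authored each turn of an AI-assisted session and, at intervals, interrupts with a reflective prompt that reports the dependency ratio.

```python
"""Hypothetical sketch of 'reflective friction' for an AI-assisted
session: a counter that tracks authorship per turn and periodically
interrupts with a summary prompt. All names are illustrative."""

from dataclasses import dataclass
from typing import Optional


@dataclass
class ReflectiveSession:
    summary_every: int = 10  # interrupt after this many turns
    turns: int = 0
    ai_authored: int = 0     # turns where the AI produced the artifact
    user_authored: int = 0   # turns where the user did

    def record(self, author: str) -> Optional[str]:
        """Record one turn; return a reflective prompt when one is due."""
        self.turns += 1
        if author == "ai":
            self.ai_authored += 1
        else:
            self.user_authored += 1
        if self.turns % self.summary_every == 0:
            ratio = self.ai_authored / self.turns
            return (
                f"In the last {self.turns} turns, {ratio:.0%} of the "
                "output was AI-authored. What did you practice, and "
                "what did you delegate?"
            )
        return None
```

The design choice worth noting is that the prompt arrives on a schedule rather than on demand: it is friction by construction, interrupting behavioral flow precisely because, as the chapter argues, the flow itself will never invite the interruption.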
Norman developed the three-level framework in Emotional Design: Why We Love (or Hate) Everyday Things (Basic Books, 2004), drawing on neuroscience research into emotional processing and his own observations that purely functional design accounts had failed to explain why users formed attachments to certain products.
Chapter 5 of the Norman volume extends the framework to AI-assisted work, applying it as a diagnostic lens for a kind of experience Norman's original formulation anticipated but could not have foreseen in its current intensity.
Visceral: immediate, pre-conscious response. The gut reaction to appearance, sound, feel. AI's collapsing imagination-to-artifact ratio produces visceral impact unprecedented in tool history.
Behavioral: satisfaction or frustration from use. The accumulated pleasure of productive engagement. AI engagement resembles flow but may be decoupled from the skill development flow traditionally accompanied.
Reflective: conscious retrospective evaluation. The level of meaning — what the experience says about who the person is becoming. Current AI systems structurally suppress this level.
Design must engage all three. Optimizing for visceral and behavioral satisfaction while ignoring reflective processing produces a design failure of unprecedented sophistication.
Some researchers argue that the three-level framework oversimplifies emotional processing, which involves continuous interaction across levels rather than discrete processing stages. The counter-argument is that the framework's analytical value lies in naming distinct design targets — interventions appropriate for visceral design differ from those for reflective design — rather than in claiming neuroanatomical precision.
The diagnostic power of Norman's three-level framework depends on the scale of analysis. At the scale of individual experience, the framework captures something essential: users do experience visceral awe, behavioral flow, and reflective absence in precisely the patterns described. The phenomenology is accurate. At the scale of systemic analysis, however, the contrarian reading dominates: the emotional responses Norman identifies are inseparable from the material and political conditions that produce them. The framework works as description but may fail as intervention if it does not account for substrate.
The question of design response reveals where synthesis becomes necessary. Norman's proposed interventions (periodic summaries, transition prompts, dependency tracking) address the reflective deficit at the individual level and are plausible supports for personal practice. But they leave untouched the larger reflective crisis: how do we design for honest engagement with the collective costs of individual productivity gains? Here each view is incomplete on its own: Norman's framework provides the emotional architecture for intervention, while the contrarian view identifies what must be reflected upon.
The synthetic frame might be "embedded emotional design" — acknowledging that emotional responses to technology are never just about the interface but about the entire sociotechnical system. The three levels still operate, but at multiple scales simultaneously. Visceral response includes not just the thrill of creation but the discomfort of extraction. Behavioral satisfaction must account for whose behavior is being optimized. Reflective processing needs to encompass not just personal growth but collective consequence. This doesn't invalidate Norman's framework; it completes it, making it adequate to the political economy of AI-mediated experience.