Where an affordance is what an object permits, a signifier is what the object communicates about that permission. The raised button signifies "press me." The slider's track signifies "drag along this axis." The menu's visible list signifies "choose from these options." Norman's central design thesis was that affordances and signifiers must align: the perceivable cue should accurately represent the actionable possibility. When they misalign — when signifiers exist without underlying affordances, or when affordances lack perceivable signifiers — the result is systematic user failure that designers often misattribute to user error. The AI interface has created an unprecedented signifier crisis: a system of vast capability whose blank-prompt surface signifies nothing, and whose polished outputs signify meanings the system never meant to communicate.
The signifier-affordance distinction, clarified in Norman's 1999 essay and elaborated in the second edition of The Design of Everyday Things, resolved a decade of conceptual confusion in the design community. Researchers had been using "affordance" to mean "signifier" — calling a visible button an affordance when it was actually the signifier of a button's affordance. Norman's correction clarified what designers actually control: they cannot create affordances (those depend on physical reality and user capability), but they can create signifiers that make affordances discoverable.
The AI era has produced two distinct signifier failures. The first is the problem of missing signifiers: the blank text field of a conversational AI offers no perceivable cue for any of its capabilities. The user faces infinite possibilities and zero guidance — what Chapter 9 of the Norman volume calls the prompt's fundamental design failure.
The second and more insidious failure is unintended signification: AI outputs communicate meanings that no designer chose. Polished prose signifies "this is well-reasoned" regardless of whether the reasoning is sound. Clean code signifies "this is production-ready" regardless of whether it handles edge cases. Consistent design signifies "this was thought through" regardless of whether anyone thought about it. These signifiers influence evaluation without the evaluator's awareness, because the signifiers themselves are the only channel through which the evaluator receives information about the artifact.
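The hazard is easy to demonstrate in miniature. The snippet below is a hypothetical illustration, not drawn from any real codebase: a function whose tidy signature, docstring, and one-line body signify production-readiness while a basic edge case goes unhandled.

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    # Polished surface: documented, concise, idiomatic.
    # Unhandled edge case: an empty sequence raises ZeroDivisionError.
    return sum(values) / len(values)

print(average([2, 4, 6]))  # prints 4.0
# average([])  # would raise ZeroDivisionError; the polish signified nothing about this
```

Nothing in the function's appearance distinguishes it from a version that handles the empty case. The signifier of quality and the affordance of correctness have come apart, and only downstream evaluation reveals the gap.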
Norman's original signifier framework assumed that signifiers were designed — chosen deliberately by a designer with an intended communication. AI signifiers are emergent, arising from training data rather than design decisions. The designer has lost direct control over what the system communicates at the moment of use. Reclaiming that control requires designing the conditions under which productive signification emerges, rather than placing specific signifiers on a stable surface. This is a fundamentally different design discipline, and the Norman volume argues it remains largely undeveloped.
Norman distinguished signifiers from affordances in "Affordance, Conventions, and Design" (Interactions, 1999) after observing that HCI literature had collapsed the two. He expanded the treatment in the revised edition of The Design of Everyday Things (2013), making signifiers the primary design lever while preserving affordances as the underlying physical reality.
The concept's AI extension emerged through the convergence of Norman's later work on complex systems with the empirical reality of how users encounter modern AI — arriving at interfaces that violate every signifier principle Norman's earlier work established.
Signifiers are what designers actually control. Physical affordances depend on material and user capability. Signifiers depend on communication choices. This is the designer's proper domain.
The blank prompt as anti-signifier. A text field with a blinking cursor communicates nothing about what to type. It is the worst-signified interface since the glass door with no handle.
Unintended signification in AI outputs. Fluency, polish, and consistency signify quality without reference to whether quality is present. This is a design hazard the current generation of AI systems has barely addressed.
Confidence and uncertainty as missing signifiers. A well-designed AI would signify when it is certain versus speculating, what it inferred versus what was specified, where its grounding is solid versus thin. None of this is communicated in typical output.
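What such signification could look like is sketchable. The structure below is a hypothetical design sketch under stated assumptions, not any existing system's API: each claim carries its own confidence estimate and grounding category, and rendering attaches a visible marker to every claim rather than emitting uniformly polished prose. The names (`Claim`, `Grounding`, the 0.8 confidence threshold) are illustrative inventions.

```python
from dataclasses import dataclass
from enum import Enum

class Grounding(Enum):
    # Where a claim came from: specified by the user, inferred, or guessed.
    STATED = "stated by the user"
    INFERRED = "inferred from context"
    SPECULATIVE = "speculative"

@dataclass
class Claim:
    text: str
    confidence: float  # the system's own estimate, 0.0 to 1.0
    grounding: Grounding

    def render(self) -> str:
        # The marker is the signifier: certainty and provenance become
        # perceivable at the moment of use instead of staying hidden.
        marker = "✓" if self.confidence >= 0.8 else "?"
        return f"[{marker} {self.grounding.value}] {self.text}"

claims = [
    Claim("The deadline is Friday.", 0.95, Grounding.STATED),
    Claim("You probably want UTC timestamps.", 0.55, Grounding.INFERRED),
]
for claim in claims:
    print(claim.render())
# prints:
# [✓ stated by the user] The deadline is Friday.
# [? inferred from context] You probably want UTC timestamps.
```

The design choice is the one the chapter argues for: the signifier is attached at output time, by deliberate design, rather than left to emerge from whatever fluency the training data happens to produce.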
Some design researchers argue that reintroducing signifiers into natural language interfaces will reintroduce the very friction that made those interfaces attractive. The counter-argument, developed throughout the Norman volume, is that the friction of unsignified AI is already present — it is borne by the evaluator downstream, at far greater cost than the modest friction of well-designed signification would impose at the point of use.