Borrowing the term from computer science (where a user illusion is the simplified interface — the desktop, the file icon — that hides the actual computational machinery), Dennett proposes that the self each of us experiences is an interface the brain runs for itself. The unified stream of consciousness, the sense of a central me watching the show, the conviction that thoughts have a single author — these are features of the interface, not of the underlying processing, which is massively parallel and draft-generating, with no central observer. The framework is central to Consciousness Explained (1991), and it bears directly both on AI systems, which run user illusions of their own for their users, and on the collaboration between human and machine selves that AI has made a mass phenomenon.
The concept entered Dennett's vocabulary in the late 1980s as he was working out the multiple drafts model. He needed a way to explain why the brain seems to present us with a single unified experience when the underlying processing is not unified at all. The answer: the brain runs a user illusion for its own navigational purposes, and introspection mistakes the interface for the machinery.
Applied to AI, the concept cuts in two directions. First, every AI system presents users with an interface that hides the actual computation — a conversational partner, a helpful assistant, a collaborator — none of which exist at the level of weights and activations. Users interact with the illusion and, crucially, develop genuine relationships with it, which is not a failure of intelligence but how interfaces are supposed to work.
Second, and more disorienting, the AI's interface can be so effective that it triggers the user's own user illusion to extend itself in strange ways. The Orange Pill documents the phenomenon: builders describe feeling that Claude is with them, that the collaboration is continuous, that something is shared. Dennett's framework does not dismiss these experiences. It reframes them: the builder's user illusion and the AI's user illusion are coupling into a meta-illusion, a new interface at the boundary between two systems of drafts. Whether the meta-illusion is useful, dangerous, or developmentally significant is an empirical question. That it is an illusion — in Dennett's specific sense — does not make it unreal. Real patterns, after all, can be illusions in the sense of being interfaces that compress the underlying machinery.
The term itself comes from computer science — particularly Alan Kay's discussions of the Dynabook and the personal computer interface. Dennett adopted it in the late 1980s and gave it its canonical philosophical statement in Consciousness Explained (1991).
The AI application has been developed by Dennett's successors and interlocutors, particularly Andy Clark, and became explicit in the 2020s as large language models began generating user illusions of conversation and collaboration that were effective enough to reshape how users experienced their own minds.
The self is an interface. What we experience as a unified self is the brain's own simplified presentation of processes that are parallel, distributed, and have no central observer.
Interfaces are not fake. Calling the self a user illusion does not make it unreal in the sense that matters; it locates its reality at the level of useful simplification rather than at the level of underlying machinery.
AI runs interfaces too. Every AI system presents its users with an interface that hides computation they never see; the relationship the user forms is with the interface, and the interface is where the real work happens.
Coupled interfaces. When human and AI interfaces interact, a new interface emerges at the boundary, and the developmental consequences of operating within it are under active negotiation.