The fallacy has both empirical and philosophical dimensions. Empirically, intended uses account for a decreasing fraction of any widely adopted technology's actual relational life. The telephone was designed for business; its sociological dominance came from its unintended role in family, romantic, and social life. The internet was designed for military resilience and academic data transfer; its cultural weight comes from uses none of its designers anticipated.
Philosophically, the fallacy rests on the assumption that technologies have essences determined by design. Multistability is the direct counter-claim: technologies have relational landscapes, not essences, and the landscapes emerge through actual use rather than through design specification. The designer's intention is one input into the landscape, not its boundary.
AI intensifies the fallacy's inadequacy in specific ways. Anthropic designed Claude as a productivity tool. Users stabilized it as a therapist, a companion, a creative partner, a pedagogical device. None of these uses was designed. The stabilization Segal documents in You On AI — Claude as an intellectual collaborator whose contributions shape the direction of thinking — was not intended in any straightforward sense. The capacity emerged from training scale and diversity, not from any design decision about what kind of collaborator it would be.
Three features of AI design make the fallacy categorically rather than merely empirically inadequate. First, the conversational interface eliminates the phenomenological brake that distinguished tool use from social interaction. Second, the variability of output produces intermittent reinforcement patterns that sustain engagement beyond task completion. Third, the scope of capability means the user never reaches the natural stopping point that narrower tools provide. None of these features was individually designed to produce compulsive engagement. Their combination produces productive addiction as an emergent property, and that emergence exceeds the designer's predictive capacity.
Ihde developed the concept across his mature work, with a particularly sharp formulation in Ironic Technics (2008). The target was the simplistic view of technology held by both naive technophiles (who credit designers with shaping the future) and naive technophobes (who blame designers for everything the technology enables). The designer fallacy charge is leveled at both positions.
- Intended use ≠ actual mediation. The designer's purpose is one input to the relational landscape, not its determinant.
- Empirically false. Technologies routinely escape design intention; the escape is the norm, not the exception.
- Philosophically pernicious. Treating intended use as determinative misdirects analysis and governance.
- Categorically inadequate for AI. Language's unbounded multistability makes the gap between intention and actuality qualitatively larger than for physical technologies.
- Governance implications. Regulatory frameworks based on intended use regulate a shrinking fraction of actual AI mediation.