Multistability is Ihde's rejection of both technological determinism and pure social constructivism. A technology does not have one correct use dictated by its design, nor is it infinitely pliable in the face of cultural interpretation. It has a relational landscape — a range of configurations that its material properties make possible and that specific contexts of use actualize. The hammer can be a tool, a weapon, a sculpture, a doorstop; it cannot be a telescope. Multistability is the framework for thinking about this bounded variability. Applied to AI, the concept undergoes a transformation so extreme it tests its own coherence: a technology whose primary medium is language inherits language's effectively unbounded multistability, and the designer's intention becomes a diminishing fraction of the technology's actual relational life.
As Ihde originally deployed it, the concept operates across encounters. The same hammer is embodiment for the carpenter, alterity for the toddler, a hermeneutic artifact for the museum curator. Different users in different contexts produce different stabilizations. The analytical task is to map which stabilizations a given technology supports and what each reveals about the technology's relational character.
AI multistability operates within encounters as well. The same builder in the same session stabilizes Claude as embodiment (writing through it), hermeneutic text (reading its output), quasi-other (addressing it), and background (working with it as it recedes from attention). This within-user multistability is a structural novelty. The oscillation between modes is not a sequence of different uses but a single continuous engagement whose relational character refuses to settle.
Traditional multistability's boundedness came from material constraints. The hammer's mass, shape, and hardness limit its possible uses. AI's primary medium is natural language, and language is the most multistable artifact humans have ever produced — combinatorially explosive, semantically unbounded, applicable to virtually any purpose. A system that processes and generates language inherits this multistability and makes it operational at scale.
The consequence for governance is significant. Regulatory frameworks that address AI by intended use regulate only a fraction of the technology's actual relational life. The designer fallacy — the assumption that intended use determines actual mediation — was always an error; with AI it becomes a categorical inadequacy. Users stabilize Claude as therapist, companion, tutor, strategist, debate opponent, friend. None of these was designed. All of them produce real mediations with real effects on real people.
The concept developed across Ihde's mature work, receiving its most systematic articulation in Postphenomenology and Technoscience (2009). It draws methodologically on phenomenological variation — Husserl's practice of imagining a phenomenon under different conditions to discover its invariant features — but applies the method to technologies rather than to essences.
Bounded variability. Technologies support multiple stabilizations but not infinite ones; material properties set limits.
Across and within. AI extends multistability from cross-user variation to within-user oscillation.
Against essentialism. There is no single correct meaning or use of any technology.
Against voluntarism. Cultural meaning-making cannot make technologies do anything; materiality constrains.
Empirical not aprioristic. The stabilizations a technology supports must be discovered through variational analysis, not deduced from design.
The AI case tests whether 'bounded' multistability retains analytical purchase when the bounds approach the unbounded. If language's combinatorial space is the range of stabilizations AI can support, the concept nears the triviality Ihde worried about — the claim that technologies can mean anything, which is uselessly true.