The principle of ontological design holds that a tool does not merely help its user accomplish a task. A tool constructs a world — it determines what objects exist in the user's environment, what relationships obtain between those objects, what actions are possible, and what outcomes are desirable. The hammer constructs a world of nails. The spreadsheet constructs a world of rows and columns. The large language model constructs a world in which knowledge is propositional, language is the primary medium of intelligence, and the adequate response to any question is a fluent, confident, text-based answer. The recognition that design is ontological rather than merely functional is what converts the question of AI development from a technical problem into a political one.
The phrase draws on a philosophical lineage running from Heidegger through Hubert Dreyfus, Terry Winograd, and Fernando Flores. Winograd and Flores's Understanding Computers and Cognition (1986) was an early formulation: computers, like all technologies, do not neutrally represent reality but bring forth specific domains of action. Anne-Marie Willis extended the insight in her 2006 article 'Ontological Designing,' giving the principle its name. Escobar's contribution has been to apply the framework to the full range of technologies — from development infrastructure to digital platforms to AI — and to connect it with his work on the pluriverse.
The principle has immediate consequences for AI. When a large language model is trained on a specific corpus, it does not merely learn facts; it learns an ontology — a set of assumptions about what kinds of things exist, how they relate, and what counts as an answer to a question. The model then exports this ontology to every user who interacts with it, constructing their cognitive environment in the image of the training data. Epistemic violence in the AI context is not deliberate suppression of alternative ontologies but the structural imposition of the trained ontology as the default framework through which the world appears.
The principle also applies to The Orange Pill's central metaphors — the river, the beaver, the fishbowl. Each is a design choice that constructs a world. The river metaphor naturalizes the technology as an environmental force. The beaver metaphor individualizes the response. The fishbowl acknowledges perspective but treats different fishbowls as containing different perspectives on the same world rather than different worlds. Escobar's framework insists that the choice of metaphor is itself an ontological design decision with political consequences.
The principle becomes especially important in the collaborative authorship of texts. A book written with Claude, as The Orange Pill acknowledges it was, is not simply a book produced faster or with external assistance. It is a book whose ontology has been partially shaped by the tool's architecture — its preference for fluent prose, its tendency toward confident assertion, its structural affinity for Western rhetorical patterns. The collaboration does not produce a fusion; it produces a specific kind of text, selected by the tool's affordances from the space of possible texts the human author might have produced.
The concept was developed across Escobar's work on design but received its most systematic articulation in Designs for the Pluriverse (2018). The book drew on Escobar's engagement with the design studies tradition, particularly the work of Tony Fry and Cameron Tonkinwise at the University of Technology Sydney.
It traces back to Heidegger's analysis of technology as enframing (Gestell) — a mode of revealing that determines what appears as real — and to Winograd and Flores's application of that analysis to computing in the 1980s.
Design is world-making. Tools do not serve pre-existing worlds but construct them, determining what objects, relationships, and actions are possible.
Ontology travels with technology. When a tool is deployed in a new context, it exports the ontology of its design context, often displacing the ontologies of the communities that adopt it.
Training data as ontological commitment. The corpus on which an AI model is trained encodes a specific set of assumptions about what exists and how it relates.
Metaphors as design. The choice of metaphor — river, beaver, amplifier — is itself an ontological design decision with political consequences.
Responsibility of designers. Design is not a neutral craft but a political act, and designers bear responsibility for the worlds their tools construct.