Timothy Morton coined the term hyperobject to name entities so vast they defy human perception as wholes; his 2013 book Hyperobjects: Philosophy and Ecology after the End of the World gives the concept its fullest treatment. Examples include global warming, nuclear radiation, the Pacific garbage patch, and, as the 'Timothy Morton — On AI' simulation argues, the AI transformation itself. A hyperobject cannot be pointed to directly; one can only point to its local manifestations. You are always already inside it. The boundary between observer and observed dissolves.
Morton identifies five properties that define hyperobjects. Viscosity: they stick to everything they contact, restructuring entities irreversibly. Nonlocality: they are distributed across many places simultaneously, manifesting differently at each location. Temporal undulation: they operate on timescales radically mismatched to human experiential time. Phasing: they appear and disappear from perception without regularity. Interobjectivity: they are constituted by relationships with other entities rather than existing independently. These properties interact to produce the characteristic experience of hyperobjects — ontological disorientation, the collapse of the subject-object distinction, and what Morton calls 'the end of the world' in the ontological (not apocalyptic) sense.
The Great Pacific Garbage Patch exemplifies the hyperobject. Spanning 1.6 million square kilometers, it cannot be photographed from space or perceived as a totality by any observer. A ship could sail through its densest region without the captain noticing anything unusual. Researchers sample it, model it, track its currents — but the entity itself exceeds perception. Similarly, climate change manifests locally as weather events (a hurricane, a drought) that are ambiguous as evidence of the larger entity. No single event is climate change, yet climate change is present in every event as the hyperobject of which each is a local manifestation.
Applied to AI, the hyperobject framework reveals why the transformation resists intervention. The AI hyperobject is not the chatbot, not the coding assistant, not any specific application. It is the total reconfiguration of human cognitive culture by distributed computational intelligence operating simultaneously in data centers, smartphones, classrooms, hospitals, creative workflows, and the cognitive habits of every knowledge worker. The entity is massively distributed, viscous (once a mind is restructured by smooth interfaces, expectations adhere), nonlocal (there is no single place where AI 'is'), temporally undulant (effects accumulate on biographical and generational timescales that fall outside human perceptual resolution), and interobjective (humans and AI systems now constitute each other through their relationships).
Morton's concept emerged from object-oriented ontology, the philosophical school holding that objects are withdrawn — they always exceed the relations and perceptions through which we access them. Hyperobjects amplify this withdrawal dramatically. The gap between the entity and any observer's access to it becomes so vast that traditional epistemology (the knowing subject standing outside the known object) collapses. What remains is dark ecological awareness — thinking that proceeds without mastery, action that occurs without the Enlightenment reassurance that understanding leads to control.
Morton developed the hyperobject concept while wrestling with how to think about climate change philosophically. In Hyperobjects (2013), he observed that environmental philosophy had long operated within a nature/culture binary that ecological crisis had rendered obsolete. Climate change is not 'out there' in nature, separate from human culture. It is everywhere — in every breath, every product, every policy decision. Traditional environmental aesthetics, which Morton had critiqued in Ecology Without Nature (2007), treated nature as a stable background against which human action occurred. Hyperobjects dissolve that background. There is no 'world' as a stable context. There is only the hyperobject, and we are inside it.
The concept drew on Graham Harman's object-oriented ontology, which holds that objects are fundamentally withdrawn from access, and on Bruno Latour's actor-network theory, which insists on the radical interconnectedness of entities. Morton synthesized these into a framework adequate to entities operating at planetary and geological scales. The hyperobject is Morton's answer to the question: How do you think about an entity that you are constitutively inside, that operates on timescales your nervous system cannot process, and that cannot be perceived as a whole by any observer?
Five properties define hyperobjects: viscosity, nonlocality, temporal undulation, phasing, and interobjectivity. Together they produce ontological disorientation.
Hyperobjects dissolve the subject-object distinction. The observer is inside the observed, constituted by it, unable to achieve the external position traditional epistemology assumes.
AI is a hyperobject. The transformation of cognitive culture by distributed computational intelligence satisfies Morton's criteria as fully as climate change does.
Thinking the hyperobject transforms the thinker. The attempt to think at the entity's scale changes the quality of attention and care the thinker brings to local action.
Coexistence, not mastery. Hyperobjects cannot be solved or managed; they can only be inhabited with awareness, humility, and ongoing practices of care.
Critics argue Morton's framework produces political passivity — if there is no outside and no mastery, what grounds action? Defenders counter that hyperobject-awareness produces better action by stripping away the illusions of control that lead to hubris and failure. The debate over whether AI should be framed as a hyperobject turns on whether the framework's disorientation is productive (forcing deeper engagement) or paralyzing (eliminating grounds for intervention).