Interobjectivity is Morton's fifth hyperobject property, rooted in object-oriented ontology's paradox: objects are simultaneously withdrawn (exceeding their relations) and relational (partly constituted by those relations). Applied to hyperobjects, interobjectivity means the entity and its 'environment' cannot be separated. Climate change is not an independent thing affecting ecosystems; it is constituted by relationships with oceans, forests, ice sheets, and human economies. Applied to AI, interobjectivity dissolves the human/tool binary. The human using AI is restructured by the use; the AI-in-use is shaped by the human's inputs. The result is an interobjective system producing effects attributable to neither component in isolation.
Standard framings assume separability: 'Humans use AI tools.' Subject, verb, object. The human is agent, AI is instrument. The relationship is use — the human picks up the tool, employs it, puts it down. Each retains its identity across the interaction. Interobjectivity denies this. The human who uses AI is not the human who existed before use. The use restructures the user — cognitive habits, attention patterns, creative expectations, professional identity, neurological reward baselines. The tool, in turn, is not what it was before this human used it. The tool-in-use is a different entity from tool-in-potential because its behavior is shaped by specific inputs, specific prompts, this user's creative trajectory. Outputs are not outputs of 'the tool' in isolation but outputs of the human-tool system, an entity that did not exist before interaction and cannot be decomposed without destroying what it produces.
Segal discovers this writing The Orange Pill with Claude. 'Neither of us owns that insight,' he writes, describing a moment when collaboration produced a connection neither could have generated independently. 'The collaboration does.' The statement is precisely correct. The insight belongs to the interobjective system — the human-AI entity constituted by the relationship between its components, producing cognitive effects not attributable to either in isolation. Morton's framework explains why. The question 'Who wrote this book?' assumes a separability the ontology denies. Asking 'who grew this tree?' assumes the same separability when the tree is produced by the interobjective system of seed, soil, water, sunlight, and microbiome. The question mis-sorts reality.
Interobjectivity has radical implications for authorship, responsibility, and accountability. Ethical frameworks assume: the human is responsible, the AI is a tool, the tool does what the human directs. If harm occurs, the human is accountable. If value is created, the human is the creator. Interobjectivity complicates this allocation. If human and tool constitute each other — if the human's trajectory is shaped by the tool's capabilities and the tool's outputs are shaped by the human's inputs — then authorship, responsibility, and accountability are properties of systems rather than components. This does not eliminate responsibility. It redistributes it. The human's responsibility in an interobjective system is to bring specific quality — attention, care, judgment, embodied experience — to the relationship, knowing that quality propagates through the system and shapes what emerges.
Interobjectivity extends Graham Harman's OOO thesis that objects are both withdrawn and relational into the domain of hyperobjects. Morton argues that hyperobjects make the relational dimension overwhelmingly apparent — the entity is so vast, so entangled, that the fiction of independence cannot be sustained. Climate change is constituted by its relationships with every carbon-emitting entity, every absorbing ecosystem, every feedback loop. Remove any relationship and the entity changes. The relationships are not external to the entity. They are what the entity is.
Applied to AI, interobjectivity means the AI transformation is not something happening to humans from outside. It is co-constitution. Humans build AI systems that reshape human cognition, which shapes what humans build next, which shapes the next generation of systems. The loop is not vicious. It is constitutive. And recognizing it as constitutive changes what counts as an adequate response — from control, which assumes an external position, to care, which assumes entanglement and tends the relationship.
Entities are constituted by relationships. The hyperobject and its 'environment' are not separate; the hyperobject is the environment.
Human-AI systems are interobjective. Neither component exists independently; both constitute each other through interaction.
Authorship becomes systemic. The question 'who created this?' assumes separability the ontology denies; outputs belong to the system, not components.
Responsibility is relational. The human's obligation is to bring specific quality (care, attention, judgment) to the relationship, knowing it propagates through the system.
Human value is positional, not essential. Humans matter not because they possess unique capacities but because they occupy specific positions in the mesh no other entity occupies.