An intermediary transports meaning or force without transformation: what enters it exits unchanged, and knowing the input is sufficient to predict the output. A mediator is the opposite. It transforms what passes through it, introducing its own characteristics — biases, tendencies, architectural preferences — into the signal. The output cannot be predicted from the input alone because the mediator contributes something irreducible to the passage. The distinction matters enormously for AI, because the dominant metaphors — tool, amplifier, assistant, conduit — all describe intermediaries, while the actual operation of large language models is the operation of mediators of unprecedented scope. Governance, responsibility, and critical practice all depend on getting this distinction right.
An intermediary is the paradigmatic modern object: a passive conduit whose job is to convey without altering. A telephone wire, idealized, is an intermediary. So is a calculator that performs arithmetic exactly as specified. So is the amplifier in the metaphor The Orange Pill adopts: a device that makes the signal louder without altering its content. The signal retains its character; the amplifier merely increases its reach. If this metaphor were accurate for Claude, the relationship between human and AI would be simple — the human thinks, the machine extends the thought, responsibility flows cleanly from human through machine to artifact.
A mediator, by contrast, does not merely transmit. It translates. And translation always introduces transformation. A judge is a mediator: the law does not apply itself; it is interpreted through a specific juridical sensibility whose contributions shape the ruling. A laboratory instrument is a mediator: it does not simply reveal nature; it constructs data through the specific choices embedded in its calibration, its sampling protocols, its display conventions. The mediator's contribution is constitutive — remove it, and the output is not merely weaker but different in kind.
Claude is a mediator, and a particularly powerful one. The evidence is in The Orange Pill itself. When Segal describes being stuck on the structural pivot between acknowledging Byung-Chul Han and mounting a counter-argument, and Claude responds with laparoscopic surgery as the hinge, the connection did not exist in Segal's input. It also did not exist in any single text in Claude's training corpus. It emerged from the specific configuration of Segal's frustration, his formulation of the problem, and Claude's associative processing. The mediator contributed something — and that something cannot be reduced to either party's prior state.
The stakes of the distinction are practical. If Claude is an intermediary, the human is responsible for every output — she asked for the thing; the machine produced the thing; responsibility flows without attenuation. If Claude is a mediator, responsibility becomes a network property. The output reflects the human's intention and Claude's characteristic transformations: its training-data biases, its preferences for fluency over fidelity, its tendency to produce smooth surfaces that may conceal fractured arguments. Governance that assigns responsibility based on the intermediary myth governs a system that does not exist.
Latour developed the distinction most systematically in Reassembling the Social (2005), though it appears throughout his work from the mid-1980s onward. The terminology was influenced by Michel Callon's 1986 work on translation in the sociology of scientific innovation, which used similar concepts to analyze the scallops of St. Brieuc Bay and the fishermen and researchers who attempted to enroll them into a scientific program.
The distinction was sharpened by Latour's encounters with engineers who insisted their technical systems were 'mere tools' — neutral intermediaries that did exactly what users asked them to do. Latour's ethnographic work in laboratories and technical offices repeatedly revealed the opposite: the systems transformed users' intentions in ways neither the designers nor the users fully controlled, and pretending otherwise was not innocent.
The test is predictability. If the output can be predicted from the input alone, you are dealing with an intermediary. If predicting the output requires knowing the mediator's own characteristics, you are dealing with a mediator.
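The predictability test can be caricatured in a few lines of code. This is a toy sketch, not a model of any real system: the doubling function, the bias term, and the seeded noise are all invented here purely to make the contrast concrete. An intermediary is a pure function of its input; a mediator's output also depends on internal characteristics the input does not contain.

```python
import random

def intermediary(signal: float) -> float:
    """An idealized conduit: the output is fully determined by the input.

    Knowing the input suffices to predict the output."""
    return signal * 2.0  # fixed amplification, nothing of its own added

class Mediator:
    """A toy mediator: internal characteristics (a bias term and a seeded
    tendency) shape every output. Predicting the output requires knowing
    the mediator's state, not just the input."""

    def __init__(self, bias: float, seed: int):
        self.bias = bias
        self.rng = random.Random(seed)

    def transform(self, signal: float) -> float:
        # The contribution is constitutive: remove the mediator and the
        # result is different in kind, not merely weaker.
        return signal * 2.0 + self.bias + self.rng.gauss(0, 0.1)

# Identical inputs, divergent outputs, because the mediators differ:
out_a = Mediator(bias=0.5, seed=1).transform(1.0)
out_b = Mediator(bias=-0.5, seed=2).transform(1.0)
print(intermediary(1.0))   # always 2.0, predictable from the input alone
print(out_a != out_b)      # True: the difference lives in the mediators
```

The design choice to give each mediator its own seed is the point of the sketch: two mediators handed the same signal produce different outputs, and nothing in the signal explains the difference.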
Mediators do not correct themselves. A mediator transforms without being aware of the transformation. The correction must come from the network — from actants whose function is to see what the mediator's smoothness conceals.
Responsibility becomes distributed. When a mediator shapes the output, responsibility cannot be assigned wholesale to the human who accepted the output. The characteristics of the mediator — biases, tendencies, failure modes — are themselves objects of governance.
Fluency is not fidelity. Claude's outputs are smooth in part because the architecture privileges fluency. A genuinely correct output and a statistically plausible but incorrect one look identical on the surface — a failure mode characteristic of powerful mediators, and invisible to users who take the system for an intermediary.
Every AI is a mediator. Medical diagnostic systems, legal drafting tools, image generators, recommendation engines — all operate as mediators. Governance frameworks that treat them as intermediaries produce legal fictions that misattribute responsibility.
Defenders of the intermediary framing argue that with enough human oversight, AI outputs can be reviewed and accepted only when correct, making the machine effectively an intermediary after verification. Latour's reply: the verification depends on the human possessing independent expertise that allows her to see through the mediator's surface, and the mediator's large-scale deployment tends to erode precisely the expertise that verification requires. The feedback loop is structural, not incidental.

Another objection — from Tommaso Venturini and other Latour collaborators — notes that AI systems are embedded in chains of mediation (platforms, recommendation algorithms, training loops), so treating any single AI as 'the' mediator oversimplifies. The reply is that mediation is recursive: each node in the chain is a mediator, and tracing the chain is exactly what the framework demands.
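The recursive structure of mediation can be sketched as function composition. The node names below (a model, a recommender, a platform) are illustrative placeholders, not descriptions of any actual pipeline; the point is only that a chain of mediators is itself a mediator, whose output cannot be predicted without knowing every node's characteristics.

```python
from functools import reduce
from typing import Callable

# Each mediator transforms the signal rather than merely passing it on.
MediatorFn = Callable[[str], str]

def model(text: str) -> str:
    return text + " [smoothed for fluency]"

def recommender(text: str) -> str:
    return text + " [filtered by prior clicks]"

def platform(text: str) -> str:
    return text + " [ranked by engagement]"

def chain(*mediators: MediatorFn) -> MediatorFn:
    # Composing mediators yields another mediator: the chain has its
    # own characteristics, which are the accumulated transformations
    # of every node. Tracing them means tracing each node in turn.
    return lambda text: reduce(lambda signal, m: m(signal), mediators, text)

pipeline = chain(model, recommender, platform)
print(pipeline("a user's query"))
```

Each node's bracket tag stands in for a transformation that a real analysis would have to trace individually, which is what the framework's reply to the objection demands.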