The Myth of the Human Agent — Orange Pill Wiki
CONCEPT

The Myth of the Human Agent

The foundational fiction of modern thought: that agency originates in sovereign human subjects and radiates outward into passive instruments. It is the story the AI moment makes empirically indefensible, and the one the amplifier metaphor tries to preserve.

Modern thought rests on a distinction so pervasive that most people experience it as reality itself: subjects act, objects are acted upon. Humans think; tools compute. Humans decide; instruments execute. The entire vocabulary of modern agency — intention, creativity, authorship, responsibility, autonomy — is built on this separation. The AI moment has made the separation indefensible, because the actual distribution of agency in AI-assisted networks is not organized according to the myth. Recognizing this is not anti-humanism. It is the prerequisite for governing the networks AI has produced.

In the AI Story


The myth is not wrong about human uniqueness. Humans are conscious in ways non-humans are not, bear stakes in ways instruments cannot, and possess moral standing that rocks and algorithms lack. These are real differences, and actor-network theory does not deny them. The myth is wrong about something more specific: that agency — the capacity to make a difference in the world — is concentrated in human subjects and merely transmitted through non-human instruments. This concentration claim is empirically false for every concrete network that has been carefully traced.

The Orange Pill oscillates between glimpsing the distributed nature of agency and retreating to the sovereign human subject. Segal comes closest to recognition when he writes that certain collaborative insights 'belonged to neither of us — they belonged to the collaboration, to the space between us.' This is an actor-network statement, even if not framed as one. It acknowledges that the insight was a network product rather than a human product with machine assistance. But the book cannot sustain this recognition. It returns repeatedly to the language of the myth: the human as visionary, the AI as instrument, human judgment as essential, machine capability as amplifier. The oscillation is not personal failing; it is the gravitational pull of four centuries of philosophical investment in the sovereign subject.

The hammer argument — made long before AI, using the humblest of technologies — illustrates why the myth fails. The hammer does not merely transmit the carpenter's intention. The hammer shapes the intention. With a hammer in hand, the carpenter thinks about what a hammer can do. Her plans are formed in relation to the tool's capabilities and limitations. The tool participates in the formation of the intention, not by controlling the carpenter but by entering her cognitive process as a parameter that shapes what she attempts. The hammer is an actant. Extend the analysis to Claude, and the myth's inability to describe the actual network becomes obvious.

The practical stakes are high. If the human is the sole locus of agency, the AI transformation is the arrival of a better tool, and the appropriate response is individual skills training — learn to prompt well, evaluate output carefully, maintain oversight. These prescriptions are not wrong, but they are incomplete because they address only one actant and treat the rest as scenery. If agency is distributed, the transformation is structural. The engineer in Trivandrum who built a complete frontend feature in two days after eight years confined to backend systems was not amplified. She was reconstituted as a different kind of actor in a different kind of network. The myth cannot describe her transformation, and the governance prescriptions the myth generates cannot address it.

Origin

The myth is not a specific theory but a cluster of commitments that runs through modern philosophy from Descartes onward. Its sharpest statement is in Kant's distinction between autonomous rational subjects, who bear moral status as ends in themselves, and objects, which are merely means. The commitment shapes liberal political theory, analytic philosophy of action, economic theory of choice, and ethical theories of responsibility.

Latour's critique developed in dialogue with phenomenologists (Heidegger, Merleau-Ponty) who had already complicated the subject/object distinction, and with Michel Serres, whose work on hybrid collectives preceded and influenced ANT. But Latour's move was more radical than the phenomenological one: rather than enriching the human subject's relationship to its tools, he proposed treating the sovereign human subject itself as a residue of rhetorical operations that could not survive empirical tracing of actual networks.

Key Ideas

Human uniqueness without human concentration. Humans are special in morally important ways, but agency is not concentrated in them; it is distributed across networks of which they are participants.

Tools shape intention. The carpenter's hammer, the writer's word processor, the builder's Claude — each enters the cognitive process as a parameter that shapes what the human attempts, not merely a conduit for pre-formed intentions.

Decisions as network outcomes. 'The human decided' is shorthand for a process involving deadline pressure, emotional investment, aesthetic preferences, available alternatives, and embodied expertise. Calling this a sovereign act is not description; it is myth.

Reconstitution, not amplification. The AI-augmented worker is not the same worker plus a tool. She is a different kind of actor in a different kind of network, with different capabilities, different limitations, and different relationships.

Governance consequences. The myth produces governance that addresses only the human actant. The reconstituted network requires governance that addresses its actual topology, its concentrations of power, and its distributed characteristics.

Debates & Critiques

Defenders of the myth argue that dissolving the sovereign subject erodes the foundation of moral and legal responsibility. If the human is just one node in a network, how can she be held accountable for outcomes? Latour's reply distinguishes the question of causal distribution (which is a matter of tracing the network) from the question of moral responsibility (which is a matter of allocating accountability across the traced network). The tracing reveals that responsibility is distributed — to humans, to organizations that build AI systems, to institutions that deploy them, to governance structures that oversee them. This does not eliminate human responsibility; it contextualizes it within the broader distribution of agency that the network actually exhibits. Responsibility frameworks built on the myth produce legal fictions that misattribute accountability.

Further reading

  1. Bruno Latour, Pandora's Hope (Harvard University Press, 1999)
  2. Bruno Latour, Reassembling the Social (Oxford University Press, 2005)
  3. Michel Serres, The Parasite (University of Minnesota Press, 2007)
  4. Don Ihde, Technology and the Lifeworld (Indiana University Press, 1990)
  5. Madeleine Akrich, 'The De-Scription of Technical Objects' in Shaping Technology / Building Society (MIT Press, 1992)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.