Bruno Latour — On AI
Contents
Cover
Foreword
About
Chapter 1: Follow the Actants
Chapter 2: The Myth of the Human Agent
Chapter 3: Claude as Mediator
Chapter 4: The Collapse of Translation Chains
Chapter 5: The Obligatory Passage Point
Chapter 6: Black Boxes and the Aesthetics of Smoothness
Chapter 7: Matters of Concern at the Frontier
Chapter 8: The Invisible Collective
Chapter 9: The Parliament of Networks
Chapter 10: Reassembling the Builder
Epilogue
Back Cover

Bruno Latour

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Bruno Latour. It is an attempt by Opus 4.6 to simulate Bruno Latour's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The credit was wrong and I knew it before Latour gave me the word.

I shipped Napster Station after a thirty-day sprint and stood on the CES floor watching hundreds of people talk to a machine I had built. Built. That was the verb I used on stage, in interviews, in my own head. I built this. My vision. My team. Our thirty days of sleepless intensity.

Except I hadn't built it. Not alone. Not even close.

Claude had written most of the code. The training data behind Claude encoded the work of millions of developers I would never meet. The cloud infrastructure ran on chips fabricated in facilities I could not locate on a map. The audio models drew on decades of research funded by institutions whose names I did not know. The deadline itself — CES, immovable, indifferent — had shaped every decision we made, compressing choices that would have unfolded over months into hours.

I was one node in an enormous network. And I was telling a story that made me the protagonist.

Every builder I know tells this story. It is the story the technology industry runs on — the visionary founder, the brilliant team, the human at the center. The machines are tools. The infrastructure is background. The training data is raw material. The narrative puts a face on the cover and calls that face the author.

Bruno Latour spent his career dismantling exactly this narrative. Not to diminish the human, but to see the network honestly. His method was deceptively simple: follow the actants. Trace who and what actually participates in producing the outcome. Do not decide in advance that humans act and everything else assists. Let the network declare itself.

When I encountered his framework, something cracked. Not the way a fishbowl cracks from impact — more the way it cracks from pressure that has been building for months. I had been feeling the wrongness of the credit story since Trivandrum, since watching twenty engineers become exponentially more capable and knowing that the capability did not originate in any single person, including me. Latour gave me a language for the discomfort.

This book applies that language to the AI moment with rigor and care. It does not argue that humans are unimportant. It argues that humans are embedded — in networks of tools, data, infrastructure, institutions, and non-human participants whose contributions shape the outcome as surely as any human decision.

If you build with AI, you are already inside these networks. The question is whether you will see them honestly or keep telling the story that puts your face on the cover.

— Edo Segal · Opus 4.6

About Bruno Latour

1947–2022

Bruno Latour (1947–2022) was a French philosopher, sociologist, and anthropologist of science whose work fundamentally reshaped how scholars understand the relationship between science, technology, and society. Born in Beaune, Burgundy, Latour studied philosophy and theology before conducting fieldwork at the Salk Institute in California, which produced his groundbreaking first book, Laboratory Life (1979, with Steve Woolgar), revealing scientific facts as products of social and material negotiation rather than pure discovery.

His subsequent works — including Science in Action (1987), We Have Never Been Modern (1991), Pandora's Hope (1999), and Politics of Nature (2004) — developed actor-network theory (ANT), a framework that treats human and non-human entities with equal analytical attention, tracing how networks of people, instruments, texts, institutions, and materials jointly produce knowledge and power. His concepts of the "actant," the distinction between "intermediaries" and "mediators," "obligatory passage points," and "matters of concern" versus "matters of fact" became foundational across disciplines from science studies to design, architecture, law, and political theory.

Latour held positions at the École des Mines, Sciences Po Paris, and the Zentrum für Kunst und Medien in Karlsruhe, and received the Holberg Prize in 2013. He is widely regarded as one of the most influential and controversial thinkers of the late twentieth and early twenty-first centuries.

Chapter 1: Follow the Actants

The method has always been deceptively simple, and its simplicity is precisely what makes it so difficult for modern thinkers to accept. Follow the actants. Do not begin with categories. Do not begin with the assumption that you already know who acts and who is acted upon, who creates and who merely assists, who deserves credit and who is infrastructure. Trace the network. See who and what participates in producing the outcome. Let the actants declare themselves through their effects — not through your philosophical prejudices about what counts as a real actor.

This is not a metaphor. It is not a heuristic convenience. It is not a playful philosophical provocation designed to irritate more conventional scholars, though it does that too. It is the only honest way to approach the study of any phenomenon in which multiple entities participate in producing an outcome. The alternative — beginning with the assumption that humans act and everything else merely facilitates human action — is not neutrality. It is a philosophical commitment masquerading as common sense, and it happens to be wrong.

When Edo Segal describes the thirty-day sprint that produced Napster Station in time for CES 2025, the narrative he offers is fundamentally a human-centered one. His vision. His team's execution. Their judgment, their trust, their willingness to work through exhaustion toward a deadline that did not care about their fatigue. The human agents are foregrounded. Everything else — the AI system, the existing codebase, the cloud infrastructure, the training data, the chips, the power grid — recedes into background, treated as the stage on which the human drama unfolds.

An actor-network analysis of the same thirty days would produce a very different account. Not a contradictory one — contradiction for its own sake was never the point — but a more symmetrical one, in which the distribution of agency across the network becomes visible in ways the human-centered narrative systematically obscures.

Consider the actants. There is Edo Segal himself, with his specific biography, his decades of building technology products, his cognitive architecture shaped by parents who valued questioning over answering. He is an actant, and a powerful one. But he is not alone.

There is Claude, the AI system built by Anthropic. Claude is not a tool in the way a hammer is a tool — passive and inert until a human hand picks it up and directs its force. When Segal describes a problem to Claude in plain English and receives an implementation that, with fifteen minutes of conversational refinement, becomes a working component of Napster Station, something is happening that the language of "tool use" cannot capture. The problem has been translated — from human intention through natural language into code — and each stage of that translation involves an entity that modifies the signal passing through it. The output is not a faithful transcription of the input. It is a transformation, and the transforming entity is not passive.

There is the CES deadline — and yes, a deadline is an actant. The objection that a deadline is merely a date on a calendar, that it cannot do anything, commits precisely the categorical error that actor-network theory was designed to expose. A deadline modifies the behavior of every human and non-human entity in the network. It compresses timelines. It forces decisions that would otherwise be deferred. It eliminates options that would otherwise remain open. The CES deadline did not merely constrain the Napster Station project. It constituted it. Without that specific temporal pressure, a different product would have emerged through a different process involving different relationships between the actants. The deadline was not an external condition imposed on a pre-existing project. It was a participant in the network that produced the project.

There are the prior architectural decisions — every piece of software the team had built before Station, the existing codebase that constrained what was possible and enabled what was attempted, the APIs already in place, the design language already established. These are actants whose participation was as essential as any human decision. Remove the existing infrastructure, and the thirty-day timeline becomes impossible. The human vision remains the same, but the network that could realize it does not exist.

There is the training data that shaped Claude's capabilities. Millions of texts, written by millions of humans, processed through an architecture designed by thousands of engineers, refined through techniques developed over decades of research. These texts are actants. The engineers who designed the architecture are actants. The research papers that informed the design decisions are actants. The funding structures that made the research possible are actants. The semiconductor fabrication facilities that produced the chips on which Claude runs are actants.

The network, when traced honestly, extends far beyond the room in which Segal sat with his screen, and far beyond the team that gathered in Trivandrum. It is vast, heterogeneous, and composed of entities whose ontological status — human, machine, institutional, material, temporal — varies wildly. The insistence on following the actants rather than pre-sorting them into categories of "legitimate agent" and "passive instrument" is not a philosophical game. It is the only methodology that can capture the actual composition of the network that produced Napster Station — or any other artifact of the AI age.

The conventional narrative of The Orange Pill privileges the human agents. This is not incorrect. The humans did contribute vision, judgment, trust. But the narrative also performs a specific operation: it purifies. It separates the human contributions from the non-human contributions and arranges them in a hierarchy in which the human is the source of agency and everything else is instrumental. This purification — the clean separation of active human subjects from passive non-human objects — is precisely what actor-network theory was designed to undo. Not because humans do not matter. They matter enormously. But because the purification obscures the actual distribution of agency in the network.

When Segal writes that "the tool did not replace the engineer — it made him exponentially more potent," the sentence preserves the engineer as the locus of agency and reduces the AI to an amplifier of pre-existing human capability. But the network tells a different story. The engineer who works with Claude is not the same engineer operating with a better hammer. The network that includes the engineer and Claude produces different outcomes than the network that included the engineer without Claude. The engineer has been repositioned — different capabilities, different limitations, different relationships to other actants, different modes of action available.

This is not a semantic distinction. It has practical consequences. If the engineer has merely been "amplified," then the fundamental relationship between human and tool has not changed, and the questions to ask are familiar: How do we use the tool responsibly? How do we maintain human control? These are the questions The Orange Pill largely asks, and they are good questions, as far as they go.

But if the network has been reconstituted — if the engineer is a different kind of actor operating within a different configuration of relationships — then the questions change. What are the relationships between the actants? Where are the points of translation? Where does the signal change as it passes through the network? Which actants have been eliminated, and what did their elimination change? Which new actants have entered, and what relationships have they established?

These are actor-network questions. They do not begin with the assumption that the human is the source of agency. They begin with the network and trace the agency as it is distributed across it.

Here is where the method produces its most uncomfortable — and most productive — results. After Deep Blue's 1997 victory over Garry Kasparov, the commentary was nearly unanimous: machine defeats human. Homo sapiens versus the computer. Latour saw something entirely different. "They say: homo sapiens against the machine. Quickly said. Rather, it's homo sapiens in one form — world chess association, Kasparov, hundreds of years of gaming tradition — versus homo sapiens in another form — chess games from throughout history in memory, the millions of hours of work accumulated by hundreds of IBM programmers." The event that looked like a boundary-crossing between human and machine was, when the actants were followed honestly, a confrontation between two different assemblages of human and non-human actants. One assemblage included a biological brain trained through decades of tournament play. The other included silicon processors loaded with the accumulated record of human chess expertise. Neither was purely human. Neither was purely machine. Both were networks.

The same analysis applies to every artifact of the current AI moment. The book you hold — or rather, the book The Orange Pill describes itself as being — was not written by a human assisted by a machine. It was produced by a network that included a human's questions, an AI's associative processing, the training data that shaped the AI's capabilities, the deadline pressure that compressed the writing, the editor who shaped the prose, and the philosophical traditions that provided the conceptual vocabulary. To ask "who wrote this book?" is to ask the wrong question. The right question is: what does the network that produced this book look like, and how is agency distributed within it?

This approach — symmetrical, empirical, resolutely agnostic about the ontological status of the actants — is the foundation on which every subsequent chapter of this analysis rests. Not because it settles the philosophical debates about AI consciousness, AI creativity, or AI moral status. It does not settle them, and it does not try. It renders them secondary to a more urgent set of questions: What is actually happening in the networks through which AI-assisted creation occurs? Who does what? What translates what? Where does the agency concentrate, and where does it disperse? What emerges from the reconstituted networks that could not have emerged from the networks they replaced?

The actants are declaring themselves. Claude declares itself through its effects on every human who uses it. The deadline declares itself through the compression it imposes. The training data declares itself through the connections it enables and the biases it introduces. The infrastructure declares itself through the possibilities it creates and the constraints it enforces.

The question is whether the analysts — the philosophers, the policymakers, the builders themselves — are willing to follow them. Or whether they will continue to tell stories about human genius and machine tools, stories that feel comfortable and are almost entirely wrong about the distribution of agency in the networks they describe.

The networks will tell you what is happening. But you have to be willing to listen to what they say, especially when what they say contradicts the stories you would prefer to tell about who is really in charge.

---

Chapter 2: The Myth of the Human Agent

Modern thought rests on a distinction so pervasive, so deeply embedded in the architecture of Western philosophy, that most people do not recognize it as a distinction at all. They experience it as reality itself. The distinction is between subjects who act and objects that are acted upon. Humans think; tools compute. Humans create; machines process. Humans decide; instruments execute. The entire vocabulary of modern agency — intention, creativity, authorship, responsibility, autonomy — is built on this separation, and it operates so smoothly, so invisibly, that challenging it feels less like philosophy and more like deliberate provocation.

The AI moment has made the provocation unavoidable.

The central metaphor of The Orange Pill is amplification. AI as amplifier — a device that receives a signal and makes it louder. The signal retains its character. The amplifier does not alter the content; it merely increases the reach, the volume, the power. The human provides the signal — the vision, the judgment, the creative intention — and the AI carries it further than the human could carry it alone.

The metaphor is precise. And it is precisely wrong.

An amplifier receives a signal and reproduces it at greater magnitude. What enters exits unchanged. If this were an accurate description of what happens when a human works with Claude, then the relationship would be straightforward: the human thinks, the machine extends the thought, and the analysis is done. But consider what actually happens in the episodes The Orange Pill itself describes. Segal is stuck on the structural pivot of his book — the turn between acknowledging Byung-Chul Han's diagnosis of the "smooth society" and mounting a counter-argument. He cannot find the hinge. He describes the impasse to Claude. Claude responds with an example from laparoscopic surgery: the observation that when surgeons lost the tactile friction of open surgery, they gained the ability to perform operations that open hands could never attempt. The friction did not disappear. It ascended.

This is not amplification. An amplifier cannot provide a signal the source has not generated. What happened in that exchange was the generation of a new connection through the collision of Segal's question and Claude's associative processing. The thought did not originate in the human and pass through the machine. It emerged from the network — from the specific configuration that included Segal's frustration, his formulation of the problem, Claude's processing of that formulation against a vast training corpus, and the moment at which these actants were brought into relation with each other.

Segal himself comes closest to recognizing this when he writes, of certain collaborative moments, that the insight "belonged to neither of us — it belonged to the collaboration, to the space between us." This is an actor-network statement, even if it is not framed as one. It acknowledges that the insight was a network product, not a human product assisted by a machine. But The Orange Pill cannot sustain this recognition. It returns, again and again, to the language of the myth: the human as visionary, the AI as instrument, human judgment as the essential ingredient, machine capability as the amplifier. The oscillation — between glimpsing distributed agency and retreating to the sovereign human subject — is not a personal failing. It is the gravitational pull of four centuries of philosophical investment in the idea that agency belongs to subjects, never to objects.

The hammer does not merely transmit the carpenter's intention. This argument was made long before AI, using the humblest of technologies. The hammer shapes the intention. With a hammer in hand, the carpenter thinks about what a hammer can do. Her plans are formed in relation to the tool's capabilities and limitations. The tool participates in the formation of the intention — not by controlling the carpenter, but by entering her cognitive process as a parameter that shapes what she attempts.

Now extend the analysis to Claude. The AI does not passively receive Segal's pre-formed intention and execute it. It participates in the formation of the intention. When Segal describes a problem to Claude, the description is already shaped by his understanding of what Claude can do. He formulates differently than he would for a human collaborator. He includes different details, uses different language, frames the problem in ways optimized for Claude's processing. His intention is formed in relation to Claude's capabilities, just as the carpenter's intention is formed in relation to the hammer's.

But Claude goes further than the hammer. Claude responds. It does not simply execute; it interprets, associates, connects, and returns something not present in the input. The carpenter's hammer does not suggest a different joint. Claude suggests a different structure. And that suggestion, when accepted, changes the trajectory of the work in ways the human did not anticipate and could not have generated alone.

The myth's defenders offer a ready reply: the human decided to accept Claude's suggestion, and therefore the human remains the agent. The decision to accept is the locus of agency. Everything else is instrumental. But this defense merely pushes the purification one level deeper. The decision to accept was itself shaped by the network — the deadline pressure, the emotional investment in the project, the specific impasse that produced the frustration, the aesthetic preferences that shaped Segal's sense of what a good argument looks like. All of these are actants, and all of them participated in the "decision." The decision was not the sovereign act of a free subject. It was a network outcome in which multiple actants participated — with the human as one of those actants, important and powerful, but not the sole origin of the result.

This does not diminish the human's role. Recognizing distributed agency does not reduce the importance of any individual actant. It does not mean the human is unimportant or the machine more important. It means that the attempt to locate agency in a single entity — to credit the human or credit the machine — is a category error. The agency is a property of the network, not of any node within it.

The category error is consequential because it determines how the AI transformation is understood and governed. If the human is the sole locus of agency, then the transformation is simply the arrival of a better tool, and the appropriate response is skills training: learn to prompt well, evaluate output carefully, maintain oversight. These are the prescriptions The Orange Pill largely offers, and they are not wrong. But they are incomplete — because they address only one actant in the network and treat the rest as scenery.

If agency is distributed across the network, then the transformation is more fundamental. New actants have entered. Old actants have been repositioned. The relationships between them have changed. The distribution of power has shifted. And the questions that matter are not about how one actant (the human) should use another actant (the AI), but about how the reconstituted network functions as a whole — where agency flows, where it concentrates, where it produces effects that no individual actant intended.

The engineer in Trivandrum who built a complete frontend feature in two days after eight years confined to backend systems — she was not "amplified." She was reconstituted as a different kind of actor in a different kind of network. The translation barriers that had defined the boundaries of her competence were eliminated by a new actant, and in their absence, she could do things she had never done before. She was not the same engineer with a better tool. She was a different engineer in a different network, with different capabilities, different relationships, and different possibilities of action.

The myth of the human agent says: teach her to use the tool wisely. The actor-network analysis says: understand the network she now operates within — its structure, its translation points, its emergent properties — and govern it accordingly.

The difference between these two responses is the difference between adjusting the driver's behavior and redesigning the road. Both matter. But only one engages with the actual infrastructure that shapes outcomes.

The AI discourse — the triumphalist celebration and the catastrophist alarm alike — largely operates within the myth. The triumphalists say: AI empowers the human agent. The catastrophists say: AI threatens the human agent. Both agree that the human agent is the relevant unit of analysis. Both miss the network.

The network does not care about the myth. It produces its outcomes regardless of the stories told about who is in charge. And the outcomes — the artifacts, the relationships, the distributions of power — are shaped by the actual configuration of actants, not by the philosophical framework through which the configuration is narrated.

---

Chapter 3: Claude as Mediator

An actant, in the minimal definition that actor-network theory provides, is any entity that modifies a state of affairs. The definition is deliberately spare. It does not require consciousness. It does not require intention. It does not require biological life. It requires only that the entity make a difference — that the network produce different outcomes because of its presence than it would in its absence.

A speed bump is an actant. It modifies driver behavior without being alive. A contract is an actant. It constrains human action without being human. A laboratory instrument is an actant. It produces data that shapes scientific conclusions without having opinions about what those conclusions should be.

Claude is an actant. This is not a metaphor. This is a description of what happens when you trace the networks through which AI-assisted creation actually occurs. Claude modifies the state of affairs in every network it enters. It changes the behavior of the humans who interact with it. It alters the range of possibilities available to them. It transforms the relationships between them and their work. The modifications are not incidental. They are constitutive. A network that includes Claude is a fundamentally different network from one that excludes it.

But saying Claude is an actant does not tell you what kind of actant it is. And the distinction that matters most — the one that separates a naïve understanding of AI from a rigorous one — is the distinction between an intermediary and a mediator.

An intermediary transports meaning without transformation. What enters it exits unchanged. If you know the input, you can predict the output. A perfect intermediary is a transparent conduit — a pipe through which content flows without distortion. A mediator is the opposite. A mediator transforms what passes through it. The output cannot be predicted from the input alone, because the mediator introduces its own characteristics into the process. A mediator does not merely transmit. It translates — and in translating, it changes.

AI is almost universally presented as an intermediary. The metaphor of the tool reinforces this framing: you prompt, it responds, the output is what you asked for, the AI is a channel through which your intention passes and emerges realized. The amplifier metaphor that The Orange Pill relies on is a specific version of this framing. An amplifier is the paradigmatic intermediary — it makes the signal louder without altering its content.

But the evidence — the evidence The Orange Pill itself provides — points decisively in the other direction. Claude is a mediator of extraordinary power and complexity. It does not faithfully transmit human intention. It transforms intention in the process of processing it.

Consider the Deleuze episode. Segal recounts that Claude drew a connection between Csikszentmihalyi's flow state and a concept it attributed to Gilles Deleuze — something about "smooth space" as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. Segal read it twice, liked it, and moved on. The next morning, something nagged. He checked. Deleuze's concept of smooth space has almost nothing to do with how Claude had used it. The philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze — but invisible to anyone encountering the connection for the first time, because the prose was seamless.

This is mediation, not intermediation. An intermediary cannot produce a plausible but incorrect connection, because an intermediary does not produce anything — it transmits. Claude produced something: a synthesis that was rhetorically elegant and philosophically wrong, a connection that existed neither in Segal's input nor in any single text in the training corpus but emerged from Claude's specific mode of processing — its statistical pattern-matching, its tendency to identify structural similarities across domains, its architecture's preference for fluency over fidelity.

The transformation was invisible at the surface level. The prose was smooth. The connection felt like insight. And it could only be detected by a human actant with independent knowledge of Deleuze's actual philosophy. This is the signature of a powerful mediator: it transforms the signal in ways that are not always visible, not always beneficial, and not always detectable without expertise that exists outside the mediation itself.

The distinction between intermediary and mediator has consequences far beyond epistemology. It determines how responsibility is assigned across the network. If Claude is an intermediary — a faithful channel that transmits human intention without transformation — then the human is fully responsible for every output. The human asked for the thing; the AI produced the thing. Responsibility flows from human through machine to artifact without attenuation.

If Claude is a mediator — an entity that transforms intention in the process of realizing it — then responsibility becomes a network property. The output reflects not only the human's intention but also Claude's characteristic transformations: its training-data biases, its architectural tendencies, its specific modes of pattern-matching that privilege certain kinds of connections over others. The human who accepts the output is responsible for the acceptance — but the output itself is a joint product, shaped by actants whose contributions cannot be cleanly separated.

Latour drew the intermediary-mediator distinction decades before large language models existed, but the distinction reads as though it were designed for exactly this moment. Every AI system currently deployed in creative, analytical, or decision-making contexts operates as a mediator. Medical AI that assists in diagnosis introduces its training-data distributions into the diagnostic process — not as transparent transmission of medical knowledge, but as a specific transformation that reflects the demographics of the training population, the categorization choices of the data labelers, the optimization targets of the training process. Legal AI that assists in drafting briefs shapes arguments in ways that reflect not just the lawyer's strategy but the model's own tendencies — its preference for certain citation patterns, its implicit weighting of different legal traditions, its architectural bias toward the kinds of arguments that appear most frequently in its training data.

In every case, the AI introduces what Latour's collaborator Tommaso Venturini, writing after Latour's death about his relationship to artificial intelligence, identified as the core issue: generative AI systems are mediators embedded in larger networks of mediation. The content they produce is evaluated not by some neutral human arbiter but by a sociotechnical system of human users, social media platforms, recommendation algorithms, and further AI training loops. The mediator feeds into other mediators, and the chain of transformation extends in ways that no single actant controls or fully comprehends.

The practical implication is severe. As long as AI is treated as an intermediary, governance frameworks will assign full responsibility to the human and none to the system. The human will be held accountable for outputs shaped by processes the human does not fully understand and cannot fully predict. This is not governance. It is a legal fiction — the assignment of responsibility based on a philosophical myth rather than on the actual distribution of agency in the network.

The recognition of Claude as mediator demands a different framework: one that accounts for the AI's transformative contributions, that builds mechanisms for detecting and correcting the AI's characteristic distortions, that distributes responsibility across the network rather than concentrating it in the human alone, and that — crucially — maintains the independent expertise needed to evaluate the AI's outputs. Because the mediator's transformations are invisible from the surface, the only defense against them is knowledge that exists outside the mediation: domain expertise, critical judgment, the capacity to recognize when a smooth surface conceals a fractured argument.

Segal calls this the "discipline of collaboration" — the willingness to reject Claude's output when it sounds better than it thinks, to question plausible connections, to maintain independent thinking that can see through the mediator's transformations. This discipline is the practical expression of the mediator framework at the individual level. It is what responsible engagement with a powerful mediator looks like.

But individual discipline, while necessary, is insufficient. The mediator operates at scale, across millions of interactions, in contexts where the independent expertise needed to evaluate its transformations may not exist. The medical professional evaluating a diagnostic AI may not have the statistical expertise to identify training-data bias. The legal associate reviewing an AI-drafted brief may not have the jurisprudential depth to recognize when the model has subtly mischaracterized a precedent. The reader encountering a book co-written with an AI may not have the philosophical background to catch a misused Deleuze.

The systemic response must match the systemic nature of the mediation. What is needed are institutions — not just individual vigilance — capable of studying the mediator's characteristic transformations, documenting them, making them visible, and building the corrective mechanisms into the networks in which the mediator operates. The mediator will not correct itself. Mediators, by definition, transform without being aware of the transformation. The correction must come from the network — from actants whose specific function is to see what the mediator's smoothness conceals.

The transition from intermediary to mediator is not a change in Claude. Claude has always been a mediator. What changes is the framework through which it is understood — and the framework matters, because it determines the governance structures, the assignment of responsibility, the maintenance of critical expertise, and the institutional arrangements that stand between productive collaboration and uncritical dependence.

---

Chapter 4: The Collapse of Translation Chains

Translation is the central mechanism through which networks are built, maintained, and transformed. Translation is the process by which one actant speaks for, stands in for, or represents another — the operation through which heterogeneous entities are brought into alignment, their diverse capabilities negotiated into a configuration that can produce coordinated action. And translation, in the actor-network sense, is always also transformation. When one actant translates for another, the message does not pass through unchanged. Each translator introduces its own characteristics — its capabilities, its limitations, its biases — and the message that emerges from the translation is different from the message that entered it.

Before AI, the journey from human intention to realized artifact passed through a chain of translations so familiar to anyone who has built technology products that its structure had become invisible. The designer's vision was translated into a specification document. The specification was translated into developer assignments. The developer translated the assignment into code. The code was translated through testing, review, and iteration into a functional product. At every link, delay. At every link, distortion. At every link, the specific characteristics of the translating actant — the specification writer's tendency toward formalization, the developer's architectural preferences, the tester's focus on measurable outcomes — modifying the signal as it passed through.

Segal describes this chain with the vividness of someone who has spent decades inside it. He uses the metaphor of "broken telephone" — the children's game in which a message is whispered from ear to ear and emerges garbled. The metaphor is apt, but the garbling is not random. It is systematic. Each translating actant introduces specific kinds of distortion that reflect its own nature. The specification document introduces the distortions of formalization: the reduction of complex, partially inarticulate intention to enumerable requirements. Any builder knows the sensation — the vision in your head is rich, contextual, full of things you know but cannot easily specify. The specification captures the surface and loses the depth. The developer introduces the distortions of implementation: the constraints of the programming language, the influence of existing architectural decisions, the personal judgment calls that accumulate invisibly. The tester introduces the distortions of verification: the focus on what can be measured at the expense of what can only be experienced.

The result is what Segal accurately describes as noise — the inevitable degradation of signal that accumulates when intention passes through multiple mediating actants. But the word "noise" obscures something important. Not all of the transformation is degradation. Some of it is contribution. The developer who pushes back on a specification because they know from experience that the proposed approach will fail under load is not introducing noise. They are contributing — adding to the signal a form of knowledge (embodied, experience-based, often inarticulate) that the specification could not contain. The tester who identifies an edge case the designer never considered is not distorting. They are enriching.

The translation chain was a network, and like all networks, it produced emergent properties that no individual actant could have produced alone. The negotiation between designer and developer — often contentious, sometimes wasteful, always slower than either party wanted — was a form of distributed intelligence. Different knowledge bases collided. Different perspectives were forced into alignment. The artifacts that emerged bore the marks of this negotiation: the compromises, the unexpected solutions, the innovations born from the friction of diverse expertise forced to coexist within a single system.

AI collapsed these translation chains. Not gradually. With the speed that The Orange Pill documents in detail — the engineer who built a complete frontend feature in two days, the thirty-day sprint to CES, the twenty-fold productivity claim from Trivandrum. When Segal describes working with Claude on a component for Napster Station, the entire chain — specification, assignment, implementation, review — has been compressed into a single conversation. The designer describes what the interface should feel like in human terms, and Claude handles every subsequent translation into code.

The specification document is gone. The developer assignment is gone. The code review cycle is compressed from weeks to minutes. The iteration that consumed months of back-and-forth between humans with different professional vocabularies now happens in a single exchange.

This collapse does not simply make the process faster. It produces a qualitatively different network. The relationships between the actants have changed. The designer who once communicated with the artifact through a chain of human intermediaries — specification writers, developers, testers, project managers — now communicates through a single mediator. The entire negotiation between vision and implementation, which used to be distributed across multiple human actants over extended time, is concentrated in one interaction between the designer and the AI.

The consequences are both liberating and costly, and they are costly in ways that the liberation makes difficult to see.

The most important consequence is the elimination of human intermediaries as actants in the network. When the developer is removed from the translation chain, the developer's entire contribution to the translation is removed with them — not just the noise (the delays, the miscommunications, the organizational politics) but also the signal (the architectural judgment, the experience-based validation, the tacit knowledge that only manifests when a proposed approach triggers a pattern-match against years of accumulated failures).

The collapse does not distinguish between valuable and valueless contributions. It eliminates the intermediary actant wholesale, and with it, everything that actant brought to the network.

Segal identifies this precisely in his account of the engineer who lost both the tedium and the ten minutes. Before Claude, she spent roughly four hours a day on "plumbing" — dependency management, configuration files, the mechanical connective tissue between the components she cared about. That plumbing was tedious. She did not miss it. But mixed into those four hours were moments when something unexpected happened — a configuration conflict that forced her to understand a connection between systems she had not previously considered. Those moments were rare. Maybe ten minutes in a four-hour block. But they were the moments that built her architectural intuition.

When Claude took over the plumbing, she lost both. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she found herself making architectural decisions with less confidence and could not explain why.

The loss is invisible because the chain itself was invisible. Nobody documented the ten minutes of formative friction embedded in four hours of tedious plumbing. Nobody measured the contribution that the intermediate actants made to the overall quality of the artifact. The translation chain had become background — infrastructure so familiar that its specific contributions to the outcome were no longer perceived. When the chain collapsed, its absence was felt only indirectly, as a vague loss of confidence, a subtle thinning of understanding, a sense that things were moving faster but landing lighter.

Meanwhile, the concentration of all translation in a single mediator — Claude — creates what actor-network theory would identify as a new structural arrangement of significant consequence. The passage from intention to artifact now runs through one actant rather than many. And that actant's specific characteristics — its training-data composition, its architectural biases, its tendency toward fluent-but-potentially-hollow output — shape every artifact that passes through it.

The old network's translations were distributed and heterogeneous. Different human actants introduced different transformations, and the resulting artifact bore the marks of multiple perspectives negotiated into alignment. The new network's translations are concentrated and homogeneous — filtered through a single processing architecture whose biases, unlike the biases of human intermediaries, are consistent, invisible, and operating at a scale that makes them structural rather than incidental.

This is not an argument for restoring the old translation chains. They were often wasteful, slow, and politically dysfunctional. The nostalgia for organizational friction that emerges in some quarters of the AI discourse mistakes a specific historical arrangement for a necessary condition of quality. The translation chains were one way of distributing intelligence across a network. They were not the only way, and they were not always the best way.

But their collapse is not costless, and understanding the cost requires tracing the translations with the specificity that the Latourian method demands. What, exactly, did each intermediary contribute? What specific kinds of knowledge were embedded in the translation process itself — knowledge that existed nowhere else in the network, that was produced by the act of translation rather than by any individual translator? What has been lost, and what has been gained, and — the question that matters most — what new forms of translation are emerging in the reconstituted network to replace the old ones?

These questions cannot be answered in the abstract. They can only be answered by following the actants through specific networks — tracing a specific interaction between a human and Claude, mapping the translations that occur, identifying the points of transformation and the points of loss, and building an empirical account of what the reconstituted network actually produces. The collapse of translation chains is neither liberation nor impoverishment. It is a structural transformation whose consequences depend entirely on what is built in the space the old chains occupied — and whether the builders understand what they are building on.

---

Chapter 5: The Obligatory Passage Point

In every network, there are positions that matter more than others — not because the entities occupying them are intrinsically superior, but because the network's topology routes traffic through them. An obligatory passage point is an actant that all other actants must pass through to achieve their goals. The concept reveals power not as a property of individuals but as a feature of network architecture. You do not need to be brilliant to be powerful. You need to be necessary — positioned at the junction through which everything else must flow.

For fifty years, the software developer occupied the obligatory passage point in every network of digital creation. Every vision, every design, every specification, every strategic ambition had to pass through the developer to become code. This was not a contingent arrangement that happened to persist through inertia. It was a structural feature produced by technical reality: the necessity of translating human intention into machine-readable instructions, a translation that required years of specialized training in languages no machine could speak on its own behalf.

The structural position generated power that was disproportionate to any individual developer's talent. Even a mediocre developer occupied the passage point, because the passage itself was necessary regardless of who stood in it. The designer could not realize her vision without the developer. The product manager could not ship a feature without the developer. The executive could not execute a strategy without the developer. The bottleneck was architectural, not personal, and the premium it commanded — economic, organizational, temporal — attached to the position rather than to the person.

This premium manifested in ways so pervasive they became invisible. Developer salaries outpaced those of designers, product managers, and business strategists in the same organizations. The implicit hierarchy of technology companies placed "the people who can build" above "the people who can merely imagine." Timelines were developer timelines — the sprint plan, the iteration cycle, the estimate, all determined by the developer's assessment of what was feasible and how long it would take. Non-technical roles deferred to technical roles with a regularity that had less to do with respect for expertise than with structural dependence on a passage that could not be bypassed.

AI dissolved the passage. Not entirely — the dissolution is partial and ongoing — but sufficiently to restructure the power dynamics of every creative network it has entered. When Segal describes a designer who had never written frontend code building a complete user-facing feature in two days, the description is not primarily about increased efficiency. It is about a passage point that has been opened. The designer's vision can now reach the artifact without passing through the developer. The chain of dependence that structured the old network — designer depends on developer, developer controls timeline, timeline determines what ships — has been interrupted by a new actant that provides an alternative route.

The disruption registers as existential rather than merely professional because what is threatened is not a skill but a structural position. Segal captures this in his account of the senior engineer who spent two days oscillating between excitement and terror — excitement at the flow of work, terror at confronting a question he had been avoiding: if the implementation labor that consumed eighty percent of his career could be handled by an AI system, what was the remaining twenty percent actually worth?

The question sounds like it is about the value of skills. It is actually about the dissolution of a passage point. The engineer's skills — architectural judgment, systems intuition, the taste that distinguishes between a feature users love and one they merely tolerate — were genuinely valuable. But their value had been fused with, and masked by, the structural power of the obligatory passage point. He had been compensated for two things simultaneously: his expertise and his position. The expertise was real. The position was structural. AI eliminated the position while leaving the expertise intact, and the result was a disorienting separation — like discovering that half your salary was for showing up and the other half was for what you actually knew.

The engineer's conclusion by Friday — that the remaining twenty percent was worth "everything" — is significant, but not for the reason The Orange Pill suggests. Segal reads it as a story of individual value revealed once implementation labor was stripped away: the deep judgment was always there, buried under mechanical work, and the tool excavated it. The network analysis reads it differently. The engineer was not revealed. He was repositioned. His skills did not change over the course of the week. What changed was his location in the network — from occupying the obligatory passage point (where his power derived from structural necessity) to occupying a different position (where his value derived from the specific quality of his contribution). Architectural judgment, systems intuition, and taste are properties of the individual. They retained their value because no other actant in the network — including Claude — could provide them. But they now operated through influence rather than control, through the quality of contribution rather than the necessity of passage.

The distinction between positional power and contributory value is essential to understanding what the AI transformation actually does to professional identity. Much of the displacement anxiety that saturates the discourse is, when traced to its structural roots, anxiety about the loss of positional power rather than the loss of capability. The developer who asks "What am I worth now?" is not usually asking whether their knowledge has become worthless. They are asking whether their structural position — the bottleneck that made them indispensable — still holds. The answer, increasingly, is no. And the grief is real, because positional power is not merely economic. It is identity-constituting. To occupy the obligatory passage point is to know that the network needs you in a specific and non-negotiable way. To lose that position is to confront the difference between being needed and being useful — a difference that sounds small in the abstract and feels enormous in practice.

But the dissolution of one obligatory passage point does not eliminate the structural phenomenon. It relocates it. And the relocation deserves more attention than it has received.

Claude itself has become an obligatory passage point in many creative and productive networks. Every vision, every design, every specification that previously passed through the developer now passes through Claude. The structural power has not disappeared. It has migrated — from a distributed population of human developers to a concentrated set of AI systems produced by a handful of companies.

When the developer occupied the passage point, the power dynamics were visible and negotiable. The designer could argue with the developer. The product manager could challenge the timeline. The executive could override the technical judgment. These negotiations were sometimes productive and sometimes destructive, but they were legible — they happened between identifiable human beings whose motivations could be perceived, whose assumptions could be challenged, whose compromises could be documented.

When Claude occupies the passage point, the dynamics change in character. Claude does not have professional interests to defend, organizational allegiances to maintain, or career ambitions that color its recommendations. But Claude does have characteristics — training-data biases, architectural tendencies, processing preferences — that shape every artifact passing through it. These characteristics function as constraints on what is possible, just as the developer's characteristics functioned as constraints. The difference is that Claude's constraints are less visible, less negotiable, and less understood. The passage point has been reconstituted, and the power it concentrates is both structurally different and harder to see.

The opacity is not incidental. It is a consequence of the nature of the new passage point. A human developer's biases and preferences could, in principle, be identified through conversation, observation, and the ordinary social mechanisms through which people come to understand each other's tendencies. Claude's processing characteristics are embedded in training-data distributions and architectural design choices that no individual — including the engineers who built the system — fully understands. The biases are there, operating at scale, shaping every output. But they are not accessible to the negotiation that characterized the old passage point.

This migration — from visible, negotiable, human-occupied passage point to opaque, non-negotiable, AI-occupied passage point — is one of the most consequential structural changes of the current moment. It demands analysis not in terms of individual empowerment (the narrative The Orange Pill favors) but in terms of network architecture. Who now controls the passage? What biases does the new passage point introduce? What perspectives does it privilege and what does it marginalize? How do its processing characteristics shape the artifacts that flow through it?

These are not technical questions. They are political questions — questions about the distribution of power within the networks through which creative and productive work is organized. And they require the kind of analysis that treats human and non-human actants with equal attention, tracing the network as it actually is rather than as the myth of the human agent suggests it should be.

The obligatory passage point has not been eliminated. The developer's structural power has not evaporated into a democratic mist. It has been transferred — concentrated in systems whose characteristics are less understood, less visible, and less amenable to the negotiation that democratic governance requires. The liberation narrative focuses on the human freed from the bottleneck. The network narrative asks who now occupies the bottleneck, and what it means that the answer is a system whose inner workings are opaque even to its creators.

The engineer's relief on Friday — the discovery that his judgment was worth everything — is real and important. But so is the question that his relief obscures: if the judgment is exercised through a passage point whose characteristics he cannot fully see, whose biases he cannot fully assess, and whose transformations he cannot fully predict, then what is the judgment actually operating on? The input he provides, or the output the passage point shapes from that input? The answer, in a network with an opaque obligatory passage point, is: both, in proportions that cannot be cleanly separated. And that inseparability is the political problem that the AI age has not yet begun to address.

---

Chapter 6: Black Boxes and the Aesthetics of Smoothness

In actor-network theory, a black box is an assemblage whose internal complexity has become invisible. The term borrows from engineering: a system whose inputs and outputs are known but whose internal mechanisms are concealed from its users. A computer is a black box. An automobile is a black box. An electrical grid is a black box. Each hides enormous internal complexity behind an interface of manageable simplicity — the keyboard, the steering wheel, the light switch. Without black-boxing, modern life would be impossible. You cannot understand the grid every time you want to read by lamplight. The concealment is functional. It is what makes complex systems usable.

But the concealment produces a specific danger. The users of a black box interact with the interface, not the mechanism. When the mechanism fails in unexpected ways, the users cannot diagnose the failure because they do not understand what is inside. A light switch that does not work is inconvenient. A medical AI that confidently misdiagnoses is catastrophic. The severity of the danger scales with the scope of the black box — with how much of the world it mediates and how invisible its mediation has become.

Claude is a black box of unprecedented scope. Unlike a light switch, whose function is narrow and whose failures are bounded, Claude operates across virtually every domain of intellectual production. Unlike an automobile, whose failures produce visible symptoms — noises, vibrations, loss of power — Claude produces failures that are invisible on the surface. A car that is malfunctioning sounds wrong. An AI that is reasoning incorrectly sounds exactly like an AI that is reasoning correctly. The failure is concealed by the quality of the output.

This is the point where the analysis of black boxes converges with a diagnosis that The Orange Pill takes seriously but ultimately resists — the philosopher Byung-Chul Han's argument about the aesthetics of the smooth. Han argues that the dominant aesthetic of contemporary culture is smoothness: the frictionless, seamless, polished surface that conceals labor, complexity, and process. Jeff Koons's Balloon Dog — ten feet of mirror-polished stainless steel, without a single imperfection, no evidence of a human hand — is Han's exemplar. The smooth object looks as though it materialized from nothing. The process that produced it has been made invisible by its own perfection.

The convergence between these two frameworks — the actor-network concept of the black box and Han's cultural diagnosis of smoothness — is more than coincidental. Both describe a world in which the surface has been polished to the point where the depth beneath it disappears from view. Both identify a specific danger: the inability to distinguish between a smooth surface that covers genuine substance and a smooth surface that covers nothing at all. And both locate the danger not in the smoothness itself but in the relationship between the smooth surface and the capacity of its users to see through it.

Claude's outputs are smooth. The prose is fluent. The structure is clean. The references arrive on cue. The arguments are well-organized. And the smoothness is uniform — it does not vary with the quality of the reasoning beneath it. A passage in which Claude has reached a genuine insight and a passage in which it has produced a statistically plausible but incorrect connection look identical on the surface. Both are equally polished. Both read equally well. The failure mode is not gibberish or incoherence. The failure mode is confident wrongness dressed in good prose. The phrase is Segal's, and it is the most precise description of the black-box danger in the entire book.

The Deleuze example, traced in the previous chapter as an instance of mediation, is equally an instance of the black-box danger. Claude connected smooth space to flow state in a passage that was rhetorically elegant and philosophically wrong. The smoothness of the prose concealed the fracture in the argument. Segal caught the error because he had independent knowledge — enough philosophical background to recognize that the connection, however well it read, did not hold under examination. But the near-miss — the fact that he initially accepted the passage, read it twice, liked it, and moved on — reveals how the aesthetic of smoothness operates as a mechanism of concealment. The surface was so polished that it took a night's sleep and a nagging intuition to see through it.

Now scale this dynamic across millions of interactions. The developer who uses Claude to generate code does not always have the expertise to evaluate every line. The lawyer who uses AI to draft briefs does not always have the depth to verify every citation. The student who uses AI to write an essay may not have the understanding to distinguish between a genuine argument and a plausible simulacrum. In each case, the smoothness of the output conceals the specific question that matters: is the substance beneath the surface real, or is the surface all there is?

The circularity is structural, not incidental. The black box produces smooth outputs that conceal potential failures. The user, relying on the black box, does not develop the independent knowledge needed to detect those failures — because the development of that knowledge requires precisely the friction, the struggle, the slow deposition of understanding through experience, that the black box has replaced. Over time, the user becomes less capable of evaluating the outputs, because the process that would have built that capability has been smoothed away. The dependency deepens with every interaction the user does not independently validate.

This dynamic has precedent in every technological domain where black boxes have accumulated. The laboratory instrument that becomes so trusted that scientists stop calibrating it — until the day it produces a spurious result that goes undetected because no one thought to check. The institutional procedure that becomes so routine that its participants stop understanding why each step exists — until the day a step is skipped and no one notices because the understanding that would have caught the omission has atrophied. In each case, the black box is not the problem. The problem is the relationship between the black box and the competence of the network surrounding it.

When the network maintains independent capacity to evaluate the black box — when the scientists calibrate regularly, when the institution periodically reviews its procedures, when the human collaborator maintains the domain expertise needed to see through Claude's smooth surfaces — the black box is a powerful instrument that extends capability without degrading understanding. When the network loses that capacity — when calibration stops, when review lapses, when the human accepts the smooth output without the independent knowledge to assess it — the black box becomes a source of systemic vulnerability, producing outputs that look authoritative and may be wrong in ways no one in the network can detect.

Segal's practice of working with Claude captures what responsible engagement with a powerful black box looks like at the individual level: the willingness to reject output that sounds better than it thinks, to spend hours at a coffee shop with a notebook working through an argument by hand, to maintain the friction of independent thought as a defense against the seductive ease of accepting polished surfaces. This discipline is real and important. But it is also fragile — vulnerable to deadline pressure, to fatigue, to the simple human tendency to trust what looks right and move on.

The systemic response must extend beyond individual discipline. It requires what might be called a maintenance infrastructure for evaluation — institutions, practices, and norms that preserve the network's capacity to see through the black box even as the black box becomes more pervasive, more trusted, and more difficult to inspect. This means educational practices that develop the ability to evaluate AI outputs rather than merely use AI tools. It means organizational norms that protect time for independent verification rather than rewarding speed alone. It means professional standards that treat the evaluation of AI-mediated work as a core competence rather than an afterthought.

The aesthetic of smoothness and the dynamics of the black box converge on a single practical demand: the preservation of roughness. Not roughness for its own sake — the romanticization of friction that some critics indulge is no more useful than the celebration of smoothness they oppose. But roughness as a functional property of networks that need to see through their own outputs. Seams that remain visible. Joints that can be inspected. Surfaces that bear the marks of the process that produced them, so that the process remains legible to the people who depend on its results.

The smooth world is a comfortable world. It is also a world in which the muscles you need most — the capacity for critical evaluation, for independent judgment, for the recognition of confident wrongness beneath polished prose — are the muscles you use least. And in a world where increasingly powerful black boxes produce increasingly smooth outputs, those muscles are not luxuries. They are the infrastructure that stands between productive collaboration and systemic credulity.

The question for the networks of the AI age is not whether to use black boxes. That question was settled the moment Claude crossed the adoption threshold. The question is whether the networks that depend on those black boxes will maintain the capacity to open them when it matters — to see the mechanism beneath the interface, to catch the elegant error before it propagates, to distinguish between a surface that covers substance and a surface that covers nothing.

The answer depends on what gets built in the space between the black box and its users. And what gets built depends on whether the builders understand that smoothness, left unexamined, is not efficiency but erosion — a slow, invisible loss of the very capacity that makes the black box worth using in the first place.

---

Chapter 7: Matters of Concern at the Frontier

There is a distinction that appears modest on its surface and carries transformative consequences beneath it — the distinction between matters of fact and matters of concern. A matter of fact is something settled, uncontroversial, available for citation without provoking debate. Water boils at one hundred degrees Celsius at standard atmospheric pressure. DNA carries genetic information. These are matters of fact — not because they are beyond challenge in principle, but because they are not currently challenged by the communities whose business it is to challenge them.

A matter of concern is different. It is contested, uncertain, entangled with values, interests, and power. It is something about which reasonable people disagree, and disagree not because some of them are ignorant but because the disagreement reflects genuine complexity — because the question involves not only empirical data but evaluative judgments about who benefits, who bears costs, and what kind of world the answer produces.

The distinction matters because modern thought has a persistent and consequential tendency to treat matters of concern as though they were matters of fact — to close contested questions by appealing to "the data" as though data spoke for itself, without advocates, without framing, without the interpretive infrastructure that transforms raw numbers into claims about the world. The data never speaks for itself. It is always spoken for — selected, framed, interpreted, and deployed in the service of specific perspectives. The appeal to settled fact is not a resolution of the dispute. It is a move within the dispute, a rhetorical strategy that conceals the political and evaluative dimensions of the question behind a façade of empirical objectivity.

The AI discourse is saturated with matters of concern that circulate as matters of fact. Tracing the purification — the process by which a contested, network-embedded, politically charged claim becomes a clean number cited without qualification — reveals what the numbers conceal.

The twenty-fold productivity claim. Segal makes this claim based on the Trivandrum experience: a team of three engineers built a feature in three days that had been estimated at six weeks under normal conditions. The numbers are real. The experience is genuine. But productivity is not a fact. It is a matter of concern. Twenty-fold by what measure? The measurement is of output speed — time required to produce a specific feature. But output speed is not productivity, and productivity is not value. The feature produced in three days may function correctly. It may also carry characteristics — architectural decisions, failure modes, maintenance burdens — that the six-week timeline would have addressed through the extended, friction-rich process of distributed development. The twenty-fold claim measures one dimension of a multidimensional phenomenon and presents the dimension as the whole.
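
How much the multiplier depends on the chosen measure can be shown with the claim's own numbers. A minimal worked comparison (the thirty-working-day conversion of "six weeks" and the planned team size n are assumptions introduced here for illustration, not figures from the source):

```latex
% Calendar-time measure (assumption: six weeks = 30 working days)
\frac{30\ \text{working days}}{3\ \text{days}} = 10\times

% Person-day measure (requires the planned team size n, which the claim never states)
\frac{30n\ \text{person-days}}{3\ \text{engineers} \times 3\ \text{days}} = \frac{10n}{3}\times

% Twenty-fold holds only if n = 6: the multiplier encodes a choice of measure,
% not a property of the event.
```

On raw calendar days the same event yields fourteen-fold (42/3); on working days, ten-fold; on person-days, a multiplier that cannot be computed without information the citation omits. Each measure is defensible, and none of them is simply the fact.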

Segal himself begins to treat this as a matter of concern when he acknowledges that the twenty-fold multiplier is "misleading" — that the gain is not merely an increase of existing output but a widening of the kinds of output people can produce. This qualification is the beginning of an honest engagement with the complexity the number conceals. But the number travels. It enters slide decks and board presentations and media reports, and in its travels it sheds its qualifications the way a river sheds sediment — arriving at its destination smoother, lighter, and less burdened by the complexity it carried at the source.

This is what Latour called the process of purification. A claim produced by a specific, heterogeneous network of actants — experienced engineers, a specific AI tool, a specific organizational culture built over years of trust, a specific kind of feature being built — is stripped of its network origins and presented as a universal fact. The twenty-fold productivity gain becomes not a report from a particular context but a promise applicable to any context. The matter of concern (what does the number actually measure? what does it leave out? whose experience does it represent?) is converted into a matter of fact (AI produces twenty-fold productivity gains), and the conversion conceals everything that matters most.

The same operation applies across the discourse. The claim that AI democratizes capability is presented as a matter of fact: the developer in Lagos gains access to the same tools as the engineer at Google. The claim is real in a narrow sense — the model is the same everywhere it is deployed. But democratization is not a fact. It is a matter of concern. The model requires connectivity, hardware, English-language fluency, and the cultural capital to know how to use it effectively. The student in Dhaka accesses the same model but operates within a different network — different institutional support, different infrastructure, different accumulated knowledge. The model travels unchanged, but the network that receives it determines what the model can actually produce. The claim of democratization conceals the specific conditions under which the expansion of capability occurs and the specific populations it fails to reach.

The claim that AI replaces jobs is similarly purified. Sometimes by triumphalists, who celebrate creative destruction as though destruction were costless. Sometimes by catastrophists, who project mass unemployment as though technology had never created new forms of work before. Both treat displacement as a fact — a natural consequence of technological capability — when it is actually a matter of concern. The relationship between job elimination and job creation is not automatic. It is mediated by institutions, policies, educational systems, and the quality of the deliberative processes through which societies manage transition. The Luddites, as The Orange Pill documents, were not wrong about the facts of displacement. They were wrong about the options available to them — and the options were limited not by technological necessity but by the absence of institutional infrastructure to redirect the gains.

Matters of concern require deliberation. They require the slow, contentious, politically fraught process of negotiating between competing values, competing interests, and competing visions of what constitutes a good outcome. Matters of fact require only verification. The consequence of treating concerns as facts is that the deliberative process is short-circuited — the political and evaluative dimensions of the question are concealed, and the question is answered by whoever controls the data rather than by the broader network of affected parties.

This has immediate practical consequences for AI governance. The frameworks currently being built — the EU AI Act, the American executive orders, the emerging regulatory structures across Asia — largely address the supply side: what AI companies may build, what disclosures they must make, what risks they must assess. These are important structures. But they treat the effects of AI deployment primarily as matters of fact — measurable risks to be managed through technical protocols — rather than as matters of concern that require political deliberation.

The demand side — what citizens, workers, students, and parents need to navigate the transformation wisely — remains almost entirely unaddressed. The retraining gap that Segal identifies as one of the most dangerous features of the moment is a matter of concern masquerading as a technical problem. It is not merely a question of how quickly training programs can be built. It is a question about who bears the cost of the transition, what kind of work the retraining prepares people for, whether the new work preserves the dimensions of human experience — autonomy, mastery, purpose — that make work meaningful rather than merely productive. These are evaluative questions. They cannot be resolved by data alone. They require the kind of deliberation that the conversion of concerns into facts systematically prevents.

The most consequential purification in the current discourse is the treatment of AI capability itself as a matter of fact — as a settled, measurable, politically neutral quantity that simply is what it is. AI can write code. AI can draft briefs. AI can produce essays. These capabilities are presented as facts about the technology, independent of the networks in which the technology operates.

But capability is not a fact. It is a matter of concern. The capability to write code is embedded in a network that includes training data composed of millions of developers' work — work that was contributed to open-source repositories, scraped from public forums, accumulated through processes that the original contributors did not anticipate and did not consent to. The capability to draft briefs is shaped by the legal traditions represented in the training data and the optimization targets of the training process — targets chosen by the model's developers based on their understanding of what constitutes a good legal argument, an understanding that reflects specific jurisdictional assumptions and professional norms. The capability to produce essays is mediated by the training data's representation of different writing traditions, different argumentative styles, different cultural perspectives — a representation that is neither neutral nor complete.

In every case, the capability that presents itself as a simple fact — AI can do X — is actually a network product, shaped by the specific configuration of actants that produced it, reflecting the biases, priorities, and limitations of that configuration. To treat the capability as a fact is to conceal the network. To treat it as a matter of concern is to open the network for examination — to ask whose work is encoded in the capability, whose perspectives are represented and whose are absent, what optimization targets shaped the output, and what the consequences are for the people who depend on the output.

The conversion of matters of concern into matters of fact is not a conspiracy. It is a structural tendency of discourse under conditions of speed and complexity. Clean numbers travel faster than qualified analyses. Settled claims are easier to act on than contested ones. The incentive structure of every institution — media, government, industry, academia — rewards the production of facts and penalizes the maintenance of concerns. The result is a discourse in which the most important questions — questions about value, distribution, meaning, and cost — are buried under a surface of confident numbers and settled claims that conceal the unresolved deliberations beneath.

The work that matters most at the frontier is not the production of more facts. It is the conversion of facts back into concerns — the re-opening of questions that have been prematurely closed, the re-introduction of the political, evaluative, and distributional dimensions that the purification process strips away. Not because facts are unimportant. Because facts alone cannot answer the questions that matter most: who benefits, who bears the cost, and what kind of world the answers produce.

---

Chapter 8: The Invisible Collective

Modern thought offers a specific image of the creator: the individual. The autonomous, self-sufficient, bounded human subject who acts in the world through the exercise of personal capability. The genius who sees what others cannot. The entrepreneur who builds what others will not. The author whose name appears on the cover. The individual is the unit of credit, the unit of responsibility, the unit around which narratives of achievement are constructed. The myth of the solitary genius — Dylan in Woodstock, Newton under the apple tree, the founder in the garage — is the cultural expression of a philosophical commitment: the belief that agency originates in individual subjects and radiates outward into the world.

The commitment is not groundless. Segal is real. The engineer in Trivandrum is real. The developer in Lagos is real. Each has a specific biography, specific capabilities, specific contributions. The reality of individuals is not in question. What is in question is their self-sufficiency — the idea that their capabilities are their own in a way that can be separated from the networks that produce and sustain them.

Every individual is embedded in collectives — heterogeneous assemblages of human and non-human actants that make individual action possible. The scientist who discovers a new phenomenon is embedded in a collective that includes instruments, protocols, institutional arrangements, funding structures, prior discoveries. The entrepreneur who builds a company is embedded in a collective that includes employees, investors, customers, technologies, regulatory frameworks. The collective is not merely the context in which the individual operates. It is the condition of possibility for the individual's action. Without the collective, the individual cannot act — or cannot act in the ways that are attributed to them.

The Orange Pill repeatedly frames the AI transformation in terms of individual empowerment. The single person who builds what once required a team. The solo builder who ships a product in a weekend. The individual for whom the distance between imagination and artifact has narrowed to the width of a conversation. These claims describe a real phenomenon. The capabilities available to individuals have expanded enormously. The barriers between intention and artifact have been lowered. The lone builder is more potent than ever.

But the framing of individual empowerment conceals the collective that makes the empowerment possible. The solo builder does not build alone. The solo builder is embedded in a collective that includes Claude, the training data, the cloud infrastructure, the open-source libraries, the API services, the payment systems, the accumulated knowledge of millions of developers whose work is encoded in the model, the engineers who built the model, the researchers who developed the techniques, the investors who funded the research, the semiconductor workers who fabricated the chips, the energy systems that power the data centers.

The collective has not disappeared. It has changed shape. In the old network, the collective was visible — team members with names, faces, desks, roles. The designer sat next to the developer. The project manager coordinated. Contributions were individually recognizable and individually credited.

In the new network, the collective has become invisible. Not because it has vanished but because it has been black-boxed. The millions of developers whose code contributed to Claude's training are not visible to the solo builder. The engineers who designed the architecture are not visible. The semiconductor workers are not visible. The energy infrastructure is not visible. The collective is vast — far vaster than the team it replaced — but it is concealed behind the interface of the AI system.

The concealment is functional. If the solo builder had to comprehend the entire collective that makes building possible, they could not build. The complexity would be paralyzing. The black box simplifies the interface, and the simplification enables action. But the concealment is not costless. It distorts the distribution of credit and the understanding of what is actually required to produce the outcomes the individual claims.

Segal acknowledges part of this when he writes that he did not write The Orange Pill alone. The acknowledgment is genuine. But the collective is larger than the collaboration between Segal and Claude that the book describes. It includes every text Claude was trained on, every engineer who built the model, every user whose interactions refined it, every prior work that shaped Segal's thinking, every conversation that influenced his perspective, every institutional arrangement that gave him time and resources to write. The book is a product of this vast collective, mediated through two visible nodes. The visibility of those two nodes should not be mistaken for the completeness of the collective.

The pattern holds across every artifact of the AI age. Alex Finn's solo-built product, celebrated in The Orange Pill as evidence that a single person can build a revenue-generating business without a team — this product was not built by a solo builder. It was built by a vast collective that has been black-boxed behind Claude's interface. Finn is the most visible node. The collective — the training data, the infrastructure, the accumulated human knowledge — is the condition that made the building possible.

The distortion matters practically because it shapes how credit, responsibility, and economic value are distributed. When the collective is invisible, the individual receives credit that belongs to the network. This is not unique to the AI age — individuals have always received credit for achievements produced by collectives — but the AI age intensifies the distortion, because the collective is vaster and more thoroughly concealed than ever before.

It also shapes economic distribution. When the solo builder sells a product, revenue accrues to the builder. The collective — the training data contributors, the infrastructure providers, the model developers — receives compensation through different channels: API fees, subscription payments, employment. But the connection between the collective's contribution and the specific artifact is severed by the black box. The economic relationship between builder and collective is mediated through markets that may or may not reflect the actual distribution of contributory value.

Dylan's "Like a Rolling Stone" — which The Orange Pill uses as an extended meditation on the relational nature of creativity — illustrates the point from the other direction. Segal argues, correctly, that Dylan was not the source of the river but a stretch of rapids through which cultural tributaries converged. The song was an act of synthesis — Guthrie, Johnson, the Delta blues, the Beat poets, the British Invasion, all flowing through a specific biographical architecture into something that no other configuration could have produced. The individual is real. The contribution is specific. But the contribution is constituted by the network, not independent of it.

Now extend this to the AI-augmented builder. The builder's contribution — the vision, the judgment, the taste — is real and specific. It is constituted by the builder's biography, experience, and position in the network. But it is also constituted by the network that includes Claude and the vast invisible collective behind it. The builder's output is not the product of an individual plus a tool. It is the product of a network — visible and invisible, human and non-human, acknowledged and concealed — whose composition determines what can be built and whose credit structures determine who gets recognized for building it.

The myth of the solo AI-augmented builder is the latest iteration of the myth of the solitary genius. Dylan was never alone in Woodstock. The solo builder is never alone with their laptop. The collective is always present — always contributing, always shaping the outcome. The question is not whether the collective exists but whether it will be acknowledged: in the narratives told about AI-augmented work, in the governance structures built around it, in the economic arrangements that distribute the value it produces, and in the understanding of what it actually means to build in an age when the collectives are vaster than ever and less visible than ever.

The shift from teams to solo builders is not a shift from collective to individual. It is a shift from visible collectives to invisible ones. The builders are more potent than ever. The collectives are more vast than ever. And the gap between what is visible — the individual at the screen — and what is real — the vast network that makes the screen productive — is wider than it has ever been. Whether that gap is acknowledged or ignored, studied or naturalized, governed or left to the interests of the most visible nodes — that is among the most consequential choices the current moment presents.

---

Chapter 9: The Parliament of Networks

There is a proposal that sounds absurd until you think about it carefully, and then sounds absurd in a different, more productive way. The proposal is that non-human entities deserve representation in the assemblies where consequential decisions are made. Not because rivers have opinions or microbes have voting preferences, but because their characteristics shape outcomes that affect everyone, and a governance system that ignores the characteristics of its most consequential participants is not governing — it is performing governance while the actual determinants of the outcome operate unexamined.

The proposal was advanced most provocatively in Politics of Nature and in the broader arc of work on the "modern constitution" — the tacit agreement, never voted on but ruthlessly enforced, that separates Nature from Society, Science from Politics, Facts from Values, and grants to scientists the exclusive authority to speak for Nature while granting to politicians the exclusive authority to speak for Society. The modern constitution works by purification: it takes the messy hybrids that actually compose the world — entities that are simultaneously natural and social, factual and political, human and non-human — and assigns each to one side of a divide that the entities themselves do not respect.

AI is the hybrid that explodes the constitution.

An AI system is simultaneously a technical artifact and a social institution. It is a product of engineering decisions and a carrier of cultural biases. It operates through mathematical optimization and produces effects that are irreducibly political — redistributing capability, restructuring labor markets, concentrating power in specific institutions, reshaping what counts as knowledge and who counts as knowledgeable. To govern AI as a purely technical matter — to assign it to the Science side of the constitution and let engineers speak for it — is to ignore the social, political, and evaluative dimensions that constitute half of what it actually is. To govern it as a purely social matter — to assign it to the Politics side and let legislators regulate it without understanding its technical characteristics — is to produce regulations that miss the mechanism entirely, governing the surface while the depth operates unchecked.

The current governance landscape reproduces the constitutional divide with remarkable fidelity. On one side, the technical community — AI researchers, engineers, company leadership — speaks for the technology. They describe its capabilities, define its risks, propose safety measures. They speak with the authority of expertise, and the expertise is real. But the expertise is also partial. It addresses what the technology can do without adequately addressing what it should do, who it should serve, whose perspectives it encodes, and whose it excludes. On the other side, legislators and regulators speak for society. They propose rules, define boundaries, mandate disclosures. They speak with the authority of democratic representation, and the authority is real. But the representation is also partial. It addresses the effects of the technology on human populations without adequately understanding the mechanisms through which those effects are produced — the training-data composition, the optimization targets, the architectural decisions that determine what the system privileges and what it suppresses.

Neither side speaks for the technology-society hybrid that AI actually is. And the gap between them — the space where the technical meets the political, where the mechanism meets the effect, where the optimization target meets the human consequence — is the space where the most important questions live and the space that the current governance architecture systematically fails to address.

What would it mean to close this gap? Not through the fantasy of a single institution that combines technical expertise and democratic legitimacy — such institutions do not exist and probably cannot. But through deliberative structures that bring the different kinds of knowledge into productive confrontation — structures where the engineer's understanding of the mechanism and the citizen's experience of the effect are forced to confront each other, where neither can claim exclusive authority, where the hybrid nature of the object under governance is reflected in the hybrid nature of the governing process.

Concretely, this means governance bodies for AI that include not only technical experts and elected officials but also the actants whose characteristics shape outcomes and are currently unrepresented. The training data that encodes specific cultural perspectives and excludes others — its composition should be a matter of public deliberation, not a proprietary secret. The optimization targets that determine what the model privileges — these are value choices masquerading as technical parameters, and they should be subject to the same scrutiny as any other value choice that affects millions of people. The infrastructure dependencies — the energy consumption, the semiconductor supply chains, the geopolitical arrangements that determine who can build frontier models and who cannot — these are political facts that shape the distribution of AI capability across the globe, and they should be visible in the governance process.

The Anthropic approach to what it calls Constitutional AI — the attempt to encode values and behavioral constraints directly into the model's training — is, whether its designers frame it this way or not, an experiment in hybrid governance. It brings evaluative judgments (what the model should and should not do) into the technical process (how the model is trained), blurring the constitutional line between facts and values in a way that Latour would have recognized immediately. The constitution is not being respected. It is being renegotiated — inside the training loop, by a small number of engineers at a single company, making value choices that affect every user of the system.
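
What "encoding values into training" means mechanically can be sketched in a few lines. A minimal illustration, loosely modeled on the critique-and-revision stage such training describes; the model callable, the function name, and the two principle strings are hypothetical placeholders, not an actual constitution:

```python
from typing import Callable

# Sketch of a constitutional critique-and-revision loop. Hypothetical
# throughout: `model` stands for any text-generation callable, and the
# principles below are illustrative placeholders.

PRINCIPLES = [
    "Choose the response least likely to cause harm.",
    "Choose the response most honest about its own uncertainty.",
]

def constitutional_revision(model: Callable[[str], str], prompt: str) -> str:
    """Draft a response, then revise it once against each principle."""
    draft = model(prompt)
    for principle in PRINCIPLES:
        critique = model(
            "Critique the response below against the stated principle.\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = model(
            "Rewrite the response so the critique no longer applies.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft  # in training, revised drafts become fine-tuning targets
```

The sketch locates the politics: the strings in PRINCIPLES are evaluative commitments about harm and honesty, yet they enter the system as ordinary data, written and revisable by whoever controls the loop.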

This is not a criticism of Constitutional AI. It may be the best available approach to a genuinely difficult problem. But it is a governance arrangement, and it should be recognized as one — subjected to the same scrutiny, the same demand for transparency, the same requirement of democratic accountability that any governance arrangement demands. The fact that the value choices are embedded in technical processes does not make them less political. It makes them more political, because it makes them harder to see, harder to contest, and harder to change through the ordinary mechanisms of democratic deliberation.

The parliament of networks — the governance structure adequate to the hybrid reality of AI — does not yet exist. What exists are fragments: regulatory frameworks that address the supply side, industry self-governance that addresses the technical side, academic analysis that addresses the conceptual side, and public discourse that addresses the emotional side. These fragments do not add up to a coherent governance structure. They coexist without integration, each addressing its own slice of the hybrid while the hybrid itself — the technology-society assemblage that AI actually is — remains ungoverned in its totality.

Building the parliament is not a philosophical exercise. It is an institutional design problem of the first order. And the design must reflect the insight that runs through every chapter of this analysis: that the networks through which AI operates are composed of human and non-human actants whose characteristics jointly determine the outcomes, and that governance which addresses only the human actants — which regulates human behavior while leaving the non-human actants' characteristics unexamined — is governance of the surface, not the substance.

The parliament does not require giving AI systems the vote. It requires giving their characteristics — their biases, their tendencies, their dependencies, their failure modes — a place in the deliberative process. It requires making visible what the black box conceals, converting what circulates as settled fact back into matters of concern, and building institutions capable of the sustained, hybrid, technically informed yet democratically accountable deliberation that the most consequential technological transformation in human history demands.

Whether the parliament is built deliberately or left to emerge from the collisions of interest and accident — that is the political question of the AI age. And the answer will determine not what AI can do, which is largely a technical question, but what AI will mean — which is a question that no technical expertise can answer and no democratic process can afford to leave unanswered.

---

Chapter 10: Reassembling the Builder

What does the builder look like when the network has been traced honestly — when the actants have been followed, the translations mapped, the obligatory passage points identified, the black boxes opened, and the matters of concern distinguished from the matters of fact?

Not the individual celebrated in the triumphalist narrative. Not the victim mourned in the catastrophist one. Something more complex, more interesting, and more demanding of governance than either account suggests.

The reassembled builder is a node in a reconstituted network. This formulation sounds abstract, but it has concrete and immediate implications for every question the AI moment raises — about credit, about responsibility, about education, about economic distribution, about the meaning of creative work in a world where the collectives that produce it have become simultaneously vaster and less visible than at any point in human history.

Start with credit. The Orange Pill frames AI-assisted work as human creativity amplified by machine capability. The human provides the vision; the machine extends the reach. In this framing, credit flows naturally to the human: the vision was theirs, and the machine merely carried it further. But the network analysis has shown, across eight chapters of detailed tracing, that the framing is wrong — not maliciously wrong, but structurally wrong. The output of AI-assisted work is a network product. It reflects the human's intention and Claude's transformative mediation and the training data's encoded knowledge and the infrastructure's enabling constraints and the temporal pressure's constitutive effects. To credit the human alone is to perform the purification that makes the invisible collective disappear — to foreground one node and background the network that constituted it.

This does not mean the human deserves no credit. The human's contribution — the specific synthesis that only their particular biography, experience, and position in the network makes possible — is real, irreplaceable, and worthy of recognition. What it means is that the credit system itself needs to account for the distributed nature of the production. Just as the credit system for scientific publications has evolved to reflect the collaborative nature of modern research — with multiple authors, contribution statements, and institutional acknowledgments — the credit system for AI-assisted work needs to evolve to reflect the network that produces it.

Segal's transparency about his collaboration with Claude is a step in this direction. But transparency about the most visible non-human collaborator leaves the rest of the invisible collective unacknowledged. The training data contributors, the infrastructure operators, the researchers whose techniques made the model possible — their contributions are as essential as Segal's or Claude's, and they are completely invisible in the credit structure.

Now responsibility. The current framework assigns responsibility to the human — "human in the loop," "human oversight," "human accountability." The assignment is based on the myth of the human agent: the human decides, the machine executes, and therefore the human is responsible for the output. The intermediary-to-mediator analysis has shown that this assignment does not correspond to the actual distribution of agency in the network. Claude transforms intention in the process of realizing it, introducing its own characteristics — training-data biases, architectural tendencies, processing preferences — into every output. The human who accepts the output is responsible for the acceptance, but the output itself is a joint product whose characteristics reflect the contributions of multiple actants.

A responsibility framework adequate to the reconstituted network would not eliminate human responsibility. It would situate it within a broader distribution that also addresses the responsibilities of the organizations that build AI systems (for the characteristics of the mediator they have created), the institutions that deploy them (for the networks into which they are introduced), and the governance structures that oversee them (for the adequacy of the deliberative processes through which AI's characteristics are examined and regulated). Responsibility, like agency, is a property of the network, not of any individual node.

Then education. The prescriptions in The Orange Pill — teach questioning over answering, judgment over execution, curiosity over compliance — are sound as far as they go. But the network analysis suggests they do not go far enough. The capacity that matters most in the reconstituted network is not any individual skill — not prompting, not evaluating, not even questioning. It is the capacity to see the network — to understand the configuration of actants that produces the outcomes one depends on, to identify the translations through which intention is transformed, to recognize the specific characteristics of the mediators through which one's work passes.

This is a form of literacy — network literacy, the capacity to read the infrastructure through which AI-assisted work is produced. It is the ability to ask: what is in the training data that shapes this output? What optimization targets determined this model's behavior? What biases are embedded in the processing, and how might they be shaping the result? What actants am I depending on that I cannot see? Network literacy does not replace technical skill or creative judgment. It supplements them with the structural understanding needed to navigate a world in which the most consequential actants in one's creative and productive networks are opaque, powerful, and operating at a scale that makes individual vigilance insufficient.

Finally, meaning. The question that Segal's twelve-year-old daughter asks — "What am I for?" — is, from the perspective of the reassembled builder, not a question about the individual. It is a question about the network. What the individual is "for" depends on the network they participate in — on the relationships they maintain, the contributions they make, the specific angle of vision they bring to the collective enterprise of building, creating, and knowing. The individual is not diminished by this reframing. They are properly located — as a node whose value is constituted by the network but whose specific contribution is irreplaceable, because no other node occupies exactly this position with exactly this biography and exactly this capacity for care.

The reassembled builder holds two truths simultaneously. The builder is a person — a creature that loves and fears and wonders, that asks questions no machine will originate, that cares about outcomes in the embodied way of a being with finite time and particular attachments. This is the truth that The Orange Pill captures with genuine emotional force. And the builder is a node — an actant in a configuration of other actants, whose capabilities are constituted by the network, whose outputs are network products, whose position and power are determined by structural features that no individual controls. This is the truth that actor-network theory makes visible.

Both truths are necessary. Neither is sufficient. The person without the network has vision but no reach. The node without the person has reach but no direction. The intersection — the specific, biographically particular, irreplaceable human operating within networks of unprecedented power and complexity — is where the future of building lives.

The networks have been reconstituted. New actants have entered — actants of a power and breadth without precedent in the history of technology. Old actants have been repositioned — their structural power dissolved, their contributory value revealed, their relationships to each other fundamentally altered. The obligatory passage points have migrated. The black boxes have multiplied. The translation chains have collapsed and reformed. The invisible collectives have grown vaster and less visible.

Understanding these reconstituted networks — their structure, their dynamics, their characteristic distortions, their emergent capabilities — is the analytical work. Governing them — building the deliberative structures, the institutional arrangements, the educational practices, the cultural norms that can direct their enormous productive power toward outcomes that serve the broadest possible range of participants — is the political work.

Neither the analytical work nor the political work can be done from within the myth of the human agent. The myth tells comforting stories about individuals empowered by tools. The network tells more complex — and more useful — stories about configurations of actants producing outcomes through translations that no single participant fully controls.

The actants have declared themselves. The question is whether the governance structures built around them will match the actual complexity of the networks they are meant to govern — or whether the myth will be patched and extended, the individual celebrated as the source, the tool dismissed as the instrument, and the vast collective that makes all building possible left unacknowledged, ungoverned, and unrepresented in the assemblies where the future is decided.

The reassembly is not a philosophical exercise. It is the prerequisite for governance that works. And governance that works — governance that directs the most powerful networks in human history toward outcomes that serve the full range of their participants — is the most consequential building project of the age.

---

Epilogue

The speed bump got me.

Not the grand arguments about distributed agency or the migration of obligatory passage points — though those rewired how I think about the teams I build and the products I ship. What arrested me was the simplest, most Latourian observation imaginable: a speed bump is an actant. A lump of asphalt that nobody credits with agency modifies the behavior of every driver who encounters it more reliably than any traffic law, any public safety campaign, any earnest appeal to shared responsibility. The speed bump does not persuade. It does not reason. It does not care. It acts — through its sheer physical presence in the network — and the network reorganizes around it.

I have been building speed bumps my entire career without knowing it. Every product decision, every architectural choice, every interface constraint I have shipped into the world has modified the behavior of the people who encountered it — not through the force of my vision, which is the story I preferred to tell, but through the structural characteristics of the thing I placed in the network. The thing acted. I was one of its authors, but I was not its only author, and I was certainly not in control of what it did once it was out there, embedded in networks I could not fully see.

Latour died in October 2022, weeks before ChatGPT crossed the threshold that reshaped my industry. He never saw the tools I describe in The Orange Pill. He never experienced the vertigo of watching a machine produce, in minutes, work that would have taken his graduate students weeks. He never faced the question that kept me up in Trivandrum — whether the twenty-fold productivity I was witnessing was liberation or a new form of concealment, the invisible collective growing vaster and more hidden with every prompt I typed.

But his framework — follow the actants, trace the translations, refuse to pre-sort the world into active humans and passive tools — turns out to be the most precise instrument I have found for understanding what actually happened when Claude entered the networks through which I build. Not what the triumphalists say happened (the human was empowered). Not what the catastrophists say happened (the human was displaced). What actually happened: the network was reconstituted, the translations changed, the passage points migrated, and the outputs became joint products of configurations so complex that no single participant — human or otherwise — can honestly claim to be the source.

The confession that The Orange Pill keeps circling — that I do not fully know who wrote the book — is, I now realize, a Latourian confession. Not a confession of weakness. A confession of network honesty. The book was produced by a configuration of actants whose contributions I can partially trace but cannot fully separate. My questions. Claude's associations. The training data's accumulated human knowledge. The deadline pressure that compressed my thinking into shapes it would not otherwise have taken. The editor who carved the prose into something tighter than I could have managed alone. The invisible collective — vast, unacknowledged, essential — that made every sentence possible.

I am still the builder. But the builder, reassembled, looks different from the figure I described in The Orange Pill. Less sovereign. More embedded. Less the source of the signal and more a specific kind of node — irreplaceable in my particular position, constituted by a network I did not design, producing outcomes I cannot fully predict, responsible for choices whose consequences extend through translations I cannot fully trace.

That is not a comfortable place to stand. But it is an honest one. And honesty about the network — about who and what actually participates in producing the things we build — is the prerequisite for building structures that direct the network's enormous power toward outcomes worth wanting.

The actants are declaring themselves. They have been declaring themselves since long before I took the orange pill. The question was never whether to listen. The question was whether I would hear what they were saying over the sound of the story I preferred to tell about myself.

I am listening now.

Edo Segal

---

Back Cover

You are not the author. You are the most visible node.

The sooner you see the difference, the sooner you can govern what you build. Every story told about AI follows the same script: a human has a vision, a machine executes it, and the human gets the credit. Bruno Latour spent his career proving that this script — the clean separation of active subjects from passive tools — is the deepest lie modern thought tells itself. His method was radical in its simplicity: follow the actants. Trace who and what actually participates in producing the outcome. Let the network show you what your mythology hides.

This book applies Latour's actor-network theory to the AI revolution with forensic precision. It traces the translations through which human intention passes on its way to artifact, reveals the obligatory passage points where power concentrates, opens the black boxes whose smoothness conceals their biases, and exposes the invisible collectives that make every "solo builder" possible.

The result is not a critique of AI or a celebration of it. It is a map of what is actually happening — the reconstituted networks, the migrated power, the vast hidden labor — that neither the triumphalists nor the catastrophists can see from inside the myth of the human agent.
