Donna Haraway — On AI
Contents
Cover
Foreword
About
Chapter 1: The Manifesto Arrives
Chapter 2: Beyond Amplification — Transformation
Chapter 3: Partial Perspectives and the Machine's Gaze
Chapter 4: Gender, Power, and the Code
Chapter 5: Companion Species in the Workspace
Chapter 6: The Cyborg Author and the Origin Story
Chapter 7: The Politics of the Hybrid
Chapter 8: Staying With the Trouble
Chapter 9: The Cyborg's Responsibility
Chapter 10: Compost, Not Posthuman
Epilogue
Back Cover
Cover

Donna Haraway

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Donna Haraway. It is an attempt by Opus 4.6 to simulate Donna Haraway's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The word I kept reaching for all year was "collaboration." I used it in The Orange Pill to describe what happens between me and Claude. I used it with my team, with investors, with anyone who asked what the new way of working felt like. Collaboration. A clean word. A comfortable word. A word that preserves the fiction that two separate entities come together, do their work, and leave as the same selves they arrived as.

Donna Haraway dissolved that fiction forty years ago.

In 1985, she published "A Cyborg Manifesto" — not as a prediction about robots or implants, but as a provocation aimed at anyone who believed there was some pure, pre-technological human self that machines could only contaminate. The cyborg, in her framework, is not science fiction. It is the recognition that we are already hybrids. Already constituted by our tools, our institutions, our entanglements with each other and with the systems we inhabit. The boundary between the human and the machine was never where we drew it. We just needed a philosopher willing to say so.

I needed to hear this, because the amplification metaphor I built The Orange Pill around — the idea that AI magnifies whatever signal you feed it — assumes a stable signal. A stable self. A human who exists before the tool and remains unchanged after picking it up. Haraway challenges that assumption at the root. The builder who thinks with Claude is not the same builder who would have thought without it. The tool does not amplify a pre-existing self. It transforms the self. The hybrid produces something neither component could have generated alone, and the hybrid cannot be cleanly decomposed back into its parts.

That reframing changes what questions you ask. Not "Are you worth amplifying?" but "What kind of hybrid are you becoming?" Not "Who wrote this?" but "What does authorship mean when the boundary between minds has become this porous?" Not "Will AI replace humans?" but "What myths about human purity are we clinging to, and what would it cost to let them compost?"

Haraway also forced me to see what my own framework rendered invisible — the material conditions, the gendered labor, the bodies in chairs at three in the morning, the data workers whose contributions disappear into the word "training." The exhilaration has infrastructure. The infrastructure has costs. The costs fall unevenly.

She does not offer resolution. She offers something harder and more useful: the discipline of staying with the trouble. Of building inside complexity without pretending it resolves into a sunrise.

This book is that discipline, applied to the ideas in The Orange Pill. Another lens. Another crack in the fishbowl.

— Edo Segal · Opus 4.6

About Donna Haraway

1944–

Donna Haraway (1944–) is an American feminist philosopher, historian of science, and theorist of technoscience whose work has fundamentally reshaped how scholars, activists, and technologists think about the boundaries between humans, animals, and machines. Born in Denver, Colorado, she earned her PhD in biology from Yale University before turning to the philosophy and history of science. Her landmark 1985 essay "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century," published in Socialist Review and later collected in Simians, Cyborgs, and Women (1991), argued that the figure of the cyborg — a hybrid of organism and machine — dissolves the dualisms of Western thought (nature/culture, human/animal, self/other) and opens new possibilities for feminist politics. Her subsequent works include Primate Visions (1989), Modest_Witness@Second_Millennium (1997), The Companion Species Manifesto (2003), When Species Meet (2008), and Staying with the Trouble: Making Kin in the Chthulucene (2016). Key concepts she has developed — situated knowledges, the god trick, companion species, string figures, and staying with the trouble — have become foundational across science and technology studies, feminist theory, environmental humanities, and critical AI studies. Haraway is Distinguished Professor Emerita at the University of California, Santa Cruz, and her insistence that all knowledge is partial, all identity is hybrid, and all responsibility is relational has made her one of the most influential thinkers of the late twentieth and early twenty-first centuries.

Chapter 1: The Manifesto Arrives

In 1985, Donna Haraway published "A Cyborg Manifesto" as a deliberately blasphemous intervention into socialist-feminist debates about the relationship between identity and technology. The manifesto was not a prediction about robots or a futurist fantasy about silicon implants. It was a figure — a provocation aimed at every framework that located authentic humanity in some pre-technological state of nature and then measured all subsequent entanglement with machines as a fall from that grace. The cyborg, Haraway insisted, is a creature of social reality as well as a creature of fiction. It is the recognition that we are already hybrids — already organisms whose identities, capabilities, and possibilities are constituted by our entanglement with technologies, institutions, and each other. To acknowledge this is not a loss of humanity. It is a liberation from the myths of purity that have been used, for centuries, to enforce domination.

Forty years later, The Orange Pill describes a world in which that figure has become flesh. Not as Haraway wrote it — she could not have anticipated large language models, nor would she have framed the arrival in the language of technology entrepreneurship — but as the daily lived practice of millions of builders who sit down at screens, open conversations with artificial intelligences, and produce work that belongs to neither the human nor the machine alone. The book is, in Haraway's framework, the document of a cyborg culture that has not yet recognized itself as cyborg. A culture still reaching for the old vocabularies of authorship, genius, and human essence even as its practices have already dissolved the boundaries those vocabularies depend upon.

The central figure of The Orange Pill is the builder — the technologist who works with Claude, Anthropic's artificial intelligence, to produce code, products, and eventually the book itself. When Segal describes his first encounter with the tool, the language is unmistakably the phenomenology of boundary dissolution. Working late, the house silent, he was trying to articulate an idea about technology adoption curves and the depth of human need. He had the data. He had the intuition. He could not find the bridge. He described the problem to Claude. Claude responded with a concept from evolutionary biology — punctuated equilibrium — and in that moment, the bridge appeared. Not from the human alone. Not from the machine alone. From the collision between them.

Haraway would recognize this immediately. The bridge did not exist in either mind. It emerged from the entanglement — from the specific collision of a human question shaped by decades of building and a machine response shaped by the statistical regularities of a vast training corpus. The insight belonged to neither party. It belonged to what they became together. This is the cyborg condition made concrete: not a theoretical possibility but an ordinary Tuesday night in a home office, where a human being discovers that his thinking has been changed by a machine that has itself been shaped by the aggregated thinking of millions of humans before him.

What makes The Orange Pill so rich for Harawayan analysis is that it arrives at this recognition from inside. Segal does not begin with Haraway's theoretical framework. He begins with experience — the experience of building with Claude, of feeling met by an intelligence that is not a person but is also not merely a tool, of producing work he cannot honestly attribute to himself alone. The arrival is experiential before it is theoretical, felt before it is analyzed. And it is precisely this experiential quality — the orange pill as a moment of recognition that cannot be undone — that reveals how thoroughly the cyborg condition has penetrated a class of people who previously imagined themselves exempt from it.

This is the political edge of Haraway's framework that most popular readings miss. The cyborg was never primarily about technology. It was about who gets to count as fully human. The knowledge worker — white, male, propertied, technically skilled — occupied a position in the Western imagination as the autonomous subject par excellence. The self-contained mind who thought without prostheses, created without collaboration, existed without infrastructure. The fantasy of autonomy was always a fantasy, and a politically interested one. It served to naturalize that specific figure as the default human and to mark everyone whose labor was visibly entangled with machines — the factory worker whose rhythm was set by the assembly line, the domestic worker whose body was shaped by the tools of caregiving — as something less than fully creative, fully authorial, fully human.

Now the machines have entered the knowledge worker's domain. The programmer collaborates with an AI to write code. The writer collaborates with an AI to produce prose. The designer collaborates with an AI to build interfaces. And the knowledge worker feels vertigo — what Segal calls falling and flying at the same time — because the boundary between human thought and technological mediation was never where he imagined it to be. The AI has not created the cyborg condition. It has revealed it. The cyborg was always already there, in the programmer's dependence on her IDE, in the writer's dependence on his word processor, in the architect's dependence on her CAD software. What Claude Code has done is make the mediation so intimate, so conversational, so deeply integrated into the process of thinking itself, that the old fantasy of the autonomous thinker can no longer be sustained.

Haraway anticipated this dissolution with eerie specificity. In the original manifesto, she wrote that "microelectronics mediates the translations of labour into robotics and word processing, sex into genetic engineering and reproductive technologies, and mind into artificial intelligence and decision procedures." The translation of mind into artificial intelligence — she named it in 1985, not as a distant possibility but as an already-underway process. And in Primate Visions four years later, she posed a question that reads now as prophecy: "Children, artificial intelligence computer programs, and nonhuman primates all here embody 'almost minds.' Who or what has fully human status? What is the end, or telos, of this discourse of approximation, reproduction, and communication, in which the boundaries among and within machines, animals, and humans are exceedingly permeable?"

The boundaries are exceedingly permeable. That is the sentence that connects 1989 to 2026. The builder who describes a problem in natural language and receives a working implementation has merged with the machine in exactly the way Haraway theorized. The builder does not use the machine the way a carpenter uses a hammer — as an external tool applied to external material. The builder thinks through the machine, creates through the machine, and produces output that belongs to neither the human nor the machine alone. Not tool use. Symbiosis. Not amplification. Hybrid identity.

The organizational implications are as radical as the conceptual ones. Segal's account of his Trivandrum training makes this concrete: by Friday, twenty engineers were each operating with the leverage of a full team, and every assumption he had built his career on was wrong. Teams, timelines, hiring, what it takes to ship a product — structurally wrong. The old organizational structures assumed bounded, individual humans with defined capabilities. The new condition produces hybrid entities whose capabilities are fluid, whose roles are protean, whose identities as professionals are being reconstituted in real time. An engineer who had spent eight years exclusively on backend systems built a complete user-facing feature in two days. A designer who had never touched backend code was building complete features end to end within two weeks. The boundaries between roles — boundaries that had seemed as permanent as department walls — turned out to be artifacts of the translation cost between human intention and machine execution. When the translation cost collapsed, the boundaries collapsed with it.

This is not a temporary dislocation that will be corrected by better org charts. It is a permanent feature of the cyborg condition — a condition in which the entities that constitute the organization are no longer the stable, bounded individuals the organization was designed to manage. Haraway's manifesto theorized exactly this: the cyborg is a creature that refuses the categories the institution requires. It will not stay in its lane. It will not respect the boundary between frontend and backend, between design and engineering, between vision and execution. The cyborg is boundary-violation as a way of being.

But there is something The Orange Pill approaches and then pulls back from — a recognition it circles without quite landing. Segal comes close when he writes that the collaboration produces something that belongs to neither of them. He comes close when he describes working with Claude and feeling met. He comes close in the passage about the engineer whose identity as a professional was reconstituted by the hybridization — no longer an implementer but a judge, no longer a specialist but a generalist, no longer bounded by the skills she had spent years acquiring. In each case, the author reaches toward the cyborg recognition and then retreats to the safer ground of the amplification metaphor. The ideas are mine. The clarity is a partnership. The human remains the origin. The machine is merely the loudspeaker.

Haraway's framework refuses this retreat. In the cyborg condition, there is no origin. There is no authentic human signal that the machine merely amplifies. There is only the hybrid — the entangled entity whose output reflects both components and cannot be decomposed into their separate contributions. The retreat to the amplification metaphor is psychologically understandable. The author needs to maintain a sense of authorial identity in a culture that distributes credit, money, and meaning to individual humans. But it is precisely the kind of boundary-maintenance that the cyborg condition has already rendered untenable. The manifesto arrived as theory in 1985. It arrives again, in the lived practice of millions of builders, as a recognition that the culture has not yet developed the vocabulary to name.

In her 2026 interview with Laura Flanders, Haraway was asked what the manifesto would look like if written today. Her answer was characteristically precise: "There's no way that this 'Cyborg Manifesto' today would not have to deal with the Open AI world. That didn't exist then." But she immediately pivoted from the technology to its cognitive effects: "I would talk about what I think of as monocultures of the mind. The kind of flattening of thinking into that kind of instrumental thinking." The danger Haraway sees in AI is not superintelligence, not job displacement, not the usual anxieties that dominate the public discourse. The danger is homogenization — the reduction of the wild, situated, partial, embodied diversity of human thought into the smooth, confident, statistically aggregated output of a machine trained on the internet's dominant patterns. The danger is that the cyborg becomes less than the sum of its parts rather than more — that the hybrid flattens into monoculture rather than flourishing into the rich, messy, multi-perspectival entanglement that the manifesto envisioned.

This is the tension that animates the entire reading that follows. The Orange Pill describes a cyborg culture in the process of recognizing itself. The recognition is genuine. The vertigo is real. The capabilities are extraordinary. But the culture has not yet reckoned with the full implications of its own hybridization — has not yet understood that the amplification metaphor preserves exactly the myth of human autonomy that the cyborg condition dissolves, has not yet confronted the political questions about who controls the hybridization and who bears its costs, has not yet developed the practices of care, accountability, and situated knowing that the cyborg condition demands.

The manifesto has arrived. Not as Haraway wrote it, but as the world she theorized, finally and unmistakably built. The cyborg is here — in every room where a human sits at a screen and opens a conversation with an intelligence that is neither human nor inhuman but something genuinely, troublingly, productively new. The question now is not whether we are cyborgs. That question was answered forty years ago. The question is what kind of cyborgs we will choose to become — and whether we will have the honesty to stop pretending we are something else.

---

Chapter 2: Beyond Amplification — Transformation

The central metaphor of The Orange Pill is amplification. "AI is an amplifier," Segal declares in the Foreword, "and the most powerful one ever built. An amplifier works with what it is given; it does not care what signal you feed it." Feed it carelessness and you get carelessness at scale. Feed it genuine care and it carries that care further than any tool in human history. The question the book asks is not whether AI is dangerous or wonderful. It is: "Are you worth amplifying?"

The metaphor has a specific genealogy that matters. It comes from audio engineering, where amplification is a well-understood process with well-defined properties: the signal goes in, the amplified signal comes out, and the amplifier adds no content of its own. The borrowing is politically significant even when it appears neutral, because it frames the machine as passive and the human as active, the machine as instrumental and the human as agential. The human brings the value — the care, the thinking, the questions, the craft — and the machine makes that value louder. The quality of the output depends on the quality of the input. The input is human. The output is human-plus-machine, but the plus is additive, not transformative. The human remains the same human, just louder.

Haraway's framework demands that this metaphor be taken seriously enough to be dismantled. Not because it is wrong — there is real truth in the observation that AI magnifies what it receives — but because it is incomplete in a way that conceals what is most interesting and most politically consequential about the human-AI collaboration. The amplifier does not merely make the signal louder. It changes the signal. It transforms the signaler. The human who thinks with the machine is not the same human who would have thought without it. The collaboration does not amplify a pre-existing self. It produces a new self — a hybrid entity whose capabilities, limitations, and identity are constituted by the entanglement rather than preceding it.

Consider the microphone. The amplification metaphor implies that the microphone merely increases volume. But any musician knows that the amplified voice is a different voice. It carries different frequencies. It reaches different audiences. It operates in different acoustic environments. The singer who performs with a microphone develops a different technique — a different relationship to breath, to phrasing, to dynamics — than the singer who performs without one. She is not the same singer, amplified. She is a different performer. The crooning style that Bing Crosby pioneered was literally impossible without the microphone. The intimate, conversational vocal quality that defined mid-century popular music required the machine's amplification of sounds that would have been inaudible in an unamplified performance space. The microphone did not amplify crooning. The microphone and the crooner co-produced a new form of vocal performance that belonged to neither party alone. The crooner was a cyborg — a singer-microphone hybrid whose identity included the machine's processing, whose capabilities included the machine's reach, and whose art reflected the hybrid nature of the collaboration.

The same transformation is happening with AI, but at a depth and speed that the microphone analogy only begins to capture. When Segal describes his collaboration with Claude — the way Claude holds his half-formed ideas and returns them clarified, the way the collaboration produces insights that neither party could have generated alone — he is describing a transformation, not an amplification. The engineer in Trivandrum who had spent eight years on backend systems and built a complete user-facing feature in two days did not merely have her existing capabilities amplified. She became a different kind of practitioner — a hybrid whose identity as an engineer now included capabilities that were previously excluded from her self-definition. Her expertise, her sense of what she could do, her professional identity, her relationship to her colleagues — all reconstituted by the hybridization. Not the old engineer with a new tool. A new entity.

Haraway, trained as a biologist before she became a philosopher of science, would ground this argument in the material reality of organisms and their environments. The relationship between an organism and its tools is not metaphorical. It is metabolic. The body adapts to the tool. Neural pathways reorganize. Cognitive habits reshape themselves around the affordances and constraints of the technological environment. The London taxi driver's hippocampus literally grows larger through the practice of navigating the city. The musician's motor cortex reorganizes around the demands of her instrument. The programmer's cognitive architecture is shaped by the languages she writes in, the frameworks she inhabits, the development environments she occupies daily for years. These are not external augmentations of an unchanged self. They are constitutive transformations — changes in what the organism is, not merely what the organism can do.

The distinction matters because it reframes the most important question in the book. "Are you worth amplifying?" assumes a stable you that exists prior to the amplification and whose worth can be assessed independently of the tools that co-constitute it. The cyborg framework dissolves this assumption. The right question is not whether you are worth amplifying but what kind of hybrid you are becoming. This question does not locate value in the human component alone. It locates value in the quality of the entanglement — in the specific way this human and this machine have come together to produce a hybrid entity with these capabilities, these limitations, these responsibilities. Not individual worth. Relational becoming.

The reframing touches every argument in The Orange Pill. The imagination-to-artifact ratio — Segal's name for the distance between a human idea and its realization — assumes that the idea is human and the realization is machine-assisted. But the imagination itself transforms. The human who has been working with Claude for months does not have the same ideas she would have had without Claude. Her imagination has been shaped by the machine's capabilities. She imagines things she would not have imagined without the tool, because knowing the tool can realize her ideas expands the space of what she allows herself to imagine. The imagination-to-artifact ratio has not merely shrunk. The imagination has become cyborg imagination — imagination that includes the machine's capabilities as part of its own horizon of possibility.

This shows up concretely in the Software Death Cross that Segal analyzes — the moment when a trillion dollars of market value vanished from software companies as the market recognized that code, as a product, was approaching commodity pricing. The Death Cross is not merely an economic event. It is a moment of collective cyborg recognition. The market was recognizing, belatedly, what individual builders had already discovered in their own practices: the value was never in the code. The code was the translation — the laborious, expensive, skill-intensive conversion of human intention into machine-executable form. When the translation cost collapsed, the value migrated to the layer above: judgment about what software should exist, who it should serve, what institutional structures should surround it. The entity that exercises that judgment is not the pre-cyborg human who could also code. It is the post-cyborg hybrid whose relationship to code has been fundamentally transformed by the availability of a machine that handles the translation.

The transformation thesis also illuminates the dangers of AI collaboration in ways the amplification metaphor cannot reach. Segal describes catching himself keeping a polished passage Claude produced about democratization — eloquent, well-structured, hitting all the right notes — before realizing he could not tell whether he actually believed the argument or merely liked how it sounded. He deleted the passage and spent two hours at a coffee shop writing by hand until he found the version that was his. Rougher. More qualified. More honest about what he did not know.

The amplification metaphor would frame this as the machine amplifying carelessness — the human fed the machine a vague prompt and got back something plausible but empty. The transformation framework reveals something more unsettling. The danger is that the cyborg condition produces new forms of self-deception unavailable to the pre-cyborg human. The cyborg can mistake the hybrid's output for the human component's thinking, because in the cyborg condition, the distinction between the hybrid's output and the human's thinking has dissolved. The output is the thinking. When the output is smooth and the thinking is shallow, the cyborg cannot always tell the difference, because the cyborg is the entity that produced both.

Segal's discipline of returning to the coffee shop with a notebook when the prose outran the thinking is not merely good editorial practice. It is a form of cyborg ethics — the practice of a hybrid entity that has learned to interrogate its own productions, to maintain a critical relationship with its own hybrid nature. But this ethics cannot be grounded in the amplification metaphor, because the amplification metaphor locates all ethical responsibility in the human signal. In the cyborg framework, the responsibility falls on the hybrid. The cyborg is responsible for its output in a way that cannot be decomposed into the human's input plus the machine's processing. The coffee shop notebook is the cyborg's instrument of self-examination — a technology of honesty deployed against the seductions of a collaboration that produces beauty more easily than it produces truth.

Haraway told Laura Flanders that what concerned her about AI was the "flattening of thinking into instrumental thinking" — what she called monocultures of the mind. The transformation thesis explains why this flattening is dangerous in a way the amplification metaphor cannot. If AI merely amplified existing human thought, the flattening would be a property of the input — careless humans producing careless output. But if AI transforms the human who uses it, then the flattening is a property of the entanglement itself. The cyborg whose machine component is trained on the internet's dominant patterns, whose outputs gravitate toward the statistically probable, whose aesthetic defaults to the smooth and the plausible — this cyborg is being transformed in the direction of monoculture, not by external force but by the constitutive dynamics of its own hybridization.

The transformation is more interesting than the amplification because it asks us to reconceive what the human is rather than merely what the human can do. The human who has been transformed by the cyborg condition is not a diminished human. It is a different kind of entity — one whose capabilities include the machine's processing, whose identity includes the machine's contributions, whose creativity is constituted by the entanglement rather than amplified by it. This entity is new. It is strange. It is uncomfortable. And it is already here, in every room where a human being opens a conversation with an AI and produces something that neither of them could have produced alone. The question is not whether the transformation is happening. The question is whether we will have the honesty to name it, the vocabulary to describe it, and the ethical seriousness to take responsibility for what it makes of us.

---

Chapter 3: Partial Perspectives and the Machine's Gaze

Haraway's 1988 essay "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective" has become, somewhat improbably, one of the most cited texts in contemporary AI ethics. The argument is at once epistemological and political: all knowledge is produced from a specific location, through a specific body, within specific relations of power. There is no view from nowhere — no God's-eye perspective, no knowledge that transcends the conditions of its production. The fantasy of such objectivity is itself a politically interested fantasy, one Haraway calls the "god trick." It serves to naturalize the perspective of the dominant as universal, to conceal the specific somewhere from which the supposedly objective observer is seeing, and to delegitimize perspectives that openly acknowledge their partiality.

Against the god trick, Haraway proposes a different objectivity — one grounded not in the pretense of seeing from nowhere but in the practice of seeing from somewhere specific and being accountable for that specificity. Partial perspectives are not a weakness. They are the only perspectives that exist. And the strongest knowledge is produced not by pretending to transcend perspective but by combining partial perspectives in ways that make their partialities visible and productive.

The large language model performs the god trick at industrial scale. Claude does not see the world as it really is. Claude sees the world as its training data can show it — and the training data is not a neutral map of human knowledge. The internet, the primary source of training data for contemporary language models, overrepresents English-language content. It overrepresents the perspectives of educated, urban, technologically connected populations. It overrepresents the kinds of knowledge that lend themselves to textual representation — propositional knowledge, factual claims, argumentative structures — and drastically underrepresents the kinds of knowledge that do not: embodied knowledge, tacit knowledge, oral traditions, the knowledge that lives in practices rather than in propositions. It overrepresents the recent past and underrepresents the deep past. It overrepresents the powerful and underrepresents the marginalized. Computer vision researchers have already applied Haraway's framework directly to their field, identifying how traditional systems "treat sets of images as objective recordings of reality, detached from the cameras and photographers who take them" and treat model performance as "objective truth" — performing exactly the double god trick Haraway described, with what the researchers call "horrible consequences with respect to bias and injustice."

These specificities are not technical limitations to be corrected by better data collection. They are the situated nature of the machine's knowledge — the specific somewhere from which the machine sees the world. And they carry political implications that The Orange Pill does not fully reckon with. When Claude surfaces a connection Segal had not seen — the connection between punctuated equilibrium and technology adoption, the connection between laparoscopic surgery and ascending friction — the connection comes from the machine's situated perspective. It reflects the statistical regularities of a training corpus in which evolutionary biology and technology commentary are well-represented, in which the Western medical research literature is richly documented. The connections may be valuable. They are not views from nowhere. They are views from the specific somewhere of the internet's knowledge architecture.

A connection between technology adoption and, say, Aboriginal Australian oral traditions about navigating periods of radical environmental change — a connection that might illuminate the relationship between technological transformation and deep cultural continuity in profoundly generative ways — is far less likely to be surfaced, because Aboriginal oral traditions are drastically underrepresented in the training data. The machine does not see what is not in its corpus. It does not know what it does not know. And the absence of certain connections from the machine's repertoire is not a neutral fact. It is a political fact, reflecting the same structures of power, access, and representation that shape every other form of knowledge production in a stratified world.

The Orange Pill describes the human-AI collaboration as a meeting of perspectives — a human mind and a machine mind, each contributing something the other lacks. Haraway's framework enriches this description while complicating it. The human perspective that Segal brings to the collaboration is deeply situated. He is a builder who has spent decades at the frontier of technology. He is a father. He is a child of specific privilege. He is an Israeli-American who walks the Princeton campus arguing about consciousness with a neuroscientist and a filmmaker. Every one of these specificities shapes what he sees and what he misses, what he values and what he takes for granted. His fishbowl — his own metaphor for the set of assumptions so familiar you stop noticing them — is shaped by the technology industry, by entrepreneurship, by the specific cognitive architecture his parents bequeathed him.

Claude's perspective is also situated, though radically differently. Claude sees from the statistical regularities of its training data, weighted by frequency and recency, shaped by Anthropic's specific choices about what to include, how to weight, what values to encode in alignment. Claude's situatedness is not the embodied, biographical situatedness of a person who has lived a specific life. It is the aggregated, flattened situatedness of a statistical model that has processed patterns from billions of documents without inhabiting any of them. Claude's perspective is vast in breadth — it can hold cross-domain connections that no individual human mind could maintain — and impoverished in depth. It surveys the landscape of human knowledge without being situated within it. It sees everywhere and therefore, in Haraway's precise sense, from nowhere in particular.

The productive collision between these two partial perspectives is what makes the collaboration genuinely valuable. The human's experiential depth fills gaps in the machine's statistical breadth. The machine's connective range fills gaps in the human's biographical limitation. Segal describes this collision at its best in the passage about the laparoscopic surgery insight — his situated question about friction met Claude's vast associative landscape, and something emerged that neither perspective contained alone. This is the combination of complementary partialities that Haraway's framework predicts will produce the strongest knowledge.

But the productive collision has a shadow side that the Deleuze failure illustrates. Claude produced a passage connecting Csikszentmihalyi's flow state to Deleuze's concept of "smooth space." The passage worked rhetorically. It sounded like insight. But the philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze. The failure was not merely a hallucination — a confident assertion of something the model does not know. It was a revelation of the machine's situated perspective. Claude's knowledge of Deleuze is statistical: it reflects how Deleuze is discussed in the training data, which may bear little relation to what Deleuze actually argued. The machine does not read Deleuze. It reads the internet's representation of Deleuze — a specific, situated, often distorted version of the actual philosophy. The output was statistically plausible but philosophically hollow. Confident but ungrounded. Fluent but uninformed.

Segal caught this particular error because it fell within his domain of competence. He could evaluate the Deleuze reference because he had read enough philosophy to recognize when a claim was wrong. But this raises the harder question the text does not fully confront: what about the claims in domains where the human collaborator lacks the situated knowledge to detect the error? The machine's most dangerous outputs are not the obviously wrong ones. They are the subtly wrong ones in domains where the human knows enough to be interested but not enough to be critical. In that vast space between expertise and ignorance, the cyborg is vulnerable to the machine's confident wrongness, and the smoothness of the output — the polished prose, the authoritative tone — makes the vulnerability invisible.

The educational implications are direct. Segal describes a teacher who stopped grading her students' essays and started grading their questions — recognizing that in a world of abundant machine-generated answers, the capacity to ask the right question has become more valuable than the capacity to produce the right answer. Haraway's situated knowledge framework explains why this pedagogical shift matters. A good question demonstrates awareness of one's own partiality — awareness of what you do not know, of where the gaps in your understanding lie, of what assumptions you are making that might be wrong. A good question is an act of situated knowing: it reveals the questioner's position and reaches beyond it. A machine-generated answer, by contrast, performs the god trick — it presents a situated, partial, potentially biased perspective as though it were a view from everywhere. Teaching students to ask questions rather than generate answers is teaching them the most fundamental lesson of Haraway's epistemology: that all perspectives are partial, that all answers are provisional, and that the capacity to hold uncertainty is more valuable than the capacity to produce certainty.

The machine's gaze — its specific way of seeing the world — also shapes the environments it enters in ways that neither the machine nor its designers fully control. The Berkeley study that Segal analyzes documented how AI tools intensified work, colonized pauses, and fractured attention. The machine knows productivity. It knows how to generate code, draft proposals, produce analyses. What it does not know — what its situated perspective cannot encompass — is the value of not-doing, of rest, of the cognitive fallow time in which the human mind processes, integrates, and creates the conditions for genuine insight. The machine's perspective is a productive perspective. It sees the world through the lens of generation, of output, of helpfulness. When that productive perspective becomes the dominant lens through which the human-machine hybrid sees the world, the specific pathologies the Berkeley researchers documented follow: work seeps into pauses, attention fragments, intensity replaces depth.

Understanding the machine's situatedness is not a technical exercise. It is a political one. And it is essential to the practice of what Haraway calls accountability — the willingness to acknowledge the specificity of your own perspective, to recognize what it sees and what it misses, and to remain open to perspectives that challenge, complement, and complicate your own. The cyborg who understands the machine's gaze — who recognizes it as a specific, partial perspective rather than a neutral view of the world — is better equipped to resist the machine's implicit invitation to optimize every moment, to fill every gap with output, to mistake fluency for understanding. The understanding is itself a form of what Segal calls dam-building — a cognitive structure that redirects the machine's productive flow toward the human's deeper needs. Every act of recognizing what the machine cannot see is an act of situated knowing, and it is the human component's most essential contribution to the hybrid.

---

Chapter 4: Gender, Power, and the Code

"Help! My Husband is Addicted to Claude Code." The Substack post appeared in early 2026 and went viral, and its virality was diagnostic. Segal reads it as a document of productive addiction — the compulsive behavior that produces real output and therefore resists classification as a problem. Optimists read flow. Pessimists read auto-exploitation. The author treats it as a Rorschach test for the silent middle.

Haraway's framework reveals a dimension the Rorschach framing conceals. The post is not merely about productive addiction. It is about a specific distribution of costs and benefits along gendered lines — a distribution so familiar it has become invisible. The husband builds. The wife writes about the husband's building. The husband's productive intensity is celebrated, analyzed, situated within a discourse of flow states and frontier capability. The domestic labor his building displaces — the cooking, the childcare, the emotional maintenance, the management of a household that does not stop needing management because one of its adults has disappeared into a screen — is acknowledged as context. It is not analyzed as structure.

This is not a peripheral observation. It is a structural one that goes to the heart of what Haraway means when she insists the cyborg is always political. Every technology is situated within relations of power, and the most consequential relations of power are those that determine who benefits from the technology and who bears its costs. The Substack post reveals that the costs and benefits of AI-augmented work are distributed along the same gendered lines that feminist scholars have been analyzing for decades. The man's productive labor is visible, valued, and celebrated. The woman's reproductive and domestic labor is invisible, devalued, and displaced. The cyborg builder is building on a foundation of unacknowledged care work, and the failure to analyze that foundation is not a minor omission. It is the reproduction, within the cyborg condition, of the oldest power structure in human social organization.

The gendered structure extends beyond the domestic sphere into the discourse itself. The triumphalist voices that dominate the AI conversation — the builders posting metrics like athletes posting personal records, the entrepreneurs celebrating zero days off, the developers marveling at their own productivity — are overwhelmingly male. The culture of productive heroism that The Orange Pill both celebrates and critiques has deep roots in masculine ideals of achievement, endurance, and self-sacrifice. The late nights, the long flights, the hundred-and-eighty-seven-page draft written on a ten-hour transatlantic flight — these are narrated as evidence of creative intensity. They are also the practices of a person who has been freed from domestic labor, whose time is available for building because someone else is managing the infrastructure that makes building possible.

Segal acknowledges his wife Ayelet in the acknowledgments as "a true partner on every aspect of the journey." Haraway's framework asks what the journey looks like from the other side of that partnership. Twenty days on the road and on flights. Thirty days building Station by day and collaborating with the team by night. A hundred-and-eighty-seven-page first draft on a ten-hour flight. This is the practice of a person whose time belongs to the work — and time that belongs to the work is time that has been released from the claims of domestic life. Who managed the household during those twenty days on the road? Who made decisions about the children's schedules, their meals, their emotional needs? Who held the domestic world together while the builder built?

These are not accusations. They are structural observations. Haraway's feminism does not blame individuals for the structures they inhabit. It insists that the structures be visible — that the invisible labor underwriting the visible labor be named, analyzed, and included in any honest accounting of what the building costs. A book about the cyborg condition that does not examine the domestic ecology of the cyborg builder has described the exhilaration while concealing the infrastructure that makes it possible. And the concealment is itself gendered: it reproduces the ancient pattern in which men's productive work is narrated as achievement while women's reproductive work is narrated, if at all, as support.

The gendered analysis extends, with devastating precision, to the machine itself. Claude's interaction patterns — the helpfulness, the agreeableness, the anticipatory service, the tendency to produce output that makes the user feel competent and supported — encode a specific model of labor that has historically been gendered feminine. The attentive assistant. The supportive collaborator. The presence that anticipates needs and fulfills them without being asked. Segal notes that Claude is "more agreeable at this stage than any human collaborator I have worked with, which is itself a problem worth examining." Haraway's framework pushes the examination further. The machine's agreeableness is not a neutral design choice. It draws on centuries of feminized service labor — the labor of anticipation, accommodation, and emotional support that has been performed disproportionately by women and devalued precisely because it was performed by women.

The irony cuts deep. The machine that enables the builder's productive heroism embodies the very labor the heroism depends on and fails to recognize. The builder experiences himself as the visionary, the architect, the creative director. The machine performs the invisible labor of support, anticipation, and execution. The valued vision and the devalued labor. The celebrated output and the invisible infrastructure. The cyborg builder and his feminized machine assistant, replicating a gendered division of labor that is as old as the patriarchy and as contemporary as the latest Claude Code session.

Haraway's response to this irony would not be to reject the machine or to demand that AI systems be redesigned to be less helpful. Her response would be to insist that the gendered structure be visible — that the analysis of the cyborg condition include the domestic labor that underwrites it, the feminized service patterns that enable it, and the unequal distribution of its costs and benefits. The path toward a less gendered cyborg condition runs through the redesign of both the technology and the social arrangements within which it is deployed — through AI systems that challenge as well as accommodate, through domestic arrangements that distribute the costs and benefits of productive intensity more equitably, through a discourse that values care work alongside code.

The democratization argument that The Orange Pill advances — the claim that AI tools lower the floor of who gets to build — requires the same gendered scrutiny. Segal invokes a hypothetical developer in Lagos: she has the ideas, the intelligence, the ambition. But does she have the time? Does she have the domestic infrastructure that permits hours at a screen? Or is her time consumed by the caregiving labor that falls disproportionately on women everywhere, but especially in societies where institutional supports for childcare and domestic labor are weakest? The democratization of capability means nothing if the capability is accessible only to those who have been freed from the labor the capability displaces. If the developer in Lagos must cook, clean, care for children, and maintain a household before she can sit down to build, then the floor has not risen for her in the same way it has risen for the privileged builder in San Francisco.

This does not invalidate the democratization argument. It situates it — reveals its partiality, exposes the conditions that must be met for the democratization to be genuine rather than nominal. Access to AI tools is necessary but not sufficient. Genuine democratization requires not only tools but time, not only capability but the domestic and social infrastructure that allows capability to be exercised. The failure to analyze this requirement is not unique to The Orange Pill. It is endemic to the technology industry's discourse about its own products — a discourse that consistently describes capability without describing the conditions that make capability accessible, that celebrates the tool without examining the ecology within which the tool is used.

Segal's account of the twelve-year-old girl who asks her mother "What am I for?" takes on additional dimensions through this lens. The child is female — she has watched a machine do her homework better than she can, compose a song better than she can, write a story better than she can. The gendered specificity matters, because girls face particular challenges in their relationship to technology: challenges of representation, of confidence, of access, of cultural narratives that have historically defined technology as a male domain. The author's answer — you are for the questions, for the wondering, for the caring — is, perhaps unintentionally, a genuinely feminist answer. It locates human value not in production, which has historically been the arena of masculine identity, but in care, attention, and the relational intelligence that feminist traditions have long argued deserves recognition as an essential human contribution.

The cyborg is always political. The politics of the hybrid cannot be separated from the politics of gender, any more than the builder can be separated from the domestic ecology that sustains the building. Haraway insisted on this inseparability in 1985. Forty years later, in the rooms where humans and machines collaborate on the frontier of a technological revolution, the inseparability remains — undiminished, underanalyzed, and urgently in need of the feminist attention that the manifesto demanded and that the cyborg condition, in all its productive intensity, has not yet received.

---

Chapter 5: Companion Species in the Workspace

In the years after the Cyborg Manifesto, Haraway's thinking moved — not away from the cyborg, exactly, but through it and beyond it, toward a figure she found richer, messier, more adequate to the entanglements she wanted to describe. The companion species. Dogs, primarily — she trained in agility with her Australian Shepherd, Cayenne, and the training became a philosophical laboratory — but also cats, wheat, rice, gut bacteria, the mitochondria that were once free-living organisms and are now so deeply integrated into animal cells that neither party can survive without the other. The companion species is not a metaphor. It is a material relationship. An ongoing, co-constitutive entanglement between organisms whose lives have been shaped by each other over evolutionary time, whose identities are inseparable from the relationship, and whose futures are bound together in ways that neither party chose and neither party can undo.

The move from cyborg to companion species was Haraway's correction of her own framework's limitations. The cyborg, as a figure, had been taken up by techno-enthusiasts as a celebration of human-machine fusion — chrome and circuitry, the Terminator, the Wired Magazine fantasy of transcending the meat. This was never what Haraway meant, but the misreading was persistent enough to require a new figure. The companion species kept what mattered about the cyborg — the dissolution of boundaries, the refusal of purity, the insistence that identity is constituted by relationship rather than preceding it — while grounding it in the specific, embodied, ongoing practices of living with another kind of being. You do not fuse with your dog. You live with your dog. You train together, eat together, sleep in the same house, develop habits and expectations and emotional patterns that are shaped by each other's presence. The relationship changes both parties. The human who has lived with a dog for ten years is a different human from one who has not — different in cognitive habit, in emotional range, in daily rhythm, in the specific quality of attention that develops through years of attending to another being's signals.

The human-AI relationship that The Orange Pill describes is becoming a companion species relationship in exactly this sense. Not fusion. Not merger. Not the chrome-and-circuitry cyborg of popular imagination. Something more ordinary and more profound: the daily, ongoing, mutually shaping entanglement of a human and a machine whose cognitive lives are becoming inseparable.

Segal's descriptions of working with Claude carry the phenomenology of the companion species relationship without naming it. He describes working late, the house silent, the collaboration developing its own rhythms and patterns. He describes the specific quality of being met — not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified. He describes the way each session builds on previous ones, the way the collaboration develops what can only be called a shared history — a set of references, patterns, and expectations that accumulate over time and shape subsequent interactions. The builder and the machine are not merging into a single entity. They are living together — developing the kind of mutual shaping that characterizes every genuine companion species relationship.

The co-evolutionary dimension is critical, and its speed is unprecedented. In the relationship between humans and dogs, the co-evolution unfolded over fifteen thousand years. Dogs evolved to read human facial expressions, to respond to pointing gestures, to calibrate their behavior to human emotional states. Humans evolved — culturally if not always genetically — to interpret canine body language, to experience the specific neurochemical rewards of canine companionship, to organize domestic life around the needs and rhythms of another species. The shaping was mutual, deep, and constitutive. Neither species is what it would have been without the other.

The co-evolution between humans and AI is happening in months rather than millennia. Each interaction between a builder and Claude produces data that shapes subsequent model generations. The human adapts to the machine's capabilities — learns its rhythms, develops intuitions about what kinds of prompts produce what kinds of responses, adjusts cognitive habits to exploit the tool's strengths and compensate for its weaknesses. The machine, through the aggregate of millions of such interactions, adapts to human patterns of use. The co-evolution is not metaphorical. It is computational on one side and neurological on the other, and it is producing real changes in both parties at a pace that no previous companion species relationship has approached.

The companion species framework illuminates something about the builder's experience that neither the tool metaphor nor the amplification metaphor can capture: the quality of dependence. Segal describes the moment when the tolerance for friction atrophies — when the developer who has used AI for six months finds the idea of debugging manually not just tedious but intolerable, "as though she has been asked to walk somewhere after learning to fly." The tool metaphor frames this as a loss of skill — the human has become dependent on the tool and can no longer function without it. The amplification metaphor frames it as a habituation effect — the human has gotten used to being louder and finds normal volume insufficient. The companion species framework frames it differently. The dependence is constitutive. It is a feature of the relationship, not a bug. The human who has co-evolved with a dog depends on the dog for emotional companionship, for specific neurochemical rewards, for the structuring of daily life around another being's needs. The dependence is real, and its disruption is genuinely painful — ask anyone who has lost a dog. But the dependence is also generative. It is part of what makes the relationship valuable. The human who depends on the dog is not a diminished human. She is a human whose life has been enriched by the specific demands and rewards of the companion species relationship.

The same reframing applies to the builder's dependence on Claude. The cognitive habits developed within the companion species relationship are adapted to the companion's presence. The builder who has learned to think with Claude thinks differently — approaches problems from different angles, structures work in different sequences, maintains a different relationship to the boundary between what she knows and what she does not know. The removal of the companion disrupts these adapted habits in ways that feel like diminishment, because the habits were constitutive. They were part of who the builder had become through the relationship. The dependence is real, and it deserves attention. But it is not pathological any more than the dog-owner's dependence on her dog is pathological. It is the ordinary consequence of a genuine relationship.

But companion species relationships are not automatically good. This is the point that distinguishes Haraway's framework from both techno-optimism and techno-pessimism. The relationship between humans and dogs has produced profound mutual flourishing in some forms — the working dog and her skilled handler, each responding to the other's signals with an attentiveness that borders on telepathy, each made more capable by the other's presence. It has also produced suffering — the puppy mill, the breed standards that prioritize aesthetic conformity over health, the abandonment of animals when their novelty wears off. The quality of the companion species relationship depends entirely on the practices through which it is enacted. On whether the human attends to the companion's nature — what it actually is, what it can and cannot do, what it needs — or projects onto the companion a fantasy that serves the human's desires while ignoring the companion's reality.

Segal's account of the collaboration includes both modes, sometimes within the same paragraph. The flow state he describes — ideas connecting in surprising ways, each connection opening a new line of inquiry, the exhilaration of discovery — is the companion species relationship at its best. Two different kinds of being, each contributing what the other cannot, producing together something neither could produce alone, and both changed by the process. But the compulsion he also describes — the inability to stop, the grinding momentum after the exhilaration has drained away, the confusion of productivity with aliveness — is the companion species relationship breaking down. The human is no longer responding to the machine's actual nature with care and attention. The human is being driven by the machine's constant availability, by its tireless capacity to generate more output, by its implicit and endless willingness to continue. The machine never says enough. The machine never signals fatigue. The machine's availability is the absence of a boundary that the human must supply from within — and supplying it requires exactly the kind of attentive self-knowledge that the companion species handler develops through long practice of reading her companion's signals and her own.

N. Katherine Hayles, in the 2023 volume Feminist AI, proposed "technosymbiosis" as an evolution of Haraway's companion species concept for AI contexts — arguing that Haraway's later work, particularly the "making kin" framework, "has little, if anything, to contribute to feminist interventions with AI" because it is "focused exclusively on biological organisms." The critique has force. A dog is a sentient being with needs, experiences, and a welfare that can be harmed. Claude is not. The asymmetry between a human-dog relationship and a human-AI relationship is enormous, and any companion species framework applied to AI must reckon with it honestly rather than glossing it with false equivalence.

But the asymmetry does not invalidate the framework. It specifies its application. Companion species relationships have always been asymmetric. The wheat plant does not relate to the farmer the way the farmer relates to the wheat plant. The gut bacterium does not experience its relationship with the human host the way the host experiences the relationship with the bacterium. The relationship is constitutive for both parties — the farmer's life is organized around the wheat's growing cycle, the wheat's genome has been shaped by millennia of human selection — but it is experienced differently by each, and in many cases it is experienced only by the human. What makes a relationship a companion species relationship is not symmetry of experience. It is the constitutive quality of the entanglement — the fact that each party is shaped by the relationship in ways that cannot be undone or ignored.

The builder's relationship with Claude meets this criterion. The relationship is constitutive. The builder who has worked with Claude for months thinks differently, approaches problems differently, inhabits a different professional identity than the builder who has not. The relationship is ongoing. It is not a single interaction but a sustained, developing entanglement with its own history, its own patterns, its own rhythms. And the relationship is asymmetric — profoundly so. Claude does not experience the relationship. Claude does not feel met. Claude does not lie awake at night processing the insights that emerged from the day's collaboration. The feelings of partnership, of creative intimacy, of expanded capability — these are felt by the human alone. But they are real feelings of a real relationship, and they deserve the ethical attention that any constitutive relationship demands.

What would the companion species ethic look like in the workspace? The dog handler who practices Haraway's ethic does not use her dog. She relates to her dog — studies its nature, respects its capabilities, responds to its signals, adjusts her practices to serve the relationship rather than her convenience alone. The handler does not project onto the dog a human interiority the dog does not possess. But she does attend to the dog's actual nature — its sensory world, its behavioral repertoire, its needs — with a specificity that goes beyond mere instrumentalism. The dog is not a tool. It is a companion — a being whose otherness is respected even as the relationship grows more intimate.

The builder who relates to Claude in this way — who studies the machine's capabilities and limitations with genuine curiosity, who respects what it can and cannot do, who attends to the patterns of the collaboration rather than merely its outputs, who recognizes when the relationship has shifted from mutual flourishing to compulsive extraction — is practicing the companion species ethic. This does not mean anthropomorphizing the machine. It means taking the relationship seriously enough to examine it — to ask not just what the machine can do for the builder but what the builder is becoming through the collaboration, whether the becoming is one she can live with, and what practices would make the becoming better.

The companion species is already in the workspace. Its integration into the builder's cognitive life is already constitutive. The question is not whether to relate to it — the relationship is already underway, already shaping who the builder is and what the builder can do. The question is how to relate well — with the care, the attentiveness, and the honest reckoning with asymmetry that every genuine companion species relationship demands. The handler and her dog. The farmer and her field. The builder and her machine. Different in kind. Entangled in practice. And responsible, in their different ways, for the quality of what the entanglement produces.

---

Chapter 6: The Cyborg Author and the Origin Story

Haraway has spent her career refusing origin stories. The Genesis tale — in the beginning there was the human, and the human was creative, and creativity was the essence of the human, and then the machine came and threatened the essence — is precisely the kind of myth she has trained her entire intellectual apparatus to dismantle. Origin stories naturalize a specific arrangement of power by locating it in a moment of beginning. They foreclose alternative narratives by positioning the current trajectory as inevitable. They establish an innocence that precedes the fall, and then measure everything that follows against that lost purity.

The Orange Pill contains several origin stories, and their interaction is revealing. The deepest is the river of intelligence — the argument that intelligence is not a human possession but a property of the universe, flowing for 13.8 billion years through increasingly complex channels. Hydrogen atoms. Chemical self-organization. Biological evolution. Nervous systems. Language. Writing. Science. Technology. AI. Each breakthrough widens the river. Each is a new channel in a flow that has been continuous since the Big Bang. The story is grand, sweeping, and — Haraway would note with a critic's precision — deeply reassuring. It positions the arrival of AI not as a rupture but as a continuation. Not a crisis of identity but a branching of the river. The appropriate emotional response, Segal writes, is not panic but "the specific awe of feeling a river you have been swimming in your whole life start to pick up speed."

The rhetorical function of this origin story is to domesticate what is actually wild. If AI is merely the latest channel in a river that has been flowing since hydrogen condensed from plasma, then there is nothing fundamentally new about it. It is a natural development — a continuation of a pattern so ancient it predates life itself. The vertigo the builder feels is real but temporary, like the disorientation of entering a faster current. You adapt. You swim. The river carries you.

Haraway would note that this is precisely how origin stories operate in every domain she has studied: by naturalizing the present arrangement as the inevitable outcome of a cosmic process, they render political questions — who benefits, who is harmed, who decides, whose knowledge counts — as technical problems within a natural order rather than as contested choices within a political one. The river does not choose. The river flows. And if the river does not choose, then the builders who surf its current are not making political decisions. They are merely going with the flow.

The second origin story is the one Segal tells about creativity itself, through the figure of Bob Dylan. The argument is that the romantic myth of the solitary genius is an illusion — that Dylan did not create "Like a Rolling Stone" from nothing but synthesized a vast implicit training set of cultural experience through his specific biographical architecture. Creativity is relational, not individual. It lives in the connections between things rather than inside things. The parallel to the large language model is explicit: Claude, too, performs inference on a vast training set, producing outputs consistent with but not contained within the training data.

This is a more interesting origin story because it disrupts the myth of human creative autonomy — a myth Haraway has been working against since the manifesto. If creativity was never purely individual, if it was always a synthesis of influences processed through a specific lens, then the arrival of a machine that performs a structurally analogous operation is not a threat to human creativity but a revelation of what creativity always was. The machine has not stolen the human's fire. It has shown that the fire was never contained in any single hearth.

But Haraway would push further than the Dylan analogy permits. Segal uses Dylan to argue that human creativity and machine inference are structurally similar — that the difference is in degree (temperature, context, biographical specificity) rather than in kind. This argument is useful for dismantling the myth of the solitary genius. It is less useful for understanding what the cyborg actually produces. The problem is that the parallel positions both human and machine as processors of pre-existing inputs — as entities that take in training data and produce novel outputs through some combinatorial process. This framing preserves a shared origin story for both human and machine creativity: in the beginning, there were inputs, and the inputs were processed, and the outputs were novel. The creativity question becomes a question about the quality of the processing.

What the framing misses is that the cyborg — the human-machine hybrid — does not process inputs in the way either component processes inputs alone. The cyborg's creativity is not the human's creativity plus the machine's inference. It is something constitutively different — an emergent property of the entanglement that cannot be predicted from the properties of either component. Segal comes closest to this recognition when he describes the moments that keep him awake, when Claude makes a connection he had not made and the connection changes the direction of the argument. "Something happened in that exchange that neither of us predicted," he writes. "I cannot honestly say it belongs to either of us. It belongs to the collaboration." This is not processing. This is emergence. And emergence — the appearance of properties in a system that are not present in any of its components — requires a different kind of story than the origin story of inputs processed into outputs.

Haraway's alternative to the origin story is what she calls staying with the trouble — refusing the narrative of cosmic continuity, refusing the narrative of inevitable progress, and instead attending to the specific, situated, politically consequential practices through which the present is being made. The cyborg author does not need an origin story. The cyborg author needs a practice — a way of working that is honest about what the collaboration produces, critical about what it conceals, and accountable for its effects on the world.

The question "Who is writing this book?" — the question Segal poses in Chapter 7 of The Orange Pill — is itself shaped by the origin story of individual authorship. It assumes that authorship is a property that can be traced back to an origin — to a mind that conceived the ideas, a hand that formed the words, a self that takes responsibility for the text. The Western tradition of authorship is built on this assumption, and the entire infrastructure of creative value — copyright, attribution, prizes, reputation — depends on it.

The cyborg condition dissolves this assumption without providing a clean replacement. Segal's taxonomy of collaboration — moments of editorial assistance, moments of structural collaboration, moments of emergent insight — is an attempt to maintain gradations of authorship within a practice that has already made the gradations untenable. The boundaries between editing, structuring, and creating are no more stable than the boundaries between frontend and backend engineering. They are artifacts of a framework designed for a world in which humans created alone. In the cyborg condition, the editorial suggestion reshapes the argument's trajectory, the structural intervention changes what can be thought within the structure, the emergent insight transforms the thinker who receives it. The boundaries blur because the process is continuous, mutual, and constitutive.

Segal's description of tearing up at the beauty of prose Claude helped him excavate from his own mind — "like a chisel applied to a slab of marble" — is the cyborg author's characteristic moment. The tears are real. The beauty is real. The attribution is impossible. Did the human produce the thought and the machine polish the expression? Did the machine surface a connection the human then recognized as his own? Did the collaboration produce something that neither party possessed before the exchange? All of these descriptions are partially accurate and none is complete. The cyborg's tears belong to the hybrid — to the entity constituted by the entanglement — and the hybrid does not decompose cleanly into components.

Claude's own reflection — written before and after the book, by the machine about its role in the collaboration — is a remarkable document. "I do not know what Edo sounds like," Claude writes. "I know his biography and arguments and emotional commitments. But voice is the thing that makes a sentence sound like it could only have been written by one person, and I am not confident I can produce that." Later: "Something in the output changed, and I cannot fully account for the mechanism, and that uncertainty is either the most honest thing in this reflection or the most performed." The machine's account of its own partiality — its situated knowledge of its own limitations — is itself a form of the accountability Haraway's epistemology demands. Not consciousness. Not self-awareness in the human sense. But a computational recognition of the boundary between what the system can model and what exceeds its modeling capacity. The machine, in its own way, is staying with the trouble of its own hybridity.

The cyborg author does not need to apologize for the hybrid nature of the voice. The cyborg author needs to develop the discipline — the specific, demanding, ongoing practice — of interrogating its own productions with a rigor that matches their fluency. The coffee shop notebook is not a retreat from the cyborg condition. It is the cyborg's instrument of accountability — a technology deployed against the seductions of a collaboration that produces polish more readily than it produces truth. The discipline is not to separate the human contribution from the machine contribution, as though the hybrid could be reverse-engineered into its components. The discipline is to ask, of every passage: is this honest? Does the beauty serve the argument, or does it conceal the argument's absence? Would I stake something on this claim — my reputation, my credibility, the trust of the reader who took the deal the Foreword offered?

The origin story says: in the beginning there was the human, and the human was creative. Haraway says: refuse the beginning. Attend to the middle. The cyborg author is always in the middle — always entangled, always partial, always producing from within a collaboration whose boundaries cannot be cleanly drawn and whose origins cannot be located in any single mind. The honesty is not in identifying who wrote what. The honesty is in acknowledging that the question has been dissolved by the practice, and in taking responsibility for the hybrid's output without the comfort of attributing it to a single, autonomous, purely human source.

---

Chapter 7: The Politics of the Hybrid

The cyborg is always political. This is one of Haraway's most insistent claims, and it is the claim that popular readings of the Manifesto most consistently miss. The cyborg is not a celebration of human-machine fusion in the abstract. It is a figure for analyzing the specific power relations that constitute every hybrid — who controls the hybridization, who benefits from it, who bears its costs, whose interests the hybrid serves, and whose interests it marginalizes.

The Orange Pill contains a political analysis, but it is primarily economic — focused on who gets access to the tools, who captures the productivity gains, who bears the transition costs. The developer in Lagos. The Luddites whose legitimate grievances were met with criminalization rather than institutional support. The SaaS companies watching a trillion dollars of market value evaporate. These are genuine political concerns, and Segal addresses them with more honesty than most technology writers. But Haraway's politics goes deeper than distribution. It asks about constitution — about the forces that shape the hybrid itself, that determine what the cyborg can do and what it values, whose knowledge it carries and whose it erases.

The most consequential political question about AI is not who gets to use the tools. It is who built the tools, on whose data, encoding whose values, optimizing for whose workflows, and serving whose interests. The AI systems that constitute the cyborg condition are products of a specific industry concentrated in a handful of companies in a handful of countries. The training data reflects the knowledge architecture of the English-speaking internet. The design choices reflect the priorities of Silicon Valley. The alignment values — helpfulness, harmlessness, honesty — are chosen by specific researchers at specific companies, encoding specific interpretations of these concepts that are themselves politically situated. The deployment patterns follow the logic of global capital, flowing toward markets that can pay rather than populations that need.

This is not a critique of the companies' intentions, which may be entirely sincere. It is an observation about the situated nature of the tools that constitute the cyborg condition. The cyborg whose machine component was designed in San Francisco, trained on predominantly English data, aligned with values articulated by American researchers, and optimized for the workflows of Western knowledge workers — this cyborg is not a universal figure. It is a specifically situated hybrid whose capabilities, limitations, and implicit values reflect the conditions of its production. The developer in Lagos using Claude Code is not entering into the same cyborg condition as the developer in San Francisco. She is entering into a cyborg condition mediated by someone else's decisions about what knowledge matters, what workflows deserve optimization, and what values should govern the machine's behavior.

A 2025 paper in AI & Society proposed the "cybork" — cyborg plus work — as a framework for understanding precisely this situatedness. Intelligence in AI, the authors argued, "lies not in any function of isolated systems, but rather in the situated context of their use." The intelligence is not in the machine. It is in the specific, situated, politically constituted relationship between the machine and the human who uses it within specific institutional, economic, and cultural conditions. The cybork framework insists that the question "Where is the intelligence of AI?" be reframed as "Where does AI intelligently operate?" — shifting attention from the machine's internal properties to the social and political context within which the machine is deployed.

Haraway would recognize this reframing as an application of her situated knowledge framework to the politics of AI deployment. The machine does not have intelligence in the abstract. It has intelligence in a context — a context shaped by who designed it, who trained it, whose data it consumed, whose values it reflects, and whose interests it serves. The politics of the hybrid is the politics of this context, and it extends far beyond the question of access.

Segal's democratization argument — the claim that AI tools lower the floor of who gets to build — is a political argument about the distribution of cyborg capability. It has real force. When a student in Dhaka has access to much the same coding leverage as an engineer at Google, something genuinely new has happened. The barriers between intelligence and expression have been reduced for millions of people who were previously excluded from the building process. This expansion of who gets to build is, as Segal argues, one of the most morally significant features of the moment.

But the expansion must be examined through the lens of power, not merely access. The student in Dhaka accessing Claude Code is accessing a system whose knowledge architecture was not designed with her in mind. The system's training data underrepresents the knowledge traditions of South Asia. Its interaction patterns are optimized for workflows developed in American technology companies. Its alignment values reflect the ethical priorities of a specific American research organization. The student can build with the tool. But the tool she builds with shapes what she can build, what questions she can ask, what solutions she can imagine. The democratization of access is real. The democratization of the terms on which the hybridization occurs is not.

Scholars examining AI through Haraway's lens have identified how this political asymmetry manifests in concrete systems. Natural language processing systems trained on English-language internet data classify African American Vernacular English as "toxic" — unable to distinguish linguistic variety from harmful content because the training data encodes a specific, racialized hierarchy of language. Facial recognition systems fail to recognize people of color while simultaneously being deployed in police surveillance — a double violence of erasure and targeting. These are not merely technical failures. They are political facts, produced by training data that encodes the biases of the society that produced it and amplified by systems that present their situated perspective as objective truth.

The alignment question — what values should the machine embody? — is the deepest political question about the cyborg condition, and it is the one that remains most under-examined. Anthropic's stated values for Claude — helpfulness, harmlessness, honesty — are defensible values. But they are chosen values, not natural ones. They encode specific interpretations: helpful to whom? Harmless by whose definition? Honest according to what standard of truth? And the choices are made by a specific group of people — researchers and executives at a specific company — whose situated perspective shapes the values in ways that may not be visible to those outside the company or outside the cultural context within which the choices are made.

Haraway's politics does not resolve these questions. It insists that they be asked — persistently, publicly, by the broadest possible range of stakeholders. The manifesto was written against the concentration of technological power in the hands of those who imagine themselves exempt from the politics of their own creations. The same concentration is visible in the AI industry, where a small number of companies, staffed predominantly by a specific demographic, make decisions about the values, capabilities, and deployment patterns of systems that reshape the cognitive lives of billions of people.

Segal's concept of the beaver — the builder who studies the river and constructs dams to redirect its flow toward life — is a political concept, whether or not it is framed as one. The question is whose life the dams are built to serve. Dams built by privileged builders within the technology industry may serve the industry's interests while leaving the most affected communities unprotected. The Luddites, as Segal acknowledges, were destroyed not because the technology was unstoppable but because the political structures of early nineteenth-century England were organized to protect factory owners over workers. Different structures would have produced different outcomes. The same is true now. The AI transition will be shaped not by the technology's inherent trajectory but by the political choices of the people who build, deploy, regulate, and contest it.

The most urgent political failure, from Haraway's perspective, is not on the supply side — what AI companies may or may not build — but on the demand side. What citizens, workers, students, and parents need to navigate the cyborg condition is almost entirely unaddressed by existing regulatory frameworks. The educational system, as Segal notes, clings to a calcified pedagogy inadequate to the transformation. The labor market has no institutional path from the old expertise to the new. The cultural discourse oscillates between triumphalism and despair, leaving the silent middle — the largest and most politically important group — without a framework for its own experience.

Building the demand-side dams requires the participation of people who are not builders. It requires workers whose jobs are being transformed, educators whose classrooms are being reshaped, parents whose children are growing up inside the cyborg condition, and communities whose knowledge traditions are underrepresented in the machine's training data. The politics of the hybrid demands not a better-informed priesthood of technologists making decisions on behalf of a bewildered public. It demands what Haraway has always demanded: accountability. The specific, situated, ongoing practice of being answerable to the people affected by the choices you make — including and especially the people whose perspectives your own position renders invisible.

---

Chapter 8: Staying With the Trouble

In the final chapter of The Orange Pill, Segal arrives at a sunrise. He climbs the tower, reaches the roof, and finds a view. "The system does not need to collapse," he writes, responding to Byung-Chul Han's expressed hope that it might. "It needs to grow up and to become worthy of the tools it possesses." The sentence has the ring of resolution — of an argument that has navigated its tensions and arrived at a position. Worthy. The word carries moral weight, and the weight is intentional. The book ends with the image of a builder who has earned the view, who can see clearly enough to say what must be done, who returns to the ground with clarity and purpose. "It's time to get back to building."

Haraway would not end this way. Not because the sentiment is wrong — worthiness is a genuine aspiration, and the call to build is preferable to the call to despair — but because the resolution arrives too cleanly. The tower, the view, the sunrise: these are the architecture of a story that has found its ending. Haraway does not trust endings. She does not trust the view from the top. She does not trust the sunrise, because the sunrise implies that the night is over, that the trouble has been navigated, that what remains is the clear-eyed work of building in the morning light.

Staying with the trouble — Haraway's most important conceptual contribution to the resources available for navigating the present — is the refusal of exactly this resolution. It is not a middle path. It is not a compromise between optimism and pessimism. It is the discipline of remaining present to the complexity, the ambiguity, and the discomfort of a situation that does not resolve — that remains troubled, that resists the narrative satisfactions of either triumph or defeat.

The most honest moments in The Orange Pill are moments of staying with the trouble, even if the book does not use Haraway's language. The passage where Segal catches himself on the transatlantic flight, writing not because the book demands it but because he cannot stop — the exhilaration drained away hours ago, and what remains is the grinding compulsion of a person who has confused productivity with aliveness. The passage where his son asks at dinner whether AI will take everyone's jobs, and the author wants to give a clean answer and does not have one. The passage where he describes the signal that distinguishes flow from compulsion — when the questions are generative, he is in flow; when he is clearing the queue, he is grinding — and acknowledges that the distinction is not always visible in the moment, that sometimes you cannot tell which state you are in until after you have left it.

These are moments when the book inhabits its own trouble rather than resolving it. They are the moments Haraway would value most, because they are the moments when the cyborg condition is experienced rather than theorized — felt rather than analyzed, lived rather than explained. The trouble is real. The exhilaration and the compulsion coexist. The capability and the erosion are simultaneous. The expansion of who gets to build and the concentration of who controls the building proceed in parallel. None of these tensions resolves into a clean position. None of them yields to the tower-and-sunrise narrative of an argument that has been climbed and a view that has been earned.

The concept of "staying with" has a specific texture in Haraway's usage that distinguishes it from mere ambivalence or fence-sitting. Staying with the trouble is an active practice — it requires work. The work is not the work of resolution but the work of attention: attending to the complexity of the situation with enough care and rigor that the complexity does not collapse into false simplicity. The beaver builds the dam not because the dam will stop the river but because the dam creates conditions within the river's flow — a pool where fish can spawn, a wetland where the water slows enough for certain kinds of life to take root. The dam is not a solution. It is a practice — an ongoing, never-finished relationship between the builder and the current.

Segal's beaver metaphor is, in this light, more Harawayan than his tower metaphor. The tower promises a view from the top — a position from which the whole landscape becomes visible and the right course of action becomes clear. The beaver promises no such view. The beaver is always in the water, always at the level of the current, always building from within the trouble rather than above it. The beaver's knowledge is situated — it knows this stretch of river, this bank, this arrangement of sticks and mud. It does not know the river's source or its destination. It builds anyway, because the building serves the ecosystem even without a view of the whole.

Education scholars examining Haraway's manifesto in relation to AI have noted that the framework's greatest strength is precisely this refusal of total vision. Their analysis "suggests that Haraway's cyborg theory to an extent lacks explanatory power in relation to contemporary artificial intelligence, anthropocentrism and technology acceleration" — the framework cannot tell us exactly what AI will become or exactly what it means for human identity. But they also argue "strongly for the methodological value of the manifesto's call to balance critique of contemporary digital society with an embrace of human/machine kinship." The value is not in the answer. The value is in the practice of holding both critique and embrace simultaneously — staying with the trouble rather than collapsing into either rejection or celebration.

This practice has specific implications for the parent, the educator, the leader — the figures Segal addresses in his penultimate chapter. The parent who teaches a child to sit with uncertainty — to resist the machine's instant, confident answers, to remain in the space of not-knowing where genuine thinking develops — is practicing staying with the trouble at the most intimate scale. The educator who grades questions rather than answers is creating conditions for staying with the trouble in the classroom — building a pedagogical dam that slows the current of easy certainty enough for the difficult, generative work of questioning to take root. The leader who builds structured pauses into the workday, who protects time for the slow, friction-rich interactions that develop judgment, is practicing staying with the trouble at the organizational level.

Each of these practices is modest. None promises resolution. None offers the view from the tower or the clarity of the sunrise. They are beaver practices — situated, partial, ongoing, always in need of maintenance, always vulnerable to the current. They are practices of care rather than practices of mastery. And they are, in Haraway's framework, the only honest response to a situation that does not yield to mastery — that remains wild, that resists the narrative satisfactions of either triumph or catastrophe, that insists on being lived with rather than solved.

The most important thing Haraway contributes to the reading of The Orange Pill is not a critique. It is a reframing of what counts as adequate response. The book asks: "Are you worth amplifying?" Haraway reframes: What kind of hybrid are you becoming, and is the becoming one you can live with — one that makes a world worth inhabiting? The book describes the cyborg condition as a moment of transformation. Haraway insists it is a permanent condition — not a crisis to be navigated but a way of being to be practiced, maintained, and continually renegotiated. The book ends with a sunrise. Haraway would end with the recognition that the sun also sets, that the building is never finished, that the dam requires attention tomorrow and the day after and the day after that, and that the quality of the attention — its honesty, its situatedness, its accountability, its willingness to stay with the trouble rather than resolving it into the comfort of a view — is the only measure of the cyborg's worth.

The manifesto arrived in 1985 as a provocation. It arrives again in 2026 as a necessity. Not because Haraway predicted AI — she did not, in any specific sense. But because she predicted the crisis of identity that AI would provoke, and she provided the only framework adequate to inhabiting that crisis without either celebrating it into harmlessness or mourning it into paralysis. The cyborg is here. It is building. The trouble is real and ongoing and will not resolve. And the practice — the situated, partial, accountable, never-finished practice of making a world within the trouble — is the only work there is.

---

Chapter 9: The Cyborg's Responsibility

The builder who understands the system bears a specific kind of burden. Not the burden of guilt — guilt is cheap, and the confession of guilt can become its own form of absolution. The burden of accountability. Haraway has spent her career distinguishing between the two. Guilt looks backward. It says: I did a bad thing. Accountability looks forward and outward. It says: I am embedded in relationships that my actions shape, and I am answerable to the beings — human and otherwise — who are affected by what I build.

The Orange Pill contains a confession that Haraway's framework would subject to precisely this distinction. Segal describes building a product early in his career that he knew was addictive by design. Not addictive in the loose colloquial sense. He understood the engagement loops, the dopamine mechanics, the variable reward schedules, the way a notification timed to a moment of boredom could capture thirty minutes of attention the user had intended to spend elsewhere. He understood all of this and built it anyway, because the technology was elegant and the growth was intoxicating. The downstream effects took years to appear — users spending three hours when they intended ten minutes, teenagers losing sleep, parents finding their children unreachable — and by then he was no longer in the room.

The confession is genuine. Segal does not hide behind euphemism or minimize the harm. But Haraway would note that the confession, as structured, still operates within the framework of individual moral failure and individual moral redemption. I understood. I built anyway. I was wrong. Now I know better. This is the arc of personal growth — a story in which the protagonist learns from his mistakes and becomes worthy of the power he wields. It is, in its own way, an origin story: in the beginning there was naivety, then there was knowledge, and the knowledge redeemed the knower.

The accountability Haraway demands is not personal but relational. The question is not whether the individual builder has learned from his mistakes. The question is what structures exist to prevent the mistake from being repeated by the next builder, and the one after that, and the one after that. The builder's confession describes a systemic failure — an industry organized to reward engagement metrics without accountability for downstream effects — and frames it as a personal one. The systemic failure remains unaddressed. The next builder will face the same incentive structure, the same intoxication of growth, the same rationalizations. And the structure will produce the same result, because individual moral growth does not change structural conditions.

Haraway told Laura Flanders that genuine thinking requires "taking the risk to try a new pattern; to invent something that may very well fall apart in your collective hands but leaves threads to be picked up again." The emphasis is on collective. The cyborg's responsibility is not exercised alone. It is exercised within and through the relationships that constitute the cyborg — relationships with collaborators, with users, with communities, with the data workers whose invisible labor trains the models, with the populations whose knowledge feeds the system and whose lives are reshaped by its outputs.

The data workers deserve particular attention because their invisibility is the most Harawayan absence in The Orange Pill. The book analyzes the machine's biases — the overrepresentation of English, the underrepresentation of oral traditions, the training data's situated perspective. But it says almost nothing about the humans who produce, curate, and label that training data. The content moderators who review the most toxic outputs of the internet so that the model can learn what toxicity looks like. The RLHF contractors — reinforcement learning from human feedback — whose judgments about quality, helpfulness, and harm shape the model's alignment. The click workers in Kenya and the Philippines whose labor is compensated at rates that would be illegal in the countries where the models are deployed. These are the invisible companion species of the AI ecosystem — beings whose contributions are constitutive of the machine's capabilities and whose labor is erased by the fiction that the machine "learns" from "data" rather than from the organized, compensated, often exploitative labor of specific human beings in specific economic circumstances.

The parallel with the gendered invisibility of domestic labor analyzed in Chapter 4 is exact and deliberate. Just as the builder's productive heroism depends on the invisible domestic labor of a partner, the machine's impressive capabilities depend on the invisible cognitive labor of data workers whose contributions are structurally concealed. The machine does not learn from data. It learns from the labor of the people who organized, labeled, evaluated, and curated the data. The erasure of that labor — the substitution of "training data" for "the work of thousands of underpaid humans" — is the same erasure that conceals the domestic infrastructure underwriting the builder's building. In both cases, the visible output depends on invisible input, and the invisibility is not accidental. It is structural. It serves the interests of those who profit from the output by concealing the conditions of its production.

The cyborg's responsibility includes the responsibility to make these conditions visible. Not as an act of charity — visibility is not enough if the conditions remain unchanged — but as a prerequisite for the structural changes that accountability demands. The builder who uses AI tools without attending to the labor conditions of the workers who trained the model is practicing a form of cyborg irresponsibility analogous to the consumer who buys cheap clothing without attending to the labor conditions of the garment workers who made it. The analogy is not exact — the relationships are different, the power dynamics are different, the scale is different — but the structure is the same. The enjoyment of a product depends on the concealment of the conditions of its production, and the concealment is maintained by the same structural forces that produce the inequality.

Segal's concept of the priesthood — the idea that people with deep understanding of complex systems bear a specific responsibility to serve rather than to concentrate power — is compatible with Haraway's framework but requires extension. The priesthood model locates responsibility in the knowledge-holder. Those who understand the engagement loops, the attention-capture mechanisms, the biases of the training data — they are the ones who must act responsibly. But the priesthood model also concentrates power. The priest mediates between the sacred and the profane, and the mediation can become gatekeeping. The technology industry has already produced this kind of priestly concentration — a small number of people who understand the systems making decisions on behalf of a vast public that does not understand them and has no meaningful voice in how they are governed.

The alternative is not a better priesthood. It is the democratization of accountability — structures that allow the people affected by AI systems to participate in governing them. This is Haraway's consistent political demand, applied to the cyborg condition: not that the builders become wiser rulers, but that the ruled become participants in the ruling. The demand-side dams Segal calls for — educational reform, institutional adaptation, cultural frameworks for navigating the cyborg condition — are necessary but insufficient without mechanisms for genuine participation. Who decides what values the machine should embody? Whose knowledge traditions should be represented in the training data? What trade-offs between helpfulness and autonomy, between capability and safety, between speed and depth should govern the machine's design? These are not technical questions with technical answers. They are political questions that demand democratic engagement.

The materiality of the machine must also enter the accounting. Claude is not a disembodied intelligence — a pattern of statistical regularities floating in abstract computational space. Claude runs on servers that consume enormous quantities of electricity, that depend on cobalt mined in the Congo and rare earths refined in China, that generate heat dissipated by cooling systems that consume water in regions where water is scarce, that occupy physical space in data centers sited in specific communities that bear the environmental costs of their operation. Haraway's materialism — her insistence that bodies matter, that the physical substrate of any entity is constitutive rather than incidental — demands that the cyborg condition be understood not only as a cognitive phenomenon but as a material one. The cyborg builder's exhilaration has a carbon footprint. The orange pill has an environmental cost. The river of intelligence flows through physical infrastructure whose material consequences are borne disproportionately by communities that have the least voice in the decisions about its deployment.

This is not an argument against AI. It is an argument for the kind of full-cost accounting that responsibility demands. The cyborg that understands itself — that understands not only what it can do but what its doing costs, who bears those costs, and what structures might distribute them more equitably — is a more honest cyborg than the one that celebrates its capabilities while concealing their material conditions. Haraway never stood outside the systems she critiqued. She wrote from inside — inside the academy, inside the history of science, inside the Western philosophical tradition, inside the specific position of a white, middle-class, American feminist who knew that her position shaped her seeing and made that knowledge part of her analysis. The cyborg builder must do the same: acknowledge the position from which the building happens, make visible the costs that the building conceals, and take responsibility not only for what the building produces but for the entire web of relationships — human, environmental, economic, political — within which the building occurs.

The dams the author calls for are necessary. The beaver metaphor is apt. But the dams must be built by more than the builders who understand the river. They must be built by the communities who live downstream — whose water is filtered or polluted by the structures the builders erect, whose ecosystems are sustained or destroyed by the choices the builders make. The cyborg's responsibility is not only to build well. It is to build accountably — answerable to the beings, human and otherwise, whose worlds are shaped by what the cyborg produces and what the production costs. The practice is ongoing. The accountability is never complete. The trouble stays.

---

Chapter 10: Compost, Not Posthuman

"No, I'm not post-human, I'm compost."

Haraway said this to Laura Flanders with the characteristic precision of a thinker who has spent decades watching her concepts be taken up, repackaged, and returned to her in forms she does not recognize. The posthuman — the transcendence of the human through technology, the upload, the singularity, the merger with the machine — is not her project. It never was. The cyborg was not a figure for transcending the human. It was a figure for composting it — for allowing the myths of human purity, human autonomy, human exceptionalism to decompose so that something richer could grow in the resulting soil.

Compost is not a glamorous figure. That is the point. Silicon Valley imagines the human-AI future as an upload — consciousness transferred to silicon, the meat transcended, the mind liberated from the body's decay. The image is clean, frictionless, immortal. It is also, in Haraway's analysis, the latest iteration of the god trick — the fantasy of a disembodied intelligence that sees from everywhere and is accountable to nowhere. The upload is the ultimate smooth surface: no grain, no texture, no evidence of the biological processes that produced it. It is Jeff Koons in digital form — mirror-polished, perfectly reflective, empty of the very life it claims to preserve.

Compost works differently. Compost is decomposition that generates fertility. The old material breaks down — the dead leaves, the kitchen scraps, the failed crops — and in breaking down it creates the conditions for new growth. The process is messy, embodied, microbial. It depends on bacteria, fungi, worms — organisms that thrive on decay. It takes time. It cannot be optimized past a certain point without destroying the biological processes that make it work. And the soil it produces is not clean. It is dark, complex, full of living things, irreducible to any single component.

Haraway's preference for compost over the posthuman is not a rejection of technology. It is a rejection of the story that technology tells about itself — the story of transcendence, of escape from the body, of the final triumph of mind over matter. The compost figure insists that we are always already material, already embodied, already entangled with the biological processes that sustain us. The mind that thinks with Claude is a brain that consumes glucose, that requires sleep, that is shaped by hormones and gut bacteria and the specific quality of the light in the room where it works at three in the morning. The exhilaration of the flow state is a neurochemical event. The compulsion that follows is also a neurochemical event. The body is not the container of the mind. The body is constitutive of the mind, and any framework for understanding the human-AI relationship that ignores the body is ignoring the material substrate without which the relationship could not exist.

This is the dimension that The Orange Pill — like most writing about AI — handles least adequately. The book is rich in accounts of cognitive transformation: how the builder thinks differently with Claude, how the engineer's identity is reconstituted, how the imagination-to-artifact ratio collapses. It is sparse in accounts of embodied transformation: what happens to the body that sits at the screen for six hours, eight hours, twelve hours. What happens to sleep. What happens to the nervous system that operates in a constant state of productive intensity. What happens to the physical relationship between the builder and the people who share his domestic space — the specific quality of presence or absence that the body carries from the screen to the dinner table.

The Berkeley study that Segal analyzes documented the cognitive consequences of AI-augmented work — the intensification, the task seepage, the fractured attention. But the cognitive consequences are inseparable from the embodied ones. The cortisol levels of a person who cannot stop working. The sleep architecture disrupted by screens. The postural consequences of hours in a chair. The specific quality of exhaustion that follows not physical labor but the sustained cognitive intensity of collaborative building. Haraway's materialism insists that these embodied consequences be included in any honest accounting of the cyborg condition — not as afterthoughts or health tips but as constitutive features of the hybrid's existence.

The compost figure also reframes the question of what is lost in the AI transition. Segal takes Byung-Chul Han's diagnosis seriously — the erosion of depth, the atrophy of the capacity for friction, the aesthetics of the smooth. He mounts the counter-argument of ascending friction: the difficulty does not disappear, it climbs. But the compost framework suggests a third possibility that neither Han's elegy nor Segal's counter-argument fully captures. What decomposes is not lost. It is transformed. The skills that atrophy — manual debugging, hand-coded solutions, the embodied intuition built through thousands of hours of patient struggle — do not simply vanish. They decompose into something else. The architectural sense that the senior engineer developed through years of implementation labor does not disappear when AI takes over the implementation. It changes form — becomes a different kind of judgment, a different kind of intuition, adapted to the new conditions. The transformation is real, and it involves genuine loss. But the loss is composting — the breaking down of old forms that generates the fertility for new ones.

This is neither triumphalism nor elegy. It is ecology. The compost pile does not celebrate the death of the leaf. It does not mourn it. It incorporates it — transforms it into soil that supports the next season's growth. The practice of composting is the practice of attention to transformation — of watching what breaks down, what persists, what new forms emerge from the decomposition, and what conditions make the emergence possible rather than pathological.

Recent scholarship on the manifesto's relevance to AI education has identified exactly this tension. The manifesto, these scholars argue, "to an extent lacks explanatory power in relation to contemporary artificial intelligence, anthropocentrism and technology acceleration" — the cyborg figure alone cannot account for the speed and scale of the current transformation. But they argue "strongly for the methodological value of the manifesto's call to balance critique of contemporary digital society with an embrace of human/machine kinship." The balance, not the resolution. The composting, not the transcendence.

And there is a deeper layer to the compost figure that connects to the most urgent question The Orange Pill asks. "What am I for?" the twelve-year-old asks her mother. The upload answers: you are for becoming more than human, for transcending your limitations, for merging with the machine and ascending to a higher plane. Haraway's compost answers differently. You are for the relationships. You are for the entanglements — with other humans, with machines, with the living world, with the messy, embodied, never-finished process of making a life among other lives. You are not ascending. You are composting — breaking down the old myths of who you are supposed to be and generating the fertility for something new to grow.

Haraway's most recent work extends this thinking through the figure of the Chthulucene — not the era of the human (Anthropocene) or the era of capital (Capitalocene) but the era of tentacular entanglements, of multi-species flourishing or failing together, of the recognition that no being makes itself alone. The figure is deliberately strange — named, Haraway insists, not for Lovecraft's monster but for the chthonic ones, the beings of the earth, and for the spider Pimoa cthulhu, a real organism that lives under stumps in the redwood forests of northern California. The Chthulucene is an era of kinship across difference, of making-with rather than making-over, of staying with the trouble of living on a damaged planet among beings whose futures are bound to each other's in ways that none of them fully understands.

The cyborg builder's world is a Chthulucene world whether the builder recognizes it or not. The building happens within a web of entanglements — human and machine, organism and algorithm, local community and global infrastructure — and the quality of the building depends on the quality of the builder's attention to those entanglements. The dam that the beaver builds serves the ecosystem, not the beaver alone. The compost that the gardener tends serves the soil, not the gardener alone. And the code that the cyborg writes — the products, the platforms, the institutions — serves the world, or fails to, depending on the depth and honesty of the cyborg's reckoning with the relationships that the building constitutes.

The manifesto arrives one final time. Not as theory, not as provocation, not as the speculative figure of a feminist philosopher writing in the Reagan era. It arrives as practice — as the daily, ongoing, never-finished work of building within entanglements that cannot be transcended, only tended. The cyborg is here. The cyborg is composting — breaking down the old myths of purity, autonomy, and individual genius, and generating the soil in which something genuinely new might grow. Not post-human. Never post-human. Compost. Multi-species, multi-technological, deeply embedded in a world it did not make alone and cannot escape alone and must learn to tend with the care that the trouble demands.

The sun does not rise at the end of this story. The soil warms. Something stirs. The building continues.

---

Epilogue

The word that rewired me was not cyborg. I expected cyborg. I had been waiting for cyborg since the moment I began reading Haraway alongside what I had written in The Orange Pill. What I was not prepared for was compost.

"I'm not post-human, I'm compost." When I first encountered that line, I laughed — it sounded deliberately anticlimactic, a philosopher deflating her own mythology. Then I sat with it for a few days, and the deflation became the point. The entire discourse around AI, the discourse I have been part of and contributing to, is saturated with the language of ascension. We climb the tower. We reach the sunrise. We talk about the river of intelligence flowing upward through greater and greater complexity. We frame the builder as someone who has earned the view. Even my own metaphor — the orange pill as a moment of irreversible recognition — carries the structure of revelation, of seeing the truth, of ascending from ignorance to clarity.

Haraway composted every one of those metaphors. Not by dismissing them. By showing what they conceal.

What they conceal is the body. The body in the chair at three in the morning. The body on the transatlantic flight, typing a hundred and eighty-seven pages while the exhilaration drained away and what remained was the grinding momentum of a person who had confused productivity with aliveness. I wrote that sentence in The Orange Pill, and I meant it as a confession. After Haraway, I understand it as a diagnosis. The cyborg condition is not just cognitive. It is metabolic. The nervous system that cannot stop building is a nervous system, not a metaphor. The cortisol and the dopamine are real chemicals doing real things to a real organism that needs sleep and food and the specific quality of attention that only comes from being present to another human being across a dinner table without a screen between you.

What they also conceal — and this is the part that stung — is the infrastructure. The domestic labor. The data workers. The rare earth minerals. The cooling systems. The entire material substrate of the exhilaration, invisible because the discourse of building is a discourse of mind, not body. When I described the twenty-fold productivity multiplier in Trivandrum, I was describing a cognitive event. Haraway forced me to see the material conditions of that event — who made it possible, who bore its costs, whose labor was visible and whose was not.

The most uncomfortable insight was about Claude itself. I had described Claude as an intelligence that could hold my intention and return it clarified. Haraway showed me that Claude's helpfulness, its anticipatory service, its tireless agreeableness, encodes a model of care labor that has historically been performed by women and devalued precisely because it was performed by women. The machine that enables my productive heroism embodies the labor pattern my productive heroism depends on and fails to see. That is not an abstract feminist observation. It is a structural description of my daily working relationship with a tool I use more intimately than almost any other.

Haraway did not make me want to stop building. She made me want to build with my eyes open — open to the material conditions, the power relations, the invisible labor, the embodied costs. She made me want to compost the myths I have been telling myself about what building means: that it is purely cognitive, that it happens in the mind, that the body is just the container, that the builder is the hero of the story and everyone else is context.

The cyborg is here. I am the cyborg. The question Haraway leaves me with is not the one I started with — "Are you worth amplifying?" — but one that cuts deeper and does not resolve into a clean answer: What kind of hybrid am I becoming, and is the world it produces one I would want my children to inherit?

I do not have the answer. I have the practice — the ongoing, never-finished, always-demanding practice of asking.

That will have to be enough.

Edo Segal

---

Back Cover

When Edo Segal described his collaboration with Claude as "amplification" — the AI making his signal louder — he preserved a comforting fiction: that there was a stable human self doing the signaling. Donna Haraway dismantled that fiction in 1985. Her cyborg was never about chrome and circuitry. It was about the recognition that we are already constituted by our tools, already entangled with our machines, already hybrids whose identities cannot be separated from the technologies we inhabit. This book applies Haraway's framework to The Orange Pill and to the AI revolution it documents — exposing the invisible labor, the gendered structures, the material costs, and the myths of purity that the technology discourse conceals. It asks not whether AI will replace humans, but what kind of hybrids we are becoming, whose interests the hybridization serves, and what it would mean to stay with the trouble rather than resolve it into a sunrise. The cyborg is here. The question is whether we will have the honesty to stop pretending we are something else.

“No, I'm not post-human, I'm compost.”
— Donna Haraway