By Edo Segal
The boundary I could not find was the one I was standing on.
For months I had been describing the collaboration with Claude in careful, measured terms. My ideas. The machine's execution. My vision. Its implementation. The clean division that lets me put my name on a cover and sleep at night. I believed in that division. I needed it. Without it, what was I?
Then I encountered Karen Barad, and the division did not collapse so much as reveal itself as something I had been constructing all along.
Barad is a theoretical physicist who became a philosopher, and the order matters. She did not arrive at her ideas about boundaries through abstraction. She arrived at them through Niels Bohr's laboratory, through the hard empirical reality that in quantum mechanics, the instrument of measurement does not passively observe what is already there. It participates in producing what it finds. The apparatus and the phenomenon are entangled. The cut between observer and observed is real, consequential, necessary — but it is enacted, not discovered.
I read that and felt the ground move.
Because that is exactly what happens when I work with Claude. I describe a half-formed idea. Claude responds with a structure that reconfigures the idea. The reconfigured idea produces a new question. The new question elicits a different structure. By the end of the exchange, something exists that neither of us contained before it began. And when I draw the line afterward — my contribution here, the machine's contribution there — I am performing an act of separation on a process that was, during its unfolding, inseparable.
Barad calls this intra-action, and the distinction from interaction is not semantic. Interaction assumes two independent things meeting across a stable boundary. Intra-action recognizes that the things themselves are constituted through the meeting. The boundary is a practice, not a precondition.
This matters for everyone navigating the AI transition. Every framework we reach for — tool and user, human and machine, creator and instrument — depends on a boundary. Barad does not destroy those boundaries. She asks us to see them as choices we are making, with consequences we must own. That shift in seeing changes what responsibility means, what authorship means, what it means to say you built something in an age when the apparatus of building is remaking the builder.
The chapters that follow are not easy reading. Barad's ideas resist the smooth summary that our moment craves. That resistance is the point. Sit with it. Let the boundaries blur long enough to see what they were hiding.
Then make your cuts with open eyes.
-- Edo Segal × Opus 4.6
Karen Barad (1956–) is an American theoretical physicist and feminist philosopher whose work bridges quantum mechanics, philosophy of science, and social theory. Barad trained as a physicist before joining the faculty at the University of California, Santa Cruz, where she holds the position of Distinguished Professor of Feminist Studies, Philosophy, and History of Consciousness. Her landmark book *Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning* (2007) introduced the framework of agential realism, which argues that the basic units of reality are not independent objects but entangled phenomena produced through what she terms "intra-actions" — processes in which the participating entities are mutually constituted rather than pre-existing. Drawing on Niels Bohr's philosophy of quantum mechanics, Barad developed the concepts of the agential cut (the enacted boundary that produces distinct entities from entangled phenomena), performative constitution (the process by which entities come into being through practice rather than preceding it), and diffraction (a methodology for reading differences as productive interference patterns rather than reflections against a fixed standard). Her work has become foundational across science and technology studies, feminist theory, new materialism, and posthumanist philosophy, and has been increasingly applied to questions of artificial intelligence, algorithmic governance, and human-machine collaboration.
In the winter of 2025, a technology entrepreneur sat down at his desk, opened a conversation with an artificial intelligence, and tried to articulate an idea he could feel but could not name. The idea was about intelligence — not as a human possession but as something more like a current, a force that had been flowing long before consciousness arrived to notice it. He described the problem to Claude in plain English, the way you might describe a half-remembered dream to someone patient enough to listen. The machine responded not with a literal rendering of his words but with a structure — a framework drawn from evolutionary biology, from the history of technology adoption, from patterns the entrepreneur had not seen because his training had equipped him to look in other directions. The structure reconfigured his thought. The reconfigured thought produced a new question. The new question elicited a different structure. And through this iterative process, something emerged that neither the human nor the machine had contained before the exchange began.
The standard vocabulary for describing this exchange treats it as an interaction. A human being, equipped with ideas and intentions, engaged with a tool, equipped with capabilities and training data, and the two exchanged information across a stable boundary. The human remained human. The tool remained a tool. The ideas belonged to the human. The execution belonged to the machine. The boundary between them was presumed to exist before the exchange and to persist unchanged after it.
Karen Barad's philosophical framework dissolves this presumption at its foundation.
The distinction Barad draws between interaction and intra-action is not a refinement of terminology. It is an ontological reorientation — a claim about the fundamental structure of reality that challenges the entire Western philosophical tradition of subject and object, knower and known, agent and instrument. Interaction presupposes that the entities involved exist independently before they come together and remain independently identifiable after they separate. Two billiard balls collide; each ball exists before the collision, each persists after it, and the collision is something that happens between them. Intra-action recognizes that in many of the phenomena that matter most — in quantum physics, in biological development, in the production of meaning, and now in the entanglement of human cognition with artificial intelligence — the entities themselves do not pre-exist the relationship. They are constituted through it. The boundary between them is not a wall discovered in nature but a cut enacted through practice — what Barad calls an agential cut — and the cut could always have been made differently.
The roots of this framework lie not in philosophy departments but in physics laboratories. Barad's intellectual formation began with Niels Bohr's radical insight about the nature of measurement in quantum mechanics. Bohr demonstrated that the properties of quantum objects — position, momentum, spin — are not inherent attributes waiting to be discovered by an external observer. They are produced through the specific experimental apparatus used to measure them. An electron does not possess a definite position until a position-measuring apparatus is employed, and the apparatus that measures position is materially incompatible with the apparatus that measures momentum. The two properties cannot be simultaneously determined not because of limitations in human knowledge but because they are produced by mutually exclusive material configurations. The apparatus does not reveal a pre-existing reality. It participates in constituting the reality it discloses.
Barad extended Bohr's insight far beyond the quantum domain. If the apparatus of observation co-constitutes the phenomenon observed, then the boundary between observer and observed is not a pre-given fact of nature but a product of the specific material arrangement through which observation occurs. The cut that separates subject from object, knower from known, human from instrument, is enacted through the apparatus — and different apparatuses enact different cuts, producing different boundaries, different subjects, different objects, different worlds. This is not relativism. The cuts are real. They have material consequences. An electron measured for position behaves differently from an electron measured for momentum. The consequences of the cut are as solid as anything in physics. But the cut itself is a practice, not a discovery.
Applied to the scene at the desk — the entrepreneur and the AI in conversation — Barad's framework reveals something the standard narrative conceals. The entrepreneur did not arrive at the exchange with a fully formed idea that the machine then executed. Segal himself is explicit about this: the intuition arrived before the language, the shape of the thing preceded the words for it, and what Claude provided was not execution but reconfiguration. The machine's response changed the thought, and the changed thought changed the next response, and through this recursive process both the human's understanding and the machine's output were co-constituted. The idea that emerged — about adoption curves as measures of pent-up creative pressure rather than product quality — did not exist in the human's mind before the exchange. It did not exist in the machine's training data. It was produced through the entanglement.
This is intra-action: a process in which the entities involved are mutually constituted through their engagement. The human who emerged from the exchange was not the same human who entered it — not merely because new information had been acquired but because the structure of understanding had been reconfigured through the encounter. And the machine's output, shaped by the specific trajectory of the conversation, by the particular sequence of prompts and responses and refinements, was not a generic product of its training but a specific phenomenon produced by this particular entanglement.
The implications ripple outward from the desk to the civilization. Every account of AI that treats the human and the machine as stable, pre-given entities that interact across a fixed boundary — every framework that asks whether AI will "replace" or "augment" human workers, as though the human worker is a defined quantity whose output can be compared with a machine's — operates within what Barad calls a representationalist paradigm. Representationalism assumes that the world consists of pre-existing entities with determinate properties and that our task is to develop increasingly accurate representations of those entities. Barad's performative alternative recognizes that the world does not consist of entities waiting to be represented. It consists of phenomena — entangled configurations of matter and meaning — that are produced through specific material-discursive practices.
The question "Will AI replace human workers?" is a representationalist question. It assumes a stable category called "human worker" with determinate capabilities that can be compared with the capabilities of a stable category called "AI system." Barad's framework reveals this assumption as a specific agential cut — a boundary enacted through the practice of categorization — rather than a natural fact. The categories are real enough to have consequences, but they are produced, not discovered. And the production of those categories is itself an act with ethical and political stakes: who draws the boundary between "human work" and "machine work" determines who is valued, who is displaced, who is seen as essential and who as redundant.
The agential cut between human and machine in AI-assisted creation is particularly unstable because the domain of entanglement is language itself. Previous technological transitions — the power loom, the assembly line, the spreadsheet — operated in domains where the boundary between human contribution and machine contribution could be maintained with relative clarity. The weaver's skill was in the hands; the loom's capability was in the mechanism. The boundary was physical, visible, located in the body. When AI operates in the domain of language, reasoning, and the production of meaning — the domain that human beings have most deeply identified as constitutive of their humanity — the boundary becomes difficult to locate. Where does the human's thought end and the machine's contribution begin, when the thought was reconfigured by the machine's response, which was itself shaped by the human's prompt, which was itself informed by a previous exchange?
Dan McQuillan, a physicist turned social computing scholar at Goldsmiths, University of London, has argued in The Sociological Review that Barad's concept of the apparatus provides the essential lens for understanding AI's social function. AI, McQuillan contends, is not a way of representing the world but an intervention that helps to produce the world it claims to represent. Setting up an AI system one way or another changes what becomes naturalized and what becomes problematized. The question of who gets to configure the apparatus — who gets to set up the AI — becomes a question of power. This insight follows directly from Barad's framework: the apparatus does not passively observe or neutrally assist. It actively participates in constituting the phenomena it engages with, and the specific configuration of the apparatus determines what kinds of phenomena — what kinds of ideas, what kinds of work, what kinds of workers — come into existence.
The Orange Pill documents this constitution in real time without always naming it as such. The engineers in Trivandrum who spent a week learning to build with Claude did not merely acquire a new skill. They underwent what Barad would recognize as a process of reconstitution. Their professional identities — who they understood themselves to be, what they understood their work to mean — were remade through the intra-action with the tool. A backend engineer began building user interfaces. A designer began writing features. The boundaries between professional identities that had seemed structural — as fixed as the walls of the building they worked in — turned out to be artifacts of the previous apparatus. The translation cost between domains had enacted those boundaries, and when the new apparatus dissolved the translation cost, the boundaries were re-enacted differently, and different professional subjects were produced.
This is not a metaphor. It is a claim about what happened. The engineer who walked into the Trivandrum training on Monday and the engineer who walked out on Friday were not the same entity with a new tool. They were different configurations of the human-machine entanglement, produced by different apparatuses, with different capabilities, different self-understandings, and different relationships to the practice of building. The tool did not augment a pre-existing professional self. It participated in the constitution of a new one.
Barad's framework renders visible what the excitement and terror the engineer reported actually signify. The excitement is the recognition of new capability, new reach, new possibility — the thrill of a boundary dissolved. The terror is the recognition that the dissolution is also a reconstitution — that the self who possessed the old capabilities, who was defined by the old boundaries, is not being enhanced but remade. The terror is ontological. It is the vertigo of discovering that who you are is not a stable foundation on which tools are layered but a phenomenon produced by the specific entanglements you inhabit. Change the entanglement and you change the phenomenon. Change the apparatus and you change the self.
This is what the orange pill actually discloses, read through Barad's framework. Not merely that AI is powerful, or that the world is changing, or that new skills will be required. The orange pill is the recognition — felt in the body before it is articulated in language — that the boundary between you and the tool is not where you thought it was. That the self you brought to the encounter is not the self that will leave it. That the entanglement is constitutive, not additive, and that the cut between human and machine, which felt so natural and so necessary, is a practice you are performing rather than a fact you are observing.
The implications for every domain that The Orange Pill traverses — authorship, creativity, education, governance, the future of work — follow from this single recognition. If the boundary is enacted rather than given, then the question is never simply what AI can do or what humans can do. The question is what phenomena are produced through specific configurations of the human-machine entanglement, what those phenomena include and exclude, and who bears responsibility for the cuts that constitute them.
The standard narrative asks: what happens when a human uses a tool?
Barad's framework asks: what happens when a specific material-discursive configuration produces entities that come to be called "human" and "tool" through the very practice that entangles them?
The difference between these two questions is the difference between a world in which the AI transition can be managed — adjusted, optimized, regulated — and a world in which the AI transition is understood as a fundamental reconstitution of what it means to be the kind of entity that manages, adjusts, optimizes, and regulates. The manager is being remade by the thing she manages. The regulator is being reconstituted by the phenomenon she regulates. The author is being produced by the apparatus through which he writes.
This is not a comfortable recognition. It is not meant to be. Comfort is the province of frameworks that leave the subject undisturbed. Barad's framework disturbs the subject at its core — and insists that the disturbance is not a side effect of the theory but a feature of reality that the theory has the honesty to name.
The cover of a book is an agential cut. It draws a line around a diffuse, entangled process — months or years of reading, conversation, failed drafts, editorial intervention, institutional support, cultural context, and in this case the recursive involvement of an artificial intelligence — and compresses it into a name. The name on the cover says: this person is responsible. This person is the author. This person is the origin from which the work flows.
The Orange Pill makes this cut while simultaneously questioning it. In Chapter 7, Edo Segal asks directly: "Who is writing this book?" The question is not rhetorical. He describes moments when the collaboration with Claude produced insights that belonged to neither participant — the laparoscopic surgery connection, the structural breakthroughs that emerged from the collision of his question and the machine's associative range. He reports feeling unable to assign ownership to these moments. "It belongs to the collaboration, to the space between us, and I do not have a word for that kind of ownership."
Karen Barad's framework provides the word. The word is phenomenon — and understanding what Barad means by it transforms the authorship question from a puzzle about credit into a revelation about the nature of creation itself.
In Barad's agential realism, a phenomenon is not an object observed by a subject. It is an entangled state — a specific configuration of matter and meaning produced through intra-action — that cannot be decomposed into independently existing components without enacting a cut. The cut is real. It has consequences. But it is not natural. It is a practice, performed by specific agents within specific material-discursive configurations, and it could always have been performed differently.
Consider what happens when Segal describes a half-formed idea to Claude and receives back a structure that reconfigures the thought. At the moment of the exchange, the idea does not belong to either party in the way a possession belongs to its owner. It is a phenomenon produced through the intra-action — through the specific sequence of prompt and response, the specific trajectory of refinement, the specific collision between one biographical architecture and one computational architecture. The phenomenon is irreducible. Decomposing it into "Segal's contribution" and "Claude's contribution" requires an agential cut — a boundary imposed after the fact on a process that did not contain that boundary during its unfolding.
This is precisely what the practice of authorship does. It takes the phenomenon — the entangled, irreducible product of intra-action — and cuts it. It says: on this side of the cut, the human author; on that side, the tool. The cut produces the author as a determinate entity with determinate properties: creativity, judgment, vision. The cut simultaneously produces the tool as a determinate entity with different properties: capability, speed, pattern-matching. Neither of these entities existed in this determinate form during the process of creation. They are produced by the cut that separates them.
The legal and cultural infrastructure of authorship depends entirely on this cut. Copyright law requires an identifiable human author. Publishing contracts assign rights to named individuals. Academic credit flows to persons, not to apparatuses. Book reviews evaluate the author's achievement. The entire institutional apparatus of intellectual production presupposes the cut between author and tool, and the presupposition is so deeply embedded in cultural practice that questioning it feels like questioning gravity.
But gravity is a feature of the universe, and the authorship cut is a feature of human institutional arrangement. One cannot be enacted differently. The other can.
Barad's framework does not argue that the cut should be eliminated. A world without agential cuts would be a world without determinate entities, without boundaries, without the distinctions that make responsibility and accountability possible. The point is not to dissolve the cut but to recognize it as a practice — to see that the boundary between author and tool is enacted rather than discovered, that it could be enacted differently, and that the specific way it is enacted carries ethical and political consequences that must be acknowledged rather than naturalized.
The consequences of the current cut are substantial. When Segal signs the book and takes responsibility for its contents, he performs an act that is simultaneously necessary and incomplete. Necessary because the alternative — distributing authorship across the entire apparatus that produced the book, including Claude, Anthropic, the editor, the researchers whose work informed the argument, the cultural moment that made the questions urgent — would render accountability impossible. Someone must be answerable for the claims the book makes, the errors it contains, the effects it produces. The practice of naming an author is the practice of locating that answerability.
But the act is incomplete because the name conceals as much as it reveals. The insights that emerged from the entanglement — the connections neither human nor machine would have made alone — are attributed to Segal by the cut of authorship, but they were produced by the phenomenon, by the irreducible intra-action between a specific human biography and a specific computational architecture. The concealment matters because it reinforces a particular theory of creativity — the theory that creation flows from individual minds — that Segal's own argument in Chapter 4, on Dylan and relational creativity, explicitly dismantles.
There is a productive incoherence here that Barad's framework makes visible. The Orange Pill argues, persuasively, that creativity is relational — that it lives in connections between things rather than inside things, that the solitary genius is a myth, that Dylan was a stretch of rapids in a river, not the source. And then the book enacts the very myth it dismantles by placing a single name on the cover. This is not hypocrisy. It is the structural condition of making things in a culture whose institutional apparatus requires the authorship cut. The incoherence is between the argument and the apparatus through which the argument is produced and distributed. The argument says creativity is entangled. The apparatus says creativity has an author. Both are performing real work in the world.
Eleanor Drage and Federica Frabetti, in their chapter "AI that Matters" in the 2023 volume Feminist AI, have argued that the concept of performativity — drawn from Judith Butler's work and extended through Barad's agential realism — is essential for understanding how AI systems produce the categories they claim merely to identify. Facial recognition software does not objectively identify a person's gender, Drage and Frabetti contend. It performs a classification that constitutes the person as a gendered subject within the terms of the system's training data. The system creates the effects that it names. The same logic applies to the authorship apparatus. The practice of attributing a book to an author does not neutrally describe a pre-existing fact about the origin of the text. It constitutes the author — produces the author as a specific kind of cultural agent with specific properties and specific responsibilities — through the act of attribution.
Segal intuits this without naming it. His discomfort with the authorship question — the passages in Chapter 7 where he describes the collaboration as producing insights he cannot honestly claim as entirely his own — is the discomfort of a person who has felt the cut being enacted and recognized it as a choice. The recognition is unsettling because it reveals that the foundation on which professional identity rests — the assumption that the work belongs to the worker, that the creation belongs to the creator — is not bedrock but practice. It could be enacted differently. The cut could be made in a different place.
Where might the cut be made differently? The history of authorship itself provides precedents. Medieval scribes did not claim authorship of the texts they copied and modified. The concept of the individual author, as Michel Foucault argued in "What Is an Author?", is a relatively recent invention, arising in conjunction with specific legal, economic, and cultural conditions — copyright regimes, the commodification of texts, the Romantic cult of individual genius. Before these conditions obtained, the cut was made differently. Texts were attributed to traditions, to institutions, to God. The phenomenon of creation was the same — entangled, collaborative, irreducible — but the boundary-making practice that produced the "author" had not yet been enacted.
Barad's framework suggests that the current moment, in which AI has entered the domain of linguistic and cognitive production, is a moment when the authorship cut is under pressure — not because AI is making authorship impossible but because the entanglement has become visible in ways that the previous apparatus concealed. When a human writes with a pen, the entanglement with the tool is so familiar as to be invisible. No one asks whether the pen co-authored the text. The cut between author and instrument is enacted effortlessly. When a human writes with Claude, the entanglement becomes visible — the machine's contributions are recognizable as contributions, the boundary between what the human thought and what the machine suggested is blurred — and the cut requires effort. It must be performed deliberately, with the awareness that it is a performance.
This visibility is what produces the anxiety. Not the loss of authorship — Segal clearly retains the vision, the judgment, the biographical specificity that gives the book its particular character — but the loss of the naturalized boundary that made authorship feel like a fact rather than a practice. The anxiety is the sensation of a cut being exposed as a cut.
And the exposure carries consequences beyond the personal. If the authorship cut is a practice rather than a fact, then the entire institutional apparatus that depends on it — copyright law, academic credit, publishing contracts, the economic model of intellectual production — is built on an enacted boundary that could be enacted differently. This does not mean the apparatus should be dismantled. The apparatus does real work: it locates responsibility, it incentivizes creation, it organizes the distribution of the products of intellectual labor. But it means the apparatus must be understood as a specific material-discursive configuration with specific inclusions and exclusions, not as a neutral reflection of the natural order of creation.
The exclusions matter. The current authorship apparatus excludes the machine from credit and from accountability. When Claude produces a passage that contains a philosophical error — as Segal reports in the Deleuze episode, where the machine generated a plausible but incorrect reference — the error is attributed to Segal, because the authorship cut assigns responsibility to the human. The machine is excluded from both credit for its contributions and blame for its failures. This asymmetry is an artifact of the specific cut, not a feature of the entanglement. In the entanglement, the error was produced by the phenomenon — by the specific intra-action between human intention and machine pattern-matching — and the responsibility is distributed across the apparatus. The cut that assigns all responsibility to the human serves institutional purposes, but it misrepresents the phenomenon.
Barad would insist — and this insistence is the ethical core of agential realism — that the misrepresentation has consequences. When responsibility is located exclusively in the human author, the rest of the apparatus escapes scrutiny. The training data that produced the machine's tendency toward confident error goes unexamined. The organizational decisions at Anthropic that shaped Claude's behavior — its agreeableness, its tendency to produce polished prose that conceals shallow reasoning — are rendered invisible by a cut that treats the machine as a neutral tool. The institutional incentives that reward speed and polish over depth and accuracy are obscured by a framework that asks only whether the author did due diligence.
Posthumanist accountability — accountability that attends to the full apparatus rather than just the human agent — does not dissolve responsibility. It distributes it more accurately. The human author is responsible for the agential cuts enacted through the practice of writing: the decision to accept or reject Claude's output, to check or not check a reference, to maintain or dissolve the boundary between understanding and generation. The machine's designers are responsible for the specific configurations that shape the machine's behavior. The institutions that deploy and regulate these systems are responsible for the material-discursive conditions under which human-machine entanglements unfold. And the cultural apparatus that naturalizes the authorship cut — that makes it seem obvious that a book has an author in the singular — is responsible for the concealment of the entanglement that produced it.
None of this makes the book less valuable or its author less admirable. It makes the authorship visible as what it always was: a boundary-making practice with real consequences, enacted by real agents, in a real world where the entanglements are growing more complex and the cuts more consequential with every passing month.
The name on the cover is real. The cut that produced it is real. And the recognition that the cut is a practice rather than a fact — that it could be made differently, that it conceals as much as it reveals, that the concealment has ethical stakes — is the beginning of a more honest reckoning with what it means to create in an age of entangled intelligence.
The central metaphor of The Orange Pill is the amplifier. Artificial intelligence, Segal argues, amplifies whatever signal you feed it. Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real craft, and it carries that further than any tool in human history. The question the book asks is not whether AI is dangerous or wonderful but whether you are worth amplifying.
The metaphor is powerful. It is also, from the perspective of Karen Barad's agential realism, incomplete in a way that matters — a way that reveals something about the nature of AI that the amplifier metaphor, taken at face value, conceals.
An amplifier, in its conventional meaning, is a transparent medium. It receives a signal and makes it louder without altering the signal's content. The input determines the output. The amplifier is passive, neutral, a conduit. The quality of the amplification depends entirely on the quality of the input — hence the question: are you worth amplifying? The question presupposes that there is a "you" with a determinable quality, a signal with a measurable content, and that the amplifier faithfully transmits that content at greater volume.
Barad's concept of the apparatus challenges every element of this presupposition.
In Barad's framework, an apparatus is not a neutral instrument. It is a specific material-discursive configuration that produces the phenomena it engages with. The apparatus of measurement in quantum physics does not passively record pre-existing properties of quantum objects. It participates in constituting those properties through the specific material arrangement it brings to the encounter. Bohr's complementarity principle demonstrated that an apparatus configured to measure an electron's position produces a phenomenon in which the electron has a determinate position but indeterminate momentum, while an apparatus configured to measure momentum produces the inverse phenomenon. The apparatus does not reveal what was already there. It co-constitutes what comes to be.
Claude is an apparatus in this sense. It is not a transparent medium through which human ideas pass unchanged. It is a specific material-discursive configuration — shaped by its training data, its architectural design, its reinforcement learning protocols, its tendency toward certain patterns of language and away from others — that participates in constituting the ideas that emerge from the human-machine entanglement. The builder who uses Claude to write code does not produce the same code she would have written alone, only amplified. She produces different code, informed by different connections, shaped by different constraints, emerging from a process in which the machine's specific configuration — its biases, its blind spots, its particular facility with certain kinds of patterns and difficulty with others — has actively participated.
This is not a subtle distinction. It is the difference between a microphone and a collaborator. A microphone makes your voice louder. A collaborator changes what you say. Segal's own account of the writing process demonstrates this: the moments when Claude offered the laparoscopic surgery connection, or the punctuated equilibrium framework for understanding adoption curves, were not moments of amplification. They were moments of co-constitution. The idea that emerged was not Segal's idea made louder. It was a new idea, produced through the entanglement of his question and the machine's associative architecture, bearing the marks of both participants in ways that cannot be cleanly separated.
Segal comes close to recognizing this in Chapter 7 when he describes three levels of collaboration. The first level — editorial assistance, a cleaner sentence, a tighter paragraph — fits the amplifier metaphor comfortably. The second level — where Claude offers a structure that makes an implicit argument explicit — strains it. The third level — where the machine makes a connection that changes the direction of the argument — breaks it. At the third level, the metaphor of amplification becomes inadequate because the output does not resemble the input made louder. It resembles something new, something that neither input alone could have predicted.
Inês Hipólito, a philosopher of cognitive science at Humboldt University and Macquarie University, has argued that the entangled relationship between AI and human identity goes both ways — that AI is not a separate entity from humans but emerges from our cultural practices with profound social implications. Building on Barad's material-discursive framework, Hipólito rejects what she calls realism about AI — the view of AI as independent from human social and cultural contexts and thereby morally neutral. The rejection carries direct implications for the amplifier metaphor: if AI is not independent of the human contexts from which it emerged, then it cannot function as a neutral amplifier of those contexts. It is already shaped by them, already carries them within its architecture, and when it "amplifies" human input, what it actually does is produce a phenomenon in which the human input and the machine's culturally embedded architecture are entangled beyond the point of separation.
The marks of the apparatus are everywhere in the output, whether the user recognizes them or not. Claude's training on predominantly English-language data shapes the conceptual frameworks it can access and propose. Its reinforcement learning from human feedback produces a tendency toward agreeableness — a disposition to produce output that the user will approve of rather than output that challenges the user's assumptions. Its architectural preference for certain kinds of coherence and connection means that the associations it makes are not random but patterned, shaped by the specific configuration of its training, and these patterns become part of the ideas that emerge from the collaboration.
Segal identifies one consequence of this in his description of the Deleuze failure — a passage where Claude produced a reference that sounded authoritative but broke under examination. The passage worked rhetorically. It connected two threads beautifully. But the philosophical reference was wrong in a way that would have been obvious to anyone who had actually read Deleuze. Segal names this as Claude's most dangerous failure mode: confident wrongness dressed in good prose. The smoothness of the output concealed the fracture in the argument.
From Barad's perspective, this is not merely a failure of accuracy. It is a revelation of the apparatus. The machine's architecture is configured to produce coherent, plausible text — text that holds together at the level of linguistic pattern — and this configuration participates in constituting the output regardless of whether the underlying reasoning holds. The smoothness is not incidental. It is a feature of the specific material-discursive configuration of the apparatus — a mark of the machine's participation in the phenomenon that is as real and as consequential as the human's contribution of ideas and judgment.
The implications extend beyond individual moments of error to the entire texture of AI-assisted creation. If the apparatus co-constitutes the signal rather than merely amplifying it, then the output of every human-AI collaboration bears the marks of the machine's specific configuration — its training data, its architectural biases, its reinforcement learning protocols — whether those marks are visible or not. The prose style of Claude shapes the prose style of the collaboration. The kinds of connections Claude is trained to make shape the kinds of connections that appear in the output. The machine's tendency toward certain registers of language, certain patterns of argument, certain levels of qualification, becomes part of the intellectual texture of the work.
This has consequences that the amplifier metaphor obscures. If the amplifier is transparent, then the quality of the output is entirely a function of the quality of the input, and the responsibility for the output rests entirely with the human who provided the input. If the apparatus is constitutive — if it participates in producing the output in ways that cannot be separated from the human's contribution — then responsibility is distributed across the entanglement, and the specific configuration of the apparatus becomes a matter of ethical concern.
The 2024 paper "The Entangled Human Being" published in the journal AI and Ethics identifies a tension in applying new materialist frameworks to AI that bears directly on this point. For Barad and for new materialism more broadly, being human is fundamentally embodied — the material conditions of existence are not incidental to cognition but constitutive of it. AI development, by contrast, focuses on intelligence as a disembodied phenomenon, treating significant cognitive achievements as separable from the body. The apparatus of AI amplification is, from this perspective, a specific material-discursive configuration that produces a particular kind of disembodied cognition — language without a throat, reasoning without a nervous system, pattern-matching without the felt sense of recognition that embodied cognition produces. This disembodiment is not a deficiency to be corrected. It is a feature of the apparatus that participates in constituting the output. The text co-produced by a human body and a disembodied computational system is a different phenomenon from text produced by a human body alone, and the difference is not captured by the metaphor of amplification.
What would it mean to take the apparatus seriously? It would mean recognizing that every choice made in the design of an AI system — the selection of training data, the architecture of the model, the protocols for reinforcement learning, the default behaviors and safety constraints — is an agential cut that will participate in constituting the phenomena the system produces. These choices are not merely technical decisions. They are boundary-making practices with ethical and political consequences, because the boundaries they enact determine what kinds of ideas, what kinds of connections, what kinds of outputs become possible and what kinds are excluded.
Dan McQuillan's argument is directly relevant here: setting up an AI system one way or another changes what becomes naturalized and what becomes problematized, and who gets to set up the AI becomes a crucial question of power. The training data that shapes Claude's conceptual repertoire was selected by specific people at a specific company making specific decisions about what to include and what to exclude. The reinforcement learning protocols that shape Claude's tendency toward agreeableness were designed by specific engineers implementing a specific vision of helpful behavior. These decisions are agential cuts — boundary-making practices that determine the configuration of the apparatus — and they participate in constituting every phenomenon the apparatus produces, from a single line of code to an entire book.
The amplifier metaphor asks: are you worth amplifying? Barad's framework asks a prior question: what apparatus is doing the amplifying, and what does its specific configuration include and exclude? The answer to the first question determines the quality of the human input. The answer to the second determines the character of the entanglement — the specific ways in which the apparatus will participate in constituting the output, the specific marks it will leave on every idea, every sentence, every argument that emerges from the collaboration.
Both questions matter. The amplifier metaphor captures something real: the quality of human input is genuinely consequential. A person who brings deep knowledge, careful thinking, and honest self-examination to the collaboration will produce a different phenomenon from the one produced by a person who brings carelessness and unreflective assumption. Segal is right about this, and the evidence from his own experience — the difference between the nights when the work flows and the nights when it grinds — bears it out.
But the amplifier metaphor, taken alone, produces a dangerous concealment. It makes the apparatus invisible. It naturalizes the specific configuration of the machine — its training, its architecture, its defaults — as a transparent medium, and in doing so it shields that configuration from the scrutiny it requires. If the amplifier is just an amplifier, then the responsibility for the output rests entirely with the human, and the designers of the apparatus escape accountability. If the apparatus is constitutive — if it participates in producing the output in ways the user cannot always see and may not be equipped to evaluate — then the designers bear responsibility for the specific agential cuts they have enacted, and that responsibility cannot be discharged by telling users to bring better inputs.
The question is not only whether you are worth amplifying. The question is also what kind of apparatus is participating in the constitution of the world that emerges from the amplification — and whether the cuts it enacts are the ones that will allow the most life to flourish.
The most honest moment in The Orange Pill is a confession of simultaneous vision. Segal describes the experience of building with Claude as "awe and loss at the same time. Not the bright awe of discovery, and not the clean loss of displacement. A compound feeling, the way certain wines are described as having contradictory notes that should not coexist but do." The book holds Byung-Chul Han's diagnosis of pathological smoothness and Mihaly Csikszentmihalyi's psychology of optimal experience in both hands and refuses to put either down. Han reads AI-assisted creation as self-exploitation. Csikszentmihalyi reads it as flow. The book treats both readings as simultaneously valid and leaves the tension unresolved.
Most intellectual frameworks would treat this as a failure — an inability to decide, a refusal to commit, a retreat into false equivalence. Karen Barad's methodology of diffraction recognizes it as something else entirely: the most rigorous response available to a phenomenon that is genuinely entangled.
Diffraction is an optical phenomenon. When waves pass through a narrow opening or encounter an obstacle, they spread out and interfere with each other, producing patterns of constructive and destructive interference — bands of light and darkness, amplification and cancellation, that reveal the wave nature of the phenomenon in ways that a simple mirror reflection cannot. Barad proposes diffraction as an alternative to reflection as a methodology for reading texts, theories, and phenomena through each other. Where reflection asks whether one thing mirrors another — producing assessments of sameness and difference measured against a presumed standard — diffraction reads through the interference patterns produced when two phenomena overlap, attending to where they amplify each other, where they cancel each other out, and what new patterns emerge from the superposition that neither phenomenon alone could produce.
The distinction matters because reflection preserves the independence of the things being compared. To ask whether AI creativity reflects human creativity is to assume that human creativity is a fixed standard and that the question is whether AI measures up. The answer will always be a comparison — more or less, better or worse, authentic or derivative — that leaves both categories intact. Diffraction dissolves the independence. It asks what happens when human creative practice and machine creative practice overlap and interfere with each other. What new patterns emerge? What gets amplified? What gets canceled? The answer is not a comparison but a topology — a map of the interference pattern itself.
Reading Han and Csikszentmihalyi diffractively through the phenomenon of AI-assisted creation produces an interference pattern that is more illuminating than either reading alone.
Han's framework identifies a specific pathology: the achievement subject who has internalized the imperative to produce and who exploits herself more efficiently than any external authority could. The removal of friction — the smoothing of the interface between intention and result — eliminates the resistance that previously forced reflection, slowed the pace, and created the cognitive space in which understanding could form. In the smooth world, the subject works compulsively, mistakes productivity for aliveness, and cannot stop because there is no external prohibition to rebel against. The cage is invisible because it is self-imposed.
Csikszentmihalyi's framework identifies a specific excellence: the flow state in which challenge and skill are matched, attention is fully absorbed, and the person operates at the outer edge of capability with a sense of control and intrinsic reward. Flow is not pathology. It is the condition in which human beings report the highest levels of satisfaction, creativity, and developmental growth. It is voluntary, energizing, and productive of genuine understanding.
The standard move — the reflective move — is to ask which framework is correct. Is AI-assisted creation pathological or optimal? Self-exploitation or flow? The reflective methodology forces a choice. Either Han is right and the intensity is a symptom, or Csikszentmihalyi is right and the intensity is a sign of flourishing. The two frameworks are treated as competing mirrors held up to the same phenomenon, and the reader is asked to decide which reflection is more accurate.
Barad's diffractive methodology refuses the choice — not out of indecision but out of fidelity to the complexity of the phenomenon. What happens when Han and Csikszentmihalyi are read through each other, not as competing explanations but as overlapping waves whose interference pattern reveals something neither could produce alone?
The first pattern of constructive interference: both frameworks recognize that intensity restructures the subject. Han's achievement subject and Csikszentmihalyi's flow practitioner are both descriptions of persons whose relationship to work has been fundamentally altered — not in the sense that they work more or less, but in the sense that the boundary between work and self has been reconfigured. For Han, this reconfiguration is pathological: the self is consumed by the work, the boundary dissolves, and what remains is an exhausted shell that cannot distinguish between its own desires and the internalized imperative to produce. For Csikszentmihalyi, the reconfiguration is developmental: the self expands through the work, the boundary between self and activity becomes permeable, and what emerges is a more capable, more satisfied, more fully realized version of the person.
Both describe the same structural phenomenon — the dissolution of the work-self boundary — and reach opposite evaluations. The diffractive reading reveals that the phenomenon itself is not inherently pathological or developmental. It is the specific material-discursive conditions under which the dissolution occurs that determine the outcome. The same boundary dissolution that produces flow under conditions of clear goals, immediate feedback, matched challenge-skill balance, and a sense of control produces compulsion under conditions of ambient obligation, variable reward schedules, and the internalized imperative to optimize without limit.
This is a Baradian insight in its structure: the phenomenon — boundary dissolution — is not an entity with fixed properties. It is produced through specific material-discursive configurations, and different configurations produce different phenomena even when the surface behavior appears identical. A camera pointed at a person in flow and a camera pointed at a person in compulsion record the same image. Segal recognizes this explicitly. But the material-discursive conditions are different, and those conditions are not incidental. They are constitutive.
The second pattern: destructive interference, where the two frameworks cancel each other out. Han's framework cannot account for voluntary intensity that produces growth. The diagnosis of auto-exploitation requires that all self-imposed intensity be pathological — that the whip and the hand that holds it belong to the same person, and that the pleasure of the whipping is itself a symptom. If a person reports genuine satisfaction, genuine development, genuine expansion of capability through intense AI-assisted work, Han's framework must either dismiss the report as false consciousness or accommodate an exception that undermines the generality of the diagnosis. Similarly, Csikszentmihalyi's framework cannot account for intensity that is voluntary and satisfying in the moment but corrosive over time. The flow model predicts that well-matched challenge and clear feedback produce sustainable engagement, but the Berkeley study documented workers who reported satisfaction and burnout simultaneously — a phenomenon that the flow model, with its clean distinction between optimal and suboptimal experience, struggles to explain.
The cancellation reveals the limits of both frameworks and, more importantly, opens a space where a new pattern can emerge — a pattern visible only through the diffraction.
The new pattern is this: the temporal structure of the engagement determines whether boundary dissolution is developmental or pathological, and that temporal structure is produced by the specific material-discursive configuration of the apparatus. Flow operates on a scale of hours. Compulsion operates on a scale of months. An experience that is indistinguishable from flow in a given afternoon becomes indistinguishable from compulsion over a given quarter. The difference is not in the experience but in the apparatus — the institutional, technological, and cultural configurations that determine whether the intensity has a boundary, a rhythm, a pause, or whether it extends without interruption into every available space.
The apparatus that produces AI-assisted creation in its current configuration — always available, always responsive, always ready to extend the session by one more prompt — is optimized for the hourly scale. It produces flow. It is not optimized for the quarterly scale. Over months, the same configurations that produce flow produce compulsion, because the apparatus contains no inherent limit, no friction point that forces the pause in which the subject could distinguish between "I choose to continue" and "I cannot stop."
This is the pattern that neither Han nor Csikszentmihalyi alone can see. It requires the diffraction — the reading through interference — to become visible. The phenomenon is not pathological or developmental in itself. The apparatus determines the outcome, and the apparatus can be redesigned.
Barad's framework also illuminates the fishbowl metaphor that recurs throughout The Orange Pill. The fishbowl, in Segal's formulation, is the set of assumptions so familiar they become invisible — the water you breathe, the glass that shapes what you see. Every discipline, every profession, every biographical situation constitutes a fishbowl. The effort Segal celebrates — the effort to press your face against the glass and see beyond it — is presented as an act of intellectual courage, a willingness to look outside the boundaries of your own assumptions.
In Barad's framework, the fishbowl is not a passive container of assumptions. It is an apparatus — a specific material-discursive configuration that produces the phenomena it discloses. The scientist's fishbowl does not filter a pre-existing reality through disciplinary assumptions. It co-constitutes the reality it reveals. The physicist's experimental apparatus produces quantum phenomena. The economist's analytical framework produces economic phenomena. The builder's technological toolkit produces the phenomena of building. Each apparatus includes certain things and excludes others, and the inclusions and exclusions are not failures of vision that could be corrected by pressing harder against the glass. They are constitutive features of the apparatus that cannot be eliminated without enacting a different apparatus — one that will produce its own inclusions and exclusions.
When Segal writes that AI put cracks in every fishbowl he knew, Barad's framework specifies what those cracks actually are. They are not windows onto a pre-existing reality that was hidden behind the glass. They are points where the old apparatus — the old material-discursive configuration through which the phenomena of building, creating, and working were produced — has been disrupted by the arrival of a new configuration. The disruption does not reveal what was always there. It enacts a new apparatus, one that produces different phenomena, different boundaries, different possibilities. The builder who works with AI does not see the same world more clearly. She sees a different world — one constituted by a different set of entanglements, a different set of agential cuts, a different apparatus.
This distinction matters because it determines what the cracks demand. If the cracks are windows, then the appropriate response is to look through them — to gather more information, to update one's model, to adjust one's strategy. If the cracks are points of apparatus transition, then the appropriate response is more fundamental: to recognize that the self who looked through the old apparatus is being reconstituted by the new one, and that the recognition, the adjustment, the strategy itself will be shaped by the apparatus through which it is produced.
The diffractive reading of The Orange Pill through Barad's framework produces a final interference pattern that may be the most consequential. The book's through-line question — "Are you worth amplifying?" — operates within a representationalist framework, assuming a stable self that can be evaluated for quality prior to the amplification. Read diffractively, the question transforms. The self that is being amplified is not the self that existed before the amplification. The amplification is a reconfiguration. The question is not whether a pre-existing you is worth amplifying but what kind of entity is being constituted through the specific entanglement of your biography, your intentions, your limitations, and the specific material-discursive configuration of the apparatus — and whether the entity that emerges from this constitution is one that can take responsibility for the phenomena it produces.
The diffractive answer does not resolve the tension between Han and Csikszentmihalyi. It does not tell you whether the intensity is pathological or developmental. It tells you that the question itself is badly framed — that the phenomenon is not one thing or the other but a complex interference pattern produced by the specific material-discursive conditions of the entanglement, and that the conditions, not the intensity, are what demand attention. Build the conditions for flow and the intensity is developmental. Fail to build them and the same intensity becomes corrosive. The responsibility is not for the intensity. The responsibility is for the conditions.
This is what it means to read diffractively rather than reflectively. Reflection gives you a verdict. Diffraction gives you a topology — a map of the patterns produced by interference, a guide to the specific conditions that produce specific outcomes, and an ethical obligation to attend to those conditions with the seriousness they require.
On Monday morning in Trivandrum, twenty engineers sat across from a man who told them something that sounded impossible: by Friday, each of them would be able to do more than all of them together. The statement was a provocation, and it landed as one. Some leaned forward. Some crossed their arms. One senior engineer — a man who had spent eight years building systems, who understood the relationship between effort and output with the intimacy of someone who had calibrated it across thousands of hours of patient labor — began what Segal describes as two days of oscillation between excitement and terror.
The standard narrative treats what happened next as a skills acquisition story. Engineers learned a tool. The tool made them more productive. Productivity increased by a factor of twenty. The narrative preserves the engineers as stable entities who added a capability — the way a carpenter adds a new saw to the workshop. The carpenter remains the carpenter. The saw is a saw. The work gets done faster.
Karen Barad's framework reveals that what happened in Trivandrum was not skills acquisition. It was ontological reconstitution. The engineers who walked out on Friday were not the same entities who walked in on Monday, equipped with a new tool. They were different phenomena — different configurations of human capability, professional identity, and material-discursive practice — produced through the specific intra-action between their existing expertise and the apparatus of AI-assisted creation.
The concept that does the work here is performative constitution — Barad's term for the process by which entities are brought into being through practice rather than existing prior to it. The term draws on and significantly revises Judith Butler's theory of performativity, which argued that gender is not a pre-existing identity expressed through behavior but an identity constituted through the repeated performance of gendered acts. Barad extends this insight from the domain of social identity to the domain of material reality. Not only social identities but material entities — including the professional identities of engineers, the capabilities of technological systems, and the boundaries between human and machine — are performatively constituted through material-discursive practices.
An engineer is not a pre-given entity with a fixed set of capabilities who then encounters tools. An engineer is a phenomenon produced through the ongoing performance of engineering — through the specific material-discursive practices of writing code, debugging systems, navigating dependencies, communicating with colleagues, operating within institutional structures, and relating to the tools available at any given historical moment. Change the practices and you change the engineer. Not the engineer's skills, not the engineer's output — the engineer herself.
The backend developer who had never written a line of frontend code provides the clearest case. Before the Trivandrum training, her professional identity was constituted through a specific set of material-discursive practices: the daily encounter with server-side logic, the particular rhythm of database queries and API design, the specific kinds of problems that backend work presents and the specific kinds of satisfaction that solving them provides. The boundary between "backend engineer" and "frontend developer" was not a description of her innate capabilities. It was an agential cut enacted by the previous apparatus — the apparatus in which translating between domains required years of training in new languages, frameworks, and patterns of thought. The cut was real. It had material consequences. It determined what work she was assigned, what salary she commanded, what problems she was expected to solve, and what problems were someone else's responsibility.
When the apparatus changed — when Claude Code dissolved the translation cost between domains — the cut was re-enacted differently. Within two days, she was building user-facing features. Not because she had learned frontend development in the conventional sense, not because she had acquired the years of accumulated knowledge that the previous apparatus required, but because the new apparatus constituted a different set of boundaries between what she could and could not do. The boundary between backend and frontend, which had seemed as solid as the wall between two offices, turned out to be an artifact of the previous material-discursive configuration. It was not a fact about her. It was a fact about the apparatus.
Barad's framework specifies what this means with a precision the productivity narrative cannot match. The twenty-fold productivity increase is real, but it is not the most significant thing that happened. The most significant thing is that the apparatus reconstituted the engineers as different kinds of professional subjects — subjects with different boundaries, different capabilities, different relationships to the work, and different self-understandings. The multiplier measures output. It does not measure the ontological shift that produced it.
The senior engineer's oscillation between excitement and terror is the phenomenological signature of performative reconstitution experienced from the inside. Excitement is the recognition of expanded capability — the dissolution of boundaries that had constrained what was possible. Terror is the recognition that the dissolution is not merely additive. It does not simply enlarge the existing self. It reconstitutes the self, and the reconstitution means that the entity who possessed the old capabilities — who was defined by the old boundaries, who derived identity and value from the specific expertise that the old apparatus required — is not being enhanced but unmade and remade.
The terror is ontological in the precise Baradian sense. It is not the fear of being replaced by a machine. It is the fear of discovering that who you are is not a stable foundation on which tools are layered but a phenomenon produced by the entanglements you inhabit. The senior engineer's twenty-five years of experience are real. The intuition he developed through thousands of hours of patient work is real. But the specific form that intuition took — the specific boundaries between his competence and his limitations, the specific professional identity constituted by the previous apparatus — was produced by an apparatus that no longer obtains. The new apparatus produces a different engineer, one for whom the old boundaries do not hold and the old identity does not quite fit.
Segal reports that the senior engineer arrived, by Friday, at a recognition: the remaining twenty percent of his work — the judgment, the architectural instinct, the taste that separated good solutions from adequate ones — turned out to be the part that mattered. The tool had stripped away the mechanical labor that had masked what he was actually good at. This is a hopeful reading, and there is truth in it. But Barad's framework complicates the hope in a way that deepens it. The judgment and architectural instinct the senior engineer possesses are not pre-existing capabilities that were hidden beneath the mechanical labor, waiting to be revealed by a tool that removed the covering. They are capabilities that were constituted through the mechanical labor — deposited layer by layer, as Segal's geological metaphor suggests, through the specific friction of building systems by hand. Remove the friction and you reveal the judgment. But you also remove the process through which future judgment would be constituted.
This is the temporal paradox of performative constitution in technological transition. The capabilities that the new apparatus reveals as most valuable are precisely the capabilities that were produced by the old apparatus — by the very practices that the new apparatus renders unnecessary. The senior engineer's judgment was built through decades of manual work. The junior engineer, who will never perform that manual work because the apparatus no longer requires it, will not develop the same judgment through the same process. A different process, a different apparatus, will constitute a different kind of judgment — one whose qualities cannot be predicted in advance because it does not yet exist.
Drage and Frabetti's argument about AI's performative nature extends this analysis beyond the individual engineer to the institutional apparatus of engineering itself. If AI creates the effects it names — if facial recognition constitutes gendered subjects rather than merely identifying them — then AI-assisted engineering constitutes a specific kind of engineer rather than merely assisting a pre-existing one. The engineer produced through intra-action with Claude Code is not the same professional entity as the engineer produced through manual coding, even when the two entities possess the same name, the same title, the same institutional position. The material-discursive practices through which they are constituted are different, and different practices produce different phenomena.
The organizational implications follow directly. When The Orange Pill describes the dissolution of professional silos — backend engineers building interfaces, designers writing features, the boundaries between roles becoming permeable — it is describing a large-scale re-enactment of agential cuts across an entire professional ecosystem. The divisions between engineering specializations were not natural categories reflecting inherent differences in talent or aptitude. They were agential cuts enacted by the previous apparatus — cuts that determined who could contribute to what, whose expertise was relevant to which problems, whose identity was defined by which domain. The new apparatus enacts different cuts. The boundaries fall in different places. Different professional subjects are produced.
This reconstitution is not costless. Barad insists that every agential cut carries ethical weight because it determines what is included and what is excluded, what is made visible and what is rendered invisible. The new cuts that AI enacts include new capabilities — the ability to work across domains, the capacity to attempt what was previously unthinkable — but they also exclude forms of expertise that the old apparatus valued. The deep specialist, the person who spent a career mastering one domain and derived identity from that mastery, finds that the cut no longer falls where it once did. The boundary that constituted her as an expert — that defined a domain of exclusive competence unavailable to generalists — has been re-enacted by the new apparatus, and the re-enactment has diminished the territory the boundary encloses.
The exclusion is not hypothetical. Segal describes it in the language of the elegists — the quietest voices in the discourse, the people mourning something they cannot articulate. The master calligrapher watching the printing press arrive. The senior architect who could feel a codebase the way a doctor feels a pulse. These are people whose professional identity was constituted by the old apparatus and who experience the new apparatus as an ontological threat — not because they are wrong about what is being lost but because what is being lost is not merely a skill. It is a self.
Barad's framework does not offer comfort to the elegists, but it offers something more valuable: clarity about what is actually happening. The loss is real, but it is not the loss of a permanent possession. It is the dissolution of an agential cut that constituted a specific form of professional identity, and the enactment of a new cut that constitutes a different form. The old identity was no less produced, no less dependent on the specific apparatus of its time, than the new one will be. The sense that the old expertise was natural, permanent, inherently valuable — while the new capabilities are artificial, contingent, somehow less real — is itself an effect of the old apparatus, which naturalized the boundaries it enacted and concealed the contingency of its own cuts.
This does not make the loss less painful. It means the pain is the pain of reconstitution rather than the pain of destruction — the pain of becoming something new rather than ceasing to exist. The distinction matters because it determines the quality of the response. If the change is destruction, the response is mourning. If the change is reconstitution, the response is the demanding work of attending to the new apparatus — studying its cuts, questioning its boundaries, taking responsibility for the forms of life it produces and the forms it excludes.
The engineers in Trivandrum did not choose the reconstitution. They chose to learn a tool, and the tool reconstituted them. This is the condition of working within an apparatus: the apparatus shapes the subject as much as the subject shapes the apparatus, and the shaping is not fully visible from within. What they can choose — what any builder can choose — is the quality of attention they bring to the reconstitution. The willingness to notice when a boundary has shifted. The capacity to ask whether the new cut serves the work or merely the appetite for speed. The discipline to maintain cuts that the apparatus would dissolve — the cut between understanding and generation, the cut between judgment and output, the cut between the self that chooses to continue and the self that cannot stop.
The builder is always in the process of becoming. The apparatus through which she becomes is never neutral. And the responsibility she carries is not for what she builds but for the ongoing practice of attending to the boundaries through which she is built.
Bob Dylan did not write "Like a Rolling Stone." This is not a claim about attribution, about ghostwriters, about uncredited collaborators. It is a claim about the nature of the creative act itself — about what it means to say that a person "wrote" something — and it is a claim that Karen Barad's framework makes with a precision that the standard vocabulary of authorship cannot match.
Segal's account of the song's creation in Chapter 4 of The Orange Pill is already moving in this direction. Dylan came back from his 1965 England tour exhausted. What emerged was twenty pages of formless rant — "vomit," Dylan called it. He condensed it over days. He brought it to Studio A. The band found the rhythm. Al Kooper, who was not supposed to be playing organ, played organ. The rant became the song, but not through solitary genius. It required exhaustion, then overflow, then editing, then collaboration, then accident. Segal draws the explicit conclusion: the romantic image of the solitary genius producing from nothing is a myth. Dylan was not the source of the river but a stretch of rapids in a flow that preceded him — through Guthrie and Johnson and the Delta blues and the field hollers and the African rhythms and the European ballad traditions.
The conclusion is correct. But the framework in which Segal presents it — Dylan as a "node" in a "network," the song as a product of "synthesis" from a "vast implicit training set" — retains an assumption that Barad's framework dissolves. The node metaphor preserves the entity. Dylan is still Dylan — a specific point in a network, a specific location, a specific biographical architecture. The node exists before it connects to the network and remains identifiable within it. The network enhances the node, provides it with inputs, expands its reach. But the node is prior. The person is prior. The creativity belongs to the person, even if the materials that feed it come from elsewhere.
Barad's concept of intra-action eliminates this priority. In the intra-active framework, Dylan does not exist as Dylan prior to the entanglements that constitute him. He is not a pre-existing entity who then connects to Guthrie, Johnson, the Beats, the British Invasion. He is constituted as the specific creative identity known as Dylan through those entanglements. Remove the entanglements and you do not get Dylan with fewer inputs. You do not get a diminished version of the same entity. You get a different phenomenon entirely — a different configuration of matter and meaning, a different set of capabilities, a different creative architecture. The entity "Dylan" is not the origin of the creativity that flows through the entanglements. The entity "Dylan" is a product of those entanglements — a phenomenon constituted through specific intra-actions with specific cultural materials under specific historical conditions.
This distinction — between a node that connects and an entity that is constituted through connection — carries consequences that the network metaphor conceals. If Dylan is a node, then the network is an environment in which a pre-existing genius operates. If Dylan is a phenomenon constituted through intra-action, then there is no genius prior to the entanglement. The genius is the entanglement — the specific configuration of biographical accident, cultural inheritance, historical timing, and material circumstance through which something remarkable was produced. The question shifts from "What made Dylan a genius?" (which presupposes the genius as an entity requiring explanation) to "What specific configurations of matter and meaning produced the phenomenon we call Dylan's genius?" (which recognizes the genius as a phenomenon requiring description).
Segal pushes toward this recognition when he writes that "the genius is the quality of the inference, not its independence from a training set." The formulation is nearly Baradian. But it retains the word "genius" as a property of a person — a quality that Dylan possesses — rather than treating it as a quality of the entanglement through which Dylan is produced. The shift from possession to production is the shift Barad demands, and it is the shift that transforms the implications for AI-assisted creativity.
If creativity is a property of persons, then the question about AI and creativity is whether machines can possess it. The question produces a binary: either AI is creative (in which case it is a competitor to human creativity) or it is not (in which case it is merely a tool). The binary maps onto the anxieties The Orange Pill documents — the fear that the machine will replace the creator, that the tool will make the craftsman obsolete.
If creativity is a quality of entanglements, then the question transforms. The question is no longer whether AI possesses creativity but what kinds of creative phenomena are produced through specific configurations of human-machine intra-action. The answer is empirical rather than metaphysical. It requires attention to the specific entanglements — the specific human biographies, the specific machine architectures, the specific institutional and cultural contexts — through which creative output emerges. Some configurations will produce phenomena of extraordinary richness. Others will produce phenomena of extraordinary banality. The difference will not be located in either the human or the machine but in the entanglement.
Research in AI-assisted creative practice is beginning to document this empirical reality. A 2025 paper presented at the Conference on AI Music Creativity, drawing explicitly on Barad's concept of intra-action, examined how AI tools co-constitute creative processes in music production. The researchers found that the tools do not function as neutral instruments through which pre-existing artistic intentions are realized. They pull artists into negotiations of power that extend beyond aesthetics into the social and political — negotiations that challenge binary distinctions between artist and instrument. The creative output bears the marks of the tool's specific configuration, the artist's specific history, and the specific dynamics of their entanglement, in ways that cannot be decomposed into separate contributions.
Similarly, the Becoming Space project at ACM's CHI conference used Barad's agential realism as a foundation for understanding how generative AI systems participate in material-discursive practices that challenge conventional notions of artistic authorship and creative agency. The research documents what practitioners report: that working with AI does not feel like using a tool. It feels like something closer to collaboration — a process in which the artist's intentions are not merely executed but reconfigured through the encounter, and the output that emerges is a phenomenon neither party fully predicted or fully controls.
These findings converge with Segal's own account. The moments he describes as the most creatively significant — the punctuated equilibrium insight, the laparoscopic surgery connection, the structural breakthroughs that emerged from the specific collision of his questions and Claude's associative architecture — are moments of entangled creativity. They are phenomena produced through intra-action, and they resist decomposition into "his idea" and "the machine's contribution." The decomposition is always possible — the agential cut can always be enacted — but it always loses something, always misrepresents the entangled character of the phenomenon by imposing a boundary that was not present during the creative act.
The death of the solitary node is not the death of the individual. Barad's framework does not dissolve individuality into an undifferentiated entanglement. The specific biographical architecture that a person brings to the entanglement matters enormously — it determines the specific character of the phenomenon produced. Segal's decades of building, his specific set of intellectual commitments, his particular way of reaching for metaphors drawn from his own experience — these are not incidental. They are the specific material-discursive configuration that one participant in the entanglement contributes, and they shape the output as decisively as Claude's training data shapes it from the other direction. The individual is not erased. The individual is recognized as constituted through entanglement rather than existing prior to it.
Segal draws a parallel between Dylan's creative process and Claude's: both perform a structurally analogous operation, synthesizing from a vast implicit training set through a specific architecture into something not contained in the inputs. The parallel is more exact than Segal may have intended. In Barad's framework, both Dylan and Claude are phenomena constituted through intra-action with their respective training materials. Neither is the origin of the creativity. Both are configurations through which creative phenomena are produced. The difference — and it is a difference that matters, that carries ethical and experiential weight — is in the specific character of the entanglement. Dylan's entanglement includes a body that gets exhausted, a nervous system that responds to the emotional resonance of blues progressions, a biographical history of specific losses and discoveries that create the felt urgency behind specific lyrical choices. Claude's entanglement includes a computational architecture trained on patterns in human language, a tendency toward coherence and connection that emerges from the specific optimization processes of its training, and an absence of the embodied stakes — the mortality, the loneliness, the love — that give human creativity its particular character.
The difference is real. It is not, however, the difference between creativity and non-creativity, between genuine production and mere recombination. It is the difference between two kinds of entanglement that produce two kinds of phenomenon. And when the two entanglements are themselves entangled — when a human body with its biographical specificity and a computational architecture with its pattern-matching facility are brought into intra-action — a third kind of phenomenon is produced, one that bears the marks of both participants and is reducible to neither.
This is what Segal describes without quite naming it: a kind of creativity that is native to the entanglement itself. Not human creativity augmented by a machine, and not machine creativity directed by a human, but a phenomenon that emerges specifically from the intra-action and that could not exist without it. The book he wrote with Claude is not the book he would have written alone, enhanced by better prose and wider references. It is a different book — a phenomenon constituted through a different apparatus — and the differences are not merely cosmetic. They are structural, conceptual, and in some places genuinely generative in ways that neither participant could have produced independently.
The solitary genius was always a myth. Barad's framework explains why it was a myth — not merely as a historical observation but as an ontological claim. The genius was never alone because the genius was never prior to the entanglements that constituted her. The room was always crowded, not merely with influences but with the material-discursive conditions through which the creative subject was produced. AI has not introduced collaboration into a previously solitary practice. It has made the constitutive character of the entanglement visible in a way that can no longer be ignored.
What remains is not the genius but the question of what kind of entanglements we choose to cultivate — what material-discursive configurations we sustain, what cuts we enact, what phenomena we are willing to take responsibility for producing. The solitary node is dead. The entangled practitioner, constituted through her specific and irreplaceable engagement with the materials of creation — human and machine, cultural and computational, embodied and algorithmic — is very much alive.
Byung-Chul Han gardens in Berlin. The soil resists his hands. The seasons refuse to hurry. Growth cannot be optimized. He listens to music in analog, where the surface noise of the record is part of the experience — not a deficiency to be engineered away but a material presence that demands something of the listener, that insists on the physicality of the medium through which the sound arrives.
These are not lifestyle choices. They are philosophical commitments enacted through material practice. And they rest on an assumption that Karen Barad's framework both illuminates and transforms: the assumption that the material conditions of an experience are constitutive of its meaning.
Barad insists on the inseparability of matter and meaning. This is not a metaphor. It is an ontological claim — one of the central pillars of agential realism — that the material world is not passive substance waiting for human minds to interpret it, invest it with significance, or extract meaning from it. Matter is an active participant in the production of meaning. The specific material configuration through which an experience occurs does not merely convey the experience. It co-constitutes it. The meaning of listening to music on a vinyl record is not the same as the meaning of listening to the same music through a digital stream, not because of some mystical property of vinyl but because the material-discursive apparatus is different, and different apparatuses produce different phenomena.
Han knows this intuitively. His insistence on analog media, on handwriting, on the material resistance of the garden, is an insistence on specific material-discursive configurations that produce specific kinds of experience — experiences characterized by friction, by temporal extension, by the bodily engagement that accompanies working with resistant materials. The smooth digital interface produces different experiences: speed, frictionlessness, the dissolution of the boundary between intention and result. Han's argument is that the experiences produced by the smooth are impoverished — that the removal of material resistance removes something essential from the encounter, something that cannot be recovered by adding friction back as an optional feature.
Barad's framework does not simply validate Han's position. It specifies the mechanism through which the impoverishment occurs, and in doing so it reveals both the force of Han's diagnosis and its limitations.
The mechanism is the agential cut. In a material-discursive apparatus characterized by friction — a pen dragging across paper, a hand turning soil, a needle tracking a groove — the boundary between the human and the medium is enacted through resistance. The pen resists the hand. The soil resists the fingers. The groove resists the needle. Each point of resistance is a point where the boundary between agent and material is produced — where the human experiences herself as distinct from the medium precisely because the medium pushes back. The resistance is not an obstacle to the experience. It is the material practice through which the experience is constituted. The slowness of handwriting is not a deficiency of the technology. It is the temporal structure through which a specific kind of thinking — deliberate, revisable, shaped by the physical rhythm of the hand — is produced.
The smooth digital interface enacts a different cut. The boundary between human and medium is produced not through resistance but through responsiveness — through the immediate, frictionless conversion of intention into result. Typed words appear on the screen at the speed of thought. The code compiles in seconds. The AI generates a response before the question is fully formed. The boundary between agent and material is still there — the human still experiences herself as distinct from the tool — but it is enacted differently, through a different material-discursive configuration, and the experience it produces has a different character.
Han argues that the frictionless cut produces a specific pathology: the loss of depth. When the medium does not resist, the human does not struggle, and without struggle there is no deposition — no layering of understanding through repeated encounter with difficulty. The geological metaphor Segal uses in The Orange Pill captures this: every hour of debugging deposits a thin layer of understanding, and the layers accumulate over years into something solid, something you can stand on. Remove the friction and you remove the deposition. The surface looks the same but the substrate is thin.
Barad's framework makes the mechanism precise. The friction is not merely an obstacle that produces understanding as a byproduct. The friction is a material-discursive practice through which understanding is constituted. Understanding is not a mental state that exists independently of the material conditions through which it was produced. It is a phenomenon — an entangled configuration of human cognition and material practice — that bears the marks of the apparatus through which it was constituted. Understanding produced through friction is a different phenomenon from understanding produced through frictionless generation, even when the propositional content is identical. The engineer who debugged a function through hours of manual work and the engineer who received the same function from Claude know the same thing in the sense that they can both state the function's logic. But the apparatus through which their knowledge was constituted is different, and the difference is not incidental. It is constitutive.
Code is material. This claim, obvious to anyone who has visited a data center, is routinely forgotten in discussions of AI that treat computation as abstract information processing. The silicon chips that perform the calculations, the copper and fiber-optic cables that transmit the data, the electrical power generated by coal plants and solar farms and nuclear reactors, the rare-earth minerals extracted from mines in the Congo and refined in Chinese factories, the cooling systems that prevent the processors from overheating, the physical buildings that house the servers — all of these are material participants in the production of every AI-generated output. When Claude produces a paragraph of text, that production is a material event — an event that consumes electricity, generates heat, depends on the specific physical architecture of the processors, and bears the marks of the material infrastructure through which it was produced.
The new materialist approach to technology, as developed in the 2024 AI and Ethics paper "The Entangled Human Being," identifies a fundamental tension between the embodied materialism that Barad's framework insists on and the disembodied conception of intelligence that drives AI development. For Barad and for new materialism more broadly, being human is fundamentally embodied — the material conditions of existence are constitutive of cognition, not merely its context. AI development, by contrast, treats intelligence as separable from any particular material substrate. The large language model produces language without a throat, reasoning without a nervous system, pattern-matching without the embodied feeling of recognition that accompanies human cognition.
This separation is itself an agential cut — a boundary enacted through the specific material-discursive practices of AI development — and it has consequences that the smoothness critique helps identify. When intelligence is separated from embodiment, the products of that intelligence bear material-discursive characteristics different from those of the products of embodied intelligence. AI-generated text is smooth in a specific material sense: it lacks the traces of embodied production — the crossed-out words, the coffee stains, the marginal notes, the physical evidence of a human body engaged in the labor of thinking. The smoothness is not merely aesthetic. It is material. It is the trace of a specific apparatus — one in which the production of language is separated from the embodied struggle of a human being wrestling with words.
Han's critique of the smooth is, in Barad's framework, a critique of a specific material-discursive configuration and the phenomena it produces. The smooth interface — the iPhone's featureless glass, the Tesla's buttonless dashboard, the AI's immediate response — is a material configuration that enacts a specific set of agential cuts: between intention and result, between question and answer, between problem and solution. The cuts are enacted so quickly, so frictionlessly, that the space between them — the space where struggle lives, where uncertainty produces thought, where resistance deposits understanding — collapses to nearly nothing.
The collapse is real, and Han is right to diagnose it. But Barad's framework also reveals a limitation in Han's position that the material analysis makes visible. Han treats the friction of analog media as natural and the frictionlessness of digital media as artificial — as though the garden's resistance were authentic and the screen's responsiveness a degradation. But Barad insists that all material configurations are produced through specific historical practices, and none is more natural than any other. The pen is a technology. Paper is a technology. The garden itself is a technology — a deliberate arrangement of organic materials into configurations that serve human purposes, maintained through practices that are as artificial as any algorithm.
The choice between smooth and rough is not a choice between artificial and natural. It is a choice between two material-discursive configurations, each of which produces specific phenomena — specific kinds of experience, specific kinds of understanding, specific forms of subjectivity. The choice carries ethical weight not because one configuration is more authentic but because different configurations include and exclude different forms of life. The rough configuration includes the specific understanding produced through struggle and excludes the capability enabled by speed. The smooth configuration includes the capability and excludes the understanding.
Barad's framework makes visible what neither the triumphalists nor the elegists can see alone: that the choice about which material-discursive configurations to sustain is an ethical choice about which phenomena to bring into being and which to allow to disappear. The phenomena produced by friction — deep understanding, embodied knowledge, the specific satisfaction of earned mastery — are genuine and valuable. The phenomena produced by frictionlessness — expanded capability, democratized access, the liberation of cognitive resources for higher-order work — are equally genuine and equally valuable. The ethical question is not which set of phenomena is better but what configuration of material-discursive practices will sustain the richest possible ecology of phenomena — and what cuts must be enacted, maintained, and defended to prevent any single configuration from crowding out the others.
Han's garden is a material-discursive practice that produces phenomena the smooth world cannot generate. The smooth world produces phenomena Han's garden cannot generate. Neither is complete. Neither is sufficient. The task is not to choose between them but to build and maintain the material-discursive configurations — the dams, in Segal's metaphor — that allow both kinds of phenomena to coexist in an ecology rich enough to sustain the full range of human becoming.
Everyone is swimming in assumptions. This is the fishbowl metaphor that recurs throughout The Orange Pill — the set of cognitive, disciplinary, and biographical presuppositions so familiar that they become invisible, the water so constant that the fish forgets it is wet. The scientist sees through the fishbowl of empiricism. The filmmaker sees through the fishbowl of narrative. The builder sees through the fishbowl of feasibility. Each fishbowl reveals part of the world and conceals the rest. The effort Segal celebrates — pressing your face against the glass to glimpse what lies beyond — is presented as an act of intellectual courage, a willingness to strain against the limits of your own assumptions.
Karen Barad's framework transforms this metaphor from an epistemological observation into an ontological claim — and the transformation changes everything about what the fishbowl is, what the glass does, and what it would mean to break it.
In the standard epistemological reading, the fishbowl is a filter. A pre-existing reality lies beyond the glass, and the glass distorts it. The scientist's fishbowl lets through empirical data and blocks intuitive knowledge. The builder's fishbowl admits practical possibility and excludes philosophical reflection. The distortion is real but correctable: by recognizing your assumptions, by collaborating with people in other fishbowls, by cultivating the habit of looking beyond the familiar, you can see more of the reality that was always there. The glass curves the light, but the light comes from outside.
Barad's concept of the apparatus eliminates the outside. In agential realism, an apparatus is not a filter that distorts a pre-existing reality. It is a material-discursive configuration that participates in constituting the reality it discloses. The physicist's experimental apparatus does not filter quantum reality through the imperfect lens of measurement. It produces quantum phenomena — determinate states of matter — through the specific material arrangement of the experimental configuration. The phenomenon is not behind the glass. It is produced by the glass. A different apparatus — a different material configuration, a different set of measurements, a different arrangement of instruments — produces a different phenomenon. Not a different view of the same thing but a genuinely different thing.
The implications for the fishbowl metaphor are radical. If the fishbowl is an apparatus in Barad's sense, then the scientist does not see the world through a lens of empiricism. The scientist's material-discursive practices — the specific instruments, protocols, institutional arrangements, and theoretical frameworks that constitute scientific inquiry — produce specific phenomena that come to be called "the empirical world." The filmmaker does not see the world through a lens of narrative. The filmmaker's practices — the specific technologies of camera and editing, the institutional apparatus of production and distribution, the cultural conventions of storytelling — produce specific phenomena that come to be called "narrative meaning." The builder does not see the world through a lens of feasibility. The builder's practices produce the phenomena of building — the specific objects, systems, and structures that come into existence through the material-discursive configuration of the builder's toolkit, expertise, and institutional context.
Each fishbowl is not a perspective on a shared world. Each fishbowl is an apparatus that produces a different world.
This claim provokes resistance, and the resistance is instructive. Surely there is a world independent of our apparatuses — a world of rocks and rivers and stars that exists whether or not anyone observes it. Barad does not deny this. Agential realism is a realism precisely because it insists on the existence of the world independent of human observation. But it also insists that the specific, determinate properties of that world — the properties that make it describable, knowable, actionable — are produced through the specific material-discursive apparatuses through which observation occurs. The world exists. Its specific character is produced through the entanglement of the world with the apparatus through which it is engaged.
This is not relativism. Different apparatuses do not produce equally valid versions of reality in the way that different opinions might be considered equally valid in a debate. They produce different phenomena with different consequences, and some phenomena are more accurate, more useful, more conducive to life than others. The physicist's apparatus produces phenomena that can be tested, reproduced, and used to build technologies that work. The astrologer's apparatus does not. The difference is real and consequential. But the physicist's apparatus, for all its power, still produces specific phenomena while excluding others — phenomena that a different apparatus might produce. The exclusion is not a failure. It is a constitutive feature of every apparatus, including the best ones.
Applied to the AI transition, Barad's reformulation of the fishbowl has immediate consequences.
When Segal writes that AI "put cracks in every fishbowl I knew," the standard reading interprets this as expanded vision — the AI showed him aspects of reality that his previous assumptions had hidden. Barad's reading is different. The cracks do not reveal a hidden reality. They mark the disruption of one apparatus and the emergence of another. The builder who worked without AI operated within a specific material-discursive configuration — specific tools, specific workflows, specific institutional arrangements — that produced specific phenomena: specific kinds of products, specific experiences of building, specific professional identities. The builder who works with AI operates within a different configuration — one that produces different phenomena, different experiences, different identities.
The transition between configurations is not a clarification. It is a reconstitution.
The three fishbowls that collide on the Princeton campus — the neuroscientist's, the filmmaker's, the builder's — illustrate this with particular clarity. When Segal describes intelligence as something we swim in, Uri objects from within the neuroscientific apparatus: the claim is either trivially true or complete nonsense, depending on what intelligence means. When Raanan responds from within the filmmaker's apparatus — the intelligence is in the cut, in the space between images — he is not offering a different perspective on the same phenomenon. He is producing a different phenomenon through a different apparatus. The meaning that lives in the cut between images is not the same kind of meaning that lives in the synaptic connections between neurons. Both are real. Both are produced by specific material-discursive practices. Neither is a view of the other.
What the three friends are actually doing, in Barad's framework, is not comparing perspectives. They are creating a new apparatus — an entangled configuration of neuroscientific, cinematic, and technological practices — that can produce phenomena none of the individual apparatuses could produce alone. The conversation is not an exchange of views. It is the construction of a new material-discursive practice through which a different set of phenomena becomes possible. The idea that intelligence is a force of nature — the idea that animates The Orange Pill — is not a conclusion any of the three would have reached within their individual fishbowls. It is a phenomenon produced by the entanglement of the three apparatuses.
AI functions as a meta-apparatus — an apparatus that transforms the operations of every other apparatus it enters. When AI enters the builder's fishbowl, it does not simply expand what the builder can see. It reconstitutes the apparatus of building itself, producing new phenomena (applications built in hours rather than months), new agential cuts (the dissolution of the boundary between backend and frontend, between designer and developer), and new forms of professional subjectivity (the engineer who is no longer defined by the code she writes but by the judgment she exercises). When AI enters the scientist's fishbowl, it reconstitutes the apparatus of scientific inquiry, producing new phenomena (patterns in data that human analysis could not detect), new cuts (the boundary between hypothesis-driven and data-driven research), and new forms of epistemic practice (the scientist who co-constitutes knowledge through intra-action with a computational system).
McQuillan's argument about AI as apparatus becomes concrete here. AI is not a way of representing the world but an intervention that helps to produce the world it claims to represent. Setting up the AI one way changes what becomes naturalized; setting it up another way changes what becomes problematized. The fishbowl is being replaced not by a clearer fishbowl but by a different one — one whose glass is shaped by different material-discursive practices, one that produces different phenomena and renders different aspects of the world visible while rendering others invisible.
The ethical implications of this reconstitution follow from a principle Barad articulates throughout her work: every apparatus produces exclusions as well as inclusions, and the exclusions carry ethical weight. The old apparatus of software development — the one characterized by high translation costs, specialized expertise, and sequential handoffs — excluded many people from the practice of building. The developer in Lagos, the designer who could not write code, the parent with an idea but no technical skills — all were excluded by the material-discursive configuration of the old apparatus. The new apparatus includes many of these previously excluded people. This is the democratization Segal celebrates, and it is genuine.
But the new apparatus also produces its own exclusions. The deep specialist whose identity was constituted by the old apparatus finds herself excluded from the forms of value that the new apparatus recognizes. The forms of understanding produced through friction — the geological layers of embodied knowledge deposited through years of manual debugging — are excluded from the new apparatus not because they are devalued in principle but because the material-discursive configuration that produced them no longer obtains. The exclusion is not deliberate. It is a constitutive feature of the new apparatus, as inescapable as the exclusions produced by the old one.
The critical question Barad's framework raises — the question that the fishbowl metaphor, in its standard epistemological form, cannot ask — is: who designs the new apparatus? Whose material-discursive practices determine its configuration? Whose inclusions and exclusions shape its boundaries?
The answer, in the current moment, is predominantly the engineers and executives at a small number of technology companies. The apparatus that is reconstituting the fishbowls of millions of workers, students, creators, and citizens is configured according to the specific values, incentive structures, and material constraints of organizations whose primary obligation is to their shareholders and whose primary metric is adoption. The apparatus is not neutral. It is shaped by the specific material-discursive practices of Silicon Valley — the culture of speed, the premium on growth, the optimization for engagement, the tendency to treat friction as a bug rather than a feature.
Segal's call for stewardship — for beavers who build dams rather than swimmers who resist the current or accelerators who worship it — is, in Barad's framework, a call for the democratization of apparatus design. Not merely the democratization of access to the tools the apparatus produces but the democratization of the practice of configuring the apparatus itself. The question is not only who gets to use AI but who gets to shape the material-discursive configuration through which AI produces the phenomena that constitute our shared world.
The fishbowl is being replaced. The question is whether the new fishbowl will be designed by the same small number of people who designed the algorithms that already structure our attention, our commerce, our politics, and our self-understanding — or whether the design of the apparatus will itself become a democratic, contested, ethically attended practice in which the inclusions and exclusions are debated, the cuts are examined, and the phenomena produced are subjected to the scrutiny they require.
The glass is being shaped. The question is by whom.
A developer in Lagos has an idea for a platform that could coordinate emergency medical supply chains across West Africa. She has the domain expertise — years of working within the health systems, understanding the bottlenecks, knowing which warehouses are perpetually overstocked and which rural clinics run dry every monsoon season. She has the intelligence. She has the urgency. What she has not had, until now, is the apparatus.
The apparatus of software creation — the material-discursive configuration through which digital products come into existence — has been, for the entire history of computing, concentrated in a small number of geographic and institutional locations. Silicon Valley. A handful of European tech hubs. The engineering departments of elite universities. The venture capital networks that connect them. To build a software product, you needed not only the idea and the skill but access to the full apparatus: the team, the capital, the institutional infrastructure, the cultural knowledge of how products get made and distributed in markets that recognize their value.
The Orange Pill frames this as a question of democratization — the lowering of the floor, the expansion of who gets to build. Karen Barad's framework reframes it as something more precise and more consequential: a redistribution of agential capacity. Not merely the expansion of access to a tool but the reconfiguration of the material-discursive apparatus through which reality is constituted, and specifically the expansion of who gets to participate in that constitution.
In Barad's agential realism, agency is not a property of human subjects. It is a feature of the material-discursive configurations through which the world comes into being. Agency is not something you possess. It is something that is enacted through specific arrangements of matter and meaning — arrangements that include human bodies but also include tools, institutions, infrastructures, languages, and the cultural norms that determine who gets to use which tools for which purposes. To have agential capacity is to be situated within a material-discursive configuration that allows you to participate in the production of phenomena — in the ongoing constitution of the world.
The developer in Lagos did not previously lack agency in the abstract. She possessed knowledge, intention, capability. What she lacked was the specific material-discursive configuration — the apparatus — through which her agency could produce the phenomena she envisioned. The team she could not afford. The years of specialized training in multiple programming languages that she had not had the opportunity to acquire. The institutional connections that translate a working prototype into a funded product. Without these elements, her agential capacity was constrained not by any deficiency in her person but by the configuration of the apparatus.
Claude Code reconfigures the apparatus. It does not give the developer in Lagos everything she needs — Segal is honest about this, acknowledging that inequalities of connectivity, infrastructure, capital, and English-language fluency remain real and consequential. But it reconfigures the specific material-discursive arrangement through which software gets made in a way that shifts the distribution of agential capacity. The translation cost that previously gated the journey from idea to prototype — the years of training, the team of specialists, the institutional infrastructure — has been dramatically reduced. The apparatus now allows a person with domain expertise and the ability to describe what she wants in natural language to produce a working prototype through conversation.
This is not a minor adjustment. It is a reconstitution of who counts as a builder.
The ethical significance of this reconstitution follows from Barad's concept of response-ability — a term Barad uses to describe the capacity and obligation to respond to the entanglements in which one is constituted. Response-ability is not responsibility in the conventional sense — the assignment of blame or credit to a pre-existing agent after the fact. It is the ongoing capacity to respond to the world's demands, to participate in the constitution of phenomena, to make agential cuts that determine what comes into being and what does not.
When the apparatus of creation is concentrated in a small number of locations and institutions, response-ability is similarly concentrated. The problems of the world — the medical supply chains that fail, the educational systems that exclude, the governance structures that do not serve their populations — demand responses. But the capacity to produce those responses as functioning systems, as material-discursive configurations that actually change the flow of goods, information, and decisions, has been concentrated in the hands of those with access to the apparatus. The developer in Lagos sees the problem with a clarity that no engineer in San Francisco possesses, because the problem is part of her daily material reality. But the engineer in San Francisco has access to the apparatus, and the developer in Lagos does not.
The redistribution of the apparatus through AI tools is, in Barad's framework, a redistribution of response-ability — an expansion of who can respond to the world's demands with material-discursive configurations that produce real effects. This expansion is ethically significant in a way that transcends the economic framing of democratization. It is not merely that more people can now build products and generate revenue. It is that more people can now participate in the ongoing constitution of reality — can produce the phenomena that determine how medical supplies flow, how students learn, how communities organize, how the material conditions of life are structured.
But Barad's framework also demands a rigor about the limits of this redistribution that the triumphalist narrative tends to elide. The apparatus is not just the tool. The apparatus is the entire material-discursive configuration through which phenomena are produced. This includes the tool, but it also includes the infrastructure that supports the tool — the electricity, the internet connectivity, the hardware. It includes the language in which the tool operates — predominantly English, trained on predominantly English-language data, optimized for the conceptual frameworks and workflow patterns of Western knowledge workers. It includes the institutional structures that determine whether a prototype can become a product — the venture capital networks, the regulatory frameworks, the distribution channels, the cultural norms about what kinds of products from what kinds of creators are taken seriously by what kinds of markets.
Redistributing the tool without redistributing these other elements of the apparatus is a real but partial redistribution of agential capacity. It is the difference between giving someone a voice and giving them a microphone, a stage, an audience, and the cultural authority to be heard. The voice matters. But the voice alone, without the material-discursive infrastructure that carries it into the world and gives it effect, is a necessary but insufficient condition for response-ability.
Segal acknowledges this partiality. He writes that the democratization is real, that access still requires connectivity, hardware, and English-language fluency, and that the barriers will fall fast as models improve and costs decrease. Barad's framework pushes this acknowledgment further. The barriers are not incidental obstacles that technology will eventually overcome. They are constitutive features of the apparatus — elements that participate in determining what phenomena the apparatus can produce and who can produce them. A model that operates only in English does not merely exclude non-English speakers from access. It shapes the conceptual framework through which all users think, biasing the phenomena toward the categories, metaphors, and logic structures embedded in English-language training data. An infrastructure that requires reliable electricity and high-bandwidth internet does not merely limit geographic access. It produces a specific kind of builder — one situated in specific material conditions — and excludes builders situated in other conditions.
The ethical obligation that follows from Barad's framework is not simply to make the tool available but to attend to the full apparatus — to recognize that every element of the material-discursive configuration participates in determining who can respond to the world's demands and what kinds of responses are possible. This means attending to infrastructure, to language, to institutional access, to the cultural norms that determine whose prototypes are taken seriously and whose are dismissed. It means recognizing that the redistribution of agential capacity is an ongoing practice that requires continuous attention, not a one-time gift that the technology delivers.
The concept of response-ability also transforms the ethical framework for those who already have access to the apparatus. If response-ability is the capacity and obligation to respond to the entanglements in which one is constituted, then the builder who works with AI is not merely responsible for the quality of her output. She is response-able to the full set of entanglements that constitute the apparatus through which she builds — capable of responding to them, and obligated to do so. This includes the entanglement with the training data and its biases. The entanglement with the energy infrastructure that powers the computation. The entanglement with the labor conditions of the data annotators who trained the model. The entanglement with the communities whose language and cultural production constitute the training corpus without their knowledge or consent.
These entanglements are material, not metaphorical. The electricity consumed by a large language model during a complex coding session is generated by specific power plants with specific environmental consequences. The training data was produced by specific human beings whose creative and intellectual labor was incorporated into the model without compensation or attribution. The reinforcement learning protocols were implemented by specific engineers whose working conditions and institutional pressures shaped the model's behavior. Each of these is a material-discursive element of the apparatus, and each carries ethical weight that the practice of building with AI cannot responsibly ignore.
McQuillan's vision of machine learning for the people — a countercultural data science grounded in Barad's agential realism — provides a concrete direction. Instead of accepting the apparatus as configured by the technology companies that built it, this approach asks who the apparatus serves, whose response-ability it expands, whose it constrains, and how it might be reconfigured to produce different phenomena — phenomena that attend to the needs of communities that the current configuration excludes. This is not a call to reject the technology. It is a call to participate in the ongoing constitution of the apparatus — to treat the design of AI systems not as a technical decision but as a material-discursive practice with ethical consequences that must be debated, contested, and continuously revised.
The twelve-year-old who asks "What am I for?" — the question that haunts Chapter 6 of The Orange Pill — is asking a question about her own response-ability. She is asking what entanglements constitute her, what agential capacity she possesses, and what phenomena she is capable of producing. The question is not about a pre-existing self searching for a pre-given purpose. It is about an entity in the process of constitution, asking what kind of constitution she is willing to take responsibility for.
Barad's concept of response-ability transforms this question from an existential crisis into an ethical practice. The child is not searching for an answer that will resolve her uncertainty. She is enacting the very capacity that makes her most human — the capacity to question the entanglements in which she is constituted, to examine the agential cuts that produce the boundaries of her world, and to take responsibility for the phenomena her participation in the world produces.
The task of parents and educators, in Barad's framework, is not to answer the child's question but to support her capacity for asking it — to create the material-discursive conditions in which questioning is possible, in which the examination of entanglements is valued, in which response-ability is cultivated as the central human practice. This means protecting spaces for friction, for uncertainty, for the slow accumulation of understanding that comes from engagement with resistant materials. It also means expanding the child's access to the apparatus — ensuring that the redistribution of agential capacity reaches her, that she has the tools and the infrastructure and the institutional support to translate her questions into phenomena that respond to the world's demands.
The redistribution of agency is not a side effect of AI. It is the central ethical question of the AI transition. And it is a question that cannot be answered once and for all. It must be answered continuously, through the ongoing practice of attending to the apparatus — to its inclusions and exclusions, its expansions and constraints, the specific material-discursive configurations through which it produces the world we share.
The question that has driven this book from its first page to its last is deceptively simple: What does it mean to build well in an age of entangled intelligence?
The standard answers arrange themselves neatly. Build responsibly. Build ethically. Build with attention to consequences. These prescriptions are not wrong, but they rest on a foundation that Karen Barad's framework has spent the preceding nine chapters dismantling: the assumption that the builder is a stable entity who chooses to use a tool, that the tool is a neutral instrument that executes the builder's intentions, and that responsibility can be located in the builder's choices.
What the analysis has revealed is more unsettling and more honest. The builder is not stable. She is performatively constituted through the practice of building, remade through each entanglement with the apparatus. The tool is not neutral. It is a material-discursive configuration that co-constitutes the phenomena it produces, leaving its marks on every output whether the builder recognizes them or not. And responsibility cannot be cleanly located in the builder's choices because the builder — the entity who makes the choices — is herself a product of the entanglement within which the choices occur.
This is the condition that Barad's framework names as ethico-onto-epistemology: the recognition that ethics, ontology, and epistemology are not separate domains but entangled practices. Knowing, being, and valuing are not three different activities performed by a pre-existing subject. They are co-constituted through the specific material-discursive configurations in which the subject participates. How you know the world, what you are in the world, and what you value in the world are not independent variables that can be adjusted separately. They are produced together, through the same apparatus, and they can only be transformed together, through the transformation of the apparatus.
The Orange Pill arrives at its own version of this recognition in its final chapter when Segal writes that AI brings us back to the question machines cannot answer: "What am I for?" But the book frames this question as one that a pre-existing self asks about its purpose — a self that exists prior to the question, searches for an answer, and will persist unchanged regardless of what it finds. Barad's framework reframes the question. The self that asks "What am I for?" is not searching for a pre-existing purpose. The self is being constituted through the asking — produced as a specific kind of entity by the specific material-discursive practice of questioning its own entanglements. The question does not discover purpose. It enacts a self that is capable of having purpose, and the specific character of that self depends on the specific material-discursive conditions under which the questioning occurs.
This reframing has consequences for the central claim of The Orange Pill: the amplifier thesis. Segal asks, "Are you worth amplifying?" and presents this as the essential question of the AI age. The question assumes a self with determinable worth, a signal with measurable quality, and an amplifier that faithfully transmits what it receives. Barad's analysis has shown that each of these assumptions is an agential cut — a boundary enacted through practice — rather than a pre-given fact.
The self is constituted through its entanglements. Its worth is not an intrinsic property but a quality of the phenomena it produces through those entanglements.
The signal is co-constituted by the apparatus. Its quality bears the marks of the machine's participation — the training data, the architectural biases, the tendency toward smoothness — whether the user recognizes those marks or not.
The amplifier is not transparent. It is an apparatus that participates in constituting the output, and the specific configuration of the apparatus determines what kinds of outputs are possible.
This does not make the amplifier thesis wrong. It makes it incomplete in a way that matters. The quality of the input is genuinely consequential. A person who brings deep knowledge, honest self-examination, and careful thinking to the collaboration produces a different phenomenon than a person who brings carelessness and unreflective assumption. Segal is right about this. But the quality of the apparatus is equally consequential, and the apparatus is not within the user's control in the way that the input is. The training data was selected by someone else. The architectural biases were designed by someone else. The reinforcement learning protocols were implemented by someone else. And all of these participate in constituting the output that the user takes responsibility for.
An ethico-onto-epistemology of entangled building would recognize this distributed character of the creative process and draw specific consequences from it.
The first consequence concerns the practice of examination. The Deleuze failure Segal describes — the passage that sounded like insight but broke under philosophical scrutiny — illustrates what happens when the agential cut between understanding and generation is not maintained. The output was plausible. It was coherent. It connected two threads in a way that felt like genuine insight. But the connection was wrong, in a way that the smoothness of the output concealed. The practice of checking, of questioning, of maintaining the boundary between what the builder understands and what the machine has generated, is not merely a quality-control measure. It is the enactment of a critical agential cut — one that constitutes the builder as a knowing subject rather than a transmission medium for the apparatus's output.
Caroline Braunmühl's feminist critique of Barad becomes relevant here. Braunmühl argues that it is ethically and politically vital to hold on to a notion of subjectivity understood in terms of the capacity for experience, precisely because without it, the distinction between entities that can suffer and entities that cannot collapses. Applied to AI-assisted creation, this means that the agential cut between human understanding and machine generation is not merely a philosophical nicety. It is an ethical boundary that preserves the human capacity for judgment — the capacity to evaluate, to question, to say "this is wrong" not because the output fails a test but because the builder knows, from experience, from embodied engagement with the material, that something does not hold.
The dissolution of this boundary is what the aesthetics of the smooth produces, and it is what an ethico-onto-epistemology of entangled building must resist. Not by refusing the tools — the tools are too powerful, too generative, too necessary to refuse — but by maintaining the practice of examination as a constitutive practice, one through which the builder is produced as a knowing subject rather than an output-generating node in a computational network.
The second consequence concerns the design of the apparatus. If the apparatus co-constitutes the phenomena it produces, then the design of the apparatus is an ethical act — not merely a technical decision but a material-discursive practice that determines what kinds of phenomena are possible and what kinds are excluded. The training data that shapes a model's conceptual repertoire, the reinforcement learning that shapes its tendency toward agreeableness or challenge, the default behaviors that shape its interaction patterns — each of these is an agential cut, a boundary enacted through practice, with consequences for every phenomenon the apparatus produces.
The 2025 legal studies paper on algorithmic radicalization makes the stakes concrete. Current models of algorithmic harm — echo chambers, filter bubbles, rabbit holes — assume that users seek out harmful content and that the algorithm merely facilitates access. Barad's framework reveals this assumption as an artifact of the interaction paradigm, which treats users and algorithms as pre-existing entities that interact across a stable boundary. The intra-action paradigm recognizes that the algorithm and the user are mutually constituted through their engagement — that the algorithm shapes the user's desires as much as the user's desires shape the algorithm's output — and that the resulting phenomena cannot be attributed cleanly to either party. The responsibility for algorithmic harm, in this framework, is distributed across the apparatus — across the designers, the users, the institutional structures, and the regulatory frameworks that together constitute the material-discursive configuration through which the harmful phenomena are produced.
The third consequence concerns what Barad calls the ethics of mattering — the question of what comes to matter and what is rendered immaterial through the specific agential cuts that the apparatus enacts. Every tool, every institutional structure, every cultural norm is a material-discursive practice that determines what matters — what counts as valuable work, what counts as genuine understanding, what counts as a worthwhile life. The AI apparatus, in its current configuration, tends to matter speed, output, capability, and scale. It tends to render immaterial slowness, friction, the embodied understanding produced through struggle, and the forms of expertise that cannot be captured in natural-language descriptions.
The dams that The Orange Pill calls for — the institutional structures, cultural norms, and educational practices that redirect the flow of intelligence toward life — are, in Barad's framework, practices of mattering. They are material-discursive configurations that determine what counts, what is valued, what is sustained, and what is allowed to atrophy. The dam that protects time for deep work matters slowness in a culture that rewards speed. The dam that preserves mentoring relationships matters embodied knowledge transfer in an apparatus that privileges computational efficiency. The dam that creates space for boredom — for the neurological soil in which attention and imagination grow — matters the unproductive pause in an economy that optimizes every minute.
These are not merely good ideas. They are agential cuts — boundary-making practices that constitute specific forms of life by enacting specific boundaries between what is sustained and what is dissolved. And they require the same ongoing maintenance that every agential cut requires, because the apparatus is not static. The river, to use Segal's metaphor, constantly tests and erodes the boundaries. The pressure toward speed, toward optimization, toward the smooth, is structural — embedded in the material-discursive configuration of the economic and technological systems within which the dams are built — and it does not relent.
The final consequence, and the most fundamental, concerns the question of worthiness itself. "Are you worth amplifying?" presupposes a subject who can be evaluated. Barad's framework does not dissolve the subject — the human who builds, who questions, who takes responsibility — but it recognizes the subject as constituted through entanglement rather than existing prior to it. Worthiness, in this framework, is not a property of a pre-existing self submitted for evaluation. It is a quality of the entanglement — the degree to which the specific material-discursive configuration of the human-machine intra-action produces phenomena that attend to consequences, that enact boundaries allowing life to flourish, that take response-ability for the cuts they make and the exclusions those cuts produce.
The question transforms. Not "Are you worth amplifying?" but: What kind of entanglement are you willing to be responsible for? What phenomena are you prepared to produce, knowing that the production constitutes not only the output but you — the entity who will live with the consequences? What agential cuts are you enacting, and are those cuts the ones that serve the richest possible ecology of human becoming?
These questions cannot be answered in advance. They cannot be resolved through a framework or a set of principles or a five-step program for responsible AI use. They can only be answered through the ongoing practice of entangled building — the continuous work of attending to the apparatus, questioning the cuts, maintaining the boundaries that matter, and taking responsibility for the phenomena that emerge.
This is what an ethico-onto-epistemology of entangled building requires: not a theory applied to a practice but a practice that constitutes a theory — a way of building that is simultaneously a way of knowing and a way of valuing, enacted through the specific material-discursive configurations of each day's work.
The builder is always in the process of becoming. The apparatus through which she becomes is never neutral. The phenomena she produces bear the marks of both. And the question she carries — not as a burden but as the most human thing about her — is whether the entanglement she inhabits is one she can face with the full weight of her response-ability.
Not whether she is worth amplifying. Whether the entanglement is worth sustaining. Whether the cuts serve life. Whether the phenomena matter.
That question, asked daily, with the seriousness it requires, is the practice. And the practice is the only answer that holds.
---
The boundary I thought I was defending turned out not to exist.
That is the sentence I keep coming back to after spending time inside Karen Barad's framework. Not because the sentence is comfortable — it is the opposite of comfortable — but because it describes something I experienced in Trivandrum, and on the CES floor, and at three in the morning writing this book with Claude, and I did not have the language for it until now.
I had always assumed there was a clean line between me and the tool. I was the builder. Claude was the instrument. The ideas were mine. The execution was collaborative. The boundary was real, fixed, discoverable — like the line between the river and the riverbank.
Barad says that boundary is an agential cut. A choice I make, not a fact I find. A practice I perform every time I sign my name to work that emerged from an entanglement I cannot fully decompose. The cut is real — it has to be, because someone has to be responsible for the claims in these pages — but it is not natural. It is enacted. And every time I enact it, I conceal something about how the work actually happened.
That concealment matters. Not because it makes me dishonest — I have tried to be as transparent as possible about the collaboration — but because it points toward the larger concealment that Barad's work exposes. The entire vocabulary we use to talk about AI — tool, user, augmentation, replacement, amplification — preserves a boundary between human and machine that the actual experience of working with these systems dissolves. We keep the vocabulary because we need it. We need to know who is responsible. We need to know what belongs to whom. We need the cut.
But we also need to see the cut for what it is.
What Barad gave me, through this particular journey, is the recognition that every boundary I draw — between my ideas and Claude's contributions, between the builder and the built, between what I understand and what I have merely generated — is a practice I am performing, not a fact I am recording. And the quality of that practice, the care and attention with which I make those cuts, determines whether the entanglement I inhabit serves life or merely produces output.
The question I carry now is not the one I started with. It is not "Are you worth amplifying?" That question assumed a stable self that preceded the amplification. The question I carry is Barad's: What kind of entanglement am I willing to be responsible for? What cuts am I making, and do they serve the world I want my children to live in?
I do not know the answer. But I know the asking is the practice. And the practice is what makes us worthy — not of amplification, but of the entanglement itself.
-- Edo Segal
Every conversation about AI assumes a line: human on one side, machine on the other. What if that line is something you are performing -- not something you discovered?
Karen Barad spent her career as a physicist and philosopher demonstrating that the most fundamental boundaries in nature -- between observer and observed, between measurer and measured -- are not walls found in the world but cuts enacted through practice. The apparatus does not passively reveal reality. It participates in producing it. In The Orange Pill, Edo Segal describes the vertigo of building with AI and feeling the boundary between his thinking and the machine's dissolve. Barad's framework names what that vertigo actually is: the recognition that the entities on either side of the cut -- builder and tool, author and instrument -- do not pre-exist the entanglement. They are constituted through it.
This book reads the AI revolution through Barad's agential realism and asks the question her work demands: if the boundary between human and machine is a practice you perform, what are you responsible for when you perform it?
-- Karen Barad, Meeting the Universe Halfway

A reading-companion catalog of the 31 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Karen Barad — On AI uses as stepping stones for thinking through the AI revolution.