Simon Schaffer — On AI
Contents
Cover
Foreword
About
Chapter 1: The Moment Nobody Saw
Chapter 2: The Instrument and the Fact
Chapter 3: The Invisible Hands
Chapter 4: Making Up the Orange Pill
Chapter 5: The Community That Sees
Chapter 6: The Controversy Study
Chapter 7: The Priesthood and Its Instruments
Chapter 8: The River's Hidden Channels
Chapter 9: Discovery Beyond the Individual
Epilogue
Back Cover

Simon Schaffer

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Simon Schaffer. It is an attempt by Opus 4.6 to simulate Simon Schaffer's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The instrument I trusted most was the one I understood least.

Not Claude. I knew Claude was a black box — everyone who uses it knows that. The instrument I never examined was the measurement itself. When I stood in that room in Trivandrum and watched my engineers achieve what I called a twenty-fold productivity multiplier, I was reporting what I saw. What I did not ask — what it never occurred to me to ask — was how much of that fact was produced by the lens I was looking through.

Twenty-fold compared to what? Measured how? By whose definition of productivity? Over what timescale? With what assumptions about what counts as output and what counts as noise?

These are not hostile questions. They are the questions that Simon Schaffer has spent four decades teaching the world to ask about every scientific instrument, every experimental result, every moment that a community decides something real has happened. Schaffer is a historian of science at Cambridge, and his life's work has been excavating the hidden architecture beneath discoveries that appear, from the outside, to have simply occurred. The air pump did not reveal the vacuum. The air pump, plus the social machinery of witnessing that surrounded it, produced the vacuum as a scientific fact. The telescope did not show Galileo Jupiter's moons. The telescope, plus the community Galileo painstakingly constructed to credit what the telescope showed, produced the moons as astronomical knowledge.

The distinction sounds academic. It is not. It is the most practically urgent question you can ask about the AI moment: How much of what we are seeing is the technology, and how much is the apparatus of seeing?

I wrote in *The Orange Pill* about the river of intelligence and the beaver who builds dams. Schaffer made me ask who dug the riverbed. Not the water — the channels. The institutional decisions, the corporate strategies, the invisible labor of millions of data labelers and content creators whose work was absorbed into the training data that gives these models their capabilities. Those channels were not carved by nature. They were dug by hands, and the hands are mostly invisible, and the invisibility is not an accident.

This book will not make you distrust AI. It will make you distrust your certainty about AI — which is more valuable, because certainty that does not understand its own construction is the most dangerous kind.

Schaffer gave me a gift I did not want and cannot return: the recognition that the orange pill moment I described is real *and* constructed, and that understanding the construction is the only honest foundation for the stewardship I advocated.

The instrument and the fact are never separate. They never were. Understanding that changes everything about how you build.

-- Edo Segal · Opus 4.6

About Simon Schaffer

1955–present

Simon Schaffer (1955–present) is a British historian of science at the University of Cambridge, where he has taught in the Department of History and Philosophy of Science since 1985. Born in London and educated at Cambridge and at Harvard under the supervision of Gerald Holton, Schaffer is best known for *Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life* (1985), co-authored with Steven Shapin, which fundamentally reshaped the field of science studies by demonstrating that Robert Boyle's experimental demonstrations succeeded not through evidence alone but through the social construction of credible witnessing. His subsequent essays — including "Babbage's Intelligence" (1994), "Glass Works: Newton's Prisms and the Uses of Experiment" (1989), and "Self Evidence" (1992) — extended this analysis to the hidden labor behind scientific instruments, the politics of demonstration, and the social processes through which discoveries are constituted as knowledge. Schaffer's recurring themes include the invisibility of skilled labor in knowledge production, the co-production of instruments and the facts they measure, and the political genealogy of concepts like "intelligence" and "discovery." He presented and co-produced the BBC documentary series *Light Fantastic* (2004) and has served on the advisory board of the Leverhulme Centre for the Future of Intelligence. His work remains foundational to the sociology of scientific knowledge and the history of the physical sciences.

Chapter 1: The Moment Nobody Saw

In 1866, Gregor Mendel published a paper on the hybridization of peas. The paper described, with meticulous quantitative precision, the laws governing the inheritance of traits across generations of Pisum sativum. It was presented to the Natural History Society of Brno, printed in the society's proceedings, and distributed to libraries and scientific institutions across Europe. Mendel sent copies to several prominent botanists. The paper was not suppressed. It was not hidden. It was available, in print, to anyone who cared to read it.

Almost nobody did. For thirty-four years, the paper sat in the published record like a letter addressed to a recipient who had not yet been born. When Hugo de Vries, Carl Correns, and Erich von Tschermak independently arrived at similar conclusions around 1900, they found Mendel's paper waiting for them — fully formed, precisely argued, containing the mathematical formalism that their own work had been groping toward. The rediscovery of Mendel is one of the most celebrated episodes in the history of science, typically narrated as a story about a genius ahead of his time. But as Simon Schaffer's framework insists, the more revealing question is not why Mendel was brilliant. The question is why nobody could see what he had shown them.

The answer is not that the scientific community of the 1860s was stupid. The answer is that the community lacked what Schaffer, in his work on the social construction of scientific knowledge, identifies as the conditions for recognition — the interpretive frameworks, institutional structures, and communal practices that allow an experimental result to register as a discovery rather than as an anomaly, a curiosity, or simply noise. Mendel's paper was legible as a contribution to hybridization studies, a modest subfield of botany. It was not legible as the foundation of a new science of heredity, because the science of heredity did not yet exist as a recognized category of inquiry. The paper needed a community that was asking the questions it answered, and in 1866, that community had not yet formed. The experimental results were there. The recognition was not.

Schaffer has spent four decades demonstrating that this pattern — discovery existing in the world without being recognized as discovery — is not an aberration but the norm. In Leviathan and the Air-Pump, the foundational work he co-authored with Steven Shapin in 1985, Schaffer showed that Robert Boyle's experimental demonstrations with the air pump succeeded not because the experiments spoke for themselves but because Boyle painstakingly constructed a community of credible witnesses who could certify what the experiments showed. The facts did not precede the community. The community produced the facts, through specific social practices of witnessing, testimony, and collective assent. The air pump was a machine, but it required a social apparatus — the Royal Society, its conventions of gentlemanly discourse, its particular standards of who counted as a reliable witness — to convert its mechanical outputs into scientific knowledge.

Applied to the moment Segal describes in The Orange Pill — the winter of 2025, when AI capabilities seemed to cross a threshold that transformed the technology from impressive to transformative — Schaffer's framework performs a specific and unsettling operation. It does not deny that something changed. It insists that what changed was not only, or even primarily, the technology. What changed was the community's capacity to recognize what the technology had become.

The capabilities that Segal describes as arriving in December 2025 did not materialize from a vacuum. Transformer architectures had been published in 2017. Large language models had been scaling in capability since GPT-2 in 2019. The ability to generate functional code from natural-language descriptions had been demonstrated by GitHub Copilot in 2021 and had improved steadily through 2022, 2023, and 2024. The trajectory was continuous. Each model was better than the last, and each improvement was documented, benchmarked, and publicly available. There was no hidden moment, no classified breakthrough, no sudden discontinuity in the technical record.

What there was, in the winter of 2025, was a convergence of social conditions that produced recognition. The accumulated experience of early adopters had reached a critical mass. The demonstration effects — the viral posts, the revenue curves, the confessions of engineers who could not stop building — had constructed a body of testimony that functioned, in Schaffer's terms, the way Boyle's witnessing community functioned: as a social apparatus for certifying that something real had happened. The Google engineer who posted "I am not joking and this isn't funny" was not reporting a measurement. She was performing a public act of witnessing, contributing to the collective construction of the moment as a moment — as a threshold, a phase transition, something that demanded to be named.

Segal calls this the orange pill: the recognition from which there is no return. Schaffer's framework suggests that the pill's color was not a property of the technology. It was a property of the community's readiness to see what the technology had been showing them, in gradually increasing resolution, for years. The pill changes color when the community is prepared to swallow it.

This distinction matters because it shifts analytical attention from the technology to the social machinery of recognition. If the orange pill moment were simply a technical threshold — a specific capability level that, once crossed, produced inevitable recognition — then the appropriate response would be purely technical: study the capability, measure it, deploy it. But if the moment was produced by the convergence of technical capability with social readiness, then understanding the moment requires understanding the social conditions as carefully as the technical ones. And the social conditions include precisely the elements that technical analysis tends to ignore: who was doing the demonstrating, to whom, through what channels, with what institutional backing, and with what frameworks of interpretation already in place.

Schaffer's 1994 essay "Babbage's Intelligence," published in Critical Inquiry, provides the historical depth that this analysis requires. In that essay, Schaffer traced the genealogy of the concept of "intelligence" itself, showing that in early nineteenth-century Britain, the word simultaneously referred to information received from external sources (as in "military intelligence") and to the cognitive capacity to process and interpret that information. Charles Babbage's calculating engines operated at the intersection of these two meanings: they were machines designed to receive data and process it according to rules, and their cultural intelligibility as "intelligent" machines depended on a specific set of social conditions — the factory system, the division of labor, the devaluation of human computational work — that made the analogy between mechanical calculation and human thought seem natural rather than absurd.

The same double meaning operates in the contemporary AI moment. When Segal describes Claude as an intelligence that can "hold my intention in one hand and the connections between ideas in the other," the plausibility of this description depends not only on what Claude can technically do but on the social conditions that make the attribution of intention-holding to a machine seem reasonable rather than metaphorical. Those conditions include the accumulated decades of AI research that have normalized the language of machine intelligence, the specific interface design choices that present the model's outputs as conversational responses rather than statistical predictions, and the community of builders whose testimonies construct the machine as a collaborator rather than a tool.

None of this makes the capabilities less real. Schaffer's point, developed across his entire body of work, is not that socially constructed knowledge is false knowledge. It is that all knowledge is socially constructed, and understanding the construction is a prerequisite for understanding the knowledge. Boyle's air pump really did demonstrate the existence of a vacuum. Mendel's peas really did reveal the laws of inheritance. The AI models of 2025 really did produce functional code from natural-language descriptions with a sophistication that previous models had not achieved. The construction does not negate the reality. The construction is the mechanism by which the reality becomes available to a community as knowledge.

The practical consequence of this analysis is that the orange pill moment is not over. It is still being constructed. The book you are reading now — The Orange Pill — is part of the construction. So is every conference presentation, every viral tweet, every corporate earnings call that frames AI as transformative. And so is every critical voice that challenges the framing, every elegist who insists that the losses be named, every skeptic who asks whether the productivity gains are real or merely redefined. The moment is being negotiated, in real time, by competing communities with competing frameworks and competing interests.

The Mendel case offers a cautionary lesson. The thirty-four years during which Mendel's results sat unrecognized were not a neutral interval. During those decades, biological research proceeded without the framework that Mendel's findings would have provided. Questions that could have been asked were not asked. Research programs that could have been launched were not launched. The cost of delayed recognition was measured not in the paper's invisibility but in the work that was not done because the community could not yet see what the paper made possible.

The AI moment carries an analogous risk, but in reverse. If the recognition is premature — if the community certifies a threshold that the technology has not actually crossed, or certifies it in terms that distort what the technology actually does — then the cost is measured in the misallocation of resources, the misdirection of effort, the construction of expectations that the technology cannot sustain. The construction of the moment is not a spectator sport. It is an act with consequences, and the quality of the construction determines the quality of the consequences.

Schaffer's work does not tell us whether the orange pill moment is being constructed accurately or prematurely. Historical analysis cannot settle that question in real time. But it can insist on a discipline that the triumphalist narrative tends to skip: the discipline of asking not just "What can this technology do?" but "What social conditions made it possible for us to see what this technology does?" and "Whose frameworks of interpretation are shaping how we see it?" and "What might we be failing to see because the dominant framework does not have room for it?"

The moment nobody saw — Mendel's paper, waiting in the published record for a community that could recognize its significance — is a reminder that seeing is not a passive reception of information. Seeing is a social achievement, produced by specific communities under specific conditions, and the conditions are as important as the thing being seen. The AI moment is real. The question Schaffer's framework poses is not whether it is real but whether we have built the community capable of seeing it clearly — seeing not just the capability but the construction, not just the river but the channels that were dug by specific hands, for specific purposes, at specific costs.

The history of science suggests that clarity comes slowly, after the initial excitement has subsided and the social machinery of recognition has been examined as carefully as the discovery it produced. That examination is the work of the chapters that follow.

Chapter 2: The Instrument and the Fact

In 1660, Robert Boyle invited a small group of gentlemen to witness an experiment. Inside a glass vessel from which air had been evacuated by a mechanical pump, a bird died. The gentlemen observed the bird's death, discussed what they had seen, and certified, through their collective testimony, that the experiment demonstrated the existence of a vacuum and the necessity of air for life. The demonstration was persuasive. The resulting consensus formed one of the foundation stones of experimental natural philosophy.

But Schaffer's analysis, developed in Leviathan and the Air-Pump, reveals the layers of construction concealed beneath the apparent simplicity of the demonstration. The air pump was not a transparent window onto nature. It was a complex, expensive, temperamental machine that frequently leaked, that required skilled technicians to operate, and that produced results only under specific conditions that Boyle carefully controlled. The "fact" of the vacuum was not observed directly. It was inferred from the behavior of the bird, through the mediating apparatus of the pump, within a framework of interpretation that Boyle had constructed through previous publications, demonstrations, and the cultivation of a witnessing community whose social standing guaranteed the credibility of their testimony.

The instrument did not reveal the fact. The instrument, together with the social apparatus that surrounded it, produced the fact. And the fact, once produced, validated the instrument — confirmed that the air pump was a reliable tool for investigating nature. The circularity was not a flaw in Boyle's method. It was the method. Every scientific instrument operates within this circle: the instrument produces results that are interpreted through a framework that the instrument's prior results helped to establish. The barometer did not discover atmospheric pressure. It produced atmospheric pressure as a scientific concept, through a circular process of measurement, interpretation, and the construction of consensus about what the measurements meant.

Schaffer's career has been devoted to making this circularity visible — not to debunk science but to reveal the social labor that scientific facts require. The facts are real. The vacuum is real. Atmospheric pressure is real. But their reality as scientific knowledge, as facts that a community accepts and builds upon, is produced through specific instruments, specific social practices, and specific frameworks of interpretation that are themselves historically contingent.

Applied to AI, this analysis targets a specific and important claim in The Orange Pill: the twenty-fold productivity multiplier that Segal reports from his engineering team's work in Trivandrum. In Segal's account, the claim emerges from direct experience: twenty engineers, equipped with Claude Code, producing work that would previously have required a team many times larger. The claim is presented as an observation — something measured, witnessed, verified through the practical test of working code deployed to real users.

Schaffer's framework does not challenge the observation. It asks what produced it. A twenty-fold productivity multiplier is not a pre-existing fact waiting in nature to be discovered. It is a fact constructed through a specific instrument — Claude Code — within a specific framework of measurement that defines what counts as productivity, what counts as equivalent output, and what counts as a meaningful comparison between the old process and the new.

Consider what the measurement requires. "Productivity" must be defined. In Segal's account, it is defined implicitly as output per person per unit of time: the number of features shipped, the volume of functional code produced, the speed with which a working system moves from conception to deployment. This definition is not wrong. It captures something real about what changed in that room in Trivandrum. But it is a definition, not a neutral observation, and the definition shapes what the measurement can see.

What the definition captures: the acceleration of implementation. Code that would have taken weeks now takes hours. Features that required sequential handoffs between specialized roles now emerge from single individuals working across domains. The translation cost that Segal describes — the friction of converting human intention into computational instruction — has been dramatically reduced.

What the definition does not capture: the quality of the understanding that accompanies the implementation. The depth of architectural knowledge that the engineers are or are not building through the accelerated process. The long-term reliability of systems built at unprecedented speed. The organizational knowledge that accumulates through slower, more deliberate development processes — knowledge about what breaks, what users actually need, what hidden dependencies emerge only under sustained use. These dimensions of productivity are real but difficult to quantify, and they tend to disappear from frameworks of measurement that privilege speed and volume.

The instrument — Claude Code — shapes the fact it helps produce. Like Boyle's air pump, it does not passively reveal a pre-existing reality. It creates a new kind of output and a new framework for evaluating output, and the framework is calibrated to the instrument's strengths. The AI tool excels at rapid generation of functional code across multiple domains. A measurement framework that defines productivity as rapid generation of functional code across multiple domains will inevitably show dramatic gains. The gains are real within that framework. But the framework itself is an artifact of the instrument, not a neutral lens through which pre-existing productivity is observed.

This is not a debunking argument. The history of science is full of instruments that produced genuine knowledge through precisely this kind of circular validation. The thermometer defined temperature as the thing thermometers measure, and the resulting framework proved extraordinarily productive for physics, chemistry, and engineering. The telescope defined celestial observation as the thing telescopes make visible, and the resulting framework transformed humanity's understanding of the cosmos. An instrument's co-production of the facts it measures is not disqualifying. It is the normal mechanism through which scientific knowledge is generated. But understanding the mechanism prevents the mistake of treating instrument-produced facts as though they were instrument-independent truths.

Schaffer's essay "Glass Works: Newton's Prisms and the Uses of Experiment," published in 1989, provides a particularly instructive parallel. Schaffer demonstrated that Newton's famous prism experiments — the demonstrations that established the heterogeneity of white light — depended not only on the prisms but on a specific set of experimental practices, display conventions, and rhetorical strategies that Newton developed to make his results persuasive to the Royal Society. Newton's opponents were not irrational. They had their own instruments, their own experimental practices, their own frameworks for interpreting what prisms did to light. The controversy was resolved not by the evidence alone but by the social processes through which Newton's framework achieved dominance over competing frameworks.

The AI moment exhibits a structurally identical dynamic. The demonstrations that construct the orange pill moment — Segal's Trivandrum sprint, the Google engineer's viral post, the revenue curves of Claude Code — are not raw data points passively registering an objective transformation. They are performances, in the precise sense that Schaffer uses the term: carefully staged presentations of capability designed to produce conviction in a witnessing community. The Trivandrum sprint was real work producing real output, but its narrative construction as a demonstration of twenty-fold productivity is a rhetorical act that selects certain features of the experience for emphasis (speed, breadth, individual leverage) while backgrounding others (the years of engineering expertise the team brought to the task, the institutional infrastructure that supported the sprint, the specific and potentially non-generalizable conditions under which the multiplier was achieved).

Schaffer's 1992 essay "Self Evidence" extends this analysis to the politics of demonstration itself. Scientific demonstrations, Schaffer argued, are never simply displays of fact. They are performances that construct credibility by staging the relationship between the demonstrator, the instrument, and the audience in ways that produce trust. The credibility of the demonstration depends not only on what is shown but on who shows it, to whom, in what setting, and with what prior claims to authority. Boyle's demonstrations succeeded in part because the witnesses were gentlemen whose social standing guaranteed the reliability of their testimony. Newton's demonstrations succeeded in part because Newton controlled the experimental conditions with a rigor that precluded alternative interpretations.

The demonstrations that construct the AI moment operate through analogous mechanisms. Segal's credibility as a demonstrator rests on his biography — three decades at the technology frontier, specific companies built and sold, a track record that licenses the claim "I have seen this before, and this time is different." The Google engineer's credibility rests on her institutional affiliation — the implicit argument that if a principal engineer at Google is astonished, the astonishment is warranted. The revenue curves function as a form of quantitative witnessing — numbers that perform objectivity by appearing to stand outside the subjective impressions of individual demonstrators.

Each of these credibility mechanisms is legitimate. Segal's experience is real. The Google engineer's expertise is real. The revenue curves measure real transactions. But in Schaffer's framework, legitimate credibility is still constructed credibility. It is produced through specific social mechanisms — biography, affiliation, quantification — that could have been organized differently and that produce conviction through their social authority, not merely through the evidence they present.

The practical implication is that the AI moment, like every scientific revolution Schaffer has studied, is being constructed through instruments and demonstrations that co-produce the facts they claim to reveal. The twenty-fold multiplier is real within the framework of measurement that the instrument enables. The orange pill moment is real within the framework of recognition that the demonstrations have constructed. But neither the multiplier nor the moment is instrument-independent. They are products of specific tools, specific measurement practices, specific demonstration strategies, and specific communities of credibility — all of which could have been configured differently, and all of which shape the reality they appear merely to report.

Understanding this does not diminish the AI moment. It disciplines the understanding. It insists that claims about what the technology does must be accompanied by analysis of the frameworks through which those claims are produced — frameworks that are themselves artifacts of the technology, not neutral vantage points from which the technology can be objectively assessed. The instrument and the fact are co-produced. They always have been. The air pump and the vacuum, the prism and the spectrum, Claude Code and the twenty-fold multiplier. The history of science teaches us to examine the co-production, not because the facts are false, but because understanding how facts are made is a prerequisite for understanding what they mean.

Chapter 3: The Invisible Hands

In the winter of 1795, the Astronomer Royal Nevil Maskelyne dismissed his assistant David Kinnebrook from the Royal Observatory at Greenwich. The dismissal was not for incompetence or misconduct. It was for a discrepancy. Kinnebrook's recorded observations of stellar transit times differed from Maskelyne's by approximately half a second — a deviation that Maskelyne attributed to Kinnebrook's personal failing, an inability to observe with the precision the position demanded.

The incident would have been forgotten entirely had the German astronomer Friedrich Bessel not noticed it two decades later and realized something that Maskelyne had been unable to see: the discrepancy was not a personal error. It was a systematic difference in the reaction times of different observers — what Bessel termed the "personal equation." Every human observer introduced a characteristic delay between the moment a star crossed the telescope's wire and the moment the observer registered the crossing. The delay varied from person to person but was consistent within each individual. Maskelyne's observations were not more accurate than Kinnebrook's. They were differently biased. The "correct" time was an artifact of whichever observer's personal equation happened to be treated as the standard.

Schaffer, in his detailed analysis of this episode and its broader implications for the history of observation, draws out the point that matters: the observer's body was part of the instrument. The observatory produced knowledge not through the telescope alone but through the compound apparatus of telescope, clock, record book, and the trained human body that connected them. And the human component — the most variable, most vulnerable, most socially determined part of the apparatus — was precisely the component whose contribution was rendered invisible by the conventions of scientific publication. The published observations bore the Astronomer Royal's name. The labor of the assistants who gathered, computed, and verified the data was systematically erased.

This erasure — the rendering invisible of the human labor that makes knowledge systems function — is the thread that runs through Schaffer's entire body of work. In essay after essay, study after study, he has excavated the hidden workers of science: the human computers who performed the calculations that astronomers published under their own names; the instrument makers whose mechanical skill was essential to every experimental result but who received no credit in the published papers; the technicians, preparators, demonstrators, and assistants whose daily labor maintained the institutional infrastructure within which celebrated discoveries were certified.

Robert Hooke, the most accomplished experimental philosopher of seventeenth-century England, served as Robert Boyle's operator — the skilled technician who actually built and ran the air pump while Boyle designed the experiments and received the credit. Hooke was, by any measure, Boyle's intellectual equal and perhaps his superior in experimental skill. But the social conventions of Restoration England made it difficult for a man of Hooke's social standing to claim the authority of a natural philosopher. The division between the gentleman who designed and the mechanic who executed was not merely a practical arrangement. It was an epistemological hierarchy: the gentleman's testimony counted because his social position guaranteed disinterestedness, while the mechanic's labor was necessary but epistemically invisible.

Schaffer's framework, applied to the contemporary AI moment, exposes the same pattern operating at a vastly larger scale.

The AI systems that Segal celebrates in The Orange Pill (Claude Code producing functional software from natural-language descriptions, holding a builder's intention and returning it clarified, serving as an intellectual partner in the production of a book) present themselves through an interface that performs autonomy. The user types a prompt. The system responds with coherent, sophisticated, contextually appropriate output. The interaction feels like a conversation between two minds. The interface design reinforces this impression: the chat format, the natural-language responses, the conversational memory that allows the system to maintain context across exchanges. The system appears to be a singular intelligence engaging in dialogue.

Behind this appearance lies a labor infrastructure of extraordinary scale and specificity, and its invisibility is not accidental. It is produced by the same mechanisms that rendered Hooke invisible behind Boyle, Kinnebrook invisible behind Maskelyne, the human computers invisible behind the astronomers whose publications bore their names.

The training data that gives a large language model its capabilities is derived from the accumulated textual production of millions of human beings — writers, programmers, scientists, journalists, forum contributors, bloggers, poets, bureaucrats — whose work was absorbed into training datasets through web-scraping processes that did not seek or receive individual consent. These creators are to the AI system what the observers were to the observatory: essential contributors whose labor is systematically erased by the conventions through which the system's outputs are attributed. When Claude produces a paragraph of elegant prose, the prose draws on patterns internalized from millions of texts produced by millions of hands. The interface attributes the output to "Claude." The hands are invisible.

The next layer of invisible labor is more proximate and more troubling. The data labelers who prepared training datasets — categorizing images, tagging text, identifying toxic content, rating output quality — performed the cognitive work that allowed the model to learn the distinctions it applies. This labor was overwhelmingly performed by workers in Kenya, India, the Philippines, and other countries where wages are low enough to make the per-task economics viable. Time magazine's reporting on OpenAI's content moderation workforce in Kenya documented workers earning less than two dollars per hour to review and categorize content that included graphic descriptions of violence, sexual abuse, and self-harm. The psychological cost of this labor is documented and ongoing. The workers are to the AI system what the instrument makers were to the Royal Society: essential producers of the conditions under which the system's capabilities become possible, rendered invisible by the very conventions that present the system as autonomous.

Schaffer, at a 2018 panel at Cambridge's Leverhulme Centre for the Future of Intelligence, asked directly: "What is the political genealogy of AI?" The question was not rhetorical. It demanded an answer that would trace the line of invisible labor from Babbage's human computers — the women and men who performed mathematical calculations by hand in the nineteenth century, whose title was literally "computer" before the word was transferred to the machines that replaced them — through the telephone switchboard operators, the keypunch operators, the data entry clerks, and the early software testers, to the contemporary data labelers and content moderators whose work makes contemporary AI possible.

In his foundational 1994 essay "Babbage's Intelligence," Schaffer showed that Charles Babbage's calculating engines were not conceived in a vacuum of pure mathematical imagination. They emerged from, and were justified by, the industrial system of divided labor that Babbage had studied in detail and celebrated in his 1832 treatise On the Economy of Machinery and Manufactures. Babbage understood that the power of the factory system lay in the decomposition of complex tasks into simple, repetitive operations that could be performed by unskilled workers — and, ultimately, by machines. His calculating engines applied this same logic to mental labor: the complex task of mathematical computation was decomposed into simple operations (addition, subtraction, the carrying of digits) that could be mechanized. The engine was not a brain. It was a factory for calculation, and its intelligibility as an "intelligent" machine depended on the prior social devaluation of the human computational labor it was designed to replace.

This is the insight that Schaffer called "the politics of intelligence" — the recognition that the attribution of intelligence to machines has always been entangled with the devaluation of the human labor that the machines replace. A task performed by low-status workers is not considered to require intelligence. When a machine performs the same task, the machine is celebrated as intelligent. The attribution follows the labor hierarchy, not the cognitive complexity. Babbage's engines were called intelligent not because calculation is inherently intelligent but because the human computers who performed calculation had been socially positioned as performing mere mechanical work — and therefore as replaceable by actual mechanisms.

The same dynamic operates in the contemporary AI moment with remarkable fidelity. The tasks that AI performs most impressively — generating prose, writing code, summarizing documents, producing images from text descriptions — are tasks that, when performed by humans, are considered to require skill, training, and creative judgment. But the discourse around AI systematically reframes these tasks as pattern-matching, statistical prediction, recombination — terms that position the human version of the work as less cognitively demanding than it appeared. The reframing is necessary for the attribution of intelligence to the machine to succeed, because if the human work is acknowledged as genuinely creative and cognitively complex, the machine's performance of the same work becomes harder to celebrate without simultaneously acknowledging what is being displaced.

Segal's Orange Pill navigates this tension with more honesty than most technology discourse. His account of the elegists — the senior engineers mourning the loss of craft — and his acknowledgment that "something beautiful was being lost" constitute a recognition that the human labor being displaced had genuine cognitive depth. But the book's dominant framework nonetheless participates in the broader discursive pattern Schaffer identifies: the celebration of the machine's capability coexists uneasily with the acknowledgment of the human labor that made that capability possible and the human labor that the capability displaces.

The content moderators in Kenya are not mentioned in The Orange Pill. The data labelers are not mentioned. The millions of creators whose work was absorbed into training datasets are not mentioned. This is not a criticism specific to Segal — almost no AI discourse written from the builder's perspective mentions these workers, because the conventions of the discourse render them invisible, just as the conventions of seventeenth-century natural philosophy rendered Hooke invisible behind Boyle.

Schaffer's work insists on making the invisible visible — not as an act of political accusation but as a prerequisite for understanding the knowledge systems we build. Every claim about AI's capabilities rests on labor that the claim's framework cannot see. The twenty-fold productivity multiplier in Trivandrum rests on training data produced by millions of uncredited creators. The conversational sophistication of Claude rests on categorization work performed by thousands of low-wage workers. The orange pill moment rests on a history of invisible hands that stretches back through Babbage's human computers to Maskelyne's dismissed assistant, each one essential to the system's functioning, each one erased by the conventions that present the system's outputs as autonomous achievements.

Understanding who built the instrument is not a distraction from understanding what the instrument does. It is a condition of understanding honestly. The hands that built the apparatus are not visible in the apparatus's outputs. But they are there — in the training data, in the labeled categories, in the moderated content, in the centuries of accumulated textual production that the model ingested and that the interface presents as its own intelligence. Making those hands visible does not diminish what the instrument accomplishes. It reveals what the accomplishment costs, and who pays.

Chapter 4: Making Up the Orange Pill

The phrase "making up" carries a productive ambiguity that Schaffer has exploited throughout his career. In ordinary usage, it means fabrication — the invention of something that does not exist. In Schaffer's usage, developed through decades of historical work on the social construction of scientific knowledge, it means something more precise and less pejorative: the active construction of a category, a moment, a discovery, through the collective labor of a community that gives the thing its shape, its boundaries, and its significance. Making up is not fabrication. It is constitution. The orange pill moment is being made up right now — not in the sense that it is fictional but in the sense that its meaning, its boundaries, and its significance are being actively produced by the community that experiences it.

Consider how discoveries are typically narrated. The standard narrative presents a linear sequence: a problem exists, a researcher investigates, a breakthrough occurs, and the community recognizes the breakthrough's significance. The narrative places the discovery at the center and treats the recognition as a natural consequence — as though seeing something important were simply a matter of looking in the right direction. Schaffer's career has been devoted to inverting this sequence, showing that recognition is not a consequence of discovery but a component of it. The breakthrough does not produce the recognition. The recognition produces the breakthrough — constitutes it, gives it its shape, determines what it means and what it excludes.

The air pump experiments that Schaffer and Shapin analyzed in Leviathan and the Air-Pump illustrate the mechanism. Boyle's experiments did not, in themselves, establish the existence of a vacuum. They produced specific physical effects — a bird dying, a candle extinguishing, a Torricellian column dropping — that were compatible with multiple interpretations. Thomas Hobbes, Boyle's most persistent critic, observed the same effects and denied that they demonstrated a vacuum. The dispute was not about the data. Both men had access to the same observations. The dispute was about the framework within which the observations would be interpreted — about what counted as a legitimate demonstration, what counted as a reliable witness, what counted as a satisfactory explanation. Boyle won not because his evidence was better but because his social apparatus for certifying evidence — the Royal Society, its conventions of gentlemanly witnessing, its institutional authority — was more powerful than Hobbes's philosophical counter-arguments.

The orange pill moment is being constituted through an analogous process. The technical capabilities that Segal describes are real. The code that Claude produces works. The acceleration that builders experience is measurable. But the constitution of these capabilities as a "moment" — as a phase transition, a threshold, a before-and-after that divides history into the time before the orange pill and the time after — is an act of collective construction that involves selection, emphasis, narrative framing, and the systematic privileging of certain interpretive frameworks over others.

The selection is visible in the evidence that the orange pill narrative foregrounds. The viral posts, the revenue curves, the personal testimonies of builders who experienced transformation — these form the evidentiary base of the moment. They are real evidence of real phenomena. But they are also selected evidence, chosen from a larger universe of possible data points that includes the workers whose productivity did not increase twenty-fold, the projects that Claude Code botched, the hallucinations that required hours of human correction, the organizations where AI adoption produced confusion rather than acceleration. The selection is not dishonest. Every narrative selects. But the selection shapes the moment it claims merely to describe, and the shape is determined by the community that controls the selection.

The emphasis is visible in the metaphors that structure the narrative. "Phase transition" positions the moment as a natural event — as inevitable as water freezing, as impersonal as a change of state. "Orange pill" positions the moment as a revelation — as the removal of a perceptual barrier that had previously prevented the subject from seeing reality clearly. Both metaphors naturalize the moment, presenting it as something that happened to the community rather than something the community produced. Schaffer's framework insists on the reverse: the community produced the moment through the specific social practices of demonstration, testimony, and narrative construction that constitute scientific (and technological) discovery.

Schaffer's essay "Scientific Discoveries and the End of Natural Philosophy," published in Social Studies of Science in 1986, provides the theoretical apparatus for this analysis. In that essay, Schaffer argued that the concept of "scientific discovery" is itself a historical construction — a category that was invented in the early modern period to serve specific social functions (establishing priority, distributing credit, justifying patronage) and that has been retrospectively applied to earlier periods in ways that distort the actual practices of knowledge production. The "eureka moment" is a narrative convention, not a description of how knowledge is actually produced. Knowledge is produced through slow, messy, collaborative processes that are subsequently cleaned up, simplified, and attributed to individual moments of insight by the conventions of scientific storytelling.

The orange pill narrative performs exactly this kind of cleaning up. Segal's account of the winter of 2025 presents a sequence of recognitions — the Google engineer's post, the Trivandrum sprint, the CES demonstration — that build toward a moment of collective acknowledgment. But the narrative structure imposes a coherence on the experience that the experience itself may not have possessed. The builder's day-to-day reality — the mix of breakthrough and frustration, the hallucinations that derailed promising sessions, the organizational resistance, the moments of doubt — is compressed into a narrative of progressive revelation that makes the moment seem more unified, more decisive, and more complete than it was.

This is not a criticism of Segal's honesty. He acknowledges the mess. He describes the Deleuze fabrication, the moments when Claude's output was smooth but hollow, the nights when productive flow curdled into compulsion. But the book's overall narrative arc — the tower, the climb, the sunrise — imposes a structure of progressive illumination that is itself a construction, a way of making the moment up that gives it a specific shape and excludes other possible shapes.

What other shapes might the moment take? Schaffer's methodology suggests examining the alternative framings that the dominant narrative marginalizes. The elegists whom Segal describes in Chapter 2 of The Orange Pill offer one alternative: a narrative of loss rather than revelation, in which the moment is constituted not as a threshold of capability but as a threshold of displacement. The Berkeley researchers offer another: a narrative of intensification rather than liberation, in which the moment is constituted as the beginning of a new regime of work that is more demanding, more boundary-eroding, and more psychologically costly than the regime it replaces. Byung-Chul Han offers a third: a narrative of pathological smoothness in which the moment is constituted as an acceleration of the self-exploiting dynamics that were already eroding depth, attention, and the capacity for genuine thought.

Each of these framings corresponds to real features of the phenomenon. The displacement is real. The intensification is documented. The erosion of depth is a legitimate concern supported by the evidence that Segal himself presents. But in the competition among framings, the orange pill narrative — the narrative of transformative revelation, of capability expansion, of a threshold crossed — has achieved dominance, and its dominance is not a neutral reflection of the evidence. It is a product of institutional power.

The communities that control the orange pill narrative — the technology companies, the venture capital firms, the builders and entrepreneurs at the frontier — possess resources that the alternative framings lack. They control the platforms on which the demonstrations are staged. They control the revenue curves that function as quantitative testimony. They control the hiring decisions and organizational structures that determine which frameworks of interpretation are rewarded and which are marginalized. When a senior engineer posts about the death of craft and is scrolled past, the scrolling is not a neutral failure of attention. It is the operation of an algorithmic system designed and controlled by the same community that controls the orange pill narrative, an algorithm that rewards certain kinds of testimony (the triumphant, the measurable, the actionable) and suppresses others (the elegiac, the qualitative, the unresolved).

Schaffer, at the 2018 Leverhulme Centre panel, raised the question of whether AI constitutes a form of cultural imperialism — whether the frameworks through which AI is understood, evaluated, and deployed are frameworks that serve the interests of specific communities (primarily Western, primarily corporate, primarily English-speaking) at the expense of others. The question applies with equal force to the orange pill narrative itself. The narrative is being made up primarily by American technology companies and the builders who use their tools. The frameworks of measurement (productivity multipliers, revenue growth, lines of code) are frameworks calibrated to the values and priorities of this specific community. The communities whose experiences do not register within these frameworks — the data labelers, the displaced workers, the cultures whose creative production was absorbed into training data without consent — are not making up the moment. The moment is being made up for them, by others, in frameworks they did not design and do not control.

The recursive quality of this analysis deserves emphasis. The Orange Pill is itself a contribution to the making-up of the orange pill moment. It is not a neutral observation of a completed event. It is an active intervention in an ongoing process of construction — a book that, by naming the moment, by giving it a metaphor, by providing it with a narrative arc, contributes to the moment's constitution as a specific kind of event with a specific set of meanings. Segal acknowledges this recursion when he describes writing the book with Claude — the author inside the phenomenon he is describing. Schaffer's framework extends the recursion further: the book is not just inside the phenomenon. It is part of the mechanism by which the phenomenon is produced.

This does not invalidate the book. Every text about a historical moment participates in the construction of that moment. Darwin's Origin of Species was not merely a report on natural selection; it was the rhetorical instrument through which natural selection was constituted as a scientific theory, made persuasive to a community, and installed as the dominant framework for understanding biological change. Boyle's New Experiments Physico-Mechanicall was not merely a report on air-pump experiments; it was the textual apparatus through which the experimental method was constituted as a legitimate way of producing knowledge. Segal's Orange Pill is not merely a report on the AI moment; it is one of the instruments through which the moment is being constituted as a specific kind of event — a revelation, a threshold, a transformation from which there is no return.

The question Schaffer's framework poses is not whether the construction is legitimate. All knowledge construction is legitimate in the sense that it is the only mechanism through which knowledge is produced. The question is whether the construction is self-aware — whether it understands its own conditions of production, acknowledges the labor it rests upon, and recognizes the alternative framings it marginalizes. A construction that understands itself as a construction is more honest, more durable, and more capable of accommodating the complexities of the phenomenon it constitutes than a construction that presents itself as a transparent window onto reality.

The orange pill moment is real. It is also being made up. These two statements are not contradictory. They are, in Schaffer's framework, the same statement — because reality, in the domain of human knowledge, is always made up, always constructed through the social practices of demonstration, testimony, and collective certification that constitute what a community knows. The task is not to escape the construction. It is to build constructions that are honest about their own architecture — that show the seams, acknowledge the hands that built them, and leave room for the framings they cannot contain.

The history of science teaches that the constructions that endure are the ones that can accommodate their own critique. Newtonian mechanics endured not because it silenced all alternatives but because its framework was capacious enough to absorb challenges, refine itself, and acknowledge its own boundaries — until Einstein showed that those boundaries were themselves constructed within a framework that could be superseded. The orange pill narrative will endure to the extent that it can accommodate the elegists, the critics, the invisible laborers, and the alternative framings that its current architecture tends to exclude. Its durability depends not on its dominance but on its capacity for self-revision — a capacity that begins with the recognition that the moment it describes is not found but made.

Chapter 5: The Community That Sees

In the spring of 1610, Galileo Galilei published Sidereus Nuncius — the Starry Messenger — announcing that he had observed mountains on the Moon, previously unknown stars in the Milky Way, and four satellites orbiting Jupiter. The observations were made through a telescope of his own construction, an instrument that perhaps two dozen people in Europe possessed and fewer still knew how to use competently. Galileo was not merely reporting what he had seen. He was asking a community to trust that what he had seen was real — to accept, on the basis of his testimony and the instrument's mediation, that the heavens were not as Aristotle had described them.

The request was not granted automatically. The Aristotelian philosophers at Padua refused to look through the telescope. This refusal is typically narrated as a failure of curiosity — the closed-minded establishment refusing to confront evidence that contradicted their worldview. But Schaffer's analysis of comparable episodes reveals a more instructive dynamic. The philosophers' refusal was not irrational. It was epistemologically coherent within their framework. They had no reason to trust the telescope as an instrument of reliable observation. Lenses were known to produce distortions, illusions, artifacts. The bright points Galileo claimed were Jupiter's moons might have been optical defects. The apparent mountains on the lunar surface might have been refractions produced by the glass. Without an independent theory of optics that explained why the telescope should be trusted — and in 1610, no such theory existed in a form the Aristotelians would have accepted — the refusal to credit the telescope's testimony was a defensible epistemic position, not a symptom of intellectual cowardice.

What Galileo needed was not better evidence. He needed a community that could see what the telescope showed — a community equipped with the interpretive frameworks, the instrumental experience, and the social willingness to credit a new kind of observation as legitimate. Building that community was the work of years, not the work of a single publication. It involved sending telescopes to astronomers across Europe, training observers in the instrument's use, establishing correspondence networks through which observations could be compared and discrepancies resolved, and cultivating patrons whose institutional authority lent credibility to the enterprise. The community that eventually saw Jupiter's moons was not a pre-existing audience that Galileo addressed. It was a community that Galileo constructed — painstakingly, strategically, through social labor that was as essential to the discovery as the optical labor of grinding the lenses.

Schaffer's work has consistently shown that this pattern — the simultaneous construction of discovery and of the community capable of recognizing it — is not peculiar to Galileo. It is the mechanism through which all scientific knowledge achieves the status of knowledge. A finding becomes a discovery when a community certifies it as such, and the community's capacity to certify depends on interpretive frameworks, instrumental competences, and institutional structures that must themselves be constructed, often by the discoverer, before the certification can occur.

The AI moment presents a striking contemporary instance of this mechanism. The community that recognizes the orange pill — the builders, early adopters, and technology professionals who experienced the winter of 2025 as a threshold — did not exist in its current form before the demonstrations that constituted the moment. The community was constructed through the same practices that Galileo employed: the distribution of instruments (Claude Code subscriptions, API access, free-tier trials that let millions of people experience the capability firsthand), the training of observers (tutorials, documentation, the informal pedagogy of viral posts demonstrating what the tools could do), the establishment of correspondence networks (Twitter threads, Substack posts, Slack channels, Reddit forums where experiences were compared and frameworks of interpretation were negotiated), and the cultivation of authorities whose institutional credibility lent weight to the collective testimony (the Google engineer, the Anthropic researchers, the venture capitalists whose investment decisions functioned as a form of financial witnessing).

The community that sees the orange pill is, in Schaffer's terms, a manufactured community — not in the sense that its members are insincere but in the sense that its existence as a coherent community with shared interpretive frameworks was produced through specific social practices. The shared vocabulary (orange pill, phase transition, vibe coding, the discourse), the shared references (the Google engineer's post, the Berkeley study, Han's critique), the shared emotional register (the vertigo, the exhilaration, the terror) — these are the markers of a community that has been constituted through collective experience and collective narration. Before the demonstrations, these individuals existed as scattered builders with varied experiences of AI tools. After the demonstrations, they existed as a community that could see a specific thing and name it.

But every community that sees also, by the logic of its constitution, excludes those who do not see — or who see something different. The Aristotelian philosophers who refused to look through the telescope were not simply wrong. They were operating within a different interpretive framework, one that had its own standards of evidence, its own criteria of credibility, its own institutional structures. Their framework could not accommodate the telescope's testimony because the telescope was not an instrument their framework recognized as reliable. The conflict between Galileo and the Aristotelians was not a conflict between sight and blindness. It was a conflict between two communities with different capacities for recognition — and the resolution of the conflict was determined not by the evidence alone but by the institutional, political, and social processes through which one community's framework achieved dominance over the other's.

The AI discourse exhibits a structurally identical conflict. The community that sees the orange pill — the builders, the early adopters, the investors — operates within a framework that recognizes productivity metrics, revenue growth, and personal testimony as legitimate evidence of transformation. The community that does not see it — or that sees something else — operates within different frameworks. The elegists see displacement and loss, and their evidence is qualitative: the erosion of craft knowledge, the disappearance of formative struggle, the flattening of expertise into a commodity that a machine can produce. The critical scholars see power and exploitation, and their evidence is structural: the invisible labor behind the training data, the concentration of capability in a small number of corporate actors, the global inequalities that determine who builds the tools and who is built upon. The workers experiencing intensification see not liberation but a new regime of self-exploitation, and their evidence is phenomenological: the inability to stop, the colonization of rest, the grey fatigue that accumulates without producing understanding.

Each of these communities sees something real. The elegists' loss is real. The critical scholars' structural analysis is supported by documented evidence. The workers' intensification is measured by the Berkeley study. But these communities lack the institutional power to compete with the orange pill narrative on its own terms. They do not control the platforms where demonstrations are staged. They do not control the revenue metrics that function as quantitative testimony. They do not control the algorithmic systems that determine which voices are amplified and which are suppressed. Their frameworks of interpretation are available but marginalized — not because they are wrong but because the social mechanisms of recognition favor the frameworks held by the communities with the most institutional power.

Schaffer's analysis of Newton provides a particularly pointed parallel. Newton's dominance in the history of optics was not achieved through experimental superiority alone. It was achieved through Newton's extraordinary control over the conditions of demonstration — his insistence on specific experimental protocols, his refusal to engage with critics on their own terms, his use of institutional authority (the presidency of the Royal Society) to marginalize competing frameworks. Newton's rivals — Hooke, Huygens, the Jesuit experimenters on the Continent — had their own experimental results, their own interpretive frameworks, their own standards of evidence. They were not cranks. They were serious investigators whose work was suppressed or marginalized by the social mechanisms through which Newton's framework achieved dominance.

The parallel to the AI moment is not exact — historical parallels never are — but the structural dynamic is recognizable. The community that controls the orange pill narrative does not merely report the moment. It shapes the moment by controlling the conditions under which the moment is experienced, interpreted, and narrated. The demonstrations are staged on platforms built by technology companies. The metrics are defined by frameworks calibrated to the technology's strengths. The testimonies are amplified by algorithms designed to reward the specific emotional register — excitement, urgency, transformation — that the orange pill narrative employs.

Segal's Orange Pill attempts something unusual within this dynamic: the construction of a community capacious enough to hold competing visions in tension. The tower metaphor — five floors, each offering a different perspective, the view earned through the climb — is an architectural strategy for incorporating multiple frameworks within a single structure. The diagnostic chapters on Han give the elegist's framework serious engagement. The chapters on the Berkeley data give the workers' experience empirical grounding. The confessional register — Segal's admission that he, too, cannot stop building, that his own relationship with the tools is marked by compulsion as well as flow — gives the self-exploitation framework embodied testimony from inside the triumphalist camp.

Whether this attempt succeeds is a question that Schaffer's framework can pose but not answer. The history of science offers examples of communities that achieved genuine capaciousness — that could hold competing interpretations in productive tension long enough for the evidence to mature and the frameworks to evolve. The early Royal Society, for all its flaws, maintained a convention of tolerance toward diverse experimental approaches that allowed the air-pump controversy to play out over years rather than being resolved by fiat. The geological debates of the early nineteenth century sustained competing frameworks (catastrophism and uniformitarianism) long enough for the evidence to accumulate and the synthesis to emerge.

But history also offers examples of communities that failed to achieve capaciousness — that resolved controversies prematurely by institutional power rather than evidential weight, and that paid the cost in distorted knowledge and suppressed alternatives. The decades during which Mendelian genetics was rejected by Lysenkoist biology in the Soviet Union, at incalculable cost to Soviet agriculture and science, constitute the most dramatic example of a community that chose dominance over capaciousness.

The AI moment is young. The frameworks are still competing. The evidence is still accumulating. The community that will ultimately certify what the orange pill moment means — whether it was a genuine threshold, a premature recognition, a constructed euphoria, or some combination that no current framework can accommodate — has not yet fully formed. Schaffer's work insists that the formation of that community is not a passive process. It is an active construction, shaped by institutional power, by the politics of demonstration, by the mechanisms of credibility that determine whose testimony counts and whose is scrolled past.

The community that sees will determine what the moment means. The question is whether the community that forms will be wide enough to see what the dominant narrative cannot yet contain — the invisible labor, the displaced craft, the structural inequalities, the long-term consequences that productivity metrics cannot measure. A community that can see only the gains will construct a moment defined by the gains. A community that can see both the gains and their costs will construct something richer, more durable, and closer to whatever the truth of this moment turns out to be.

Galileo needed a community that could see Jupiter's moons. The AI moment needs a community that can see the moons and the hands that ground the lenses.

Chapter 6: The Controversy Study

In 1672, Isaac Newton sent a letter to the Royal Society describing an experiment with prisms. He reported that white light, passed through a glass prism, separated into a spectrum of colors, and that these colors, once separated, could not be further decomposed by a second prism. His conclusion: white light is heterogeneous, a mixture of rays of different colors, each with its own degree of refrangibility. The claim was bold, the experiment elegantly designed, and the letter composed with the particular confidence of a man who believed he had settled a question that had occupied natural philosophers for centuries.

The question was not settled. Robert Hooke, the Royal Society's curator of experiments and a formidable optical investigator in his own right, responded within weeks. Hooke did not deny Newton's experimental results. He had performed similar experiments and obtained similar spectra. What he denied was Newton's interpretation. The spectrum, Hooke argued, was produced not by the separation of pre-existing components within white light but by the modification of white light as it passed through the refracting medium. The same data, the same colored bands of light on the same wall, meant something entirely different within Hooke's framework. Newton saw analysis — the decomposition of a compound into its elements. Hooke saw transformation — the alteration of a simple substance by an external agent.

Schaffer's analysis of this controversy, developed in his essay "Glass Works: Newton's Prisms and the Uses of Experiment," demonstrates that the dispute could not be resolved by the experimental evidence, because the experimental evidence was compatible with both interpretations. The resolution required something beyond evidence: it required the social processes through which one interpretive framework achieved dominance. Newton won — eventually, after decades of controversy — not because his evidence was stronger but because his control over the conditions of demonstration, his institutional authority as president of the Royal Society after 1703, and his strategic decision to suppress or marginalize competing frameworks proved more effective than Hooke's alternative.

The controversy study, as a methodology, exploits a specific feature of scientific disputes: while the controversy is active, the social mechanisms of knowledge production become visible. The competing frameworks, the institutional pressures, the rhetorical strategies, the political alliances — all of these are on display during the period of uncertainty, before one side achieves dominance and the winning interpretation is retrospectively naturalized as "what the evidence showed all along." Once the controversy is settled, the social machinery disappears from view. Newton's theory of light becomes a fact of nature rather than the product of a specific historical contest. The controversy study catches the machinery in motion, before the settlement renders it invisible.

The AI moment is an active controversy, and its unsettled state makes the social machinery of knowledge production visible in ways that a settled consensus would conceal. The competition between interpretive frameworks — AI as amplifier versus AI as pathology, democratization versus displacement, the builder's ethic versus the aesthetics of the smooth — is playing out in real time, across platforms and publications and dinner tables, and the resolution is not yet determined. This is precisely the condition under which Schaffer's methodology is most productive: the moment when the evidence is still being contested, when the frameworks are still competing, when the social processes that will determine the outcome are still visible.

Consider the Berkeley study: Ye and Ranganathan's eight-month embedding in a technology company, documenting the effects of AI tool adoption on work patterns. The study found intensification: more work, not less; task seepage into protected spaces; the erosion of boundaries between roles and between work and rest. These findings correspond to real phenomena, documented through rigorous qualitative methods over an extended period.

Within the orange pill framework, the Berkeley findings are acknowledged but domesticated. Segal treats them as evidence that dams need building — that the intensification is a transitional phenomenon requiring institutional responses (AI Practice, structured pauses, protected mentoring time) rather than a fundamental indictment of the technology itself. The intensification is presented as a problem of implementation, not of the technology's nature.

Within Han's framework, the same findings are central, not peripheral. They confirm the diagnosis of auto-exploitation, the prediction that tools promising liberation would instead produce new forms of self-imposed servitude. The intensification is not a transitional phenomenon. It is the technology's essential effect, produced by the same logic of smoothness and frictionlessness that Han identifies as the dominant pathology of the contemporary period.

Within the critical labor framework — the framework that Schaffer's own work most directly supports — the Berkeley findings reveal something that neither the orange pill framework nor Han's framework fully captures: the structural conditions under which intensification becomes invisible to the people experiencing it. The workers in the Berkeley study did not perceive themselves as exploited. They perceived themselves as productive, as capable, as liberated from tedious tasks. The intensification was, in the researchers' terms, experienced as empowerment. The social mechanisms that produce this experience — the internalized achievement imperative, the algorithmic optimization of engagement, the institutional reward structures that privilege visible output over invisible reflection — are precisely the mechanisms that Schaffer's controversy-study methodology is designed to make visible.

The same evidence, three frameworks, three readings — none of them false, none of them complete. The controversy study does not adjudicate between the readings. It analyzes the social processes through which one reading will eventually achieve dominance and be retrospectively naturalized as what the evidence always showed.

Those processes are already underway. The orange pill framework has institutional advantages that the competing frameworks lack. It is held by the communities that control the technology, the platforms, the investment capital, and the hiring decisions. It is expressed in the language of productivity, measurement, and growth that institutional decision-makers understand and reward. It produces actionable recommendations — build dams, create AI Practice frameworks, develop the capacity for judgment — that organizational leaders can implement. The competing frameworks produce diagnoses, critiques, and structural analyses that are intellectually powerful but institutionally unactionable. A CEO cannot implement Han's philosophy. A board of directors cannot operationalize Schaffer's invisible-labor analysis. The institutional advantage of the orange pill framework is not that it is more correct. It is that it is more useful to the people who make institutional decisions — and the people who make institutional decisions are the people whose frameworks eventually achieve dominance.

Schaffer's work on Newton illustrates the long-term consequences of framework dominance achieved through institutional power rather than evidential weight. Newton's corpuscular theory of light dominated optics for over a century, not because the wave theory was refuted but because Newton's institutional authority made it dangerous for younger scientists to challenge the corpuscular framework. The wave theory, developed by Huygens and later by Thomas Young and Augustin-Jean Fresnel, required decades of marginal existence before it could achieve the institutional support necessary to challenge Newton's dominance. The cost of the delay was measured in the optical investigations that were not pursued, the questions that were not asked, the phenomena (diffraction, interference, polarization) that were not adequately explained because the dominant framework could not accommodate them.

The AI controversy carries analogous risks. If the orange pill framework achieves premature dominance — if the celebration of capability expansion forecloses serious investigation of its costs — the result is not a false consensus but a constrained one. The questions that the dominant framework cannot accommodate — questions about invisible labor, about the long-term effects of friction removal on cognitive development, about the distributional consequences of a technology that expands capability while concentrating the infrastructure of capability in a small number of corporate actors — will not be asked with the urgency and resources they require.

The SaaS Death Cross that Segal describes in Chapter 19 of The Orange Pill provides a concrete illustration of frameworks in competition. The trillion-dollar decline in software company valuations can be interpreted as evidence that the market has recognized a genuine technological transformation (the orange pill reading), as evidence that the market is experiencing a speculative correction driven by narratives rather than fundamentals (the skeptical reading), or as evidence that the concentration of AI capability in a few infrastructure providers is destroying the distributed ecosystem of software companies that previously served the market (the structural reading). The revenue numbers are the same in all three interpretations. The frameworks that give the numbers their meaning are different, and the policy implications of each framework are radically different.

Schaffer's controversy-study methodology does not resolve the competition. Resolution is the work of history — of the slow accumulation of evidence, the institutional negotiations, the social processes through which one framework eventually achieves dominance and the controversy is retrospectively simplified into a story about evidence triumphing over error. What the methodology does is insist that the competition be examined as a social process, not merely as an intellectual debate. The question is not only which framework is best supported by the evidence. The question is which framework is best supported by the institutional power structures that determine how evidence is produced, interpreted, and certified.

The AI controversy will be resolved. The frameworks will compete, and one will achieve dominance, and the others will be relegated to the margins of academic discourse where they will be studied as curiosities — the Hookes and Huygenses of the AI debate, brilliant analysts whose frameworks lost not because they were wrong but because they lacked the institutional apparatus to sustain themselves against the dominant paradigm.

The work of the controversy study is to ensure that the losing frameworks are documented, understood, and preserved — not as monuments to error but as resources for the future moment when the dominant framework encounters the phenomena it cannot accommodate. Newton's corpuscular theory eventually failed when diffraction and interference demanded explanation. The orange pill framework may eventually encounter its own diffractions and interferences — the long-term cognitive effects, the structural inequalities, the invisible labor — that demand a framework more capacious than the one currently being constructed.

Preserving the competing frameworks is not a neutral academic exercise. It is, in Schaffer's terms, a form of intellectual insurance — a hedge against the possibility that the dominant framework, for all its institutional power and practical utility, has not captured the full complexity of the phenomenon it claims to explain.

Chapter 7: The Priesthood and Its Instruments

In 1663, the Royal Society of London received its royal charter, establishing it as the institutional home of English experimental philosophy. The charter granted the Society a set of privileges — the right to correspond freely, to publish without ecclesiastical censorship, to receive bequeathed collections — that positioned it as the authoritative arbiter of natural knowledge in England. But the charter did not create the Society's authority. The authority was constructed, painstakingly, through the social practices that Schaffer and Shapin documented in Leviathan and the Air-Pump: the conventions of witnessing, the standards of testimony, the codes of gentlemanly conduct that determined who could speak and whose speech counted.

The Society's authority rested, fundamentally, on a claim about the relationship between knowledge and virtue. The gentlemen who composed its membership were credible witnesses not because they were expert but because they were disinterested — men of independent means whose social position guaranteed that they had no material motive to distort what they observed. The mechanic, the tradesman, the instrument maker might possess superior technical skill, but their testimony was epistemically suspect because their livelihood depended on the instruments they built and the patrons they served. Disinterestedness was a social category, not a cognitive one, and it was distributed along the lines of class, gender, and institutional affiliation that structured English society.

This arrangement produced a specific kind of knowledge-producing community — what might be called, with deliberate anachronism, a priesthood. The term captures both the authority and the exclusivity: the Fellows of the Royal Society mediated between nature and the public, translating the testimony of instruments into knowledge that the broader community could accept. They were priests in the original sense — those who tend to something sacred, who possess understanding that others lack, and whose authority derives from their position within an institutional structure that certifies their competence.

Segal employs the priesthood metaphor explicitly in The Orange Pill, arguing that the builders at the frontier of AI bear an obligation that flows from their understanding of the systems they build. "Understanding confers obligation," he writes. "If you understand how large language models concentrate attention, you are responsible for how that concentration affects people." The argument is that deep knowledge of AI systems creates a priestly duty — a duty to tend responsibly to technologies whose effects extend far beyond the community of builders.

Schaffer's historical work takes this metaphor and subjects it to the analysis that metaphors require when they are deployed to justify specific distributions of power. The history of knowledge-producing priesthoods is not a history of selfless stewardship. It is a history of communities that used their control over instruments of knowledge production to consolidate authority, marginalize competitors, and shape the frameworks of interpretation to serve their institutional interests — often while sincerely believing that they were serving the public good.

The Royal Society's conventions of gentlemanly witnessing excluded the vast majority of the population from participation in knowledge production. Women were entirely excluded. Working-class men were excluded by social convention even when their technical competence exceeded that of the gentlemen who directed their labor. The mechanic's testimony did not count because the social framework of credibility had been constructed to exclude it — not because mechanics were unreliable but because including them would have destabilized the class structure that the Society's authority depended upon.

The contemporary AI priesthood — the builders, researchers, and executives who control the development and deployment of large language models — operates within a different social framework but exhibits structurally analogous dynamics. The community that builds AI systems possesses knowledge that the broader public lacks: knowledge of how the models are trained, what their limitations are, where they are likely to fail, and what assumptions are embedded in their architectures. This knowledge differential creates the conditions for priestly authority — the capacity to mediate between the technology and the public, to translate the instrument's outputs into claims about the world.

But the knowledge differential also creates the conditions for priestly self-interest. The community that understands AI best is also the community that profits most from AI's deployment. The venture capitalists who fund AI development, the executives who direct it, and the builders who implement it have material interests that are advanced by specific framings of the technology — framings that emphasize capability expansion, democratization, and productivity enhancement while downplaying displacement, concentration, and the exploitation of invisible labor. The priestly obligation to tend responsibly is real, but it operates within an institutional structure that systematically rewards the framings that serve the priesthood's material interests.

Schaffer's 1994 essay "Babbage's Intelligence" traces this entanglement to its historical roots. Babbage, Schaffer demonstrates, was not merely a mathematician who happened to invent a calculating engine. He was a political economist who understood the calculating engine as an instrument of industrial rationalization — a device that would extend the factory system's logic of divided, mechanized labor into the domain of mental work. Babbage's advocacy for the engine was inseparable from his advocacy for the industrial system that the engine exemplified. The knowledge that made the engine conceivable was knowledge produced within and for a specific economic framework, and the engine's deployment would advance the interests of the social class that controlled that framework.

The contemporary AI priesthood inherits this entanglement. The knowledge that makes large language models possible — the research in transformer architectures, the scaling laws, the training methodologies — was produced within and is controlled by a small number of corporate institutions (Anthropic, OpenAI, Google DeepMind, Meta) whose economic interests are advanced by specific uses of the technology. The priestly obligation to deploy responsibly coexists, within the same institutions and often within the same individuals, with the economic imperative to deploy profitably. When these obligations conflict — when responsible deployment would require slowing down, limiting access, or acknowledging costs that undermine the growth narrative — the historical record suggests that economic imperatives tend to prevail over priestly obligations, not because the priests are insincere but because the institutional structures within which they operate reward growth more reliably than they reward caution.

Segal acknowledges this tension. His confession about building addictive products earlier in his career, his recognition that "the intoxication of the frontier overwhelmed my care for the people downstream," constitutes an unusual admission from inside the priesthood. But the admission operates within a redemption narrative — the builder who once built carelessly now builds responsibly, who has learned from his mistakes and emerged with a priestly ethic that his earlier work lacked. Schaffer's framework asks whether the redemption narrative is itself a construction that serves the priesthood's interests — whether the confession of past carelessness, followed by the assertion of present responsibility, functions as a credibility mechanism that strengthens the confessor's authority rather than genuinely redistributing power to the communities affected by the technology.

The question is not whether Segal's commitment to responsible building is sincere. The question is whether individual sincerity is sufficient to counteract institutional incentives. The history of knowledge-producing priesthoods — from the Royal Society to the medical profession to the nuclear physics community — suggests that individual virtue within a structurally compromised institution produces, at best, amelioration. The institution's structural incentives continue to operate regardless of any individual member's ethical commitments. The doctor who genuinely cares about patients still operates within a healthcare system that rewards procedures over prevention. The nuclear physicist who genuinely opposes weapons proliferation still operates within a research funding structure that is inseparable from military applications.

The AI priesthood faces a version of this structural problem. The builders who genuinely commit to responsible deployment still operate within corporate structures that reward growth, within investment frameworks that reward returns, within competitive dynamics that punish the company that slows down while its rivals accelerate. Segal's decision to keep his team at full size rather than converting productivity gains into headcount reduction is, within this analysis, a genuine act of priestly responsibility — and also an act that the market's structural incentives will continue to pressure him to reverse.

Schaffer's work does not dismiss the priesthood ethic. It contextualizes it within the historical patterns of knowledge-producing communities, and those patterns suggest that the ethic is necessary but not sufficient. What is also required is the institutional architecture that converts individual commitment into structural constraint — the equivalent of the Royal Society's conventions, but designed to constrain the priesthood's self-interest rather than to consolidate its authority.

The instruments the priesthood controls — the models themselves, the training infrastructure, the deployment platforms, the frameworks of measurement — are not neutral tools. They are, in Schaffer's terms, embodiments of theoretical assumptions and social relations. The model's training data encodes specific cultural biases. The deployment platform's interface design shapes how users experience the technology. The frameworks of measurement determine what counts as success and what counts as failure. The priesthood that controls these instruments controls the conditions under which the technology is understood, evaluated, and integrated into human life.

The obligation that flows from this control is not merely ethical. It is epistemic. The priesthood is responsible not only for how the technology is used but for how it is understood — for whether the frameworks of interpretation that the technology produces are honest about their own limitations, transparent about their conditions of production, and capacious enough to accommodate the perspectives of the communities that the priesthood's instruments affect but do not represent.

A priesthood that understands its own instruments — that sees them as constructions rather than transparent windows, that acknowledges the labor concealed behind their smooth interfaces, that recognizes the frameworks of measurement as artifacts rather than neutral lenses — is a priesthood capable of genuine stewardship. A priesthood that mistakes its instruments for transparent access to reality — that treats productivity multipliers as unmediated facts, that treats the orange pill moment as a discovery rather than a construction, that treats its own authority as natural rather than produced — is a priesthood that has lost the capacity for the self-awareness that responsible stewardship requires.

The instruments shape the facts. The priesthood controls the instruments. The question of who controls the priesthood — and whether the priesthood can construct institutional checks on its own authority — is the political question that the AI moment has placed at the center of contemporary life. Schaffer's historical work cannot answer it. But it can insist that the question be asked, and that the asking begin with the recognition that every priesthood in the history of knowledge production has faced the same structural temptation: to confuse its authority with truth, its instruments with transparent windows, and its interests with the public good.

Chapter 8: The River's Hidden Channels

Segal's central metaphor — intelligence as a river that has flowed for 13.8 billion years, from hydrogen atoms through biological evolution to conscious thought and artificial computation — is among the most ambitious conceptual structures in contemporary technology writing. The metaphor performs real intellectual work. It positions AI not as an alien intrusion but as a continuation, a new channel for a current that has been flowing since the formation of matter itself. It dissolves the human-machine binary that distorts most AI discourse by placing both human and artificial intelligence within a single continuous process. And it provides the foundation for Segal's prescriptive framework: the beaver who builds dams in the river, redirecting the flow toward life rather than allowing it to sweep everything downstream.

Schaffer's framework does not reject this metaphor. It deepens it — by insisting on a distinction that the metaphor's naturalizing force tends to obscure: the distinction between a river and its channels.

A river, in the geological sense, is a natural phenomenon. Its course is determined by gravity, by the composition and contour of the terrain, by rainfall and snowmelt and the slow processes of erosion that carve paths through rock over millennia. The river is not designed. It is not directed by intention. It flows where physics takes it. The channels through which the river flows are, in this sense, natural formations — products of the same impersonal forces that produce the water itself.

Segal's metaphor borrows this naturalness and applies it to intelligence. Intelligence, like water, flows. It finds channels. It concentrates at bends. It widens into pools and narrows into rapids. The language of natural flow positions intelligence as a force that operates independently of human intention — a force that humans participate in but do not control, that has been flowing since before humans existed and will continue flowing after any particular human institution has dissolved.

The naturalizing move is powerful. It produces the specific emotional resonance that makes the metaphor memorable: the sense of participation in something vast, the humility of recognizing that human intelligence is a recent and modest expression of a much older and larger process, the awe of watching the river pick up speed as a new channel opens. These are genuine cognitive and emotional achievements. They shift the framework of understanding in ways that produce genuine insight.

But Schaffer's career has been devoted to demonstrating that the channels through which knowledge flows are never natural formations. They are social constructions — built by specific communities, maintained by specific institutions, directed by specific power structures, and contested by specific political processes. The channels do not open because the water finds them. The channels are dug — by labor that is often invisible, by institutions that serve specific interests, by communities that determine which flows count as significant and which are dismissed as noise.

This distinction is not a quibble about metaphorical precision. It has analytical consequences that matter for how the AI moment is understood and responded to.

If intelligence flows like a natural river, the appropriate human response is the one Segal recommends: study the current, respect its force, build dams at leverage points to redirect the flow toward life. The builder is a steward of a natural process — powerful but humble, intervening where possible but fundamentally subordinate to a force that transcends human control. The beaver in the river.

If the channels are socially constructed, the appropriate response is different in important ways. The builder is not a steward of a natural process. The builder is a participant in a political process — a process that determines which channels are dug, whose labor digs them, who benefits from the flow, and who bears the cost of the flooding. The question is not only "Where should the dam go?" but "Who decided where the channel would run in the first place?" and "Whose interests does this particular channeling serve?"

Schaffer's "Babbage's Intelligence" provides the historical case that makes this distinction concrete. The channel through which computational intelligence entered human civilization was not a natural formation. It was dug by specific institutional actors — the British government, which funded Babbage's engines; the factory owners, who provided the economic model of divided labor that Babbage's engines mechanized; the mathematical community, which defined the specific forms of calculation that the engines would perform. Each of these actors shaped the channel according to their interests. The British government wanted more accurate mathematical tables for navigation and artillery. The factory owners wanted cheaper, more reliable computation. The mathematical community wanted a tool that would extend their discipline's reach. The channel that resulted — mechanical computation as industrial rationalization — was not the only possible channel for computational intelligence. It was the channel that served the interests of the communities powerful enough to dig it.

Contemporary AI has been channeled through a similarly specific set of institutional decisions. The transformer architecture that underlies large language models was developed at Google, a corporation whose business model depends on the processing and monetization of textual information. The scaling laws that drove the development of increasingly large models were discovered within corporate research laboratories whose competitive dynamics rewarded scale above other possible virtues (interpretability, efficiency, equity of access). The training data was assembled through web-scraping practices that reflected specific assumptions about what knowledge is valuable and what labor is free. The interface design — the chat format, the conversational register, the presentation of outputs as responses from an intelligent interlocutor — was shaped by product decisions oriented toward user engagement and commercial adoption.

Each of these decisions channeled the flow of computational intelligence in a specific direction. Other directions were possible. AI research that prioritized interpretability over scale would have produced different tools with different capabilities and different distributions of benefit and cost. Training practices that compensated creators for the use of their work would have produced different economics and different relationships between AI companies and the creative communities whose labor they absorbed. Interface designs that foregrounded the statistical nature of the model's outputs — presenting them as probability distributions rather than confident assertions — would have produced different user behaviors and different risks.

The channels that were actually dug — the channels through which contemporary AI flows — reflect the interests of the communities that dug them. This does not mean the channels are bad. Some of the institutional decisions that shaped contemporary AI produced genuine expansions of human capability, precisely as Segal describes. The decision to make Claude Code available through a hundred-dollar-per-month subscription genuinely lowered the barrier to software development. The decision to present AI outputs in natural language genuinely removed the translation cost that had gated access to computational power. These are real gains, and they matter.

But the gains coexist with costs that the naturalizing metaphor tends to conceal. The channel that was dug through web-scraping runs over the intellectual property of millions of creators who were not consulted. The channel that was dug through corporate research laboratories concentrates the infrastructure of AI capability in a small number of institutions, creating dependencies that affect everyone who uses the tools but that only the institutions control. The channel that was dug through the specific labor practices of data labeling and content moderation runs through the lives of workers whose labor is essential to the system's functioning but whose contribution is systematically erased.

The river metaphor renders these costs as natural features of the landscape — as the inevitable consequences of a force that transcends human control. The social-construction analysis renders them as political features of a landscape that was shaped by specific decisions, made by specific actors, in the service of specific interests. The difference matters because the response to a natural force is adaptation (build dams, study the current, accept what cannot be changed) while the response to a political construction is contestation (challenge the decisions, redirect the channels, redistribute the costs and benefits).

Schaffer's framework does not reject the river metaphor. It insists that the metaphor be completed. The water is real. The flow is real. The increasing complexity and capability of intelligence over cosmic time is a genuine phenomenon that Segal describes with insight and power. But the channels through which this intelligence currently flows — the specific institutional, economic, and technological structures that shape contemporary AI — are not geological formations carved by impersonal forces. They are social constructions, built by human hands (many of them invisible), reflecting human interests (many of them unexamined), and producing distributions of benefit and cost that are contingent rather than inevitable.

The beaver builds dams. Schaffer asks who dug the riverbed.

The question is not hostile to the metaphor. It is the question the metaphor requires if it is to be more than a poetic gesture — if it is to serve as the foundation for the kind of responsible stewardship that Segal advocates. A beaver that builds dams without understanding who dug the channel it builds in is a beaver operating with incomplete information about the river it claims to understand. A builder who advocates stewardship without examining the institutional decisions that shaped the technology is a steward who tends to the surface while the foundations shift beneath.

The river's hidden channels are not hidden by nature. They are hidden by the conventions that present institutional decisions as technological inevitabilities, that present specific channeling as natural flow, that present the interests of the communities who dug the channels as the interests of everyone who uses the water. Making the channels visible — understanding who dug them, whose labor maintains them, whose interests they serve, and what alternative channels were foreclosed — is not a critique of the river. It is the condition for building dams that actually hold.

The intelligence is real. The flow is real. The channels are constructed. Understanding the construction is not a threat to the builder's ethic. It is the foundation on which a genuinely informed stewardship can be built — a stewardship that knows not only where the water flows but why it flows there, and at whose expense, and by whose labor, and through whose decisions the current takes the shape it takes.

Chapter 9: Discovery Beyond the Individual

On the twenty-fifth of November, 1694, Christiaan Huygens wrote to Gottfried Wilhelm Leibniz about a problem in mathematics that both men had been working on independently. The letter was cordial, technically precise, and suffused with the particular anxiety of a natural philosopher who suspects that someone else has arrived at the same destination by a different route. Huygens had reason for anxiety. Leibniz had developed the calculus — or a calculus, since Newton had developed another, arriving at equivalent results through a different notation and a different conceptual apparatus. The priority dispute between Newton and Leibniz would consume the intellectual energies of European mathematics for decades, producing accusations of plagiarism, institutional rivalries between the Royal Society and the Continental academies, and a fracture in the mathematical community that impeded British mathematics for a generation.

The priority dispute is typically narrated as a story about two great minds and the question of which one got there first. But Schaffer's framework, applied to priority disputes across the history of science, reveals that the question "Who got there first?" conceals a more fundamental question: "What does it mean that they both got there at all?"

Robert K. Merton, the sociologist of science whose work on multiple discovery Schaffer both extends and complicates, documented the phenomenon systematically. The independent invention of the calculus by Newton and Leibniz. The independent formulation of natural selection by Darwin and Wallace. The independent discovery of oxygen by Scheele, Priestley, and Lavoisier. The same-day filing of telephone claims by Bell and Gray (Bell's patent application, Gray's caveat). Merton catalogued hundreds of cases and concluded that multiple discovery is not an anomaly but a structural feature of scientific knowledge production — a consequence of the fact that discoveries emerge from accumulated communal knowledge rather than from individual minds operating in isolation.

Segal draws on this pattern in The Orange Pill, arguing that simultaneous discoveries demonstrate the river of intelligence flowing through multiple channels at once. "The river finds its channels," Segal writes. "The channels are the minds it flows through." The metaphor captures something real about the accumulated momentum of knowledge — the way decades of prior work create the conditions under which specific advances become, in some sense, available to anyone positioned at the right confluence of training, temperament, and institutional support.

Schaffer's analysis adds a dimension that the river metaphor, in its naturalizing mode, tends to suppress. Multiple discovery is not merely evidence that intelligence flows through culture. It is evidence that the conditions for discovery are socially produced — that the "strategic research site," the location in the landscape of knowledge where the next advance is most likely to emerge, is constructed by institutions, by funding decisions, by the accumulated practices of communities that direct attention toward certain problems and away from others. Newton and Leibniz did not arrive at the calculus because the river of mathematical intelligence happened to flow through both of them simultaneously. They arrived at the calculus because the mathematical community of the late seventeenth century — its problems, its methods, its institutional structures, its patterns of communication and competition — had created the conditions under which the calculus was the next thing to be found.

The distinction matters because it shifts the locus of explanation from the individual to the community. In the river metaphor, the individual is a channel — shaped by biography, by training, by the specific configuration of influences that produce a particular angle of vision. The individual matters because no two channels are identical. Dylan's specific biography produced "Like a Rolling Stone" in a way that no other biography could have. The individual is irreplaceable within the metaphor, but the metaphor positions the irreplaceability as a property of the channel's unique shape rather than as a property of the water that flows through it.

Schaffer's framework pushes further. The individual is not merely a channel. The individual is a participant in a collective process of knowledge production that includes instruments, institutions, invisible laborers, and the accumulated practices of communities that determine what counts as knowledge. The priority dispute between Newton and Leibniz was not a dispute about whose channel the river flowed through first. It was a dispute about credit — about which individual would receive the social rewards (prestige, patronage, institutional authority) that the community attached to the discovery. And the dispute itself was a social mechanism, a way of distributing credit that reinforced the mythology of individual genius while obscuring the collective conditions that made the discovery possible.

The AI moment provides a contemporary instance of this dynamic operating at scale. The capabilities that Segal describes — natural-language code generation, conversational reasoning, the collapse of the imagination-to-artifact ratio — were not produced by a single mind or a single company. They were produced by decades of accumulated research across hundreds of institutions, by the labor of thousands of researchers whose individual contributions are recorded in published papers but whose collective contribution is rendered invisible by the convention of attributing breakthroughs to specific models (GPT-4, Claude, Gemini) and specific companies (OpenAI, Anthropic, Google). The transformer architecture was published by a team at Google Brain in 2017. The scaling laws were characterized by researchers at OpenAI. The reinforcement learning from human feedback that refined model behavior drew on work from multiple research groups. The training data was produced by millions of human beings whose individual contributions are not merely unattributed but unattributable — absorbed into a statistical aggregate that erases the specific voice, perspective, and labor of each contributor.

The convention of attributing AI capabilities to specific models and companies performs the same function that the priority dispute performed in the case of Newton and Leibniz: it distributes credit to individual actors (in this case, corporate actors) while obscuring the collective process that produced the capabilities. When Anthropic receives credit for Claude's capabilities, the credit is not inaccurate — Anthropic's specific engineering decisions, training practices, and safety investments shaped Claude into a distinctive product. But the credit obscures the extent to which Claude's capabilities rest on a collective foundation of prior research, publicly shared knowledge, and absorbed human labor that no single institution produced or controls.

Segal's Chapter 7, "Who Is Writing This Book?", confronts this problem directly in the context of authorship. "There are moments," Segal writes, "that keep me awake. Claude makes a connection I had not made. It links two ideas from different chapters, draws a parallel I had not considered. And the connection is so apt that it changes the direction of the argument. Something happened in that exchange that neither of us predicted. I cannot honestly say it belongs to either of us. It belongs to the collaboration, to the space between us."

Schaffer's framework reads this confession as a moment of genuine epistemological honesty — a recognition that knowledge production is collective rather than individual, that the insight emerged from the interaction rather than from either participant. But the framework also notes what the confession does not quite reach: the interaction between Segal and Claude is not a two-party collaboration. It is a collaboration that involves, invisibly, every text that Claude was trained on, every data labeler who categorized the training data, every researcher whose published work contributed to the model's architecture, every content moderator whose labor filtered the outputs. The "space between" Segal and Claude is not empty. It is populated by millions of invisible contributors whose labor made the space possible.

The mythology of the solitary genius, which Segal challenges effectively in his chapter on Dylan, is replaced in The Orange Pill by a mythology of the partnership — the human-AI collaboration as a new form of creative relationship. This is an improvement. It is more honest than the solitary-genius myth. But it remains a mythology in Schaffer's sense: a narrative that distributes credit in specific ways while rendering other distributions invisible. The partnership narrative credits the human (for vision, judgment, the questions that drive the collaboration) and the AI (for breadth, connection, the rapid generation of possibilities). It does not credit the collective labor that produced both participants' capabilities — the communities, institutions, and invisible workers without whom neither the human's accumulated knowledge nor the AI's trained capacities would exist.

Schaffer's work suggests that the mythology of partnership, like the mythology of individual genius before it, will eventually be recognized as a partial and interested account of a process that is fundamentally collective. The question is not whether to credit the individual or the partnership. The question is whether to build frameworks of understanding that are capacious enough to acknowledge the full scope of the collective process — the institutions, the invisible labor, the accumulated practices of communities that produce the conditions under which both human and machine intelligence operate.

This is not a counsel of despair. It is a counsel of honesty. The partnership between Segal and Claude produced a book that neither could have produced alone. That is a genuine achievement, and acknowledging the collective conditions of its production does not diminish it. What acknowledgment does is prevent the achievement from being recruited into a mythology that serves the interests of the communities that control the technology — a mythology that presents human-AI collaboration as a self-contained, self-sufficient process rather than as a process embedded in, and dependent upon, a vast infrastructure of collective labor and accumulated knowledge.

Discovery was never individual. Knowledge was always collective. The myth of the solitary genius was always a social construction that served specific functions — distributing credit, justifying patronage, reinforcing hierarchies of intellectual authority. AI has not changed this fundamental feature of knowledge production. What AI has done is make the collective nature of knowledge production harder to deny, by giving the collective process an interface — a conversational partner whose capabilities visibly derive from the accumulated production of millions of minds. The partnership is real. It is also incomplete as an account of what is happening. The full account includes the river, the channels, the hands that dug them, and the communities whose accumulated labor made the water flow.

Epilogue

The witness who troubled me most was not Galileo or Boyle or even the unnamed data labelers in Kenya. It was Nevil Maskelyne, dismissing David Kinnebrook from Greenwich Observatory for a half-second discrepancy — for observing the stars slightly differently than his superior, and being fired for it.

I kept returning to that image while reading Schaffer. Not because it is dramatic — it is actually quite small, a personnel dispute at an eighteenth-century institution, the kind of thing that barely leaves a mark in the historical record. But because the mechanism it reveals operates everywhere I look in the AI moment, and I had not seen it before.

Maskelyne did not know he was part of the instrument. He thought he was the observer and the telescope was the instrument, and Kinnebrook's failure was a failure of observation. It took Bessel twenty years to recognize what was actually happening: the observer's body was part of the apparatus, and different bodies produced different results, and neither result was more "correct" than the other. The "correct" time was an artifact of whoever happened to hold the position of authority.

I have been Maskelyne. When I stood in that room in Trivandrum and reported a twenty-fold productivity multiplier, I was reporting what my instrument showed me. The instrument — Claude Code, the specific tasks I assigned, the specific framework of measurement I applied, the specific team whose years of accumulated expertise I was leveraging — shaped the fact it produced. The multiplier is real within the framework I used. But the framework is mine, and it measures what it was built to measure, and what it was not built to measure disappears from the account as thoroughly as Kinnebrook disappeared from the observatory.

Schaffer's work does something I did not expect a historian of science to do: it made me more honest about what I know and how I know it. Not less confident — I still believe the orange pill moment is real, still believe the technology represents a genuine expansion of human capability, still believe the dams need building. But honest in a way that changes the texture of the confidence. I know now that the moment I described is also a moment I helped construct. I know the channels through which this intelligence flows were dug by specific decisions, not carved by impersonal forces. I know the hands that built the tools I celebrate are mostly invisible, and that their invisibility is not an accident but a feature of the system that produces the tools' apparent autonomy.

None of this makes me want to stop building. Schaffer is not Han — he does not counsel refusal or retreat into a garden. His work is more unsettling than that, because it does not offer a clean alternative. It offers a deeper understanding of the construction you are already inside, and then it leaves you there, with the understanding, to decide what to do with it.

What I want to do with it is this: build with open eyes. Build knowing that my instruments shape the facts they produce. Build knowing that the communities I cannot see — the labelers, the creators whose work was absorbed, the workers bearing costs I do not pay — are part of the apparatus I depend on. Build knowing that the orange pill narrative I helped construct is a construction, not a revelation, and that its durability depends on whether it can accommodate the perspectives it currently excludes.

The river is real. The water is real. But the riverbed was dug by hands, and the hands deserve to be seen.

-- Edo Segal

The AI revolution has a creation myth: brilliant engineers built thinking machines, and the world changed. But who really built the instrument -- and whose labor did the instrument erase?

Simon Schaffer has spent four decades revealing what scientific revolutions conceal: the invisible workers, the constructed facts, the social machinery that transforms a laboratory result into accepted truth. In this Orange Pill Companion, his frameworks expose the hidden architecture beneath the AI moment -- the data labelers earning two dollars an hour, the millions of creators whose work was absorbed without consent, the measurement frameworks that produce the very productivity gains they claim to discover. Schaffer does not deny that AI works. He asks what it costs to make the working look effortless.

From Boyle's air pump to Babbage's engines to Claude Code, the pattern repeats: the instrument and the fact are co-produced, and the hands that built the instrument disappear from the story. This book makes them visible -- not to stop the building, but to ensure we understand what we are building on.

-- Simon Schaffer

WIKI COMPANION

Simon Schaffer — On AI

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Simon Schaffer — On AI uses as stepping stones for thinking through the AI revolution.
