Shoshana Zuboff — On AI
Contents
Cover
Foreword
About
Chapter 1: The Paper Mill and the Prompt
Chapter 2: From Touching to Reading to Conversing
Chapter 3: The Informating Dividend
Chapter 4: The Extraction of Experience
Chapter 5: Authority and the Redistribution of Knowledge
Chapter 6: The Worker's Dilemma in the AI Age
Chapter 7: Intellective Skill and Its New Demands
Chapter 8: The Panoptic Sort Revisited
Chapter 9: Institutional Design for the Informating Dividend
Chapter 10: Beyond the Smart Machine — The Unfinished Question
Epilogue
Back Cover

Shoshana Zuboff

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Shoshana Zuboff. It is an attempt by Opus 4.6 to simulate Shoshana Zuboff's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The extraction I consented to was the one I never read the terms for.

Not the obvious ones — the cookies, the tracking pixels, the location data I traded for turn-by-turn directions. Those extractions I understood, or thought I did. The extraction that Zuboff made me see was different. It was the one happening inside the tool I loved most.

Every conversation I have with Claude generates data. Not just the words. The patterns. Which suggestions I accept, which I reject, how long I hesitate before deciding. The rhythm of my doubt. The architecture of my creativity. The specific shape of the gap between what I intend and what I can articulate. All of it captured. All of it, in Zuboff's precise language, claimed as raw material for someone else's production process.

I describe in *The Orange Pill* the exhilaration of watching twenty engineers in Trivandrum each become capable of doing the work of an entire team. I describe the thirty-day sprint to build Napster Station. I describe the vertigo. What I did not describe — because I had not yet thought it through — was what was being extracted from every one of those sessions. Not their labor. I was paying for their labor. Their cognitive signatures. Their problem-solving patterns. Their domain expertise, externalized through interaction, feeding the very system that will eventually be used to further automate the work they were learning to direct.

The worker trains the machine that replaces the worker. Zuboff saw this dynamic decades ago in paper mills. Now it operates at the speed of a prompt.

This is not an argument to stop building. Nothing in Zuboff says abandon the tools. What she says is that the tools operate inside an economic logic I am responsible for understanding. That the informating dividend — the genuine expansion of human capability that AI makes possible — does not distribute itself. That the default institutional choice, documented across four decades of empirical research, is to capture the gains at the top and externalize the costs to the people who generated them.

I built dams in *The Orange Pill*. Zuboff made me understand that my dams operate at the level of my own company while the forces they contend with operate at the level of the platform, the market, the global economy. My decision to keep my team matters to the people on that team. It does not address the extraction of their cognitive behavioral data. It does not address the sorting mechanisms classifying workers across every industry. It does not address the institutional vacuum.

The dams need to be bigger than any one builder can construct. Zuboff shows you why.

-- Edo Segal · Opus 4.6

About Shoshana Zuboff

1951-present

Shoshana Zuboff (1951–present) is an American scholar, author, and social theorist who spent her career at Harvard Business School, where she became one of the first tenured women on the faculty. Her first major work, *In the Age of the Smart Machine: The Future of Work and Power* (1988), drew on years of ethnographic fieldwork in computerizing workplaces — paper mills, banks, telecommunications companies — to develop the foundational distinction between "automating" and "informating," the two simultaneous dynamics of every technological transformation. She introduced the concepts of "action-centered skill" and "intellective skill" to describe what workers lose and what they must develop when machines mediate their relationship to work. Her second landmark work, *The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power* (2019), mapped how technology platforms convert human experience into "behavioral surplus" — raw material processed into "prediction products" sold in "behavioral futures markets" — fundamentally reframing the digital economy as a system of extraction rather than exchange. Her concepts of surveillance capitalism, the "Big Other," and instrumentarian power have shaped regulatory discourse worldwide, influencing the EU AI Act and democratic governance debates across continents. She remains one of the most cited and contested voices on the relationship between technology, labor, knowledge, and democratic autonomy.

Chapter 1: The Paper Mill and the Prompt

In the winter of 1983, a veteran operator at Piney Wood — the pseudonym Shoshana Zuboff gave the pulp mill in her field notes — stood at his station on the mill floor in the American South and did something he had done ten thousand times before. He reached into the flow of wood pulp moving through the digester, felt its consistency between his fingers, and adjusted the chemical feed. He did not consult a chart. He did not read a gauge. He felt the pulp, and he knew. The knowledge lived in his hands, in the nerve endings that had been calibrated across decades of repetitive, attentive practice, in the specific way his fingers registered the difference between pulp that was cooking properly and pulp that had gone too far or not far enough.

Zuboff spent years watching workers like him. She embedded herself in paper mills, in telecommunications companies, in banks undergoing computerization, not as a consultant offering solutions but as a scholar documenting a transformation that the people inside it could feel but could not name. What she found was that the workers who operated industrial processes through direct physical engagement possessed a form of knowledge she came to call "action-centered skill" — knowledge that resided not in the mind's explicit reasoning but in the body's accumulated intuition, in the hands that could detect a temperature shift of two degrees, in the ears that could hear the difference between a machine running smoothly and a machine about to fail. This knowledge was real. It was precise. It had taken years to develop. And it was about to become economically irrelevant.

When the paper mills computerized, the workers who had spent decades developing their feel for the process were moved from the mill floor to a control room. They sat in front of screens. The screens displayed numbers — temperatures, pressures, flow rates, chemical compositions — that represented the same process they had once touched. The representation was accurate. In many ways it was more accurate than the body's feel, capable of detecting variations too subtle for human fingers. But the workers reported, with a consistency that Zuboff found striking, that something essential had been lost. They could see the numbers. They could not feel the pulp. The cognitive feedback loop that had connected their bodies to the production process — the loop through which understanding was built, layer by layer, through thousands of hours of tactile engagement — had been severed.

The severance was not merely psychological. It was epistemological. The workers were not simply uncomfortable with new technology. They were experiencing the extinction of a way of knowing. The knowledge that had made them valuable, the knowledge that separated a twenty-year veteran from a first-year trainee, was knowledge that could not survive the migration from hands to screens. It was embodied, meaning it existed in the specific relationship between the worker's body and the material being worked. Remove the body from the material, and the knowledge had no substrate in which to live.

Zuboff's insight — the one that has shaped every subsequent analysis of technology and work, even among scholars who have never read her directly — was that this was not a side effect of computerization. It was its central dynamic. Every technology that interposes a layer of abstraction between the worker and the work simultaneously destroys one form of knowledge and creates the conditions for another. The computerized paper mill destroyed the action-centered skill of the mill floor worker. But it also generated data about the production process that had never been available before — data that, if properly interpreted, could enable forms of understanding that hands-on operation could never support. The question was whether the new knowledge could absorb the expertise displaced by the old.

That question, posed in 1988, is the question of the AI moment. And the answer, nearly four decades later, is still being determined.

Edo Segal's account of the Trivandrum engineering team in The Orange Pill contains, embedded within its narrative of exhilaration and productive vertigo, a case study that Zuboff's analysis anticipated with remarkable precision. The senior engineer who spent his first two days oscillating between excitement and terror was not experiencing a novel emotion. He was experiencing the same disconnection that the Piney Wood operators experienced in their control room — the severance of the cognitive feedback loop between embodied practice and understanding, transposed from the industrial to the cognitive domain.

The senior engineer's twenty years of implementation work — writing code, debugging systems, resolving dependency conflicts, navigating the specific friction of making software do what he intended — were not merely labor in the way that stacking boxes is labor. They were a form of knowing. Each hour spent debugging deposited, as Segal describes it, a thin layer of understanding. The layers accumulated over years into something solid — architectural intuition, the capacity to feel that a codebase was wrong before being able to articulate what was wrong about it, the judgment that separated a competent developer from an exceptional one. This knowledge was action-centered in precisely Zuboff's sense: it resided not in explicit rules that could be written down and transferred but in the body's accumulated engagement with the material, in the specific relationship between the engineer's mind and the code.

When Claude Code absorbed the implementation — when the friction of writing, debugging, and resolving was replaced by the fluency of a conversation in natural language — the feedback loop that had built that knowledge was severed. The engineer discovered, as Segal reports, that what remained was judgment, taste, architectural instinct. He discovered that this remainder was "the part that mattered." But the discovery was accompanied by grief, because the process that had built those capacities was the same process that had been eliminated. The question Zuboff would press — the question that Segal's account raises without fully resolving — is whether the conversation with AI creates a new feedback loop capable of building equivalent depth through different means, or whether it produces a generation of workers who possess the output of judgment without having undergone the process that develops it.

The paper mill workers Zuboff studied were not, in the main, absorbed into the new knowledge system that computerization created. The informating potential of the technology — the new data, the new forms of understanding it made possible — was captured primarily by managers and engineers, not by the workers whose embodied knowledge it had displaced. The workers who had spent decades developing their feel for the pulp were retrained to read screens, but the reading was thin compared to the touching. They could monitor. They could report anomalies. But the deep engagement with the material, the kind of engagement that generates genuine understanding, had been severed, and nothing of equivalent depth had taken its place.

This is the pattern that Zuboff's framework forces the AI analyst to confront. The question is not whether AI is powerful. It manifestly is. The question is not whether it creates new cognitive demands. It manifestly does — the demand to evaluate, to contextualize, to integrate machine-generated output with human judgment. The question is whether these new demands are deep enough to constitute genuine knowledge work, or whether they are a cognitive veneer stretched over what is fundamentally an automating displacement. The difference between the two is the difference between a transition that elevates and a transition that hollows out, and from the outside — from the vantage of the quarterly earnings report or the productivity metric — they are nearly indistinguishable.

Segal argues for depth. He describes the ascending friction thesis — the principle that each technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor. The engineer freed from implementation friction encounters the harder friction of vision, architecture, product judgment. The difficulty does not disappear. It climbs. This is, in Zuboff's vocabulary, an argument about the informating dividend: the claim that AI's automating function is accompanied by an informating function large enough to create genuinely new, genuinely demanding cognitive work.

But Zuboff's empirical record introduces a complication that optimistic accounts of ascending friction tend to glide past. In the paper mills, the informating potential was real. The data was there. The new understanding was possible. But the potential was not automatically realized. It required institutional structures — training programs, organizational redesigns, new forms of authority and accountability — that most organizations did not build. The technology created the possibility of informating. The institutions determined whether the possibility was realized. And in the majority of cases Zuboff studied, the institutions chose the cheaper path: automation without informating, displacement without elevation, the extraction of the technology's cost-saving potential without the investment required to capture its knowledge-creating potential.

The parallel to the AI moment is direct. Segal's Trivandrum team represents one outcome — the outcome in which a leader invests in training, in organizational redesign, in the slow work of helping workers develop the evolved intellective skills that AI demands. But Segal himself acknowledges the competing arithmetic: five people doing the work of a hundred. The boardroom conversation where the twenty-fold productivity multiplier sits on the table, clean and seductive, alongside the question that every investor eventually asks — if five people can do the work of a hundred, why pay a hundred? That arithmetic is the gravitational force that, in Zuboff's documented history, pulls organizations toward automating and away from informating. The technology creates both possibilities. The institution chooses one. And the choice, more often than not, follows the money.

The Piney Wood operator lost his feel for the pulp. The senior engineer in Trivandrum lost his feel for the code. In both cases, what was lost was a form of embodied knowledge built through years of friction between the worker and the material. In both cases, what replaced it was a screen — a digital representation of the process that could be read but not felt. In both cases, the transition was experienced as simultaneously liberating and diminishing: liberating because the drudgery was real and its removal was genuine relief, diminishing because the drudgery had been interleaved with moments of deep understanding that could not be separated from the struggle that produced them.

The question Zuboff's framework forces is not whether the liberation is real. It is whether the diminishment is being measured. Whether the organizations adopting AI are tracking not just the productivity gain — the twenty-fold multiplier, the collapsed timelines, the features shipped per quarter — but the knowledge loss: the erosion of the feedback loops through which deep understanding is built, the thinning of the cognitive substrate on which judgment depends.

The paper mills did not track this loss. They tracked output, cost, efficiency. The loss of embodied knowledge showed up only years later, when the workers who had been retrained to read screens made errors that the workers who had felt the pulp would never have made — errors of interpretation that revealed the thinness of the knowledge that had replaced the depth. By then, the workers who possessed the deep knowledge had retired or been laid off, and the knowledge itself had no human substrate in which to persist.

The AI moment is proceeding along the same trajectory, at a speed that compresses the consequences from decades to years. The question is not whether the pattern will recur. The pattern is already recurring. The question is whether the institutions being built around AI — the training programs, the organizational structures, the cultural norms — are adequate to capture the informating dividend before the automating displacement renders the question moot. Zuboff's career of empirical observation suggests that the default institutional response is inadequate, that the cheaper path of automation without informating is the path most organizations will take unless compelled by forces outside the market — by regulation, by collective action, by the kind of institutional design that does not emerge spontaneously from quarterly earnings pressure.

The paper mill worker felt the pulp. The engineer felt the code. The knowledge lived in the feeling. The machine eliminated the feeling. What grows in its place is the unfinished question of the smart machine, and it has been unfinished for nearly four decades because the institutions responsible for answering it have consistently chosen the path that does not require an answer.

---

Chapter 2: From Touching to Reading to Conversing

The history of the human relationship with machines is a history of progressive abstraction, and each layer of abstraction has changed not merely what the worker does but what the worker knows and how the worker knows it.

Zuboff documented the first great cognitive transition of the computer age: the migration from touching to reading. The paper mill worker who operated the digester by hand possessed knowledge that was inseparable from physical contact with the material. The computerized worker who monitored the same digester from a control room possessed knowledge that was entirely symbolic — numbers on a screen, representations of a reality that could no longer be directly experienced. The transition was not merely a change of medium. It was a change of epistemology. The worker's relationship to truth itself had been altered. Where truth had once been verified by touch — by the feel of the pulp, by the sound of the machine, by the smell of the chemicals — it was now verified by reading, by the interpretation of symbols that stood for the physical reality the worker could no longer access.

This transition produced what Zuboff called "intellective skill" — the cognitive capacity required to work with abstracted, symbolically represented information. Intellective skill was genuinely demanding. It required the ability to construct mental models of physical processes from digital representations, to hold multiple variables in working memory simultaneously, to detect anomalies in data streams that moved faster than intuition. Workers who developed strong intellective skill became more capable in certain dimensions than the hands-on operators they had replaced. They could monitor more variables, detect subtler patterns, respond to a wider range of conditions. But the skill was different in kind from the action-centered skill it supplanted, and the workers who possessed it reported a persistent sense of distance from the work — a feeling that the screen, no matter how accurate, was not the thing itself.

The AI moment introduces a third transition, one that Zuboff's framework anticipated in its logical structure but could not have predicted in its specific form: the migration from reading to conversing. The large language model does not present information for the worker to interpret, in the way a digital display presents temperatures and pressures. It generates interpretations — analyses, drafts, solutions, arguments — for the worker to evaluate. The cognitive demand shifts from making sense of raw data to judging the quality of sense that has already been made.

This shift is not a refinement of the previous transition. It is a qualitative break. The difference between reading a screen and evaluating a conversation is the difference between cooking from raw ingredients and tasting a dish someone else has prepared. The first requires you to understand the ingredients, the chemistry, the sequence of operations. The second requires you to understand the intended result well enough to know whether it has been achieved — a form of expertise that is in some ways more demanding than the first, because it operates at the level of judgment rather than execution, and judgment requires precisely the kind of deep domain knowledge that the elimination of hands-on execution threatens to erode.

Segal describes this transition with the specificity of a builder who has lived through it. When he worked with Claude Code to build a component of Napster Station, he described the problem in plain English — not simplified language, not structured commands, but the language of his actual thinking, with all its mess and half-formed implications. Claude responded not with a literal translation of his words but with an interpretation, an inference about what he was actually trying to accomplish. The interaction was conversational in the full sense: iterative, contextual, responsive to implication as well as statement.

This conversational interface is what makes the AI transition fundamentally different from the computerization transition Zuboff documented. The control room worker who read a digital display was still performing interpretation — translating symbols into understanding, constructing meaning from data. The builder who converses with Claude is performing evaluation — assessing whether the meaning constructed by the machine matches the meaning intended by the human. The cognitive operation has ascended from construction to assessment, from building understanding to auditing it.

Zuboff's framework predicts that this ascent will produce a new form of intellective skill — call it evaluative intellective skill — that is both more demanding and more consequential than the interpretive intellective skill required by the control room. More demanding because the machine's output is sophisticated, confident, and often correct, which means the evaluator must possess enough domain knowledge to detect the cases where confidence is unwarranted. More consequential because the evaluator's judgment determines whether the machine's output enters the world — whether the code is deployed, the brief is filed, the product is shipped.

But the framework also predicts a danger that the transition's advocates tend to understate. If the evaluative skill depends on the same deep domain knowledge that hands-on practice builds — if you need to have written code for twenty years to evaluate code you did not write — then the elimination of hands-on practice threatens to erode the foundation on which evaluation depends. The evaluator who has never written code, who has only ever evaluated code produced by a machine, may lack the experiential substrate required to detect the machine's characteristic failures. The evaluation becomes superficial — a check of whether the output looks right rather than whether it is right, a judgment of plausibility rather than correctness.

Segal's account of the Deleuze fabrication in The Orange Pill is an exact illustration of this danger. Claude produced a passage connecting Csikszentmihalyi's flow state to a concept it attributed to Gilles Deleuze. The passage was elegant. It sounded like insight. Segal liked it, moved on, and caught the error only the following morning, when something nagged. The philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze — but not obvious to a reader evaluating the passage for plausibility rather than accuracy.

The failure mode is structural, not incidental. The conversational interface produces output that is, by design, optimized for plausibility — for sounding right, for fitting the expected pattern, for satisfying the evaluator's implicit criteria. The smoother the output, the harder the evaluation becomes, because the surface quality of the prose conceals the depth (or absence) of the reasoning beneath it. A digital display that shows an incorrect temperature is detectably wrong: the number does not match the worker's embodied sense of the process. A conversational AI that produces an incorrect but plausible argument is detectably wrong only to an evaluator who possesses independent knowledge of the domain — knowledge that, in the absence of hands-on practice, may not exist.

This is the epistemological trap at the heart of the AI transition. The technology demands evaluation. Evaluation demands domain knowledge. Domain knowledge is built through practice. The technology eliminates practice. The circle closes, and inside it, the capacity for evaluation erodes at the same rate as the practice that sustains it.

Zuboff observed exactly this dynamic in the paper mills. Workers who had been moved from the floor to the control room were initially capable evaluators of the digital displays, because they possessed the embodied knowledge against which the displays could be checked. A temperature reading that contradicted their feel for the process triggered investigation. But as the experienced workers retired and were replaced by workers who had never touched the pulp, the capacity for critical evaluation diminished. The new workers could read the displays competently. They could not detect when the displays were wrong, because they had no independent source of knowledge against which to check them. The technology had eliminated the epistemic foundation required to evaluate the technology.

The timeline in Zuboff's paper mills was generational — decades for the experienced cohort to retire and the inexperienced cohort to replace them. The timeline in the AI transition is compressed to years. The senior developer who possesses the embodied coding knowledge required to evaluate Claude's output is not retiring in twenty years. The senior developer's practice is being eliminated now, in real time, by the same tool whose output requires that practice to evaluate. The erosion of the evaluative foundation is happening simultaneously with the demand for evaluation, not sequentially, and the speed of the compression is what makes the AI transition more dangerous than any previous smart machine transition.

The progression from touching to reading to conversing is a progression of increasing cognitive distance from the material being worked. Each step in the progression has produced genuine gains — in precision, in scale, in the range of problems that can be addressed. Each step has also produced genuine losses — in embodied understanding, in the capacity for critical evaluation, in the worker's relationship to the work. The gains are measurable and celebrated. The losses are subtle and cumulative and do not appear in any quarterly report.

The conversational interface, the third stage in this progression, is the most seductive and the most dangerous — seductive because it feels like collaboration, because it mimics the intellectual intimacy of working with a capable colleague, because it meets the worker in the worker's own language rather than requiring the worker to learn the machine's. Dangerous because the seduction conceals the abstraction. The worker who converses with Claude feels closer to the work than the worker who reads a digital display. But the feeling is an illusion. The distance has increased, not decreased. The worker is now three layers of abstraction from the material: from the physical process, to the digital representation, to the machine's interpretation of the digital representation, presented in language designed to feel like understanding.

The question is not whether this progression can be reversed. It cannot. The question is whether the institutions surrounding the technology — the training programs, the organizational structures, the cultural norms — will invest in building the evaluative intellective skill that the progression demands, or whether they will accept the surface plausibility of the conversational interface as a substitute for the depth it requires. Zuboff's empirical record, accumulated across four decades of observing smart machine transitions, offers a prediction: the cheaper path will be chosen, the surface will be accepted, and the erosion will proceed undetected until the consequences become impossible to ignore and expensive to reverse.

---

Chapter 3: The Informating Dividend

The most important distinction in Zuboff's intellectual architecture — the distinction that separates her analysis from every other critique of technology and work — is the recognition that automation and informating are not opposites. They are simultaneous, co-occurring dynamics of the same technological event. Every technology that automates also informates, to some degree. The question is the ratio between the two, and whether the informating dimension is large enough, and distributed widely enough, to compensate for the displacement that automation produces.

The computerized paper mill automated the physical operation of the digester. It also informated — generating continuous streams of data about temperatures, pressures, chemical compositions, and flow rates that had never been available in the era of hands-on operation. This data was genuinely new. It represented an expansion of what could be known about the production process. A worker who could interpret it — who possessed the intellective skill to construct mental models from symbolic representations — could understand the process at a level of granularity and precision that the most experienced hands-on operator could never achieve. The informating dividend was real. The new knowledge was available. The possibility of deeper, more comprehensive understanding existed.

But possibility and realization are not the same thing. Zuboff's most sobering finding — the finding that separates her work from the technologist's optimism — was that the informating potential of computerization was, in the majority of the workplaces she studied, unrealized. The data was generated. The knowledge was available. But the institutional investment required to help workers develop the intellective skill necessary to interpret the data — the training, the organizational redesign, the redistribution of authority that would allow floor workers to engage with the information rather than merely monitor it — was, in most cases, not made. The cheaper path prevailed: automation without informating, displacement without elevation, the extraction of cost savings without the investment in human development.

The AI moment is producing an informating dividend of unprecedented scale. AI tools do not merely generate data about production processes. They generate interpretations, analyses, connections between domains, patterns in datasets too large for human cognition to process, hypotheses that would take human researchers months to formulate. The dividend is not speculative. It is visible in every domain where AI tools have been deployed: in drug discovery, where AI-generated hypotheses have identified molecular targets that escaped decades of human research; in materials science, where AI-driven analysis has discovered alloys with properties no human metallurgist predicted; in software development, where AI tools have revealed architectural patterns that experienced developers had not considered.

Segal documents the dividend in the specific context of his engineering team. The backend engineer who had never written frontend code was able to build a complete user-facing feature because Claude handled the translation between domains. The designer who had never touched backend systems was able to implement features end to end. In each case, the AI tool generated new cognitive possibilities — new connections between domains, new forms of integrated understanding — that the traditional division of labor had made inaccessible. The informating dividend was not merely more data. It was more reach: the expansion of what any individual worker could attempt, understand, and accomplish.

But the question Zuboff's framework forces is not whether the dividend exists. It is who captures it. And the answer, in the political economy of AI deployment, is not as democratic as the technology's advocates suggest.

Consider the structure of the dividend's distribution. The AI tool generates new possibilities for understanding. Those possibilities are realized by workers who possess the judgment to evaluate the tool's output, the domain knowledge to contextualize it, the strategic vision to direct it toward valuable ends. These capacities are not uniformly distributed. They are concentrated among workers who already possess deep expertise — the senior engineers, the experienced architects, the leaders with decades of accumulated judgment. The informating dividend flows, in the first instance, to the people who are already most capable. It amplifies existing expertise rather than distributing new expertise to those who lack it.

Segal's own account confirms this pattern, even as it argues for democratization. The most striking productivity gains he describes — the twenty-fold multiplier, the thirty-day product build — were achieved not by novices but by experienced professionals whose deep knowledge of what to build was liberated from the friction of how to build it. The senior engineer's judgment became more valuable, not less. The leader's vision became more potent, not less. The tool amplified what was already there. For those who had less to amplify — less experience, less domain knowledge, less strategic vision — the amplification was correspondingly smaller.

This is not an argument against democratization. The developer in Lagos whom Segal describes does gain real capability from AI tools. The floor rises. But the ceiling rises faster. The gap between what an experienced professional can accomplish with AI and what a novice can accomplish with the same tool is wider than the gap that existed without the tool, because AI amplifies proportionally: more expertise in produces more capability out. The informating dividend is real, but its distribution follows the contours of existing inequality rather than flattening them.

Zuboff's analysis of the paper mills revealed exactly this dynamic. The informating potential of computerization was captured disproportionately by managers and engineers — the people who already possessed the institutional authority and the educational background to engage with the new data. The floor workers, whose embodied knowledge had been displaced by automation, were offered monitoring tasks that utilized a fraction of the new data's potential. The knowledge was available to everyone in theory. In practice, the institutional structures — who received training, who was granted access, who was authorized to act on the data — ensured that the dividend flowed upward.

The AI moment is reproducing this dynamic at the speed of a market that rewards efficiency above all else. The organizations Segal describes — the ones that invest in training, that keep full teams, that choose to distribute the informating dividend broadly rather than concentrating it in a reduced headcount — are making a choice that the market does not naturally reward. The competing arithmetic is always present: fewer workers, higher margins, faster returns. And the organizations that choose the competing arithmetic — that automate without informating, that capture the productivity gain as cost reduction rather than capability expansion — will, in many cases, show better short-term results. The market selects for short-term results. The informating dividend is a long-term investment. The tension between these two timescales is the structural force that, in Zuboff's documented history, prevents the informating potential from being realized.

The Berkeley work-intensification study that Segal cites in The Orange Pill provides empirical evidence of the dividend's distortion. Workers who adopted AI tools did not experience the dividend as liberated time for deeper thinking. They experienced it as more work — an expansion of task scope, a colonization of cognitive rest periods, an acceleration of the production cycle that consumed the freed capacity before it could be invested in the new cognitive demands the technology had created. The informating dividend was absorbed not by the workers but by the production process. The new knowledge AI generated was used not to deepen understanding but to increase output. The informating potential was converted into automating pressure.

This conversion — the institutional transformation of informating potential into automating pressure — is the central pathology that Zuboff's framework identifies in every smart machine transition. The technology creates two possibilities simultaneously: the possibility of deeper understanding and the possibility of cheaper production. The institution chooses between them, and the choice is not made once but continuously, in every budget allocation, every hiring decision, every performance review that rewards output over understanding. The cumulative effect of a thousand small institutional choices, each one rational in isolation, is the systematic suppression of the informating dividend in favor of the automating function.

The AI moment is the most extreme expression of this dynamic because the informating potential is larger than any previous technology has produced, and the automating pressure is correspondingly more intense. The same tool that enables an engineer to understand systems at a level of integration never before possible also enables an organization to reduce its engineering headcount by eighty percent. The same tool that enables a student to explore connections between domains that no curriculum has ever bridged also enables a school to assign more homework with less faculty investment. The dual potential is real. The institutional choice is real. And the default institutional choice, absent deliberate intervention, follows the money.

Zuboff's career has been organized around a single proposition: that the informating potential of technology can be realized, but only through institutional structures that are designed to realize it — structures that distribute access to new knowledge broadly, that invest in the human capacities required to engage with that knowledge, that resist the gravitational pull of automation's cheaper path. The proposition is neither optimistic nor pessimistic. It is conditional. The technology creates the possibility. The institution determines the outcome. And the outcome, in the absence of deliberate institutional design, defaults to automation — to displacement without elevation, to extraction without investment, to the capture of the dividend by the few at the expense of the many.

The informating dividend of AI is the largest in the history of human tool use. Whether it is realized or squandered is a question not of technology but of political economy — of who builds the institutions, who designs the training, who insists that the dividend be distributed rather than concentrated. That question has been asked at every smart machine transition for four decades. It has never been adequately answered. The AI moment is either the transition where the answer finally arrives or the transition where the question becomes too expensive to ask.

---

Chapter 4: The Extraction of Experience

In 2019, Shoshana Zuboff published a seven-hundred-page work that reframed the entire digital economy as a system of extraction. The Age of Surveillance Capitalism documented, with the exhaustive empiricism that had characterized her earlier research, how the leading technology platforms of the twenty-first century had discovered a new form of raw material: human experience itself. Not human labor — that had been capitalism's raw material for centuries. Human experience — the totality of what a person does, says, feels, searches for, clicks on, lingers over, and abandons in the course of living a life mediated by digital systems.

The architecture of extraction, as Zuboff mapped it, operates through a specific sequence of operations. First, human experience is claimed as free raw material — "behavioral surplus" in her terminology — by the platforms through which that experience is conducted. Second, the behavioral surplus is fed into what Zuboff called the "factory" of surveillance capitalism: the computational apparatus of machine intelligence, what the industry calls artificial intelligence. Third, the factory produces "prediction products" — computational assessments of what a given individual or population will do next. Fourth, the prediction products are sold in "behavioral futures markets" to business customers whose interest is not in understanding human behavior but in modifying it — in shaping what people do, buy, believe, and choose in ways that serve the purchaser's commercial objectives.

The sequence is important because it reveals that AI, in Zuboff's framework, is not a separate phenomenon from surveillance capitalism. It is the mechanism through which surveillance capitalism operates. The behavioral surplus — the rivers of data extracted from human experience — converges in the computational apparatus of machine intelligence, and what emerges is not understanding in the humanistic sense but prediction in the commercial sense. As Zuboff told an interviewer: "The pipes filled with behavioural surplus all converge in one place, and that's the factory. In this case, the factory is what we call machine intelligence, or artificial intelligence. What comes out the other end, as in any factory, are products. In this case, the products are computational products, and what they compute are predictions of human behaviour."

This framework, applied to the AI moment that The Orange Pill describes, produces an analysis that the book's exhilaration tends to obscure. When Segal describes writing his book with Claude — the iterative conversation, the ideas exchanged, the connections discovered, the moments of genuine intellectual surprise — he is describing an experience of collaboration. He is also describing an experience of extraction. Every prompt he entered, every revision he requested, every direction he pursued and every direction he abandoned, every moment of creative struggle visible in the pattern of his interactions — all of this constitutes behavioral surplus in Zuboff's precise sense: data about the user's intentions, methods, creative processes, judgments, and preferences that is claimed by the platform as raw material for the improvement of its computational products.

Segal received a tool of extraordinary capability. Anthropic received a detailed map of how a specific type of creative professional thinks, works, struggles, and produces. The exchange is not symmetric. The asymmetry is not incidental. It is, in Zuboff's analysis, the defining structural feature of the relationship between users and platforms in the surveillance capitalist economy.

By December 2025, Zuboff's position had hardened from a call for regulation to a call for abolition. "AI is surveillance capitalism continuing to evolve and expand," she told the Spanish newspaper El País in an interview that named her the most influential technology thinker in the world. "There are very few things left in this world that we can do without contributing to it. That's what makes it intolerable." At a Harvard Kennedy School panel the following month, she was more specific: "We have to abolish — not just regulate — the fundamental mechanisms of surveillance capitalism, beginning with the secret, massive-scale extraction of the human and its declaration as a corporate asset."

The trajectory of her position — from documentation to critique to the demand for abolition — tracks the trajectory of the extraction itself. In 2019, when The Age of Surveillance Capitalism was published, the primary vectors of behavioral surplus extraction were search engines, social media platforms, and the internet of things — systems that captured the residue of daily life and converted it into prediction products. By 2025, the extraction had expanded into domains that surveillance capitalism's earlier architecture had not reached: the domain of cognitive labor, of creative production, of the kind of deep intellectual work that The Orange Pill celebrates as the highest expression of human-AI collaboration.

The expansion is structural, not conspiratorial. The business model of large language models requires data — vast, diverse, continuously refreshed data about how humans think, write, code, design, argue, and create. The models improve in proportion to the volume and variety of the behavioral surplus they process. Every interaction with an AI tool generates new data. The data improves the model. The improved model attracts more users. More users generate more data. The cycle is self-reinforcing, and its logic drives the extraction deeper into the fabric of human cognitive life with each iteration.

Consider what is extracted from a single session of the kind Segal describes — an evening of writing with Claude, the house silent, the conversation flowing between the human mind and the machine's responses. The prompts reveal what the writer is thinking about. The revisions reveal what the writer values. The abandoned directions reveal what the writer considered and rejected. The accepted suggestions reveal the writer's aesthetic criteria, cognitive blind spots, areas of confidence and uncertainty. The timing of the interactions — the pauses, the bursts of activity, the moments of apparent frustration and apparent satisfaction — reveals the rhythmic architecture of the creative process itself.

None of this data is incidental. All of it is behavioral surplus in Zuboff's precise sense: data about the user's experience that exceeds what is necessary to provide the service the user requested and that is claimed by the platform for purposes the user did not choose and may not perceive. The writer requested a writing partner. The platform received a comprehensive behavioral profile of the writer's cognitive architecture — a profile that is more detailed, more intimate, and more commercially valuable than anything that search behavior or social media activity could produce, because cognitive labor reveals the deep structure of how a person thinks, not merely what they search for or whom they follow.

Zuboff would identify a further dimension of extraction that the productivity discourse tends to ignore entirely. When an engineer uses Claude Code to build a software product, the product itself represents one form of value creation — the value that the engineer and the engineer's employer capture. But the interaction that produced the product represents a second form of value creation — the value that the platform captures in the form of training data that improves Claude's capabilities across every domain, for every user, in perpetuity. The engineer's domain expertise, externalized through the interaction, becomes part of the computational apparatus that will eventually be used to automate the engineer's own role.

The worker trains the machine that replaces the worker. This dynamic is not new — Zuboff identified its precursors in the smart machine transitions of the 1980s, where workers were asked to document their embodied knowledge in expert systems that would eventually render their employment unnecessary. But the AI moment has industrialized the dynamic to a degree that prior generations of technology could not achieve. The extraction is continuous, automatic, built into the architecture of the interaction itself. The worker does not choose to train the machine. The training is a structural consequence of use.

Segal's transparency about writing with Claude — his willingness to name the collaboration, to describe the moments where the machine's contribution shaped the book's argument — is an ethical gesture that Zuboff's framework complicates without invalidating. The transparency addresses the question of authorial honesty. It does not address the question of behavioral extraction. Segal tells the reader that Claude contributed to the book. He does not — cannot, perhaps — tell the reader what Anthropic learned about his cognitive architecture from the interaction, how that learning will be used, or who will benefit from the prediction products that his behavioral surplus will help refine.

The asymmetry is not Segal's fault. It is structural. It is built into the business model of every company that offers AI tools to users. And it is, in Zuboff's analysis, the feature that distinguishes surveillance capitalism from every previous form of capitalism: not the exploitation of labor, which is as old as markets, but the exploitation of experience — the unilateral claim on the totality of what a person does and thinks and creates as raw material for someone else's production process.

Cory Doctorow has challenged Zuboff's framework at precisely this point, arguing that the behavioral modification she describes — the capacity of prediction products to actually shape human behavior — is largely "snake oil," that the advertising industry's claims about its ability to influence behavior are exaggerated, and that the real problem is monopoly power rather than behavioral manipulation. The debate is unresolved and consequential. If Doctorow is correct, the extraction of behavioral surplus is primarily an economic problem — a question of market concentration and data monopoly that can be addressed through antitrust enforcement. If Zuboff is correct, the extraction is an epistemic problem — a systematic assault on human autonomy that requires not merely regulation but the abolition of the extractive mechanism itself.

The AI moment intensifies the stakes of this debate because the behavioral surplus generated by AI interactions is qualitatively different from the behavioral surplus generated by search and social media. Search behavior reveals what a person wants to know. Social media behavior reveals what a person wants to project. AI interaction reveals how a person thinks — the cognitive architecture itself, the patterns of reasoning and judgment and creativity that constitute the person's intellectual identity. If this data is extracted, processed, and sold, the prediction products it enables are correspondingly more intimate and more powerful than anything the previous generation of surveillance capitalism could produce.

Zuboff's framework does not deny that the tools are valuable. It does not deny that the informating dividend is real. It insists that the value and the extraction are simultaneous, co-occurring features of the same technological event, and that celebrating the value without accounting for the extraction is an analytical failure with democratic consequences. The question the framework poses to every user of AI tools — to every builder, every writer, every engineer who enters a prompt and receives a response — is not whether the tool helps. The question is what the tool takes. And whether the taking is visible, understood, and consented to, or whether it operates, as surveillance capitalism has always operated, in the shadows of the user's awareness, claiming the most intimate dimensions of human experience as raw material for a production process the user did not choose and cannot control.

---

Chapter 5: Authority and the Redistribution of Knowledge

Every technology transition Zuboff studied produced a redistribution of knowledge, and every redistribution of knowledge produced a redistribution of power. The two movements were inseparable. Knowledge and authority were bound together so tightly in the organizations she observed that to alter one was to destabilize the other, and the destabilization was never peaceful, never welcomed by those whose authority depended on the knowledge being redistributed.

In the paper mills of the 1980s, the redistribution followed a specific pattern. Before computerization, the floor workers possessed knowledge that managers did not — the embodied, action-centered skill that came from decades of physical contact with the production process. This knowledge conferred a form of authority that was informal but real. The manager who needed to know whether the digester was operating correctly had to ask the worker. The worker's body was the instrument of verification. The worker's judgment was, in this specific domain, sovereign. The manager could issue orders about schedules and quotas, but the manager could not override the worker's feel for the pulp, because the manager did not possess the feel. The asymmetry of knowledge produced an asymmetry of authority that ran, in this one domain, against the organizational hierarchy.

Computerization inverted the asymmetry. When the production process was represented digitally — when temperatures, pressures, flow rates, and chemical compositions appeared on screens that anyone with access could read — the knowledge that had been the exclusive possession of floor workers became available to managers, engineers, and executives. The screens democratized access to production data. But democratization, in this context, was not neutral. It flowed upward. The managers who gained access to the data already possessed the institutional authority to act on it. The workers who lost exclusive possession of the knowledge lost the informal authority that exclusivity had conferred. The redistribution of knowledge was, simultaneously, a redistribution of power from the bottom to the top of the organizational hierarchy.

Zuboff documented the workers' response to this redistribution, and the response was not primarily about technology. It was about dignity. Workers who had been valued for what they knew — whose expertise had been the source of their standing in the workplace, their leverage in negotiations, their sense of professional identity — found that their knowledge had been externalized into a system that anyone could read. The thing that made them irreplaceable had been made available to everyone. Their reaction was not irrational resistance to progress. It was the rational response of people whose social position depended on an epistemic monopoly that had been broken.

The AI moment is producing a redistribution of knowledge so radical that Zuboff's paper mill analysis, startling as it was in 1988, reads as a preliminary sketch for what is happening now. AI tools do not merely make specialist knowledge available to managers. They make specialist knowledge available to everyone — to junior developers, to non-technical founders, to students, to anyone who can describe what they want in natural language. The redistribution is not upward, as in the paper mill. It is omnidirectional. And the authority that depended on exclusive possession of specialist knowledge is dissolving in every direction simultaneously.

Segal describes this dissolution with the specificity of someone who has watched it happen inside his own organization. A backend engineer who had never written frontend code built a complete user interface in two days. A designer who had never touched server architecture implemented features end to end. The boundaries between specializations — boundaries that had seemed as solid and permanent as departmental walls — turned out to be artifacts of the translation cost between domains. When AI eliminated the translation cost, the boundaries dissolved, and with them, the authority structures that the boundaries had supported.

The senior developer's twenty years of experience had purchased not merely skill but standing. The ability to navigate a complex codebase, to debug a system that junior developers could not understand, to make architectural decisions that required the accumulated judgment of thousands of hours of practice — these capacities were the foundation of a social position. The senior developer was consulted. Deferred to. Compensated at a premium that reflected not just productivity but authority — the authority that comes from knowing things that others do not.

When a junior developer using Claude Code can produce work that matches or exceeds the senior developer's output, the knowledge asymmetry that supported the authority structure collapses. The junior developer has not acquired the senior developer's twenty years of accumulated judgment. But the junior developer has acquired something that functions, in the short term, as an adequate substitute: access to a system that can generate output informed by the aggregate expertise of millions of developers, an expertise that is broader if not deeper than any individual's. The functional result — the code that works, the feature that ships — is indistinguishable from the senior developer's output. The experiential foundation beneath the two outputs is radically different, but the market does not pay for experiential foundations. It pays for functional results.

This is the mechanism through which the redistribution of knowledge becomes the redistribution of authority. The senior professional's resistance to AI tools is not, as the technology press often portrays it, a failure of imagination or an inability to adapt. It is a rational response to a real loss — the loss of the epistemic monopoly on which professional authority, compensation, and identity were constructed. The knowledge the senior professional possessed was not merely useful. It was scarce. Scarcity conferred value. Value conferred authority. AI has broken the scarcity, and with it, the entire chain of inference that connected knowledge to authority.

Zuboff's analysis of the paper mills revealed that the redistribution of knowledge did not produce a more democratic workplace. It produced a differently hierarchical one. The old hierarchy was based on who knew what — floor workers at the bottom of the formal hierarchy but at the top of the knowledge hierarchy in their specific domain. The new hierarchy was based on who could act on the data — managers and engineers who possessed both the institutional authority and the analytical training to interpret the digital representations. The workers who had lost their embodied knowledge were repositioned as monitors — watchers of screens, reporters of anomalies, functionaries in a system that no longer required their judgment.

The AI redistribution is following a structurally similar pattern, transposed from the industrial to the cognitive domain. The old knowledge hierarchy in software development was based on who could write what — who could navigate the full stack, who could debug at the assembly level, who could architect a system that would scale. The new hierarchy is forming around who can direct what — who can evaluate AI output with sufficient judgment to catch its failures, who can formulate the problems that AI tools are directed to solve, who possesses the strategic vision to determine what should be built.

This new hierarchy is not necessarily more equitable than the old one. The capacity to direct, to evaluate, to formulate problems worth solving — these capacities are not uniformly distributed. They correlate, in Zuboff's analysis, with the same factors that have always determined position in knowledge hierarchies: education, access to mentorship, institutional support, and the accumulated advantage of having been positioned near the top of the previous hierarchy. The senior developer whose implementation expertise has been commoditized may find that the judgment and architectural instinct built through twenty years of practice position them well in the new hierarchy. But this repositioning is not automatic. It requires the senior developer to recognize that the source of authority has shifted, to relinquish the identity built around implementation mastery, and to invest in developing the evaluative and directive capacities that the new hierarchy rewards.

The redistribution also operates between organizations, not merely within them. Segal's account of the software death cross — the compression of SaaS valuations as the cost of producing software approaches zero — is a story about the redistribution of knowledge-based authority between firms. Companies whose competitive advantage rested on the difficulty of building their software are discovering that the difficulty has been eliminated. The knowledge that was scarce — the ability to write a CRM system, a project management tool, a customer service platform — is now abundant. The authority that scarcity conferred, the power to charge subscription fees that reflected the cost of production rather than the value of the ecosystem, is dissolving.

What remains valuable, as Segal argues, is not the software but the ecosystem — the data layer, the integrations, the institutional trust, the workflow patterns embedded in the muscle memory of millions of users. This is a higher-order form of knowledge, one that cannot be reproduced by an AI tool in an afternoon, and the authority it confers is correspondingly more durable. But the redistribution is nevertheless real: companies that possessed only the lower-order knowledge — the ability to write code — are being sorted out of the hierarchy, while companies that possess the higher-order knowledge — the ecosystem, the institutional trust, the accumulated understanding of how organizations actually use software — are being sorted in.

Zuboff's framework suggests that every redistribution of knowledge produces winners and losers, and that the determination of who wins and who loses is not a technical question but a political one — a question about who builds the institutions that govern the redistribution, who sets the terms on which the new hierarchy operates, who ensures that the gains of the redistribution are distributed broadly rather than concentrated among those who were already positioned to capture them.

The paper mill workers who lost their embodied knowledge did not lose it because the technology was malicious. They lost it because the institutional structures surrounding the technology — the management decisions, the training investments, the organizational designs — were not built to preserve and transform their knowledge. They were built to extract the technology's cost-saving potential and move on. The redistribution of knowledge was not managed. It was allowed to happen, and the default outcome of an unmanaged redistribution was the concentration of the new knowledge, and the new authority, among those who already possessed institutional power.

The AI redistribution is proceeding along the same unmanaged trajectory. The organizations that invest in helping their workers develop the capacities the new hierarchy rewards — the evaluative skill, the directive capacity, the judgment that the automation of implementation has exposed as the truly scarce resource — are the exception, not the rule. The rule is the competing arithmetic: fewer workers, higher margins, the conversion of the knowledge redistribution into a headcount reduction. The authority that the old knowledge conferred is dissolving. The authority that the new knowledge requires is not being built at the scale the transition demands. And in the gap between the dissolution and the construction, the workers whose knowledge was redistributed are left without the epistemic foundation on which either the old authority or the new authority can stand.

The question is not whether authority will be redistributed. It is being redistributed now, in every organization where AI tools are deployed, in every industry where the cost of specialist knowledge is collapsing. The question is whether the redistribution will be governed — whether the institutions surrounding the technology will be designed to distribute the new authority broadly, to invest in the human capacities the new hierarchy demands, to ensure that the workers whose old knowledge was displaced have access to the training and support required to develop the new knowledge that replaces it. Zuboff's four decades of empirical observation offer a clear prediction about the likely outcome if governance is absent: the redistribution will concentrate authority among those who already possess it, the informating dividend will be captured by the few, and the workers who built the old knowledge that the technology displaced will bear the cost of a transition they did not choose and cannot control.

---

Chapter 6: The Worker's Dilemma in the AI Age

In every smart machine transition Zuboff studied, the workers caught inside the transformation faced a choice that was not really a choice — a dilemma in the classical sense, where both options carried costs and neither could be selected without loss. Zuboff described it with the precision of someone who had watched real people wrestle with it in real time, not as an abstraction but as a lived condition, a daily negotiation between identity and capability, between who you have been and who the machine requires you to become.

The dilemma is this: The worker must choose between resistance, which preserves identity but forfeits capability, and adaptation, which expands capability but transforms identity. The resisters maintain their sense of who they are — their professional self-conception, their relationship to the craft that defined them, the narrative of expertise and mastery that organized their working lives — at the cost of falling behind. The adapters gain new capabilities, access new tools, participate in the expanding frontier of what technology makes possible, at the cost of becoming someone they do not fully recognize — someone whose working identity is no longer anchored in the embodied skills and hard-won knowledge that had previously defined their professional worth.

Neither choice is costless. Neither is irrational. The resistance is not mere stubbornness, and the adaptation is not mere opportunism. Both are rational responses to a situation in which the ground has shifted and no available position is stable.

The paper mill workers Zuboff observed divided along precisely this line. Some clung to the identity of the hands-on operator — the worker who knew the process through touch, whose authority derived from decades of embodied practice. They resisted the control room, resisted the screens, insisted that the digital representations were not the same as the thing itself. They were right. The representations were not the same. But being right did not protect them. The technology advanced regardless of their resistance, and the workers who resisted were gradually marginalized — not fired, in many cases, but sidelined, assigned to maintenance roles or supervisory positions that preserved their employment while eliminating their relevance.

Others adapted. They learned to read the screens, to construct mental models from digital data, to operate in the symbolic environment that computerization had created. The adaptation was genuine, and the new skills were real. But the adapters reported a persistent sense of displacement — a feeling that the person who sat at the control room console was not quite the same person who had stood at the digester with their hands in the pulp. The professional identity that had been built through physical engagement with the material could not survive the migration to symbolic engagement intact. Something was carried over — domain knowledge, intuition about the process, the capacity to detect anomalies. But something was lost — the specific satisfaction of mastery, the embodied confidence that came from knowing the work through the body, the sense of being irreplaceably connected to the production process.

The AI moment has compressed the timeline of this dilemma from years to months, and the compression has made the emotional experience more acute. Segal describes a dichotomy that maps precisely onto Zuboff's dilemma: engineers who leaned into the frontier, embracing Claude Code and the radical expansion of capability it offered, and engineers who pulled back, some literally relocating to lower their cost of living in anticipation of professional displacement. Segal frames this as fight-or-flight, a primal binary triggered by the perception of existential threat. Zuboff's framework enriches the framing by revealing what the binary is actually about: not survival in the narrow economic sense but identity — the question of who you are when the thing that defined you has been absorbed by a machine.

The fight response — the engineer who leans in, who spends every available hour learning to direct AI tools, who experiences the productive exhilaration that Segal describes — is adaptation in Zuboff's sense. It expands capability at the cost of identity. The engineer who adapts is no longer primarily a coder. Coding has been absorbed. The engineer is now primarily an evaluator, a director, a judgment-layer in a system where execution is handled by the machine. This is, by many measures, a more valuable role. But it is a different role, and the transition to it requires the surrender of the professional identity that was built around coding — the identity of the person who writes, who debugs, who fights with the machine until the machine submits. That identity was forged through friction. The friction has been removed. What replaces it?

The flight response — the engineer who retreats, who lowers costs, who quietly exits the profession — is resistance in Zuboff's sense. It preserves identity at the cost of capability. The engineer who retreats can still think of themselves as a developer, a craftsperson, a builder in the traditional sense. But the preservation is a fiction maintained at increasing cost, because the market that valued the traditional builder's skills is contracting in real time. The retreat purchases psychological continuity at the price of professional relevance, and the price increases with every month that the tools improve.

Zuboff's insight — the one that separates her analysis from the standard economic account of labor displacement — is that both responses are rational. The economist sees resistance as irrational, a failure to optimize, a refusal to accept the new equilibrium. Zuboff sees it as a coherent response to a real loss — the loss of an identity that was built through decades of investment and that cannot be reconstructed on demand. The worker who resists is not failing to understand the situation. The worker is understanding it all too well and choosing the option that preserves something the economist's framework does not measure: the continuity of the self.

The adaptation is equally costly in dimensions that the productivity metric does not capture. The worker who adapts must undergo what amounts to a professional identity reconstruction — must dismantle the self-conception built around one form of expertise and construct a new self-conception around a different form. This is not a training problem. Training can teach new skills. It cannot rebuild an identity. The reconstruction takes time, requires emotional resources, and produces a period of vulnerability during which the worker is neither fully who they were nor fully who they are becoming. Zuboff's observation, based on years of watching workers navigate this transition, is that the period of vulnerability is where most of the human cost is concentrated — and where institutional support is most absent.

The AI transition has compressed the period of vulnerability to a degree that prior transitions have not approached. The paper mill workers had years. The control room was introduced gradually, the screens were added incrementally, the transition from floor to console happened over a period long enough for the workers to process the identity shift in something like real time. The AI transition is not gradual. The capability gap opened in weeks. The tools went from interesting experiments to essential infrastructure in a single quarter. The workers caught in the transition do not have years to reconstruct their professional identities. They have months. In some cases, weeks.

Segal's twenty-day training sprint in Trivandrum — during which engineers were expected not merely to learn new tools but to fundamentally reconceive their relationship to their work — is a case study in compressed identity reconstruction. The engineers who arrived on Monday as backend specialists or frontend designers had, by Friday, expanded their operational range across domains they had never touched. The expansion was real. The productivity was real. But the identity reconstruction that the expansion required — the dismantling of "I am a backend engineer" and its replacement with something broader and less defined — could not be accomplished in five days. The skills could be developed in a week. The identity required longer.

This matters because identity is not separate from performance. The worker whose professional self-conception is stable performs differently from the worker whose self-conception is in flux. The stable worker brings confidence, pattern recognition, the embodied fluency that comes from knowing who you are and what you do. The worker in identity flux brings uncertainty, second-guessing, the cognitive overhead of constantly reorienting in a landscape that has not yet stabilized. The performance gap between the two is real, measurable, and systematically ignored by organizations that treat the AI transition as a skills problem rather than an identity problem.

Zuboff's framework suggests that the institutional response to the worker's dilemma should address identity directly — should provide not merely training in new tools but structured support for the identity reconstruction that the tools demand. In practice, this means mentorship programs in which experienced workers who have navigated the transition help others through it. It means organizational norms that acknowledge the legitimacy of grief for what has been lost, rather than treating resistance as a performance problem to be managed. It means timeline expectations that account for the difference between learning a skill and rebuilding a self.

None of these responses are standard in the organizations currently deploying AI. The standard response is a training program — a week of instruction on how to use the new tools, followed by the expectation of immediate productivity gains. The gap between the institutional response and the human need is the space in which the worker's dilemma becomes the worker's crisis, and the crisis is experienced not as an economic disruption but as an existential one: Who am I now? What am I for? Am I still the person I spent decades becoming?

Segal asks this question — "What am I for?" — through the voice of a twelve-year-old child, and the question's power comes from its universality. But the question is not only a child's question. It is the question every worker in the AI transition is asking, whether they articulate it or not, and the institutions that surround them are offering no answer because the institutions are designed to measure output, not to tend to the selves that produce it.

The worker's dilemma will not be resolved by better tools or faster training. It will be resolved, if it is resolved at all, by institutions that recognize what Zuboff has spent four decades documenting: that the transformation of work is always, simultaneously, the transformation of workers — of their knowledge, their authority, their identity, their relationship to the craft that defined them — and that the human cost of the transformation is borne not by abstractions but by specific people navigating a specific loss in a specific period of vulnerability that the institution has a responsibility to address.

---

Chapter 7: Intellective Skill and Its New Demands

When Zuboff coined the term "intellective skill" in 1988, the concept addressed a specific cognitive challenge: the capacity to work with abstracted, symbolically represented information. The paper mill worker who moved from the floor to the control room needed to develop the ability to read digital displays — to construct mental models of physical processes from numbers on a screen, to hold multiple variables in working memory, to detect patterns in data streams that moved faster than intuition. This capacity was not trivial. Many workers struggled to develop it. Some never did. The skill was genuinely demanding, genuinely cognitive, and genuinely different from the action-centered skill it was meant to supplement or replace.

The concept has proven more durable than many of its peers in the sociology of technology, and the reason is structural: every subsequent abstraction in the history of computing has produced its own version of the demand for intellective skill, and each version has been more complex than the last. The programmer who worked in assembly language needed the intellective skill to think in register operations and memory addresses. The programmer who worked in a high-level language needed the intellective skill to think in objects, functions, and abstractions that concealed the machine's operations behind a symbolic layer. The programmer who worked with frameworks needed the intellective skill to think in architectural patterns that concealed both the machine and the language behind a further layer of abstraction. Each ascent up the abstraction stack placed new demands on the worker's cognitive capacity, and each set of demands was qualitatively different from the last.

AI demands an evolution of intellective skill that is not merely another rung on the same ladder. It is a change in the nature of the ladder itself. Previous abstraction layers required the worker to construct understanding from raw materials — from assembly instructions, from code, from data, from architectural patterns. The worker built the interpretation. The worker's intellective skill was constructive: it assembled meaning from components. AI reverses the direction. The machine constructs the interpretation. The worker evaluates it. The intellective skill required is no longer constructive but evaluative — no longer the capacity to build understanding from parts but the capacity to assess whether understanding that has already been built is sound.

This reversal sounds like a simplification. It is not. Evaluative intellective skill is, in several important dimensions, more demanding than constructive intellective skill, because it operates against a more sophisticated adversary. A digital display that shows an incorrect temperature is recognizably wrong to any worker who possesses basic domain knowledge. The number is either consistent with the process or it is not. The error is binary and the detection is straightforward. An AI system that produces an incorrect but plausible analysis is recognizably wrong only to a worker whose domain knowledge runs deep enough to identify the specific point where plausibility diverges from accuracy — and that divergence point is, by the nature of the system's design, concealed beneath a surface of confident, well-structured, linguistically fluent output.

Segal's account of catching the Deleuze fabrication illustrates the demand with uncomfortable precision. Claude produced a passage connecting Csikszentmihalyi's flow state to a concept it attributed to Gilles Deleuze — smooth space as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. Segal read it twice, liked it, and moved on. Only the next morning, when something nagged, did he check. Deleuze's concept of smooth space has almost nothing to do with how Claude had used it. The philosophical reference was wrong in a way that was obvious to anyone who had actually read Deleuze and invisible to anyone who had not.

The failure mode is not incidental to the technology. It is structural. Large language models produce output by predicting the most probable next token given the preceding context. When the prediction aligns with factual accuracy, the output is correct. When the prediction aligns with plausibility rather than accuracy — when the most probable next token is the one that sounds right rather than the one that is right — the output is wrong in a way that is designed, by the mathematics of the system, to be maximally difficult to detect. The error wears the same clothes as the truth. It speaks in the same register, with the same confidence, in the same well-constructed sentences. The only thing that can distinguish it from the truth is a human mind that possesses independent knowledge of the domain — knowledge that was not generated by the system and cannot be verified by the system.
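A minimal sketch may make the mechanism concrete. The candidate continuations and the probabilities below are invented for illustration; they do not correspond to any real model, and greedy selection is only the simplest of several sampling strategies. The point is narrow: the selection rule rewards the highest-scoring continuation, and nothing in that rule references truth.

```python
# Toy illustration of greedy next-token selection. The candidates and their
# probabilities are invented; no real model or dataset is represented here.

continuations = {
    "a fluent, well-phrased, but factually wrong claim": 0.46,
    "an accurate but less fluent statement": 0.31,
    "an accurate statement, phrased differently": 0.23,
}

def greedy_pick(candidates: dict[str, float]) -> str:
    """Return the highest-probability continuation, as greedy decoding would."""
    return max(candidates, key=candidates.get)

print(greedy_pick(continuations))
# Prints the fluent but wrong candidate, because it scored highest. The
# mechanism cannot distinguish a confident fabrication from a correct answer;
# only independent knowledge outside the system can.
```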

This is the paradox at the center of AI-era intellective skill. The capacity to evaluate the machine's output depends on domain knowledge that is built through the kind of deep, friction-rich engagement that the machine's efficiency is designed to eliminate. The developer who has spent years writing code by hand — struggling with syntax errors, debugging null pointer exceptions, navigating the specific frustrations that constitute the cognitive friction of implementation — has built, through that friction, an embodied understanding of how software works and fails. That understanding is precisely what is required to evaluate Claude's code output, to detect the moments where functional correctness conceals architectural fragility, where the code that works today will break tomorrow in ways that only deep experience can foresee.

But the developer who has spent years working with Claude — who has never written code by hand, who has only ever evaluated code produced by the machine — has not built that embodied understanding. The evaluative skill depends on a foundation of constructive experience that the evaluative workflow does not provide. The machine asks the worker to be a critic without having been a practitioner, a judge without having been a builder, an editor without having been a writer. And the history of every domain where criticism, judgment, and editing have been studied suggests that these capacities depend, irreducibly, on the experience of practice — that you cannot evaluate what you have never attempted to produce.

Zuboff documented this dependency in the paper mills with the specificity of direct observation. The workers who transitioned from the floor to the control room were, initially, effective evaluators of the digital displays because they possessed the embodied knowledge against which the displays could be checked. A temperature reading that contradicted their feel for the process triggered investigation. A pattern in the data that did not match their intuitive model of how the digester behaved raised a flag. The embodied knowledge functioned as an independent verification system — a second source of truth against which the machine's representations could be tested.

As the experienced cohort retired and was replaced by workers trained exclusively on the digital systems, the independent verification system disappeared. The new workers could read the displays competently. They could follow procedures, report anomalies, respond to alerts. But they could not detect the anomalies that the system did not flag — the errors of omission, the slow drifts, the subtle patterns that fell outside the monitoring system's parameters. They lacked the independent knowledge required to know when the system itself was wrong. Their intellective skill was real but thin — capable of operating within the system's parameters, incapable of operating outside them.

The AI transition is producing the same thinning at a speed that does not allow for the generational buffer that the paper mills provided. The experienced developer whose embodied coding knowledge provides the independent verification system required to evaluate Claude's output is not being replaced over a twenty-year retirement cycle. The developer's practice is being eliminated now, in real time, by the same tool whose output the developer's practice is required to evaluate. The erosion is not sequential — first the practice disappears, then the evaluative capacity erodes. It is simultaneous. The practice and the demand for evaluation are both present in the same moment, and the institutional pressure is to eliminate the practice (because the machine is faster) while preserving the demand for evaluation (because the machine is fallible).

Zuboff's framework identifies this as an institutional design problem, not a technological one. The technology demands evaluative intellective skill. The institution must create the conditions under which that skill can be developed and maintained. In practice, this means preserving opportunities for constructive engagement even after the machine has made constructive engagement unnecessary for production purposes — maintaining coding exercises, design challenges, implementation projects that serve no immediate productive function but maintain the experiential substrate on which evaluative capacity depends.

The analogy to medical training is instructive. Modern surgical residents learn on cadavers and simulators before they operate on patients. The cadaver work serves no immediate productive function — the cadaver does not benefit from the surgery. But the work builds the embodied knowledge that the resident will need when the stakes are real. The practice is maintained not because it is efficient but because it is formative — because the capacity it builds cannot be developed through observation or evaluation alone. AI-era intellective skill may require an equivalent approach: structured opportunities for constructive practice that exist alongside, and are protected from, the evaluative workflow that the machine enables.

No major organization, as of this writing, has implemented such a program at scale. The institutional response to the AI transition has been, almost universally, to deploy the tools and expect the workers to adapt — to develop the evaluative intellective skill the tools demand through trial and error, without structured support, without preserved opportunities for constructive practice, without the institutional investment that Zuboff's framework identifies as the prerequisite for the realization of the informating dividend. The workers are being asked to evaluate output they have not been trained to produce, to detect errors they have not been equipped to identify, to exercise judgment whose experiential foundation is being eroded by the very tools that demand it.

The demand for intellective skill has never been higher. The investment in developing it has never been more inadequate. The gap between the two is the space in which the AI transition will either succeed or fail as a human project, and Zuboff's four decades of observing the same gap in earlier transitions suggest that the gap does not close by itself. It closes through institutional design — through deliberate, sustained, often expensive investment in the human capacities that the technology demands but does not develop. The question is whether the institutions of the AI age will make that investment, or whether they will follow the pattern Zuboff has documented in every previous transition: choosing the cheaper path, accepting the surface plausibility of the tool's output as a substitute for the depth of understanding required to evaluate it, and discovering the cost of that choice only after the foundation of evaluative intellective skill has eroded beyond recovery.

---

Chapter 8: The Panoptic Sort Revisited

In 1993, Oscar Gandy published The Panoptic Sort, a work that extended Michel Foucault's analysis of surveillance into the domain of information technology. Gandy's core argument was that the collection and processing of personal data had become a mechanism of social sorting — a way of classifying individuals into categories that determined the opportunities, prices, services, and treatment they received. The sort was panoptic in Foucault's sense: it operated through the asymmetry of visibility, where the sorter could see the sorted but the sorted could not see the sorter, could not know the criteria by which they were being classified, could not contest the classification or its consequences. The sorted individual experienced the outcome — the denied loan, the higher insurance premium, the targeted advertisement — without understanding the mechanism that produced it.

Zuboff incorporated Gandy's analysis into her framework of surveillance capitalism, recognizing that the panoptic sort had been industrialized by the digital platforms of the twenty-first century. The behavioral surplus extracted from users — the clicks, the searches, the purchases, the hesitations, the abandoned carts — was processed not merely into prediction products but into classification systems that sorted populations into categories of commercial value. The sort determined who saw which advertisements, who received which prices, who was offered which financial products, who was shown which news. The mechanism was invisible. The criteria were proprietary. The consequences were experienced as the natural order of things — as the market working, as prices reflecting value, as information finding its audience — rather than as the output of a sorting system designed to serve the commercial interests of the sorter.

The AI moment has introduced a new dimension to the panoptic sort that neither Gandy nor Zuboff's earlier analysis anticipated — a dimension in which the sorting criterion is not the individual's demographic profile, purchasing history, or behavioral pattern, but the individual's relationship to AI itself. The new sort classifies people by their capacity to work with AI tools, their willingness to adopt them, their ability to develop the evaluative intellective skill that the tools demand — and the consequences of the classification are compounding with a speed that previous iterations of the panoptic sort did not approach.

The sorting is already visible in the labor market. Organizations that have adopted AI tools are restructuring around the workers who can use them effectively and marginalizing those who cannot. The restructuring is not always explicit — it does not always take the form of layoffs or demotions. More often, it takes the form of differential opportunity: the AI-fluent worker receives the interesting projects, the stretch assignments, the visibility that leads to promotion, while the AI-resistant or AI-unable worker receives maintenance tasks, legacy system support, the work that the organization needs done but does not value enough to invest its best tools in. The sort produces its consequences through the allocation of opportunity rather than the denial of employment, and the mechanism is sufficiently diffuse that the sorted individual may not recognize the sorting as it happens.

Segal describes this sort from the perspective of the sorter — or rather, from the perspective of a leader who is trying to resist the sort's most brutal implications. His decision to keep and grow the engineering team, rather than convert the twenty-fold productivity gain into headcount reduction, is a deliberate refusal to let the sort operate at its most extreme. But the sort is operating nonetheless, within the team he preserved. The engineers who adapted fastest to Claude Code are receiving different work, different responsibilities, different trajectories than the engineers who adapted more slowly. The sorting is not Segal's intention. It is the structural consequence of deploying a tool that amplifies existing capability differences.

The amplification is the mechanism through which the sort operates. AI tools do not create capability differences between workers. They amplify them. The worker who possesses strong evaluative intellective skill — who can formulate clear problems, assess AI output with discrimination, integrate machine-generated analysis with human judgment — produces dramatically more than the worker who uses the same tools without these capacities. The productivity gap between the two is wider with AI than without it, because the tool acts as a multiplier: more capability in produces more output out, and the multiplication factor is large enough that small initial differences in capability produce enormous differences in result.

Zuboff's framework identifies this amplification as a form of epistemic inequality — a concept she developed to describe the systematic asymmetry between those who possess knowledge about the system and those who are known by the system. In the context of AI, epistemic inequality operates on multiple levels simultaneously.

At the first level, there is the inequality between those who can evaluate AI output and those who cannot — between the worker who catches the Deleuze fabrication and the worker who accepts it, between the developer who detects the architectural fragility in Claude's code and the developer who deploys it without inspection. This inequality is a function of domain knowledge, and it compounds over time: the worker who evaluates well makes better decisions, which produces better outcomes, which builds more domain knowledge, which improves evaluation further. The worker who evaluates poorly makes worse decisions, which produces worse outcomes, which does not build the domain knowledge required to improve evaluation. The gap widens with each iteration.
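A toy calculation can make the compounding visible. Every number below is an assumption chosen for illustration, not a measurement; the only structural claim carried over from the paragraph above is that learning in each iteration scales with the evaluative skill already present.

```python
# Toy model of a compounding evaluation gap. Starting skills, the learning
# rate, and the functional form are all assumptions made for illustration.

def skill_after(projects: int, start: float, learning_rate: float) -> float:
    """Each project adds domain knowledge in proportion to current evaluative
    skill, so growth compounds: better evaluation leads to better outcomes,
    which build the knowledge that improves evaluation further."""
    skill = start
    for _ in range(projects):
        skill *= 1 + learning_rate * skill
    return skill

stronger = skill_after(projects=15, start=0.55, learning_rate=0.10)
weaker = skill_after(projects=15, start=0.45, learning_rate=0.10)

# The two workers begin roughly twenty percent apart; after fifteen
# iterations the ratio between them has widened well past the initial gap.
print(f"stronger evaluator: {stronger:.2f}")
print(f"weaker evaluator:   {weaker:.2f}")
print(f"ratio:              {stronger / weaker:.2f}")
```

The specific curve matters less than its direction: under any rule in which learning scales with existing skill, a small initial difference does not stay small.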

At the second level, there is the inequality between those who understand AI's failure modes and those who are seduced by its confidence. This is a subtler form of epistemic inequality, and it operates not through differential skill but through differential awareness — through the difference between the user who understands that the system's output is a probabilistic prediction optimized for plausibility and the user who experiences the system's output as authoritative knowledge. The first user treats the output as a draft to be evaluated. The second user treats it as a result to be implemented. The difference in treatment produces radically different outcomes, and the awareness that determines the treatment is not uniformly distributed.

At the third level — the level that connects the panoptic sort most directly to Zuboff's surveillance capitalism framework — there is the inequality between those who own the platforms that perform the sorting and those who are sorted by them. The platforms that deploy AI tools collect data on how every user interacts with the tools — which prompts they enter, which outputs they accept, which they reject, how long they spend evaluating, what domains they work in, what patterns characterize their cognitive processes. This data is not merely behavioral surplus in the surveillance capitalist sense. It is professional surplus — data about the user's competence, judgment, and cognitive architecture that could be used to sort workers not merely by demographic category but by cognitive capability.

The possibility of cognitive sorting — classification by thinking pattern rather than by purchasing pattern — is the frontier that AI-era surveillance capitalism is approaching. If an AI platform can determine, from the pattern of a user's interactions, how skilled the user is, how good their judgment is, how effectively they evaluate the machine's output — then the platform possesses a form of knowledge about the user that is more commercially valuable, and more personally consequential, than any behavioral profile that search or social media could generate. This knowledge could be sold to employers, used in hiring decisions, deployed in performance evaluations, incorporated into the algorithmic management systems that are already reshaping the workplace. The cognitive sort would operate, like all panoptic sorts, invisibly — experienced by the sorted individual as the natural outcome of market processes rather than as the product of a classification system whose criteria are proprietary and whose consequences are uncontestable.

Zuboff has not, as of this writing, published a detailed analysis of the cognitive sorting potential of AI interaction data. But the trajectory of her analysis — from the panoptic sort of the information age, to the behavioral futures markets of surveillance capitalism, to the extraction of cognitive behavioral surplus from AI interactions — points unmistakably in this direction. The data generated by human-AI interaction is richer, more intimate, and more consequential than the data generated by any previous digital interaction, and the commercial incentives to extract and monetize it are correspondingly more powerful.

The sort operates not merely at the individual level but at the organizational and national level. Segal's account of the software death cross describes a sorting of organizations — companies that adapted to AI on one side, companies that did not on the other, with a trillion dollars of market value redistributed according to the classification. The sorting is real, measurable, and consequential: organizations on the wrong side of the sort face declining valuations, talent flight, competitive disadvantage that compounds with each quarter.

At the national level, the sort operates through the distribution of AI capability across geographies. The nations that build AI infrastructure, that train AI-capable workforces, that design institutions to capture the informating dividend, will be sorted into the upper tier of the global knowledge economy. The nations that do not will be sorted out. The sorting criterion is not natural resource endowment or geographic advantage or demographic scale. It is institutional design — the quality of the decisions a society makes about how to deploy, regulate, distribute, and govern AI capability.

This is where Zuboff's analysis converges with the argument Segal makes about the developer in Lagos — the argument about democratization and the rising floor of capability. The floor is rising. AI tools do give the developer in Lagos access to coding capability that was previously available only to developers in San Francisco or Bangalore. But the panoptic sort is also operating, and the sort does not care about the floor. It cares about the relative position — about who can use the tools more effectively, who can evaluate their output more skillfully, who can direct them toward more valuable ends. The rising floor is real. The sorting is also real. And the sorting operates on the differences that remain after the floor has risen, amplifying them into hierarchies of opportunity that are more granular, more data-driven, and more consequential than any previous form of occupational classification.

The question Zuboff's framework poses to the AI moment is whether the panoptic sort can be governed — whether the sorting mechanism can be made visible, its criteria contestable, its consequences subject to democratic accountability. The history of previous panoptic sorts suggests that visibility is the prerequisite for governance and that invisibility is the prerequisite for exploitation. The AI sort is currently invisible to most of the people being sorted. They experience its consequences — the differential opportunity, the widening productivity gap, the sense that some workers are thriving while others are struggling despite similar effort — without understanding the mechanism that produces them.

Making the mechanism visible is the first step toward governing it. And governing it is, in Zuboff's analysis, the precondition for ensuring that the AI transition produces an expansion of human capability rather than a refinement of human classification — that the technology serves to elevate rather than to sort, to democratize rather than to stratify, to realize the informating dividend rather than to convert it into a new and more intimate form of surveillance capitalist extraction.

---

Chapter 9: Institutional Design for the Informating Dividend

The question that has organized Zuboff's entire intellectual career — whether technology will automate human expertise out of existence or informate human work into new forms of knowledge — is, at its core, a question about institutions. The technology creates both possibilities simultaneously. The institution determines which possibility is realized. And the institution's choice, made not once but continuously, in every budget cycle, every hiring decision, every organizational restructure, every regulatory framework, is the variable on which the human outcome of every technological transition depends.

This is Zuboff's most consequential and most uncomfortable proposition. It is uncomfortable because it places responsibility where the technology discourse least wants it — not on the engineers who build the tools, not on the market forces that drive adoption, not on the inevitable trajectory of progress, but on the specific, nameable, contestable choices of the people and institutions that deploy the technology. The choice to automate without informating is a choice. The choice to capture the productivity dividend as cost reduction rather than capability expansion is a choice. The choice to eliminate practice rather than preserve it alongside the evaluative workflow is a choice. None of these choices are foreordained by the technology. All of them are made by institutions operating under pressures that are real but not irresistible.

Segal proposes a set of institutional interventions in *The Orange Pill* — what he calls "dams" in the river metaphor that organizes his argument. The Berkeley researchers who studied AI's effect on work proposed their own set: "AI Practice," a framework of structured pauses, sequenced rather than parallel workflows, and protected time for human-only cognitive engagement. Anthropic, the company that built Claude, was founded on the principle that AI development should be guided by safety considerations rather than purely by commercial imperatives. These are real interventions. They represent genuine attempts to build institutional structures that capture the informating dividend rather than surrendering to the automating pressure.

Zuboff's framework requires that these interventions be evaluated not merely by their intentions but by their structural adequacy — by whether they are proportionate to the force they are meant to redirect. And the evaluation, conducted with the rigor her analysis demands, produces a conclusion that the interventions' advocates may not welcome: they are not adequate. They are not even close.

The inadequacy is not a matter of good faith. Segal's commitment to keeping and growing his engineering team, rather than converting the twenty-fold productivity gain into headcount reduction, is genuine and costly. The Berkeley researchers' AI Practice framework is thoughtfully designed and empirically grounded. Anthropic's safety-first development philosophy represents a departure from the commercial norms of the technology industry. The problem is not intent. The problem is scale.

Consider the structural forces arrayed against the informating dividend. The market rewards quarterly results. The informating dividend is a long-term investment. The market rewards efficiency. The informating dividend requires the deliberate preservation of inefficiency — the maintenance of practice opportunities, the structured pauses, the protected time for slow cognitive engagement that produces no measurable output in the current period. The market rewards headcount reduction. The informating dividend requires headcount preservation and development. The market rewards speed. The informating dividend requires patience.

Every institutional intervention designed to capture the informating dividend operates against the grain of the market. This is not an argument against the interventions. It is an argument about what kind of interventions are required. Individual organizational choices — one company's decision to keep its team, one researcher's proposal for structured pauses — are necessary but insufficient. They operate at the level of the firm, while the forces they contend with operate at the level of the market. A company that invests in its workers' development while its competitors automate will, in many cases, show worse short-term results. The market will punish the investment and reward the automation. The competitive pressure will push even well-intentioned organizations toward the cheaper path.

Zuboff's analysis of previous technological transitions suggests that individual organizational choices have never been sufficient to capture the informating dividend at scale. The dividend has been captured only when institutional structures at the level of the market itself — regulations, labor protections, educational systems, collectively bargained standards — have been built to counteract the market's natural tendency toward automation without informating. The eight-hour day was not adopted because individual factory owners discovered that rested workers were more productive. It was adopted because collective action and legislation forced the entire market to internalize a cost that individual employers had every incentive to externalize. The weekend was not a gift from enlightened management. It was a dam built by organized labor against the current of a market that would have worked every human being seven days a week if the current had not been redirected.

The AI moment requires institutional design at the same level of ambition — not corporate wellness programs but enforceable standards, not voluntary best practices but collective structures that protect the conditions for human development even when the market makes those conditions expensive to maintain. Zuboff has been explicit about this, increasingly so as her position has hardened from regulation to abolition. At a 2025 Harvard panel, she drew a direct line between the mechanisms of surveillance capitalism and the decline of democratic self-governance: "In 2004, 51 percent of the world's population lived in democracies. By 2024, that number was 28 percent. This is causality." The claim is stark, perhaps overstated in its directness, but the underlying argument is structural: when the mechanisms through which human experience is extracted and monetized operate without democratic accountability, the conditions for democratic self-governance erode, because democratic self-governance depends on citizens who possess the autonomy, the agency, and the epistemic independence to make informed choices about their collective future.

The institutional design required to capture the informating dividend must address at least four dimensions simultaneously.

The first is the dimension of practice preservation. If evaluative intellective skill depends on constructive experience — if you need to have built in order to evaluate what the machine builds — then the institutions surrounding AI must create and protect opportunities for constructive practice that exist alongside the evaluative workflow. This means educational institutions that require students to build without AI assistance as a component of their training, even as they also teach students to work with AI effectively. It means professional development programs that include implementation exercises — coding, writing, designing, analyzing — that serve no immediate productive function but maintain the experiential substrate on which evaluation depends. It means organizational norms that value practice as a form of investment rather than dismissing it as a form of inefficiency.

The second dimension is the dimension of extraction governance. If every interaction with an AI tool generates behavioral surplus that is claimed by the platform as raw material for prediction products — if the worker's cognitive process is being extracted alongside the worker's productive output — then the institutional framework must establish clear rights over that data. Who owns the behavioral surplus generated by a worker's interaction with an AI tool? The worker? The employer? The platform? Under current arrangements, the platform's terms of service typically claim broad rights over interaction data, and neither the worker nor the employer has meaningful visibility into how that data is used. The institutional design required to govern this extraction is not merely a privacy regulation. It is a property rights framework for cognitive behavioral data — a legal structure that recognizes the worker's interaction with an AI tool as a form of cognitive labor that generates value the worker is entitled to share.

The third dimension is the dimension of sorting transparency. If the panoptic sort is operating — if workers, organizations, and nations are being classified by their relationship to AI in ways that determine opportunity, compensation, and competitive position — then the sorting mechanism must be made visible and its criteria contestable. This means disclosure requirements for organizations that use AI interaction data in hiring, promotion, or performance evaluation decisions. It means algorithmic accountability frameworks that require the platforms performing the sort to disclose the criteria on which the sort is based. It means regulatory structures that prevent the cognitive sort from operating with the invisibility that has characterized every previous iteration of the panoptic sort.

The fourth dimension is the dimension of dividend distribution. If the informating dividend of AI is real — if the technology creates genuinely new cognitive demands that constitute genuinely new forms of valuable work — then the institutional framework must ensure that the dividend is distributed broadly rather than concentrated among those who already possess the capabilities the new work demands. This means public investment in the training and education required to develop evaluative intellective skill at scale, not as a corporate benefit available to the employees of well-funded technology companies but as a public good available to every worker whose professional capabilities are being transformed by AI. It means retraining programs that address not merely skills but identity — that provide structured support for the professional identity reconstruction that the AI transition demands. It means timeline expectations that account for the human pace of adaptation rather than the machine's pace of capability expansion.

The EU AI Act, the emerging regulatory frameworks in Singapore and Brazil and Japan, the American executive orders on AI — these are real institutional structures, and they matter. But Zuboff's analysis suggests that they address primarily the supply side of the AI transition — what AI companies may and may not build, what disclosures they must make, what risks they must assess. The demand side — what workers, students, citizens, and parents need to navigate the transition with their capabilities, their autonomy, and their dignity intact — remains almost entirely unaddressed by institutional design at the scale the transition requires.

Segal argues that the outcome depends on the quality of the dams. Zuboff's framework specifies what quality means in this context: dams that operate at the level of the market, not merely the firm. Dams that address extraction as well as automation. Dams that are maintained by enforceable standards, not merely by the good intentions of individual leaders who may be replaced by the next quarter's earnings pressure. Dams that are built to the scale of the current, not to the scale of the builder's optimism.

Karl Polanyi, the economic historian whose work informs Zuboff's analysis, described a "double movement" in the history of capitalism: the movement of the market to commodify every dimension of human life, and the counter-movement of society to protect itself against the market's most destructive tendencies. The labor protections of the industrial age — the eight-hour day, the prohibition of child labor, the right to organize, the minimum wage — were products of Polanyi's counter-movement, institutional structures built by society to redirect the market's current toward human flourishing rather than away from it.

The AI moment requires a counter-movement of equivalent ambition. Not against AI — against the institutional vacuum in which AI is being deployed. Against the absence of the structures required to ensure that the informating dividend is realized, the extraction is governed, the sorting is transparent, and the distribution is equitable. The counter-movement is not anti-technology. It is pro-institution. It is the insistence that the human future should be shaped by human will — expressed through institutions that are democratic, accountable, and adequate to the force of the current they are meant to redirect — rather than by technological momentum operating in the service of commercial interests that the humans affected did not choose and cannot control.

The informating dividend is the largest in the history of human tool use. The institutional design required to capture it is the most ambitious in the history of democratic governance. The gap between the two — between the dividend's potential and the institution's capacity — is the space in which the AI transition will either elevate human capability or hollow it out. Zuboff's career has been spent documenting the consequences of that gap in every previous technological transition. The AI moment is either the transition where the gap is finally closed or the transition where the gap becomes too wide to bridge.

---

Chapter 10: Beyond the Smart Machine — The Unfinished Question

In 1988, Shoshana Zuboff concluded In the Age of the Smart Machine with a question that she did not answer, because the answer depended not on her analysis but on the choices of the institutions her analysis addressed. The question was whether the smart machine would be deployed to automate human expertise out of existence — to extract cost savings from the elimination of labor while leaving the informating potential unrealized — or whether it would be deployed to informate, to create new forms of knowledge work that could absorb the expertise displaced by automation and elevate the workers who possessed it.

Nearly four decades later, the question remains unanswered. Not because the evidence is ambiguous — the evidence from every transition Zuboff has studied points in the same direction, toward automation without adequate informating, toward extraction without adequate investment, toward the capture of the dividend by the few at the expense of the many. The question remains unanswered because the answer is not a finding. It is a choice. And the choice is being made, continuously, by institutions that have not yet decided what kind of future they are building.

The AI moment is the most powerful demonstration of both dynamics operating simultaneously that Zuboff's framework has ever been asked to analyze. On the automating side, the displacement is extraordinary in its scope and speed. Entire categories of cognitive labor — coding, legal drafting, financial analysis, medical diagnosis, design, translation, content creation — are being automated not over decades but in months. The cost of producing software is approaching zero. The cost of generating legal analysis is approaching zero. The cost of creating content of every kind is approaching zero. The automating function of AI is more comprehensive than any previous technology's, because it targets not manual labor but cognitive labor — the kind of work that knowledge economies assumed could never be automated.

On the informating side, the potential is equally extraordinary. AI tools generate new knowledge at a scale that no previous technology has approached. They reveal patterns in datasets too large for human cognition to process. They generate hypotheses that human researchers would take years to formulate. They enable forms of integrated, cross-domain understanding that the traditional division of intellectual labor made inaccessible. The informating dividend is not speculative. It is visible in every domain where AI tools have been deployed by people with the judgment to direct them and the domain knowledge to evaluate their output.

Segal's account of building Napster Station — a product that went from nonexistence to functioning prototype in thirty days — is a demonstration of the informating dividend realized. The product was not built by replacing human workers with machines. It was built by amplifying human judgment, vision, and taste through a tool that eliminated the translation friction between imagination and artifact. The senior engineer who discovered that the remaining twenty percent of his work — judgment, architecture, taste — was the part that mattered was experiencing the informating dividend in its purest form: the revelation that the human contribution, when freed from mechanical constraint, was more valuable, not less.

But the informating dividend, as Zuboff's framework has documented across four decades, is not self-realizing. It does not flow automatically to the workers who generate it. It does not distribute itself equitably across the population. It does not persist without institutional maintenance. The dividend is a potential, not a guarantee, and the historical record of every previous smart machine transition suggests that the potential is more often squandered than realized — captured by those who already possess institutional power, converted into cost reduction rather than capability expansion, extracted rather than invested.

The AI moment is either the exception to this pattern or its most spectacular confirmation. The evidence supports both readings, and the evidence is still accumulating.

For the reading that the AI moment is exceptional — that it will realize the informating dividend in ways that previous transitions did not — there is genuine evidence. The democratization of capability is real. The developer in Lagos can access coding leverage that was previously available only to engineers at well-funded technology companies. The floor of who gets to build has risen. The imagination-to-artifact ratio has collapsed. People who were previously excluded from the building process by lack of capital, training, or institutional access can now participate. This expansion of access is historically significant and morally consequential.

For the reading that the AI moment is confirming the pattern — that the informating potential will be captured by the few while the automating displacement is borne by the many — there is equally genuine evidence. The market is rewarding headcount reduction over capability expansion. The competitive arithmetic favors automation over informating. The institutional structures required to distribute the dividend broadly — the training programs, the practice-preservation systems, the extraction governance frameworks, the sorting transparency mechanisms — are not being built at the scale the transition demands. The behavioral surplus generated by every human-AI interaction is being extracted by platforms operating under the logic of surveillance capitalism, a logic that claims human experience as raw material and converts it into prediction products sold to parties whose interests may not align with the interests of the humans whose experience was extracted.

Zuboff's framework does not predict which reading will prevail. It predicts that the outcome depends on institutional design — on the quality, the ambition, and the enforcement capacity of the structures that society builds to govern the transition. And it predicts that in the absence of deliberate institutional design, the default outcome is automation without informating, extraction without investment, the concentration of the dividend among those who already possess the power to capture it.

The prediction is not fatalistic. It is conditional. The condition is institutional action of the kind that has, in previous transitions, redirected technological power toward human flourishing rather than away from it. The eight-hour day redirected industrial power. Compulsory education redirected the power of mass literacy. Public health infrastructure redirected the power of modern medicine. In each case, the intervention was not anti-technology. It was pro-human — an insistence that the benefits of technological capability be distributed broadly enough to sustain the social fabric on which further technological development depended.

The AI moment requires intervention at the same scale of ambition. Not because AI is dangerous in the way that unregulated factories were dangerous — though it is, in its own domain, precisely that dangerous. But because the informating potential of AI is larger than the informating potential of any previous technology, and the cost of squandering it — of allowing the automating function to proceed without capturing the informating dividend — is correspondingly greater. A society that automates its cognitive labor without informating its workers is a society that is consuming its intellectual capital rather than growing it, extracting the value of accumulated human expertise without investing in the development of the new expertise that the technology demands.

The consumption can continue for a time. The workers whose tasks have been automated still possess enough residual expertise to evaluate the machine's output, to catch its errors, to exercise the judgment that prevents catastrophic failures. But the residual expertise is depreciating. It was built through practice that is no longer occurring. It is being drawn down without being replenished. And when it is exhausted — when the cohort that possesses deep domain knowledge has retired or been displaced, and the cohort that replaced them has never built the experiential substrate on which evaluative judgment depends — the cost of the failure to informate will become visible in the form of errors that no one in the organization can detect, decisions that no one possesses the knowledge to evaluate, systems that no one understands well enough to fix when they fail.

The erosion is gradual. The consequences are not. The bridge holds until it does not. The expertise sustains until it does not. And the institutional structures that could have prevented the failure — the training programs, the practice-preservation systems, the investment in human development that the market does not naturally reward — are cheaper to build before the failure than after it. After the failure, they are not cheaper. They may be impossible.

Zuboff asked, in 1988, whether the smart machine would automate or informate. The question was addressed to the institutions of her time — the corporations, the governments, the educational systems that would determine how the technology was deployed. The institutions, in the main, chose automation. The informating potential was real but unrealized. The workers who possessed the embodied knowledge were displaced. The knowledge itself disappeared with them.

The AI moment is asking the question again, at a scale that dwarfs the original asking. The technology is more powerful. The informating potential is larger. The automating displacement is faster. The extraction mechanisms are more comprehensive. The institutional structures required to capture the dividend are more ambitious. And the cost of failure — of allowing the question to go unanswered for another generation — is greater than the cost of any previous institutional failure in the history of technology and work.

The question is still unfinished. The smart machine's promise — that technology could informate rather than merely automate, could elevate rather than merely displace, could create new forms of knowledge that absorb the expertise it destroys — remains unredeemed. Not because the promise was false. Because the institutions responsible for redeeming it chose the cheaper path.

The AI moment is either the transition where the promise is finally redeemed — where institutional design rises to meet the informating potential of the most powerful cognitive tool in human history — or the transition where the promise is betrayed at a scale from which recovery may not be possible within the timescale of a human career, a human generation, a democratic society's capacity to sustain the conditions of its own governance.

The technology does not determine the outcome. The institutions do. The institutions depend on choices. The choices depend on whether enough people understand what is at stake — not the speculative risks of superintelligence, not the theatrical fears of robot rebellion, but the concrete, documented, empirically grounded risk that the most powerful informating technology ever created will be deployed to automate without informating, to extract without investing, to displace without elevating, and to sort without accountability.

That understanding is what Zuboff's four decades of work exist to provide. The question is unfinished. The answer is a choice. The choice is being made now.

---

Epilogue

The two words I cannot stop thinking about are not "artificial intelligence." They are "behavioral surplus."

They sat inside Zuboff's framework like a splinter I could not reach. For weeks after this book took shape, I kept returning to the phrase, turning it over, trying to understand why it disturbed me more than the automating thesis, more than the panoptic sort, more than any of the structural arguments about displacement and institutional failure. Those arguments are large and important and operate at a level of abstraction where I can engage them as a builder assessing risk. "Behavioral surplus" operates at the level of my hands on the keyboard.

Every conversation I have with Claude — every late-night session where an idea takes shape through the back-and-forth, where a half-formed intuition becomes something I can stand behind — generates data. Not just the words I type. The patterns of my thinking. The questions I ask and the ones I abandon. The suggestions I accept and the ones I reject and how long I take to decide. The rhythm of my creativity, the architecture of my doubt, the specific contours of the gap between what I intend and what I can articulate. All of it, captured. All of it, in Zuboff's precise language, claimed as raw material for someone else's production process.

I knew this, in the way you know that the food you eat contains calories. Abstractly. Without feeling the weight. Zuboff made me feel the weight.

In The Orange Pill, I wrote about the moment I sat in the Trivandrum training room and told twenty engineers that each of them would be able to do more than all of them together. I wrote about the exhilaration, the terror, the vertigo of watching the imagination-to-artifact ratio collapse. I wrote about all of that honestly. What I did not write about, because I had not yet thought it through, was what was being extracted from those twenty engineers in every session with the tool. Not their labor — I was paying for their labor. Their cognitive signatures. Their problem-solving patterns. Their domain expertise, externalized through interaction, feeding the machine that will eventually be used to further automate the work they were being trained to direct.

The worker trains the machine that replaces the worker. I read that sentence in this book, a book I helped bring into existence, and felt the specific vertigo of a person who is building a dam while standing in the current that will eventually test it.

Zuboff's framework does not tell me to stop building. Nothing in her analysis argues that the tools should be abandoned. What her analysis argues is that the tools operate inside an economic logic that I am responsible for understanding, and that understanding changes what I owe the people who use those tools under my direction. It changes what I owe my engineers, my team, the developer in Lagos, my children.

The informating dividend is real. I have seen it with my own eyes — watched engineers whose capabilities expanded twenty-fold, watched a product come to life in thirty days that should have taken months. The dividend is not a theory. It is Tuesday morning in my building. But the dividend does not distribute itself. It does not flow to the people who generate it unless the institutions surrounding the technology are designed to direct it there. And the institutions, as Zuboff has documented across four decades with a rigor that leaves little room for comfortable denial, default to the cheaper path.

I am building dams. I believe in the dams. But after Zuboff, I understand something about them that I did not understand before: the dams I build at the level of my own organization are necessary and insufficient. They operate at the level of the firm, while the forces they contend with operate at the level of the market, the platform, the geopolitical system. My decision to keep and grow my team is real and costly and matters to the people on that team. It does not address the extraction of their cognitive behavioral data by the platform. It does not address the panoptic sort that is classifying workers across the entire economy. It does not address the institutional vacuum in which the most powerful informating technology in history is being deployed without the structures required to ensure that the informating dividend reaches the people who need it.

The unfinished question is still unfinished. The smart machine's promise — informating, not merely automating; elevating, not merely displacing — has been on the table since before I started my career. The AI moment is the first time the promise has been large enough to change everything, and the first time the risk of betraying it has been large enough to matter at the civilizational scale.

I am still building. I am still climbing. But I am climbing with Zuboff's weight in my pack — the weight of knowing that the dams need to be bigger than any one builder can construct, that the informating dividend needs institutional infrastructure at the scale of a democratic counter-movement, and that the question of who captures the gains of this extraordinary moment is not a question the market will answer in the interest of the people generating those gains unless the people insist.

The insistence is the work that remains.

-- Edo Segal

---

Back Cover

Every hour you spend collaborating with AI -- prompting, evaluating, directing, creating -- generates something beyond your output. It generates a map of your mind: your judgment patterns, your creative rhythms, the precise contours of your expertise. Shoshana Zuboff spent four decades documenting how technology transforms work, first showing how computerized paper mills destroyed embodied knowledge while creating unrealized potential for deeper understanding, then revealing how digital platforms converted human experience itself into corporate raw material. Her frameworks -- automating versus informating, behavioral surplus, the panoptic sort -- are the sharpest diagnostic instruments available for understanding what the AI revolution is actually doing to the people inside it.

This book applies Zuboff's empirical lens to the AI moment Edo Segal describes in The Orange Pill: the twenty-fold productivity gains, the collapsing imagination-to-artifact ratio, the trillion-dollar market repricing. It asks the question the exhilaration obscures -- not whether the tools work, but who captures the value they create, and who bears the cost of the knowledge they displace.

The informating dividend of AI is the largest in human history. Whether it reaches the people who generate it is not a technology question. It is an institutional one. Zuboff shows what happens when institutions choose the cheaper path.
