By Edo Segal
The training data had owners. That thought arrived at 2 a.m. on a Tuesday, midway through a build session with Claude, and it stopped me cold.
I had been riding the wave. Twenty-fold productivity. Engineers reaching across domains. The imagination-to-artifact ratio collapsing to the width of a conversation. Everything I describe in *The Orange Pill* — the exhilaration, the vertigo, the genuine expansion of what a small team can accomplish. All of it real. All of it documented. All of it experienced in my own hands.
But that night, staring at a block of code Claude had generated — elegant, functional, better than what I would have written — a question surfaced that I had not been asking. Whose knowledge made this possible? Not Anthropic's engineers, though they built the architecture. The knowledge *inside* the output. The Stack Overflow answers written at midnight by developers who never imagined their explanations would become training data for a proprietary model. The open-source libraries contributed freely to a commons that no longer exists in the form its contributors intended. The decades of collective craft, ingested and encoded and sold back as a subscription.
I had been thinking about AI as a river — a force of nature flowing through civilization. David Noble would have called that a dangerous metaphor. Rivers are blameless. What I was using was not a river. It was a canal, dug by specific companies, funded by specific investors, routed through terrain chosen for specific commercial reasons. Calling it nature let everyone off the hook.
Noble spent his career documenting a single pattern: the moment when a powerful new technology arrives, the design choices embedded in that technology are presented as technical inevitabilities rather than political decisions. Numerical control over record playback. Centralized programming over shop-floor autonomy. In every case, an alternative existed that would have distributed power differently. In every case, the alternative lost — not because it was technically inferior, but because it threatened the institutional arrangements that the dominant design preserved.
I needed Noble's lens because my own was incomplete. *The Orange Pill* argues that AI amplifies whatever you bring to it. Noble asks the prior question: Who holds the amplifier? Who built it from whose materials? And what happens to the people whose knowledge was extracted to make the amplification possible but who have no seat at the table where the amplifier's future is decided?
This book will be uncomfortable for builders like me. It should be. The pattern Noble identified is operating right now, at civilizational scale, and the people inside the exhilaration are the last ones positioned to see it clearly.
-- Edo Segal ^ Opus 4.6
David Noble (1945–2010) was an American historian of technology whose work fundamentally challenged the assumption that technological development follows a neutral or inevitable path. A professor at York University in Toronto for much of his career, and previously at MIT and Drexel University, Noble is best known for *Forces of Production: A Social History of Industrial Automation* (1984), a meticulous study of how numerical control technology in American machine shops was chosen over viable alternatives not for technical superiority but for its capacity to transfer productive knowledge from skilled workers to management. His earlier work, *America by Design: Science, Technology, and the Rise of Corporate Capitalism* (1977), traced the institutional capture of American engineering education by corporate interests. Noble's central concepts — the political character of technical design, the suppression of worker-empowering alternatives, and the encoding of labor relations into the architecture of machines — have become foundational in the field of Science and Technology Studies. His insistence that design choices are always political choices, never merely technical ones, remains among the most potent challenges to techno-optimist narratives of progress.
In 1949, the United States Air Force faced a problem that had nothing to do with combat. The problem was machinists. Specifically, the problem was that the production of complex aircraft parts depended on skilled workers whose knowledge lived in their hands — in the feel of a cutting tool meeting metal, in the judgment about feed rates that no engineering manual could fully codify, in the accumulated intuition of decades spent listening to machines. These workers were expensive. They were organized. They could strike. And their knowledge, because it was embodied rather than written down, could not be transferred to a replacement worker on a Monday morning.
The Air Force wanted interchangeable parts produced to extraordinary tolerances by interchangeable workers. What it got, after investing millions in research at the Massachusetts Institute of Technology's Servomechanisms Laboratory, was numerical control — a system for automating machine tools by replacing the machinist's hands with coded instructions written by engineers in a separate department. The technology worked. It also did something the engineering specifications never mentioned: it transferred the locus of productive knowledge from the shop floor to the programming office, from the worker to the manager, from labor to capital.
David Noble spent the better part of a decade documenting this transfer. His 1984 book *Forces of Production* remains the most meticulous account of how a specific automation technology was chosen not for its technical superiority but for its political utility — its capacity to reorganize the relationship between workers and managers in ways that served management's interest in control. The book's central finding was not that numerical control was a bad technology. It was that numerical control was a chosen technology, selected from among competing alternatives that would have distributed power differently, and that the criteria driving the selection were political as much as technical. The alternative that lost — record playback, which captured the skilled machinist's actual movements and reproduced them mechanically — produced results that were in many applications superior, at lower cost, with faster implementation. It lost because it left productive knowledge in the wrong hands.
Noble did not frame this as conspiracy. He framed it as the ordinary operation of institutional interests within a capitalist economy. Managers did not gather in secret rooms to plot against machinists. They made decisions that seemed rational from within their own framework — decisions about efficiency, standardization, control — and the cumulative effect of those individually rational decisions was a systematic transfer of power from the people who made things to the people who managed the making. The politics was structural, not conspiratorial. It was embedded in the design itself.
Seventy-five years later, the Air Force's problem has returned in a new form. The expensive, organized, difficult-to-replace workers are no longer machinists. They are software developers, designers, analysts, writers — the entire class of knowledge workers whose expertise has been the engine of the information economy for half a century. And the technology that promises to solve the problem follows the same structural pattern as numerical control: it transfers productive knowledge from workers to a system controlled by someone else.
*The Orange Pill* describes this transfer as liberation. Its author, Edo Segal, recounts the moment he watched twenty engineers in Trivandrum begin using Claude Code and found that each could accomplish what had previously required an entire team. The language is exhilarated. The imagination-to-artifact ratio — the distance between an idea and its realization — collapsed to the width of a conversation. Engineers who had never written frontend code built user-facing features. The boundaries between specializations dissolved. Segal describes the experience as a "twenty-fold productivity multiplier" and frames it as the democratization of capability.
The question Noble's framework compels is not whether this description is accurate. It almost certainly is. The question is what the description omits. And what it omits is the political economy of the tool itself: who built it, from what materials, under what institutional pressures, and in whose interest.
Start with the materials. Claude was trained on a corpus that includes millions of code repositories, technical documentation, Stack Overflow answers, blog posts, open-source libraries, and the accumulated written output of the global software development community. This corpus represents decades of collective knowledge production by millions of workers. The knowledge was not donated to Anthropic. It was not licensed from its producers. It was extracted from the digital commons — the publicly accessible repositories and forums where developers had shared their knowledge freely, under the assumption that it would remain a shared resource rather than becoming the training substrate for a proprietary product.
Noble documented an identical dynamic in the numerical control story. The knowledge that was coded into numerical control programs was not invented by the programmers who wrote them. It was derived from the accumulated shop-floor expertise of machinists — the cutting speeds, the tool geometries, the material-handling techniques that generations of skilled workers had developed through practice. The programmers translated this knowledge into code. Management then owned the code. The knowledge that had been collective property of the craft became proprietary property of the firm. The machinists who had produced the knowledge received nothing for its appropriation. They received, instead, a demotion from skilled craftsman to machine operator.
The structural parallel is precise. The collective knowledge of the software development profession — deposited over decades in public repositories, forums, and documentation — has been extracted, encoded in a proprietary system, and is now sold back to the profession as a productivity tool. The developers whose knowledge trained the model receive no compensation for the training value their work provided. They receive, instead, a tool that makes their individual expertise less necessary, their specialized knowledge less scarce, their bargaining position less tenable. The tool is useful. It is also built from their own expropriated labor.
Segal acknowledges, with characteristic honesty, that the tool's power creates uncomfortable questions about professional identity. He describes a senior engineer oscillating between excitement and terror — excitement because the work flowed at unprecedented speed, terror because the implementation work that had consumed eighty percent of his career could now be handled by a machine. Segal resolves this tension optimistically: the remaining twenty percent, the judgment and architectural instinct, "turned out to be the part that matters."
Noble's research suggests that this resolution depends on institutional conditions that optimism alone cannot guarantee. When numerical control replaced the machinist's hands, management also promised that the displaced workers would ascend to higher-value roles — supervision, quality control, process optimization. In practice, the higher-value roles were fewer in number, required different credentials, and were filled predominantly by engineers and managers who had never worked the shop floor. The machinists who were told their expertise would be "relocated" found instead that it had been eliminated, and the new roles created by the technology were not designed for them.
The critical insight is not that Noble was right and Segal is wrong. It is that the question of who benefits from the reorganization of productive knowledge is a political question, and political questions are not settled by the quality of the technology. They are settled by the institutional structures that surround the technology — the labor agreements, the governance frameworks, the distribution of ownership, the presence or absence of worker voice in design decisions. Segal's Trivandrum engineers benefited because Segal chose to retain and develop them. But the choice to retain workers in the face of a twenty-fold productivity gain is not a market-rational decision. It is a moral one, made against the logic of cost optimization that drives most corporate decision-making. Noble would note, without cynicism, that moral choices made against market logic tend to be unstable. The next quarter's board meeting will present the arithmetic again. The arithmetic does not care about the moral choice.
Consider the interface itself. Claude Code was designed for individual use — a single person conversing with a machine, producing output that would previously have required a team. Segal celebrates this as the liberation of the individual from the constraints of coordination and specialization. From Noble's perspective, the design choice is legible as something else entirely: the atomization of collective work into individual transactions, each of which can be measured, optimized, and — critically — performed by a less expensive worker.
Teams are not merely organizational conveniences. They are structures of collective knowledge, mutual support, and bargaining power. A team of specialists who work together over time develops institutional memory — the understanding of why certain decisions were made, the awareness of where a system is fragile, the collective judgment that emerges from years of shared practice. When the tool enables individuals to bypass the team, this institutional memory does not relocate. It dissipates. The individual produces the feature, but no collective develops the understanding that would allow the organization to maintain, extend, and adapt the feature when the individual moves on.
Noble's analytical method was to trace the line from design choice to institutional consequence, from the shape of the tool to the shape of the workplace that forms around it. The shape of AI — individually operated, conversation-based, optimized for speed of output — will produce workplaces in which individuals are more productive and more isolated, in which coordination overhead decreases and collective knowledge erodes, in which the worker's bargaining position weakens because any individual worker can be replaced by another individual with the same tool.
This is not a prediction about intentions. Anthropic may genuinely intend to build technology that empowers workers. Segal may genuinely intend to build organizations that develop rather than exploit their employees. Noble's argument was never that the people who design and deploy technologies have malicious intentions. His argument was that institutional pressures — market competition, investor expectations, the structural incentives of capitalism — shape design decisions in ways that systematically favor the interests of capital over the interests of labor, regardless of the intentions of the individuals involved.
The deepest implication of Noble's framework is that the relevant question about any technology is not "What can it do?" but "Who controls it, and in whose interest?" The answer, for AI, is that a handful of corporations control the models, the training data, the deployment infrastructure, and the terms under which the technology can be used. The workers whose knowledge trained the models have no ownership stake, no governance role, and no mechanism for ensuring that the tool serves their interests rather than the interests of the shareholders who own the companies that built it.
Segal asks his readers whether they are "worth amplifying." Noble's framework suggests a prior question: Who holds the amplifier? And what happens to those whose voices it was not designed to carry?
---
The word "deskilling" carries an accusation, and the accusation makes people defensive. To say that a technology deskills is to say that it diminishes the worker — not just economically, but in some more fundamental sense that touches identity, dignity, and the meaning of work itself. The defensive reaction is predictable: the technology doesn't diminish anyone, it liberates them from drudgery, it frees them for higher-order thinking, it democratizes capability. The reaction is understandable. It is also, historically, almost always wrong about the distribution of consequences, even when it is right about the technology's potential.
Noble's account of numerical control is the most carefully documented deskilling story in the literature of technology studies, and its relevance to the present moment is not analogical but structural. The same mechanism — the transfer of productive knowledge from skilled workers to a centrally controlled system — operates in both cases, and the institutional dynamics that determine who benefits and who bears the cost have not changed in the intervening seventy years.
The mechanism requires careful description, because its power lies in its specificity.
Before numerical control, the production of a precision metal part was an act of embodied intelligence. The machinist read the engineering drawing — a two-dimensional representation of a three-dimensional object — and translated it into a sequence of operations: roughing cuts to remove bulk material, finishing cuts to achieve the specified tolerances, boring operations for internal cavities, threading operations for fasteners. Each operation required the machinist to select the appropriate cutting tool, set the spindle speed and feed rate, position the workpiece, and guide the tool through the cut. The selections were not arbitrary. They depended on the specific properties of the material being cut — its hardness, its grain structure, its tendency to work-harden or spring back — and on the condition of the cutting tool, which wore at rates that varied with the material, the speed, and the depth of cut. An experienced machinist could hear the cut — could distinguish, by the sound of the tool meeting metal, whether the feed rate was correct, whether the tool was dulling, whether the material was behaving as expected.
This knowledge was tacit. It lived in the machinist's body — in the hands that felt the vibration of the cutting tool through the machine's handwheel, in the ears that registered the pitch of the cut, in the eyes that read the curl of the chip as it came off the workpiece. It could not be fully written down, because it was not fully conscious. The machinist who was asked to explain how he knew when to slow the feed rate would often say something like "you just know" — not because he was being evasive, but because the knowledge operated below the threshold of explicit articulation. It was the kind of knowledge that could only be developed through years of practice, through thousands of cuts on thousands of parts, through the specific, irreplaceable experience of getting it wrong and learning from the wrongness.
Numerical control replaced this embodied intelligence with coded instructions. A programmer, working from the engineering drawing, wrote a sequence of commands specifying the tool path, the feed rate, the spindle speed, the depth of cut. The commands were punched into tape, and the tape was fed into the machine, which executed the instructions without the machinist's intervention. The machinist's role was reduced to setup and monitoring — loading the workpiece, verifying the first part, watching for malfunctions. The knowledge that had lived in the machinist's body was now encoded in the tape. The tape was written by an engineer in a separate department. The engineer had never operated a machine tool. The engineer worked from theoretical specifications rather than embodied experience.
The results were instructive. In many applications, the numerically controlled machines produced parts that met specifications. In many others, they did not — because the theoretical specifications encoded in the program did not account for the variables that the experienced machinist had managed through tacit knowledge: the batch-to-batch variation in material properties, the progressive wear of the cutting tool, the thermal expansion of the workpiece during cutting. When these variables produced out-of-tolerance parts, the solution was not to return knowledge to the machinist. It was to add more instrumentation, more sensors, more feedback loops — to build, at great expense, automated versions of the knowledge that the machinist had provided for free.
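To make the structure of that encoding concrete, here is a minimal sketch in modern code rather than the punched-tape formats of the era; every name and number is invented for illustration, not drawn from any real controller. The point is only structural: the program fixes in advance exactly the parameters the machinist once adjusted in real time, and has nowhere to put the adjustments.

```python
from dataclasses import dataclass


@dataclass
class Cut:
    tool: str               # which cutting tool to load
    spindle_rpm: int        # rotational speed, fixed in advance by the programmer
    feed_mm_per_min: float  # feed rate, fixed in advance by the programmer
    depth_mm: float         # depth of cut, fixed in advance by the programmer


# The "program": every parameter decided off the shop floor, before the metal arrives.
PROGRAM = [
    Cut(tool="roughing end mill", spindle_rpm=1200, feed_mm_per_min=300.0, depth_mm=2.0),
    Cut(tool="finishing end mill", spindle_rpm=2400, feed_mm_per_min=120.0, depth_mm=0.2),
]


def run(program):
    for step in program:
        # The machine executes the numbers exactly as written. Nothing in the
        # instruction encodes what the machinist once managed by feel:
        #   - this batch of stock is harder than the last one
        #   - the tool has dulled since the previous part
        #   - the workpiece has warmed and expanded mid-cut
        # When those variables drift, parts drift out of tolerance, and the fix
        # is more sensors and feedback loops, not a return of judgment to the operator.
        print(f"{step.tool}: {step.spindle_rpm} rpm, "
              f"{step.feed_mm_per_min} mm/min, {step.depth_mm} mm deep")


if __name__ == "__main__":
    run(PROGRAM)
```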
The parallel to natural language AI demands an uncomfortable level of precision.
Before Claude Code, the production of working software was an act of embodied intelligence structurally equivalent to the machinist's craft. The developer read the specification — a description of what the software should do — and translated it into code: data structures to hold the information, algorithms to process it, interfaces to present it, error handling to manage the inevitable failures. Each decision required the developer to select among alternatives based on experience — which data structure would perform best at scale, which algorithm would handle edge cases gracefully, which architectural pattern would make the system maintainable when the requirements changed, as requirements always do. An experienced developer could read code the way the machinist read the chip curl — could sense, from the structure and rhythm of the codebase, where the system was fragile, where the performance would degrade, where the next bug was hiding.
This knowledge, like the machinist's, was substantially tacit. It was built through years of writing code that failed, debugging the failures, refactoring the solutions, and accumulating the kind of understanding that operates below the level of explicit articulation. The developer who was asked why she chose one architectural pattern over another would often appeal to experience rather than principle — "I've seen this pattern break under load" — because the knowledge was indexed to specific situations rather than general rules.
Claude Code replaces this embodied intelligence with outputs generated from statistical patterns in the training data. A user describes the desired functionality in natural language. The model generates code that implements the description. The user reviews the output, tests it, and iterates. The developer's role shifts from writing code to directing and evaluating code — from the machinist's hands-on relationship with the material to the numerical control programmer's abstract specification of the process.
Segal describes this shift in terms of liberation — the backend engineer who starts building interfaces, the designer who begins writing features, the non-technical founder who prototypes over a weekend. The capability expansion is real. But the deskilling is also real, and it operates through the same mechanism Noble documented: the transfer of productive knowledge from the worker to a centrally controlled system.
The developer who uses Claude Code to generate a database schema does not develop the understanding of database design that the developer who wrote the schema by hand would have developed. The schema works. The developer moves on. But the knowledge that would have been deposited through the act of writing — the understanding of normalization, of indexing strategies, of the tradeoffs between read performance and write performance — was never acquired. It was bypassed. The gap between the developer who built the schema with AI and the developer who built it by hand is invisible in the output and enormous in the understanding.
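A concrete, hypothetical version of that gap: the sketch below uses Python's standard sqlite3 module and an invented two-table schema. The names are illustrative; what matters are the comments, each of which marks a judgment that writing the schema by hand forces the developer to confront and that accepting generated output allows them to skip.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Each decision below is one that hand-writing forces and generation bypasses.
conn.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE   -- enforce uniqueness here, or in application code?
    );

    -- Normalized: orders reference users instead of duplicating the email.
    -- Denormalizing would speed some reads at the cost of update anomalies;
    -- knowing when that trade is worth making is exactly the bypassed knowledge.
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        user_id     INTEGER NOT NULL REFERENCES users(id),
        created_at  TEXT NOT NULL,
        total_cents INTEGER NOT NULL  -- integers, not floats, for money
    );

    -- This index makes "a user's orders, newest first" cheap and every insert
    -- slightly more expensive: a read/write tradeoff someone has to have reasons for.
    CREATE INDEX idx_orders_user_created ON orders (user_id, created_at);
""")

conn.close()
```

Whether these are the right choices depends on the workload. The point is that someone has to know enough to ask.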
Noble called this the elimination of the "worker's margin" — the space in the production process where the worker's judgment, experience, and tacit knowledge created value that was not captured in the formal specifications. The worker's margin was where quality lived. It was where the machinist slowed the feed rate because the material sounded wrong, where the developer restructured the code because the architecture felt fragile, where the professional exercised the judgment that separated competent work from excellent work.
AI does not eliminate the need for judgment. Segal is emphatic about this, and he is right. But AI does eliminate many of the developmental experiences through which judgment is formed. The senior engineer whose architectural intuition is the "part that matters" developed that intuition through decades of implementation work — the very work that AI now handles. If the implementation work is what builds the judgment, and the implementation work is automated, how does the next generation develop the judgment that the current generation is monetizing?
The ascending friction thesis — Segal's argument that difficulty relocates upward when lower-level friction is removed — has an answer to this question: the new generation will develop new forms of judgment through engagement with the new, higher-level challenges that AI creates. Noble's historical evidence offers a different prediction. When numerical control eliminated the machinist's developmental path, the next generation of shop-floor workers did not develop new forms of expertise at a higher level. They developed less expertise, period. The quality of machined parts declined in ways that took years to become visible and decades to address through expensive compensating mechanisms — automated inspection systems, statistical process control, quality management bureaucracies that partially substituted for the knowledge that had been designed out of the workforce.
The honest version of the deskilling argument does not deny that AI creates value. It insists that the value and the loss are distributed to different populations. The person who gains is the person who previously could not build — the non-technical founder, the career-switcher, the developer in Lagos whom Segal rightly celebrates. The person who loses is the person who could build, who spent years learning to build, whose economic value and professional identity were grounded in the difficulty of building. The tool that democratizes capability simultaneously commodifies expertise. Both things happen. The question is whether the institutional structures surrounding the technology — the labor market, the educational system, the organizational cultures — are designed to manage both consequences, or only to celebrate the first while ignoring the second.
Noble's verdict on numerical control was that the institutional structures were designed to serve management. The consequences for workers were not managed. They were imposed. The question for AI is whether the institutional structures currently forming around it will produce a different outcome, or whether the structural pressures of market competition will produce the same result under different language.
The language is different. The result, so far, is not.
---
The most consequential fact about numerical control is not that it was built. It is that something else was not.
Record playback was developed in the late 1940s, roughly contemporaneous with numerical control, and it solved the same industrial problem — the automation of complex machining operations — through a fundamentally different relationship with the skilled worker. In record playback, the machinist performed the machining operation by hand while the machine recorded every movement: the position of the tool, the feed rate, the spindle speed, the path through the material. The recording captured not just the explicit parameters but the tacit adjustments — the slight slowdown when the tool entered a harder zone in the material, the compensating angle when the workpiece deflected under cutting pressure, the thousand micro-decisions that an experienced machinist made without conscious deliberation.
The recording was then played back. The machine reproduced the machinist's operation with mechanical precision, part after part, without variation. The automation was real. The productivity gain was genuine. But the knowledge stayed where it had always been — in the hands of the person who understood the work.
Noble documented record playback's advantages with the specificity of a scholar who understood that the case needed to be overwhelming to overcome the assumption that the technology that won must have been the better technology. Record playback was cheaper to implement. It required no separate programming staff. It could be deployed on existing machines with relatively simple modifications. It was adaptable — when a new part required a new operation, the machinist simply performed the operation and recorded it, without the delay of writing and debugging a numerical control program. And in many applications, particularly those involving complex contours or unusual materials, record playback produced better results, because it captured the machinist's real-time adaptation to conditions that the numerical control programmer could only approximate theoretically.
Record playback had real limitations. The recordings were sensitive to material variation — a recording optimized for one batch of aluminum might not perform optimally on a batch with different properties. Editing a recording was difficult; changing a single parameter often required re-recording the entire operation. And record playback was less suited to the production of parts with complex three-dimensional geometries that required simultaneous multi-axis movements, because the human machinist could only control one or two axes at a time with sufficient precision.
These limitations were real engineering problems. They were not insuperable. They would have yielded to development effort — to the kind of sustained, funded research that numerical control received from the Air Force and MIT. Record playback did not receive this research, because the institutions that funded automation research — the military, the major manufacturers, the research universities — were not interested in a technology that preserved the skilled worker's centrality to the production process. They were interested in a technology that eliminated it.
The Air Force's interest was standardization. Military procurement required that parts manufactured by different suppliers be interchangeable to tight tolerances. Record playback, because it depended on individual machinists whose technique varied, could not guarantee this standardization. Numerical control, because it executed identical programmed instructions regardless of the operator, could. The standardization argument was technically coherent. It was also a political argument disguised as a technical one: standardization meant that no individual worker's skill was essential, which meant that no individual worker's bargaining power could hold up the production schedule.
Management's interest was control. A factory organized around record playback was a factory in which the skilled machinist remained the essential figure — the person whose knowledge was the foundation of the automated process, the person whose cooperation was necessary for the system to function, the person whose bargaining position was strengthened rather than weakened by the introduction of the machine. Management did not want this factory. Management wanted a factory in which the essential knowledge was encoded in programs written by engineers, where it could be standardized, audited, and deployed without dependence on any particular worker's skill or willingness.
MIT's interest was intellectual prestige. Record playback was an engineering solution — clever, practical, effective, but not theoretically interesting. Numerical control was a research program — mathematically sophisticated, computationally challenging, intellectually glamorous. The Servomechanisms Laboratory's reputation, and the careers of the researchers who staffed it, were better served by the program that advanced the state of the art in control theory than by the program that captured a machinist's hand movements on magnetic tape. Noble documented how the intellectual preferences of the research institution aligned with the political preferences of the funding institution to produce a technology whose design reflected both preferences — and neither the preferences nor the interests of the workers whose knowledge it displaced.
The AI equivalent of record playback would be a system designed to learn from and amplify the specific expertise of individual practitioners rather than to replace expertise with general-purpose capability. Such systems are technically feasible. They exist in narrow domains: medical diagnostic systems trained on the reasoning patterns of expert clinicians, legal analysis tools built on annotated case law curated by senior attorneys, engineering design systems that learn from the specific decision patterns of experienced designers. These systems enhance the expert's reach without eliminating the expert's role. They are most powerful in the hands of the most knowledgeable practitioners, creating a positive feedback loop between human knowledge and machine capability. The expert's judgment remains the irreplaceable input. The machine extends the output.
The dominant paradigm in AI development followed the numerical control pattern instead. Large language models are trained not on the specific knowledge of individual experts but on the aggregate of all publicly available text and code — the entire commons of human knowledge, undifferentiated, unattributed, and uncompensated. The models produce outputs that are statistically adequate across a vast range of tasks, without being specifically adapted to any particular practitioner's expertise. The design choice means that the tool is most useful to the person who lacks expertise — the novice for whom adequate output represents a dramatic improvement over what they could produce alone — and least useful to the person who already possesses deep knowledge, for whom the tool's general-purpose adequacy may be inferior to their own specific mastery.
This is not a technical inevitability. It is a market logic. Expert-amplifying AI serves a smaller market — the population of existing experts — than expertise-replacing AI, which serves the vastly larger market of everyone who lacks expertise and wants the tool to supply it. Anthropic and its competitors are businesses operating in a competitive market. They build what the market will buy. The market will buy tools that make expertise unnecessary, because expertise is expensive and the people who pay for it — employers, clients, institutions — would prefer not to.
Segal's ascending friction thesis offers a genuine response to the deskilling concern: when lower-level skills are automated, practitioners are freed to develop higher-level capabilities. But Noble's record playback research reveals the institutional assumption hidden inside this thesis. The machinists displaced by numerical control were also told that their expertise would ascend — from operating the machine to supervising the automated process, to quality control, to process engineering. In practice, the ascending path required credentials the machinists did not have, institutional sponsorship they were not offered, and organizational structures that did not exist. The higher-level roles were filled by engineers and managers from the engineering department — the same department that had written the programs that displaced the machinists in the first place.
The question is not whether ascending friction is theoretically possible. It is whether the institutions surrounding the technology are designed to support the ascent. Noble's evidence, across multiple technologies and multiple decades, suggests that they typically are not. The Trivandrum training that Segal describes — twenty engineers retrained over a week by a founder who chose investment over extraction — is real, and it is admirable. It is also the exception that illuminates the rule. The rule is that employers capture productivity gains and minimize labor costs, because market competition punishes employers who do otherwise. The employer who retains twenty engineers when five would suffice, who invests in retraining when replacement is cheaper, who builds organizational structures for ascending expertise when the tool makes expertise optional — this employer bears costs that competitors do not. The market does not reward this choice. It punishes it, reliably and repeatedly, until the moral commitment that motivated it exhausts the resources that sustain it.
Record playback was not a perfect technology. Expert-amplifying AI is not a perfect paradigm. But the choice between paradigms — between technologies that preserve the worker's knowledge as the foundation of automation and technologies that replace it with centrally controlled general capability — is a political choice with distributional consequences. The choice that was made in the 1950s served management's interests. The choice being made now serves the same interests, under different language, at civilizational rather than industrial scale. The road not taken remains not taken, and the reasons have not changed.
---
In 1911, Frederick Winslow Taylor published *The Principles of Scientific Management* and inaugurated a century-long project to transfer productive knowledge from workers to management. Taylor's insight, which he presented as science but which Noble recognized as strategy, was that the worker who controlled the knowledge of how work was done also controlled the pace, the quality, and the conditions of the work itself. The worker who knew the best way to handle pig iron or load a shovel — how much weight to move at once, what pace to sustain, how many rest breaks to take — could use that knowledge to set a pace that suited the worker rather than the employer. Taylor's solution was to study the work scientifically, codify the knowledge in management-controlled procedures, and then prescribe the procedure to the worker, who was now required to follow instructions rather than exercise judgment.
The process had a name that was more revealing than Taylor perhaps intended: the separation of conception from execution. The worker who conceived and executed the work was autonomous. The worker who merely executed work conceived by someone else was controlled. Taylor's project was to make the separation systematic across all of industrial production.
Noble argued that numerical control was Taylor's project brought to its logical completion in the machine shop. The machinist who both conceived and executed the machining operation — who read the drawing, planned the sequence of cuts, selected the tools, set the parameters, and guided the work through the material — was the last autonomous craftsman in industrial production. Numerical control completed the separation: the programmer conceived, the machine executed, and the machinist was reduced to a monitor who loaded material and watched for malfunctions.
AI completes this separation for knowledge work. And it does so through a mechanism that Noble documented in the machine shop but that reaches its full expression only now, with the arrival of systems capable of encoding human knowledge at civilizational scale: the coding of tacit knowledge into explicit, transferable, and — critically — ownable form.
Tacit knowledge is knowledge that cannot be fully articulated. The machinist's feel for the cut, the surgeon's sense of when tissue is healthy or diseased, the developer's intuition about where a codebase is fragile — these are forms of understanding that operate below the threshold of explicit formulation. They are developed through practice, refined through experience, and transmitted through apprenticeship rather than instruction. They are also, from management's perspective, profoundly inconvenient. Tacit knowledge cannot be standardized. It cannot be audited. It cannot be transferred from one worker to another on a Monday morning. And because it cannot be separated from the worker who possesses it, it gives the worker bargaining power that management cannot eliminate through substitution.
Taylor's project was to make tacit knowledge explicit — to study what the worker did, codify it in written procedures, and thereby transfer control of the knowledge from the worker to the management system. The project was partially successful. Many forms of manual work yielded to time-and-motion study, and the codified procedures that resulted allowed less skilled workers to approximate the performance of more skilled ones. But tacit knowledge resisted full codification. The written procedure could capture the what but not the why — not the judgment, the adaptation, the real-time response to unexpected conditions that distinguished the skilled practitioner from the procedure-follower.
Noble documented how numerical control attempted to complete what Taylor had started. The programmer's task was to encode the machinist's knowledge — the cutting speeds, the tool geometries, the sequencing of operations — in a formal language that the machine could execute. The encoding was inevitably incomplete. The programmer worked from theoretical specifications rather than embodied experience. The specifications captured the parameters that could be measured and recorded — speeds, feeds, coordinates — but not the parameters that the machinist managed through feel: the vibration of the tool, the sound of the cut, the appearance of the chip. The encoded knowledge was a thin approximation of the original, and the quality of the output often reflected the approximation's thinness.
The structural parallel to large language models is not metaphorical. LLMs are, at the most fundamental level, machines for coding tacit knowledge. The millions of code repositories, Stack Overflow answers, technical blog posts, and documentation files that constitute the training data are the explicit residue of tacit knowledge — the written traces left by practitioners who were trying to communicate what they knew to others. The model ingests these traces and produces a statistical representation of the patterns they contain. The representation is powerful. It can generate code that works, prose that reads well, analyses that are structured coherently. It captures the "what" of professional knowledge — the patterns, the conventions, the standard approaches — with impressive fidelity.
What it cannot capture is what Taylor's procedures could not capture and what numerical control programs could not capture: the judgment that produces the pattern, the understanding that underlies the convention, the reasoning that makes the approach appropriate in one context and catastrophic in another. The model generates code that follows the patterns it learned from millions of developers. It does not understand why those patterns exist, when they should be violated, or what happens when the context deviates from the training distribution in ways that the statistics cannot anticipate. This is not a limitation that will be engineered away with the next model generation. It is a structural feature of the coding process itself — the inevitable information loss that occurs when embodied, contextual, judgment-laden knowledge is compressed into a statistical representation.
Segal recognizes this limitation with more precision than most AI advocates. His account of Claude generating a passage that "sounded like insight but broke under examination" — the Deleuze failure described in *The Orange Pill*'s Chapter 7 — is a description of exactly what happens when coded knowledge substitutes for understanding. The output was plausible. It followed the patterns of genuine insight. It was wrong in a way that only someone who possessed the tacit knowledge — someone who had actually read Deleuze — could detect. The smoothness of the output concealed the absence of the understanding that would have prevented the error.
But Noble's analysis extends beyond the limitations of any particular model to the political economy of the coding process itself. When knowledge is coded into a system, the question of who owns the code becomes the question of who owns the knowledge. In the numerical control story, management owned the programs. The machinists' knowledge — extracted, encoded, and transferred to the programming department — became corporate property. The machinists had no ownership claim on the programs that embodied their expertise, no governance role in determining how the programs would be used, no compensation for the knowledge that the programs contained.
The AI training process replicates this dynamic at a scale the numerical control era could not have imagined. The collective knowledge of the global software development community — millions of person-years of accumulated expertise, deposited in public repositories and forums over decades — has been ingested by AI companies, encoded in proprietary models, and is now sold back to the community as a subscription service. The developers whose knowledge trained the models have no ownership stake in the models, no governance voice in the companies that built them, and no compensation for the training value their work provided. The knowledge that was collective — shared freely in the commons, under the assumption that it would remain a shared resource — has been privatized. The commons has been enclosed.
Noble would recognize this process immediately. The enclosure of the commons — the conversion of shared resources into private property — is one of the oldest mechanisms of capital accumulation, and Noble documented its technological variant across multiple industries. The specific form that AI enclosure takes is novel: the resource being enclosed is not land or water or mineral rights but human knowledge itself, the accumulated cognitive output of an entire profession. But the structural logic is identical. A shared resource is appropriated by a private actor. The private actor adds value through processing and infrastructure. The processed resource is sold back to the population from which it was extracted. The profits flow to the private actor. The costs — the loss of the shared resource, the devaluation of the expertise that the resource embodied, the erosion of the bargaining power that the expertise conferred — are distributed across the population.
Segal frames AI as a river — a natural force that flows through civilization, to be channeled but not controlled. Noble's framework suggests a correction. AI is not a river. A river is not designed. A river is not owned. A river does not have shareholders, a board of directors, or a terms-of-service agreement that specifies the conditions under which you are permitted to drink from it. AI is an artifact — designed by specific people, funded by specific institutions, shaped by specific interests, and deployed under conditions that those interests determine. To treat it as a natural force is to naturalize the political choices that produced it — to place those choices beyond contestation by presenting them as features of the landscape rather than decisions made by identifiable actors for identifiable reasons.
Taylor wanted to separate conception from execution. Noble documented the technological completion of that separation in the machine shop. AI promises to complete it for all of knowledge work. The conception — what should be built, for whom, and why — remains with the human. The execution — the writing of the code, the drafting of the document, the production of the artifact — is transferred to the machine. The separation sounds clean. It sounds liberating. It sounds, in Segal's telling, like ascension to a higher floor.
But the higher floor is accessible only to those who were already near the ceiling. The developer whose career consisted primarily of execution — the junior developer, the mid-career specialist, the worker whose value was located in the ability to implement rather than to conceive — does not ascend. That developer is displaced, in the same way the machinist was displaced, and for the same structural reason: the knowledge that made the worker valuable has been coded into a system that someone else controls.
The coding of knowledge is not a neutral technical process. It is an act of appropriation that redistributes power from those who produce knowledge to those who own the systems that encode it. Taylor understood this. Noble documented it. The AI industry is performing it at a scale that would have astonished them both. And the language of liberation, democratization, and empowerment in which the process is described does not change its structural character. It only makes the character harder to see.
---

The Hawthorne Works of Western Electric, on the western edge of Chicago, employed some forty thousand people in the 1920s when researchers arrived to study the relationship between lighting and productivity. The researchers changed the lighting. Productivity went up. They changed it again. Productivity went up again. They dimmed the lights to near darkness. Productivity held. The workers, it turned out, were not responding to the illumination. They were responding to the observation. The mere fact of being watched altered behavior in ways that had nothing to do with the variable being studied.
The Hawthorne effect entered the vocabulary of industrial psychology as a curiosity — a methodological artifact, a reminder that human subjects are unreliable. Noble read it differently. The Hawthorne experiments were not a failure of experimental design. They were a demonstration of the relationship between surveillance and control. The workers who knew they were being observed worked harder, not because they were motivated by the attention but because they could not afford the consequences of being seen to work less. The observation was not a confound. It was the finding. Visibility produces compliance. And the architecture of the modern workplace — from the factory floor to the open-plan office to the AI-augmented workstation — has been designed, with increasing sophistication, to make work visible to the people who manage it.
Noble's research on the factory floor documented how the physical organization of production served management's interest in surveillance long before anyone used that word. The layout of machines in a numerically controlled shop was not determined solely by the logic of material flow. It was determined by the logic of sightlines — the manager's ability to observe multiple workstations from a single position, the foreman's capacity to monitor the pace of production without walking the floor, the arrangement of the workspace in ways that made deviation from prescribed procedures visible and therefore punishable. The panopticon — Jeremy Bentham's circular prison in which the inmates can always be seen but can never see the observer — was not merely a metaphor for industrial organization. It was a design principle.
The open-plan office translated this principle to knowledge work with an efficiency that the factory floor never achieved. When walls came down in the 1960s and 1970s — initially under the banner of collaboration and egalitarianism — the effect was to make every knowledge worker's activity visible to every other knowledge worker, and especially to management. The developer who stared out the window for twenty minutes, working through a difficult architectural problem in her head, was indistinguishable from the developer who stared out the window for twenty minutes because she was bored. Both were visible. Both were legible as not-working. The open plan rewarded visible activity — typing, talking, moving — and penalized invisible activity — thinking, reflecting, hesitating. It optimized for the appearance of productivity rather than productivity itself, and the workers who survived in it learned to perform busyness regardless of whether the busyness was productive.
AI-augmented work completes a trajectory that the factory floor began and the open plan advanced. It does so by making visible not just the worker's physical activity but the worker's cognitive process.
Every interaction with an AI tool is logged. Every prompt is recorded. Every response is preserved. The complete chain of the worker's thinking — the questions asked, the directions pursued, the alternatives considered and rejected — is captured in the interaction history with a granularity that no previous surveillance technology could approach. The factory floor could observe what the worker's hands were doing. The open-plan office could observe what the worker appeared to be doing. The AI tool observes what the worker is thinking, or at least what the worker is thinking about, because the prompts are a real-time record of cognitive activity directed at the tool.
Segal describes this transparency as a feature of the collaboration. Claude "holds the context," he writes. The tool remembers what was discussed, what was attempted, what direction the work has taken. This is presented as an enhancement of the creative process — the conversation partner who never forgets, who can retrieve any thread at any moment, who maintains continuity across sessions in ways that human memory cannot.
The same feature, read through Noble's framework, is a surveillance capacity of extraordinary power. The employer who has access to the interaction logs of an AI-augmented workforce possesses something that no previous generation of managers has possessed: a complete, timestamped, searchable record of what every employee was thinking about during every minute of the workday. Not what they produced — the code, the document, the deliverable — but how they produced it. Which questions they asked. Which approaches they considered. How long they spent on each problem. Whether they pursued dead ends. Whether they worked efficiently or inefficiently, as measured by the ratio of prompts to useful output.
The surveillance potential is not speculative. Enterprise AI platforms already offer analytics dashboards that track usage patterns, prompt frequency, output volume, and task completion rates. The data exists. The infrastructure for analyzing it exists. The organizational incentive to analyze it — to identify which workers are using AI effectively and which are not, to benchmark individual performance against team averages, to detect deviation from prescribed workflows — exists as powerfully as the foreman's incentive to monitor the factory floor.
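None of this requires exotic machinery. The sketch below shows how such metrics fall out of ordinary interaction logs; the log format, field names, and users are invented for illustration, not taken from any actual platform. The distance between "the tool holds the context" and "the employer holds the dashboard" is a few dozen lines.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical log: one record per prompt sent to the AI tool.
# Field names are illustrative, not any vendor's actual schema.
LOG = [
    {"user": "dev_a", "ts": "2025-06-02T09:04:00", "output_chars": 2400, "task_done": True},
    {"user": "dev_a", "ts": "2025-06-02T12:31:00", "output_chars": 300,  "task_done": False},
    {"user": "dev_b", "ts": "2025-06-02T09:10:00", "output_chars": 1900, "task_done": True},
]


def usage_report(log):
    """Per-worker prompt frequency, output volume, completion rate, and active
    hours -- the aggregates an analytics dashboard would surface."""
    stats = defaultdict(lambda: {"prompts": 0, "output_chars": 0, "completed": 0, "hours": set()})
    for rec in log:
        s = stats[rec["user"]]
        s["prompts"] += 1
        s["output_chars"] += rec["output_chars"]
        s["completed"] += int(rec["task_done"])
        # Hour-of-day buckets make lunch breaks and quiet stretches legible as absence.
        s["hours"].add(datetime.fromisoformat(rec["ts"]).hour)
    return {
        user: {
            "prompts": s["prompts"],
            "output_chars": s["output_chars"],
            "completion_rate": s["completed"] / s["prompts"],
            "active_hours": sorted(s["hours"]),
        }
        for user, s in stats.items()
    }


if __name__ == "__main__":
    for user, row in usage_report(LOG).items():
        print(user, row)
```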
Noble's insight was that surveillance technology does not need to be used to be effective. The panopticon works not because the guard watches every prisoner at every moment but because every prisoner knows the guard might be watching at any moment. The AI interaction log works the same way. The worker who knows that every prompt is recorded, that every cognitive detour is visible, that the entire thinking process is available for management review, adjusts behavior accordingly. The adjustment is not dramatic. It is subtle, pervasive, and largely unconscious: a tendency to prompt in ways that look productive, to avoid exploratory queries that might appear unfocused, to optimize the interaction history for the audience that might review it rather than for the problem that needs solving.
The Berkeley researchers whose work Segal discusses documented a related phenomenon without naming it as surveillance: the tendency for AI-accelerated work to colonize previously protected cognitive spaces. Workers prompted during lunch breaks, in elevators, in the minutes between meetings. The researchers attributed this to the tool's availability and the workers' internalized achievement pressure — what Han would call auto-exploitation, what Segal frames as the struggle between flow and compulsion.
Noble's framework suggests an additional mechanism. The workers prompted in elevators not only because the tool was available and the impulse was present but because the interaction log was accumulating, and the log was a performance record, and the performance record was visible to the institution that employed them. The colonization of pauses was not purely internal. It was also institutional — the product of a system in which cognitive downtime, because it leaves no trace in the interaction log, reads as absence, and absence, in a surveillance economy, reads as underperformance.
The distinction matters because it determines the appropriate response. If the problem is purely internal — auto-exploitation, the internalized achievement imperative — then the solution is individual: meditation, boundaries, the "AI Practice" that the Berkeley researchers proposed. If the problem is also institutional — the product of a surveillance architecture that penalizes cognitive rest — then the individual solution is insufficient. The worker who meditates while her colleagues prompt is the worker whose interaction log shows less activity. In a system that measures activity, less activity is a liability. The individual solution, adopted in isolation, makes the individual worker more vulnerable to the institutional pressures that the solution is meant to address.
Noble documented this dynamic in the factory. When individual machinists attempted to resist the speedup that numerical control enabled — working at a pace they considered sustainable rather than the pace the program prescribed — they were identified, disciplined, and replaced. Individual resistance to institutional pressure is possible only when it is collective — when enough workers resist simultaneously that the institution cannot replace them all. This is why Noble insisted that the labor question was always a collective question, never an individual one. The individual worker's capacity to set boundaries depends on the collective's capacity to enforce them.
AI-augmented work atomizes the collective. When a single developer can produce what a team produced, the team dissolves. When the team dissolves, the collective structures that enabled resistance — the shared understanding of reasonable pace, the mutual support that made individual boundary-setting possible, the solidarity that prevented management from isolating and punishing individual resisters — dissolve with it. The atomized, AI-augmented worker is more productive and more surveilled and more alone than the worker embedded in a team. The productivity is celebrated. The surveillance is architecturally invisible. The aloneness is not discussed.
Segal proposes organizational dams — protected mentoring time, sequenced workflows, structured pauses — and these proposals are genuine and well-intentioned. But organizational dams built by management serve management's interests, even when they are designed by compassionate managers. The dam that management builds is the dam that management can remove when the quarterly numbers come due. Noble would argue, as he argued throughout his career, that the only dams that durably protect workers are dams that workers build collectively — through unions, through professional associations, through governance structures that give workers a voice in determining the conditions under which surveillance is conducted, the metrics by which performance is measured, and the boundaries beyond which the employer's access to cognitive data does not extend.
The factory floor made the worker's hands visible. The open-plan office made the worker's body visible. The AI-augmented workstation makes the worker's mind visible. Each transition was presented as an improvement in collaboration, communication, and efficiency. Each was also an extension of management's capacity to observe, measure, and control the production process. The trajectory is consistent. The language that accompanies it changes with each generation, but the direction does not.
The question is not whether the surveillance capacity will be used. It is whether the people being surveilled will have any say in how.
---
In the numerical control story, one figure stands between the machinist and the machine: the programmer. The programmer was management's solution to the problem of the autonomous craftsman — the intermediary who translated engineering specifications into coded instructions, who converted the machinist's embodied judgment into a sequence of commands that the machine could execute without the machinist's participation. The programmer occupied a position of apparent authority: the person who determined what the machine would do, how it would do it, and at what pace. The programmer was, in the language of the era, the brains of the operation.
Noble documented a detail that the industry's self-presentation consistently obscured: the programmer was also controlled. The programming standards were set by management. The software platforms that the programmer used were selected by the engineering department. The specifications that the programmer translated into code were determined by a chain of authority that ran from the customer through the sales department through the engineering department to the programmer's desk. The programmer exercised judgment within constraints set by others — constraints that determined what kinds of judgment were permissible and what kinds were not.
The programmer could optimize a tool path. The programmer could not question whether the part should be manufactured at all. The programmer could select among standard subroutines. The programmer could not redesign the subroutine library. The programmer could write elegant code within the specification. The programmer could not change the specification. The autonomy was real at the level of individual operations and illusory at the level of structural decisions. The programmer was free to decide how, within a framework that predetermined what and why.
Noble called this the paradox of the programmer's autonomy: the more the programmer's skill was celebrated as the essence of the automated system, the more completely the programmer's authority was bounded by institutional constraints that the celebration obscured. The programmer replaced the machinist as the locus of productive knowledge, but the programmer was no more free than the machinist had been — only free in a different register, at a different level of abstraction, within different walls.
The AI transition produces a new figure in this lineage: the prompter. The person who converses with the large language model, who describes the desired output in natural language, who directs the tool through a sequence of specifications that gradually refine the result. Segal describes this figure in terms that emphasize liberation — the person freed from translation overhead, operating in natural language rather than formal syntax, directing the machine on the human's own terms rather than adapting to the machine's requirements.
The liberation is real at the level of interface. The constraints are real at every other level.
The prompter's autonomy is bounded, first, by the model's capabilities. The prompter can request only what the model can produce. The boundary between what the model can and cannot do is not set by the prompter. It is set by the AI company — by the training decisions, the safety constraints, the capability investments, the commercial priorities that determine what the model is good at and what it refuses to attempt. The prompter operates within a possibility space defined by someone else, in the same way that the numerical control programmer operated within a subroutine library defined by the software vendor. The possibility space is vast, but it is not unlimited, and its boundaries are not negotiable.
The prompter's autonomy is bounded, second, by the employer's directives. The developer who uses Claude Code at work prompts for what the employer needs built. The direction of the cognitive work — which problems to solve, which features to implement, which products to develop — is determined by the organizational hierarchy, not by the prompter's independent judgment. Segal describes engineers "reaching across domains," building features outside their traditional specialization. The reaching is real. The direction of the reach is institutionally determined. The backend engineer who builds a frontend feature does so because the product roadmap requires it, not because the engineer chose to explore interface design as an act of autonomous curiosity.
The prompter's autonomy is bounded, third, by the platform's terms of service. The conditions under which Claude can be used — what data can be submitted, what outputs can be generated, what uses are prohibited — are determined by Anthropic and encoded in a legal agreement that the user accepts as a condition of access. The terms are not negotiated. They are presented. The user who disagrees with a term has one option: not to use the platform. In a market where AI augmentation is rapidly becoming a professional necessity — where the developer who does not use AI tools is increasingly uncompetitive — the option not to use the platform is formal rather than practical. The constraint is real. The choice is nominal.
Noble would recognize this structure immediately. It is the structure of the company town — the community in which the employer owns the infrastructure, sets the terms of access, and presents the terms as a condition of participation in economic life. The worker in the company town is free to leave. The worker who leaves has nowhere to go. The freedom is juridically real and practically empty.
The AI platform is not a company town in the literal sense. Multiple platforms compete for users. The market provides alternatives. But the alternatives share structural features — the same training-data extraction model, the same individual-user interface paradigm, the same terms-of-service framework — because they were built within the same market logic and the same institutional pressures. The choice among platforms is a choice among variants of the same fundamental design, in the same way that the choice among numerical control systems in the 1960s was a choice among variants of the same fundamental approach — the approach that centralized programming authority in engineering departments and eliminated the machinist's autonomy.
Segal's concept of the "Creative Director" — the figure who directs AI toward a vision, who decides what should be built rather than building it — represents the highest expression of the prompter's role. The Creative Director exercises judgment, taste, strategic vision. These are genuine capacities, and they are genuinely valuable. But the Creative Director operates within a framework that constrains judgment in ways that are worth naming.
The Creative Director can direct the machine to build anything the machine is capable of building. The Creative Director cannot direct the machine to build something the machine's designers have decided it should not build. The Creative Director can choose among possibilities the platform makes available. The Creative Director cannot expand the platform's possibility space. The Creative Director's authority is real within the boundaries of the system, and the boundaries of the system are set by someone else.
This is the paradox of the prompter's autonomy, and it is structurally identical to the paradox of the programmer's autonomy that Noble documented forty years ago. The figure at the interface — the person who appears to direct the machine — is celebrated as the locus of creative authority. The celebration is not false. The authority is real. But it operates within constraints that the celebration does not mention, constraints set by the institutions that control the platform, the training data, the deployment infrastructure, and the terms under which access is granted.
The question Noble asked about the programmer — who programs the programmer? — replicates at the new level of abstraction. Who prompts the prompter? Who determines the possibility space within which the prompter's creativity operates? Who sets the boundaries that the prompter's judgment cannot cross?
The answer is the same answer Noble found on the factory floor: the institutions that control the means of production. The means of production have changed. The institutional logic has not.
---
In 1964, the United States government established the National Commission on Technology, Automation, and Economic Progress to study the impact of automation on American workers. The commission heard testimony from industrialists, economists, labor leaders, and technologists. The industrialists testified that automation was inevitable, that resistance was futile, and that the displaced workers would find new employment in the expanding economy. The labor leaders testified that their members were losing jobs, losing income, and losing the craft knowledge that had defined their professional identities for generations. The economists testified that the aggregate data supported the industrialists: productivity was rising, GDP was growing, the economy was expanding.
The commission's final report, issued in 1966, sided with the industrialists and the economists. Automation was progress. Progress was inevitable. The displaced workers would adapt. The report recommended job retraining programs and improved education. It did not recommend any alteration to the trajectory of automation itself, because the trajectory was presented as a natural process — a force of economic evolution that could be accommodated but not directed.
Noble spent years in the archives of commissions like this one, and the pattern he found was invariant. At every major technological transition, the same three-act drama played out. First, the new technology was presented as inevitable — the natural next step in a progression that could not be halted without sacrificing progress itself. Second, the costs of the transition were acknowledged but subordinated to the benefits — yes, some workers would be displaced, but the economy would grow, and growth would create new opportunities. Third, the responsibility for adaptation was placed on the workers who bore the costs rather than on the institutions that imposed them — retrain, relocate, reinvent yourself.
The mythology of inevitable progress serves a specific political function: it forecloses the question of alternative designs. If the technology is inevitable, then the specific form it takes is also inevitable, and the distributional consequences of that form are natural outcomes rather than political choices. The factory owner who chose numerical control over record playback was not making a political decision. He was responding to the inevitable march of progress. The AI company that chose general-purpose expertise-replacement over expert-amplifying design was not making a political decision. It was building what the technology demanded.
Noble's career was devoted to dismantling this mythology, and the dismantling requires a specific intellectual move: the recovery of the alternatives that the mythology buries. At every point where the mythology says "this was inevitable," Noble's research found a fork in the road — a moment where a different design was available, where a different choice was possible, where the technology could have developed along a different trajectory that would have distributed power differently. The alternatives were not imaginary. They were developed, tested, sometimes proven superior, and rejected — not because they failed technically but because they succeeded politically in ways that the institutions controlling the development process did not want.
*The Orange Pill* reproduces the mythology with a sophistication that makes the reproduction harder to detect. Segal does not claim that AI is inevitable in the crude sense of the 1966 commission report. He uses a more elegant formulation: AI is a river. Intelligence is a force of nature, flowing through increasingly complex channels for billions of years. The arrival of AI is not an invasion but a branching — the river finding a new channel, the way it has always found new channels.
The river metaphor naturalizes AI's development in exactly the way that Noble's work warns against. A river is not designed. A river does not have shareholders. A river did not choose to flow through this valley rather than that one because this valley's geography served the interests of a particular set of institutions. A river is a natural phenomenon, and natural phenomena are, by definition, beyond political contestation. You cannot argue with a river. You can only build dams.
Noble would insist on a different metaphor. AI is not a river. AI is a canal — a channel dug by specific people, with specific tools, funded by specific institutions, routed through specific terrain chosen for specific reasons. A canal looks like a river if you did not watch it being dug. It carries water like a river. It supports life like a river. But it was designed, and the design reflects the interests of the people who commissioned it, and the route it follows was chosen from among alternatives, and the alternatives that were rejected were rejected for reasons that the canal's beneficiaries prefer not to discuss.
Segal's five-stage model of technological transition — threshold, exhilaration, resistance, adaptation, expansion — is a progress narrative. It assumes that the trajectory bends toward expansion, that each transition ultimately produces more capability, more reach, more possibility. Noble would tell the same history through a different frame. The trajectory does bend toward expansion — but expansion of what, and for whom? The expansion of productive capability is real. The expansion of GDP is real. The expansion of the technological frontier is real. But the expansion of human welfare — of dignity, autonomy, security, and meaningful work — does not follow automatically from the expansion of productive capability. It follows only when the institutional structures surrounding the technology are designed to distribute the gains broadly, and those structures are never built automatically. They are built through political struggle, by the people who bear the costs of the transition, against the resistance of the people who capture the gains.
The Luddites did not fail because they were wrong about the technology. They failed because they lacked the institutional power to redirect the technology toward their interests. Their children got the eight-hour day and the weekend, as Segal notes. What Segal does not note is that the eight-hour day and the weekend were not natural consequences of industrial expansion. They were won through decades of labor organizing, strikes, legislative battles, and political violence. The dams were not built by the river. They were built by people who stood in the river and fought the current, often at the cost of their livelihoods and sometimes at the cost of their lives.
The mythology of inevitable progress turns this history into a bedtime story — a narrative in which the disruption was temporary, the adaptation was organic, and the expansion was shared. The actual history is a narrative of sustained political conflict in which the expansion was shared only because organized workers forced the sharing against the systematic resistance of the institutions that controlled the technology and preferred to keep the gains.
The AI transition will not be different because the technology is different. The institutional dynamics — the pressure to capture gains, the incentive to minimize labor costs, the structural advantage of capital over labor in determining the design and deployment of productive technology — have not changed. What has changed is the speed of the transition and the weakness of the institutions that historically mediated it. The labor movement is weaker than at any point since the early industrial era. The regulatory infrastructure is slower than the technology it attempts to govern. The political systems that might impose distributional constraints are captured by the economic interests that benefit from their absence.
Segal calls for dams. He is right that dams are needed. But the call for dams, absent an analysis of who will build them and against whose resistance, is an invocation rather than a strategy. Noble's career was devoted to the proposition that the dams do not build themselves. They are built by organized people who understand their interests and possess the collective power to advance them. The mythology of inevitable progress serves precisely to prevent this understanding — to convince the people who bear the costs that the costs are natural, temporary, and ultimately beneficial, so that they do not organize, do not resist, and do not demand a voice in determining the trajectory of the technology that is reshaping their lives.
The mythology is powerful. It is also a choice. And choices, unlike rivers, can be contested.
---
In the machine shops that Noble studied, there was a category of knowledge that management could never fully account for because it was never fully visible. The machinists called it "feel" — the capacity to sense, through the vibration of the handwheel and the sound of the cutting tool and the appearance of the metal chip curling off the workpiece, whether the operation was proceeding correctly or heading toward failure. A machinist with thirty years of experience could detect a failing bearing in a spindle motor by the change in its sound — a shift in frequency so subtle that an oscilloscope would have struggled to measure it, but that the trained ear registered instantly as wrong. A machinist could feel, through the resistance of the tool against the metal, whether the material was the alloy the specification called for or a harder substitute that required different cutting parameters. A machinist could look at the chip — at its color, its curl, its thickness — and know whether the cutting speed was correct, whether the tool was wearing, whether the depth of cut was producing the surface finish the part required.
This knowledge did not exist in any document. It could not be extracted through interviews, because the machinists who possessed it could not fully articulate it. When asked how they knew the bearing was failing, they said they could hear it. When asked what exactly they heard, they could not produce a frequency measurement or a decibel reading. The knowledge was real — it saved the company thousands of dollars in prevented machine damage, rejected parts, and production delays — but it was not representable in the formats that management's information systems could process.
Noble documented what happened when this knowledge was eliminated from the production process. The numerically controlled machines did not listen to their own bearings. They did not feel the resistance of the material. They did not read the chip curl. They executed their programs with mechanical precision, and when the conditions deviated from what the program assumed — when the material was harder than specified, when the tool was worn beyond its useful life, when the bearing was failing — the machine continued to execute the program, producing scrap parts or damaging itself or both, until a human operator noticed the problem or the machine stopped.
The quality of output declined. Not dramatically, not immediately, but steadily, in ways that took months to measure and years to fully understand. The decline was masked by the productivity gains — the numerically controlled machines produced parts faster than the machinists, and the speed compensated for the higher rejection rate in the aggregate statistics. But the quality problem was real. Parts that the experienced machinist would have caught during production passed through the automated process and were discovered only at inspection — or worse, after assembly, where a single failed component required disassembly, diagnosis, and replacement at a cost that dwarfed the savings from automated production.
Management's response was not to restore the machinist's role. It was to add compensating systems: automated inspection equipment, statistical process control programs, quality management bureaucracies that monitored the output of the automated process and caught the errors that the machinist's feel would have prevented. These compensating systems worked, imperfectly and expensively. They substituted formal, measurable, manageable knowledge for the tacit, embodied, unmanageable knowledge they had replaced. The substitution was never complete. The compensating systems caught many of the errors. They did not catch all of them, because the errors that tacit knowledge prevented were precisely the errors that formal systems had difficulty anticipating — the unusual combination of conditions, the rare material defect, the subtle machine deterioration that would have been obvious to an experienced hand and was invisible to a sensor array designed to monitor known failure modes.
The parallel to AI-augmented knowledge work demands the same uncomfortable specificity that Noble brought to the machine shop.
Segal describes a developer who lost architectural intuition after Claude assumed the implementation work that had previously built it. The developer had spent years writing configuration files, managing dependencies, debugging integration failures — tedious work, certainly, but work that deposited, in Noble's language, thin layers of understanding with every hour. The understanding was not in the configuration files. It was in the experience of writing them — in the failures that revealed how systems connected, in the debugging sessions that exposed architectural assumptions the documentation had never made explicit, in the slow accumulation of pattern recognition that an experienced developer draws on when evaluating whether a proposed design will survive contact with production reality.
Claude eliminated the tedium. It also eliminated the developmental substrate. The configuration files are written correctly. The dependencies are resolved. The integration works. And the developer who directed the process has not built the understanding that the process would have produced, because the understanding was a byproduct of the struggle, and the struggle has been optimized away.
This is not a sentimental argument about the virtues of suffering. It is an empirical observation about the relationship between experience and expertise. The machinist's feel was not a romantic attachment to manual labor. It was a genuine perceptual capacity, developed through sustained interaction with physical materials, that produced information unavailable through any other channel. The developer's architectural intuition is not a romantic attachment to tedious code. It is a genuine cognitive capacity, developed through sustained interaction with complex systems, that produces judgment unavailable through any other method.
When the developmental substrate disappears, the capacity it produces disappears with it — not immediately, but generationally. The senior developers who built their intuition before AI retain it. They are, as Segal notes, more valuable than ever, because their judgment is the irreplaceable complement to AI's execution. But the junior developers who enter the profession after AI has eliminated the developmental experiences that built the senior developers' judgment will not develop equivalent judgment, because the path no longer exists. The ascending friction thesis promises that new developmental paths will emerge at higher cognitive levels. Noble's historical evidence suggests that the institutional support for these new paths is rarely provided, and that the knowledge gap between the generation that built its expertise through struggle and the generation that bypassed the struggle through automation becomes, over time, an unbridgeable structural deficit.
In the machine shops, the deficit manifested as a quality problem that was expensive and difficult to manage but not immediately catastrophic. In knowledge work, the potential manifestation is more severe, because the systems being built are more complex, more interconnected, and more consequential than machined metal parts. A software system designed by a developer who lacks architectural intuition — who can produce working code through AI direction but cannot evaluate whether the architecture underlying that code will scale, will fail gracefully, will resist the attack vectors that an experienced developer would anticipate — is a system that works until it doesn't, and when it doesn't, the failure mode is likely to be novel, unexpected, and difficult to diagnose, because the developer who built the system does not possess the understanding that would enable diagnosis.
Noble documented how the elimination of shop floor knowledge produced a specific institutional pathology: the organization that could produce but could not maintain. The numerically controlled shop could machine parts to specification at unprecedented speed. It could not diagnose the novel failures that occurred when conditions deviated from the specifications, because the diagnostic capacity — the machinist's feel — had been designed out of the organization. The compensating systems — the inspection equipment, the statistical controls, the quality bureaucracies — addressed the symptoms. They did not address the cause, which was the absence of the knowledge that would have prevented the symptoms from arising.
AI-augmented organizations face the same pathology, projected forward. The organization that builds with AI can produce at unprecedented speed. But the speed of production is not the same as the depth of understanding, and when the systems that were built at speed encounter conditions that the AI's training data did not anticipate — the novel failure mode, the unprecedented scale, the adversarial attack that exploits the architectural assumption that no experienced developer reviewed because the development process no longer required experienced review — the organization will discover that it has optimized away the knowledge it needs to respond.
Segal writes with genuine concern about the loss and proposes institutional responses — mentoring structures, protected time for friction-rich learning, deliberate creation of developmental experiences that AI would otherwise eliminate. These proposals are serious and worth implementing. Noble's framework adds a structural observation: the knowledge that disappears when the skilled worker is replaced by the automated system has never, in any documented case, been fully reconstituted through compensating institutional mechanisms. It has been partially compensated, expensively and imperfectly. The compensation has been good enough — good enough to sustain production, good enough to manage quality, good enough to keep the system running. But it has never been equal to the knowledge it replaced, and the gap between the original knowledge and its institutional substitute has been, in every case Noble studied, a permanent source of fragility in the automated system.
The knowledge that lives in the hands of the skilled practitioner — the machinist's feel, the developer's intuition, the diagnostician's clinical eye — is the product of a developmental process that cannot be shortcut, cannot be compressed, and cannot be reconstructed once the conditions that produced it are eliminated. It is, in the most literal sense, irreplaceable. And the systems that are built without it will carry its absence forward, as a structural weakness that no amount of speed, no quality of output, and no sophistication of compensating mechanism can fully address.
A thousand miles from MIT, where numerical control was born, in a brick building that nobody wrote about, a group of machinists at the Giddings & Lewis machine tool plant in Fond du Lac, Wisconsin, were doing something in the early 1970s that contradicted the official history of automation. They were programming their own machines.
The technology was called manual data input — a system that allowed the operator at the machine to write and edit programs directly, using a simplified programming language and a console mounted at the workstation. The machinist did not need a separate programming department. The machinist did not need to submit a job request, wait for an engineer to translate the drawing into code, wait for the code to be tested, wait for the tape to be punched and delivered. The machinist read the drawing, wrote the program, tested the first part, adjusted the parameters, and ran the production — the entire cycle, from conception through execution, in the hands of the person who understood the material, the machine, and the work.
Noble documented what happened next with the weary precision of a historian who has watched the same story end the same way enough times to know the ending before it arrives. Management did not celebrate the machinists' initiative. Management saw a threat. Machinists who could program their own machines were machinists who did not need a programming department, which meant they did not need the layer of management that supervised the programming department, which meant the organizational hierarchy that numerical control had created was being hollowed out from below. The machinists were not merely producing parts. They were demonstrating that the separation of conception from execution — the entire architectural premise of numerical control as implemented — was not a technical necessity. It was a political choice, and a reversible one.
Management's response was to restrict access to the programming consoles, to require that programs written by machinists be reviewed and approved by the engineering department, and in some plants to remove the manual data input capability entirely and return to the centralized programming model that kept knowledge where management wanted it: in the engineering department, under management control, away from the shop floor.
The episode is important not because it was unique — Noble found similar episodes across dozens of plants — but because it demonstrates that alternative designs are not merely theoretical. They are implemented, they work, and they are suppressed when they distribute power in ways that management finds threatening. The question of what a technology could be is always also a question about what the institutions controlling the technology will permit it to become.
This chapter concerns itself with what AI could have been, what it still could become, and the political conditions that would be required to make the alternatives real.
The technical alternatives are neither speculative nor utopian. They exist, in various stages of development, and they are suppressed not by technical limitations but by market logic — by the structural incentives that reward general-purpose expertise-replacement over expert-amplifying tools, individual productivity over collective capability, speed of output over depth of understanding.
Expert-amplifying AI preserves the practitioner's knowledge as the foundation of the system's capability. A medical diagnostic system trained on the reasoning patterns of a specific clinical team — capturing not just the diagnosis but the chain of inference, the weighting of symptoms, the contextual factors that shift the probability — would make that team's expertise available at scale without making the team's expertise dispensable. The system would be most powerful in the hands of the practitioners whose knowledge it encoded, because they would understand its reasoning, recognize its limitations, and know when to override its recommendations. The competitive advantage would accrue to the worker, not to the employer who could substitute a cheaper operator.
Collectively governed AI distributes ownership and governance of the model to the communities whose knowledge it embodies. A code-generation model trained on the contributions to a specific open-source ecosystem could be owned and governed by that ecosystem's contributor community, with revenue from commercial use flowing back to the contributors whose work made the model possible. The model would be a shared resource, governed democratically, rather than a proprietary product extracted from a shared resource without consent or compensation.
Transparent AI makes the system's reasoning visible and contestable. A coding assistant that showed its work — that explained why it chose a particular implementation, what alternatives it considered, what tradeoffs it made — would support the user's learning rather than bypassing it. The user who understood the reasoning could evaluate it, challenge it, learn from it. The user who received only the output, polished and confident, could only accept or reject it — a binary that produces neither understanding nor expertise.
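What such transparency might look like can be sketched as a data structure rather than a product. The format below is hypothetical; the field names are invented for illustration and are not drawn from any existing assistant. It is meant only to show that rationale, alternatives, and tradeoffs can be first-class, inspectable data rather than reasoning discarded before the output is delivered.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alternative:
    approach: str
    rejected_because: str

@dataclass
class TransparentSuggestion:
    """Hypothetical response format for an assistant that shows its work.
    The structure is illustrative, not any platform's actual output."""
    code: str
    rationale: str                               # why this implementation was chosen
    alternatives: List[Alternative] = field(default_factory=list)
    tradeoffs: List[str] = field(default_factory=list)
    confidence: float = 0.0                      # how strongly the rationale is held

suggestion = TransparentSuggestion(
    code="def dedupe(items): return list(dict.fromkeys(items))",
    rationale="Preserves order and runs in O(n); relies on dicts keeping insertion order.",
    alternatives=[
        Alternative("set(items)", "loses the original ordering"),
        Alternative("nested loops", "O(n^2) on large inputs"),
    ],
    tradeoffs=["requires hashable items", "assumes Python 3.7+ dict ordering"],
    confidence=0.8,
)

# The user can interrogate the reasoning instead of merely accepting the output.
for alt in suggestion.alternatives:
    print(f"considered {alt.approach!r}: rejected because {alt.rejected_because}")
```

The particular fields matter less than the reversal they represent: the user evaluates the reasoning instead of merely accepting or rejecting the result.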
Each of these alternatives is technically feasible. None of them is what the market built, because none of them maximizes the returns that the market rewards.
Expert-amplifying AI serves a smaller market than expertise-replacing AI. The population of existing experts is a fraction of the population of people who lack expertise and want AI to supply it. The market builds for the larger population, because the larger population generates more revenue. The expert's interests are sacrificed to the novice's convenience, and the sacrifice is presented as democratization.
Collectively governed AI distributes revenue rather than concentrating it. The AI company's business model depends on capturing the value of the training data; a model that returned value to the data's producers would be less profitable. The collective's interests are sacrificed to the shareholder's returns, and the sacrifice is presented as innovation.
Transparent AI is slower than opaque AI. Explanation takes time. Showing the reasoning adds complexity to the interface and latency to the interaction. The market rewards speed. The user's developmental interests are sacrificed to the user's immediate convenience, and the sacrifice is presented as efficiency.
Noble's analytical contribution was to demonstrate that these sacrifices are choices, not inevitabilities. The technology does not demand expertise-replacement over expert-amplification. The market demands it. The market is a human institution, governed by human rules, subject to human modification. The rules could be different. The outcomes would follow.
The democratic control of intelligent tools is not a utopian aspiration. It is a specific institutional design challenge with historical precedents. The labor movement's response to industrial automation was to demand a voice in how the technology was deployed — not to stop the machines, but to ensure that the people affected by the machines had a say in the terms of the transition. The eight-hour day, the weekend, the minimum wage, workplace safety regulation — these were dams built through collective action by people who understood that the technology's trajectory was being determined by institutions that did not represent their interests, and that the trajectory would not change until they organized the power to change it.
The AI transition requires equivalent institutional innovation. Not because the technology is evil — it is, as Segal documents, genuinely capable of expanding human possibility in ways that deserve celebration — but because the distribution of the technology's benefits is not determined by the technology. It is determined by the institutional structures that surround it: the ownership models, the governance frameworks, the labor agreements, the regulatory regimes, the educational systems that prepare people to participate in the technology's development rather than merely to consume its outputs.
Segal proposes dams: cultural practices, organizational norms, educational reforms, attentional ecology. These proposals are genuine and many are wise. Noble's framework adds a question that the proposals must answer to be more than aspirational: Who builds the dams? The dams that management builds serve management's interests, even when they are designed by compassionate managers, because compassionate managers are still subject to market pressures that reward cost reduction and punish the retention of workers whose skills have been automated. The dams that regulators build serve the regulatory logic, which is reactive, slow, and systematically captured by the industries it regulates. The dams that last — the ones that survived the industrial transition and produced the institutional infrastructure that turned automation from catastrophe into expansion — were built by organized workers who understood their interests, possessed the collective power to advance them, and sustained the political struggle over decades.
The AI transition will not produce equitable outcomes through the goodwill of enlightened founders or the wisdom of regulatory agencies. It will produce equitable outcomes when the people whose knowledge trained the models and whose livelihoods are reshaped by the models organize the collective power to demand a voice in the models' governance. This is not a radical proposition. It is the ordinary proposition of democratic participation applied to the technology that is reshaping the conditions of work and life for billions of people. The radical proposition is the one currently in effect: that a handful of corporations should control the most powerful knowledge-processing technology in human history, trained on the collective knowledge of the species, accountable primarily to their shareholders, and governed by terms of service that the affected populations accept as a condition of economic participation.
Noble would recognize the current arrangement. He documented its precursors across fifty years of industrial automation. He would recognize the language — inevitable, democratic, liberating — because it is the same language that accompanied every previous technology whose design served concentrated interests while promising distributed benefits. And he would insist, as he insisted throughout his career, that the language is not the reality. The reality is determined not by what the technology promises but by who controls it, in whose interest, and whether the people it affects have any meaningful power to contest the terms.
The machinists in Fond du Lac demonstrated that alternative designs are not merely possible. They are implementable, they produce superior results, and they are suppressed when they threaten the institutional arrangements that the dominant design supports. The suppression is not inevitable. It is political. And political arrangements, unlike rivers, can be changed — by people who organize, who build power, and who insist that the technology that reshapes their lives should be governed by their participation rather than by their exclusion.
---
There is a photograph from 1936, taken by Margaret Bourke-White for Life magazine, that captures something no statistic can. The photograph shows the Fort Peck Dam in Montana — an immense wall of earth and concrete stretching across the Missouri River, dwarfing the workers standing at its base. The workers are not named. They are tiny figures against the enormity of the structure, visible only as silhouettes — interchangeable, anonymous, reduced by the scale of the project to the status of components.
The photograph is celebratory. It appeared on the cover of Life's inaugural issue. It was meant to convey the grandeur of American engineering, the ambition of the New Deal, the capacity of organized human labor to reshape the landscape. But it also conveys, inadvertently and indelibly, the relationship between the structure and the people who built it. The dam is permanent. The workers are passing through. The dam is the achievement. The workers are the cost.
Noble's work on the labor process — the way work is organized, experienced, and controlled — was fundamentally concerned with this relationship: the structural tendency of capitalist production to treat the worker as a cost to be minimized rather than a participant to be developed. The tendency is not malicious. It is systemic. The firm that minimizes labor costs outcompetes the firm that does not, and the competitive pressure is relentless, continuous, and indifferent to the moral quality of the firm's intentions. The most compassionate employer in a competitive market is still an employer who must reduce costs to survive.
AI reorganizes the labor process of knowledge work along lines that Noble's framework predicts with dispiriting precision. The reorganization operates through three mechanisms, each of which has a precedent in the industrial automation Noble documented, and each of which produces consequences that the language of empowerment and liberation systematically obscures.
The first mechanism is intensification. The Berkeley researchers whose study Segal discusses found that AI did not reduce work. It multiplied it. Workers who adopted AI tools worked faster, took on more tasks, expanded into new domains, and filled every pause with additional productive activity. The tool that was supposed to free workers from drudgery instead generated new drudgery — more tasks, more projects, more expectations — that consumed the freed time with the efficiency of water filling a depression. Noble documented the same pattern in automated factories. Numerical control machines ran faster than manual machines, which meant that the operators had to keep up with the faster pace, which meant that the physical intensity of the work increased even as its skill content decreased. The machines were faster. The workers were more exhausted. The exhaustion was presented as a temporary adjustment period. It became permanent.
AI intensification operates through a different mechanism than industrial intensification — through the internalized achievement pressure that Segal, following Han, calls auto-exploitation rather than through the external pace-setting of the assembly line — but the effect is structurally identical. The worker produces more. The worker is more depleted. The surplus generated by the increased production flows to the employer. The depletion is borne by the worker. And the mechanism by which the intensification occurs — the fact that it feels voluntary, that the worker is choosing to prompt during lunch breaks rather than being ordered to — makes the intensification harder to resist, because there is no external authority to resist against. The whip and the hand that holds it belong to the same person, as Segal writes. Noble would add: and the person who profits from the whipping is someone else entirely.
The second mechanism is atomization. When AI enables a single worker to accomplish what previously required a team, the team dissolves. Segal celebrates this as liberation from the constraints of coordination and specialization. Noble's framework identifies a different consequence: the dissolution of the collective structures that provided workers with solidarity, mutual support, institutional memory, and bargaining power.
A team is not merely a production unit. It is a social structure. The team that works together over time develops shared knowledge — the understanding of why certain decisions were made, the awareness of where the system is fragile, the collective judgment that emerges from years of debugging, refactoring, and maintaining shared code. This shared knowledge is not stored in any document or codebase. It lives in the relationships between the team members, in the conversations they have, in the disagreements they resolve, in the accumulated trust that allows them to divide complex work with confidence that each part will be done well.
When the team dissolves, this relational knowledge dissolves with it. The individual who builds with AI produces the feature, but no collective develops the understanding that would allow the organization to maintain, extend, and adapt the feature when circumstances change. The individual's production is visible and measurable. The collective's understanding is invisible and unmeasurable. In a system that values what it can see, the visible production wins and the invisible understanding is sacrificed without anyone noticing the sacrifice until the consequences arrive — as they inevitably do, in the form of systems that cannot be maintained by the people who built them, architectural decisions that cannot be explained by the people who made them, and institutional memory that evaporated when the individuals who produced the output moved on to the next project.
Noble documented this dynamic in the machine shop with the specificity of someone who had spent years on the factory floor watching it happen. The numerically controlled shop produced parts faster. It also lost the institutional knowledge that had allowed the manual shop to adapt to unusual jobs, to diagnose novel failures, to improve processes through the accumulated wisdom of experienced workers sharing knowledge with apprentices. The speed was captured. The knowledge was sacrificed. And the sacrifice was invisible in every metric that management tracked, because the metrics tracked output, not understanding.
The third mechanism is substitutability. When the tool, rather than the worker, possesses the productive knowledge, the worker becomes interchangeable. The developer who uses Claude Code to build a feature possesses the judgment to direct the tool, but the tool possesses the knowledge that makes the feature possible. A different developer, with different experience and different expertise, could direct the same tool to build the same feature — not identically, perhaps not as well, but adequately. The adequacy is the threat. When adequate substitutes are available, the premium for excellence declines, because the market pays a premium only for scarcity, and AI makes adequate performance abundant.
Noble documented how numerical control transformed the machinist from an artisan — a person whose specific knowledge was essential to the production process and therefore commanded a premium — into an operator — a person whose role required minimal skill and could be filled by any worker willing to accept the wage. The transformation was not instantaneous. It took years. The skilled machinists who remained in the workforce continued to produce higher-quality work. But the organizational investment in developing new skilled machinists declined, because the automation reduced the return on that investment. Why spend five years training an apprentice machinist when an operator trained in six months produces acceptable results? The question was economically rational. Its long-term consequence was the erosion of the craft knowledge base that had sustained the industry's quality standards for generations.
The AI version of this question is being asked in every technology organization right now: Why invest in developing deep expertise when a competent generalist with AI tools produces acceptable results? The question is economically rational. Its long-term consequence, if Noble's historical evidence is any guide, will be the erosion of the professional knowledge base that sustains the quality, reliability, and security of the systems on which civilization increasingly depends.
Segal chose to retain and develop his team. The choice was moral, and it was also strategic — Segal recognized that the judgment his experienced engineers possessed was the irreplaceable complement to AI's execution capability. But the choice operated against the market logic that Noble identified as the dominant force in shaping the deployment of automation technology. The market logic says: capture the productivity gain, reduce the headcount, lower the cost per unit of output. The moral choice says: invest in the people, develop their capabilities, build something that lasts. Noble's career was a sustained documentation of which choice typically prevails when the two conflict.
The labor process in the age of language models is being reorganized according to a logic that is older than the models and more durable than any particular technology. The logic says: extract knowledge from the workforce, encode it in a system that management controls, reduce the skill requirements of the work, and use the reduced requirements to lower labor costs. The technology that executes this logic changes with each generation. The logic does not.
The question that Noble asked throughout his career — and that the AI transition demands we ask again — is not whether the technology is capable of producing better outcomes. It plainly is. The question is whether the institutional structures surrounding the technology are designed to realize those better outcomes, or whether the structural pressures of competitive markets will channel the technology's capability toward the same distributional pattern that every previous automation technology has produced: gains for capital, costs for labor, and a mythology of inevitable progress that presents the pattern as natural rather than chosen.
The dam workers in the Bourke-White photograph built something that lasted. The structure they built reshaped the landscape, controlled the river, generated power for millions. Their labor was essential. Their names are lost. The dam remembers the engineers. It does not remember the workers.
The pattern holds. Unless someone changes it.
---
The hands keep coming back.
Not as metaphor. As the most literal thing in this book. Hands on a lathe, reading the vibration of metal through a handwheel. Hands on a keyboard, writing code that deposits understanding line by line. Hands in soil, in Han's Berlin garden, resisting the smooth glass of the screen. The entire argument — Noble's argument, and the argument I tried to build in *The Orange Pill* — comes down to what happens when productive knowledge leaves the hands that made it.
I wrote about ascending friction. Noble showed me where that thesis breaks.
Not everywhere. I still believe that when lower-level difficulty is automated, genuine higher-level challenge can emerge. I watched it happen in Trivandrum. I felt it happen in my own work with Claude. The experience was real. But Noble's evidence forced me to hold a harder truth alongside the exhilaration: the machinists who were promised ascending expertise after numerical control were mostly not the ones who ascended. The engineers filled those roles. The machinists filled unemployment lines. The ascending happened. It happened to different people.
That gap — between the people who ascend and the people who are left standing where the floor used to be — is the gap I had not examined carefully enough. Noble forced the examination, and the examination changed what I think needs to be built.
I described AI as a river. Noble called it a canal. He is right that the distinction matters. A river is blameless. A canal was dug by someone, funded by someone, routed for someone's benefit. The design choices inside Claude — what it was trained on, who it was optimized for, what it amplifies and what it ignores — are choices made by identifiable people at identifiable institutions for identifiable reasons. Calling the result a force of nature lets those people off the hook. I do not want to let them off the hook. I do not want to let myself off the hook, either, because I am building with the canal and benefiting from its route.
What stays with me most is the photograph Noble never took but that his work makes visible: the collective knowledge of millions of developers — the Stack Overflow answers written at midnight, the open-source libraries contributed freely, the blog posts explaining obscure bugs — ingested, encoded, and sold back as a subscription. The developers whose knowledge trained the model have no seat at the table where the model's future is decided. That is not a river finding its channel. That is an enclosure of the commons, and it demands a political response, not just a cultural one.
Noble died in 2010. He never saw Claude Code. He never experienced the orange pill moment. But the framework he built — who controls the tool, whose knowledge does it encode, whose interests does its design serve — is the framework I needed and did not have when I started writing. My dams are not enough if I cannot answer his question: who builds them, against whose resistance, and with what collective power?
I still believe AI can expand human possibility. I believe it more than ever, because I have lived inside the expansion and know it is real. But Noble taught me that expansion and dispossession can coexist in the same technology, serving different populations simultaneously, and that the expansion does not cancel the dispossession. Both must be addressed. Both demand institutional structures that do not yet exist.
The hands on the lathe. The hands on the keyboard. The hands in the garden. The knowledge lives there, and it is leaving, and where it goes and who controls it when it arrives is the political question of our time.
Noble saw it first. The rest of us are catching up.
-- Edo Segal
Every technology embeds a political choice. David Noble spent his career proving it -- documenting how automation in American factories was designed not merely to increase efficiency but to transfer power from workers to managers, often suppressing superior alternatives that would have kept skilled hands at the center of the process. Now AI repeats the pattern at civilizational scale: the collective knowledge of an entire profession, extracted from the digital commons, encoded in proprietary models, and sold back as a subscription.
This book applies Noble's framework -- developed across decades of archival research and shop-floor observation -- to the AI revolution unfolding today. It examines what happens when productive knowledge leaves the hands that made it, who captures the gains when capability is democratized, and why the language of liberation so often accompanies the mechanics of dispossession.
For readers of *The Orange Pill* who felt the exhilaration and want to understand what it conceals, Noble provides the lens that technological enthusiasm alone cannot supply.
-- David Noble

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *David Noble — On AI* uses as stepping stones for thinking through the AI revolution.