By Edo Segal
The word that kept stopping me was not "Luddite." I expected "Luddite." I had used it myself in The Orange Pill, tried to rehabilitate it, thought I had done the work. Then I read Thompson, and I realized I had been handling the word with gloves on.
E.P. Thompson did not handle anything with gloves on.
He picked up the framework knitters of Nottinghamshire — people who had been dismissed for two centuries as frightened machine-breakers — and he held them up to the light and he said: these people understood their situation better than the factory owners, better than the magistrates, better than the political economists who wrote them off as obstacles to progress. They were not afraid of technology. They were angry about the terms on which it was being imposed. And the difference between those two things is the difference between a psychological problem and a political one.
That distinction hit me like a wall.
Because I had been treating the AI transition as a psychological challenge. In The Orange Pill, I wrote about fight-or-flight, about the silent middle holding contradictory truths, about the need for self-knowledge. All of that still stands. But Thompson revealed the dimension I was not seeing: the structural one. Who sets the terms? Who captures the gains? What happens when the people bearing the costs have no seat at the table where the terms are decided?
I kept the team in Trivandrum. I chose investment over extraction. I am proud of that choice. But Thompson forced me to sit with an uncomfortable fact: the choice was mine. Not theirs. The engineers did not negotiate the terms of their own transformation. They received terms I set, and because I am a decent person, the terms were decent. That is not governance. That is benevolence. And benevolence, however genuine, is not a structure. It is a disposition. Dispositions change with the quarterly earnings call.
Thompson spent his career demonstrating that working people are not objects to be managed through transitions. They are agents who understand their own situations with a clarity that their social superiors frequently lack, and whose participation in the governance of technological change is not a courtesy but a democratic requirement.
This book applies his framework to the AI revolution. It will not tell you what tools to use or how to prompt more effectively. It will ask you who decided the terms of this transition, and whether the people bearing its costs had any say in that decision.
The question is two centuries old. The answer is still being written.
— Edo Segal ^ Opus 4.6
E.P. Thompson (1924–1993) was a British historian, writer, and political activist whose work fundamentally reshaped how we understand the relationship between technological change, class formation, and democratic participation. His magnum opus, The Making of the English Working Class (1963), rescued generations of laborers, artisans, and radicals from what he called "the enormous condescension of posterity," demonstrating that ordinary people were not passive victims of industrialization but active agents who analyzed their situations, organized collectively, and fought for a voice in the governance of their own labor. His 1971 essay "The Moral Economy of the English Crowd in the Eighteenth Century" introduced the concept of the moral economy — the customary norms of fair dealing that communities develop over generations and defend when those norms are violated — a framework that has since been applied by scholars across dozens of disciplines worldwide. A committed socialist and anti-nuclear campaigner, Thompson insisted throughout his career that history is not something that happens to people but something people make, often under conditions they did not choose. His analytical tools — the moral economy, class as relationship rather than category, resistance as democratic practice — have proven remarkably portable across centuries, and in 2024 were applied directly to the AI transition by the Nobel laureate economists Daron Acemoglu and Simon Johnson, confirming their enduring relevance to the governance of technological power.
There is a word that the comfortable classes deploy against anyone who questions the direction of technological change, and the word is Luddite. It arrives in conversation with a particular tone — not the tone of engagement but of dismissal, the rhetorical equivalent of turning one's back. The senior engineer who expresses concern about the pace of AI deployment is a Luddite. The graphic designer who objects to the scraping of her portfolio to train an image generator is a Luddite. The teacher who resists the integration of chatbots into her classroom, the journalist who questions the replacement of her colleagues with algorithmic content, the parent who wonders aloud whether her twelve-year-old's homework still matters — each is offered the same label, and the label is meant to end the conversation before the conversation can become dangerous.
Dangerous to whom is the question the label forecloses.
E.P. Thompson spent his career recovering political movements from precisely this kind of erasure. His life's work, beginning with The Making of the English Working Class in 1963 and extending through decades of historical scholarship and political activism until his death in 1993, was the rescue of ordinary people from what he called "the enormous condescension of posterity" — the habit of treating the poor, the displaced, and the resistant as objects of historical processes rather than as agents who understood their situations, analyzed their options, and acted with intelligence and purpose. Thompson demonstrated, through meticulous archival research and fierce moral commitment, that the movements dismissed by conventional history as primitive, irrational, or merely reactive were in fact sophisticated campaigns grounded in political analysis, economic understanding, and moral conviction.
Thompson never commented on artificial intelligence. He could not have; the technology that would provoke the current crisis was decades from emergence when he died. But the analytical framework he developed — for understanding how technological change intersects with class power, how the displacement of workers is experienced and resisted, how the language of progress is deployed to silence legitimate grievance — applies to the present moment with a precision that should unsettle anyone who believes the AI transition is merely technical.
The framework begins with a distinction so fundamental that its absence from the contemporary discourse constitutes a political failure. The distinction is between opposing technology and opposing a specific deployment of technology. The original Luddites of 1811–1816, the framework knitters of Nottinghamshire, the shearmen of Yorkshire, the cotton weavers of Lancashire, did not oppose machinery as such. They opposed specific machines, owned by specific manufacturers, deployed under specific conditions that violated the established customs of their trades. The framework knitters destroyed wide stocking frames that were being used to produce cut-up work — an inferior product that undercut the prices their skill could command — while leaving untouched the narrow frames of employers who maintained the quality standards and wage customs of the trade. The targeting was precise because the analysis was precise. The knitters knew exactly what was being done to them, by whom, and through what mechanism.
The contemporary use of "Luddite" erases this precision. It collapses the distinction between opposing technology and opposing the terms of its introduction, and in collapsing the distinction, it performs a specific political function: the delegitimation of dissent. If the resistance is merely psychological — fear, nostalgia, an inability to adapt — then the appropriate response is therapy. Retraining programs. Patience. Encouraging words about the future. If, however, the resistance is political — a response to a distribution of costs and benefits that the affected communities perceive as unjust — then the appropriate response is institutional reform, redistribution, and the democratic inclusion of the affected parties in the governance of the transition. The first framing protects the interests of those who benefit from the current arrangement. The second challenges those interests. The insult enforces the first framing and forecloses the second.
The Orange Pill, the document that occasions this analysis, handles the Luddite question with more honesty than most of the technology discourse. Its author, Edo Segal, opens his treatment of the subject by acknowledging that the word has been weaponized, that it "is spoken with a sneer," and that the original Luddites were "among the most skilled textile workers in England." This acknowledgment creates an opening that the broader discourse typically refuses. But the opening, once created, is not fully entered. The Orange Pill moves through the Luddite history toward a conclusion that, for all its sympathy, reproduces a familiar structure: the Luddites "were wrong about their options," they "could not see what would grow in the space the machines opened," and the lesson of their defeat is that "grief is not a strategy." The contemporary worker is advised to stop mourning and start climbing — ascending to the higher cognitive floors where human judgment, taste, and creativity retain their value.
Thompson's framework exposes what this advice conceals. The advice is addressed to individuals. The problem is structural. The framework knitters' defeat was not a failure of individual adaptation. It was the consequence of a political arrangement in which the people who bore the costs of the transformation had no institutional mechanism through which to negotiate the terms of their own displacement. They petitioned Parliament. Parliament ignored them. They appealed to the hosiers for the enforcement of existing trade customs. The hosiers refused. They sought legal protection under statutes that had regulated the trade for centuries. The statutes were repealed. Every peaceful avenue of redress was exhausted before the first frame was broken.
The machine-breaking was not a first resort. It was an instance of what the historian Eric Hobsbawm called "collective bargaining by riot" — the use of direct action by people who lacked formal mechanisms for the negotiation of their grievances. The riots were not random. They were not irrational. They were the political practice of people excluded from the political process, and they served the same structural function as the petition, the negotiation, and the strike: the assertion of interests that the formal governance mechanisms refused to acknowledge.
In 2024, the Nobel laureate economists Daron Acemoglu and Simon Johnson published a landmark paper titled "Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution and in the Age of Artificial Intelligence." The paper, which appeared in the Annual Review of Economics, does something remarkable: it takes Thompson's historical framework and applies it directly to the AI transition, arguing that the same dynamics Thompson identified in the early industrial period — the concentration of productivity gains in the hands of capital, the degradation of working conditions through surveillance and loss of autonomy, the dependence of outcomes on the balance of power between workers and employers — are reproducing themselves in the age of artificial intelligence.
Acemoglu and Johnson write: "Wages are unlikely to rise when workers cannot push for their share of productivity growth. Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed." And further: "If AI is used extensively for surveillance and worker control, it will shift the balance of power" — precisely the dynamic Thompson documented in the transition from cottage industry to factory system, when handloom weavers who had possessed "considerable control over when and how hard they worked" were forced into factory discipline where "all of this disappeared as tasks were taken over by weaving machines."
The Acemoglu-Johnson paper is significant not because it tells Thompson scholars anything they did not already know, but because it demonstrates that the analytical framework Thompson developed to understand the first Industrial Revolution has been independently rediscovered, by researchers at the pinnacle of mainstream economics, as essential for understanding the AI transition. The framework was not designed for this moment. It was designed for the framework knitters and the handloom weavers and the shearmen of two centuries ago. That it applies with such precision to the present is not a coincidence. It is evidence that the structural dynamics of technological displacement — the concentration of gains, the externalization of costs, the silencing of dissent through the rhetoric of progress — are consistent across historical periods and technological contexts.
The erasure performed by the word "Luddite" serves the same interests in the twenty-first century that it served in the nineteenth. The technology companies that develop and deploy AI systems benefit from a discourse that frames resistance as irrational, because if resistance is irrational, it can be dismissed rather than addressed. The investors who fund AI development benefit from the erasure because it removes from public discussion the possibility that the returns on their investment might be subjected to the same kinds of social and political constraints — labor protections, redistributive taxation, democratic governance — that were eventually imposed on industrial capital after decades of political struggle. The political class benefits because the erasure relieves them of the obligation to govern the transition: if the disruption is merely technological, the government's role is limited to removing obstacles and providing infrastructure; if the disruption is political, the government has an obligation to ensure that the costs and benefits are distributed justly.
The Orange Pill describes the AI transition as a river — a natural force flowing through channels of ever-increasing complexity. The metaphor is evocative, and it captures something real about the momentum of technological change. But Thompson's analysis reveals what the river metaphor obscures: rivers are not governed by the fish that swim in them. They are governed by the people who build the dams, dig the channels, and decide whose fields get irrigated and whose get flooded. The distribution of the river's benefits is not a natural phenomenon. It is a political decision. And the people whose fields are being flooded have as much right to participate in that decision as the people whose fields are being irrigated.
The recovery of the Luddite history is therefore not an antiquarian exercise. It is the restoration of a political vocabulary that the contemporary discourse urgently needs. The vocabulary includes concepts that the technology discourse has suppressed: the concept of the moral economy, the customary norms that govern economic relationships and whose violation constitutes a legitimate occasion for collective action; the concept of class as a relationship rather than a category, something that is made through shared experience and collective struggle rather than assigned by economic structure; and the concept of resistance as democratic practice, the assertion of the right to participate in decisions that affect one's own life by people who have been excluded from the formal mechanisms of governance.
These concepts were forged in the experience of the original Luddites and refined through a century of subsequent struggle. They were developed not by theorists observing from a distance but by working people who understood, from the inside, what it meant to watch the economic basis of their community destroyed by a transformation they had not chosen and could not control. The concepts are not relics. They are tools. And the present moment, the moment of the AI transition, is the moment when those tools are needed most.
The chapters that follow will apply Thompson's framework to the specific conditions of the AI transition: the moral economy of professional expertise, the class dynamics of displacement, the democratic deficit in the governance of technological change, and the political project of building the institutional structures through which the affected communities can assert their right to participate in the decisions that are reshaping their lives. The argument is cumulative. Each chapter builds on the preceding ones. And the foundation is here, in the recognition that the word "Luddite" erases a history, and that the history, once recovered, transforms the terms of every debate it touches.
The framework knitters of Nottinghamshire are speaking to us across two centuries. Their voices have been distorted by the insult, muffled by the condescension, buried under the accumulated weight of a progress narrative that has no place for dissent. The voices persist. They persist because the questions they asked — on whose terms is this transformation occurring, and who decided that these were the terms? — have not been answered. They persist because no amount of productivity statistics, no tower of ascending cognitive floors, no river of inevitability can substitute for the one thing the framework knitters demanded and were denied: a seat at the table where the future of their labor was decided.
---
In the winter of 1811, more British soldiers were deployed to suppress the Luddites in the English Midlands and North than were deployed with Wellington in the Peninsular War against Napoleon. The fact deserves a moment of silence. The most powerful nation on earth, locked in a struggle against the most formidable military commander in European history, judged the threat posed by textile workers breaking stocking frames to be severe enough to warrant a comparable military response. The government did not fear the broken frames. Frames could be rebuilt. The government feared what the frame-breaking represented: an organized, disciplined, and potentially revolutionary challenge to the emerging industrial order by people who understood that order far more clearly than the order's beneficiaries preferred.
Thompson's political history of the Luddite movement, developed across the central chapters of The Making of the English Working Class and refined in subsequent essays, demonstrates that the movement was neither spontaneous nor primitive. It emerged in three distinct regions, each with its own trade, its own customs, its own specific grievance, and its own form of organization — and the form the movement took in each region reflected the particular conditions of the local economy with a precision that makes mockery of the standard narrative of mindless machine-breaking.
In Nottinghamshire, the framework knitters attacked the wide stocking frames being used to produce cut-up work. The distinction between cut-up work and properly knitted stockings was not aesthetic. It was economic and moral. A properly knitted stocking was produced on a narrow frame by a skilled knitter who had served an apprenticeship, who understood the tensile properties of different threads, who could adjust tension and gauge by feel, and whose product bore the marks of that accumulated knowledge. Cut-up work was produced by sewing together pieces of fabric cut from wide frames — a cheaper process that required less skill, produced an inferior product, and undercut the prices that properly knitted stockings could command. The hosiers who commissioned cut-up work were, in the framework knitters' understanding, degrading the trade: violating the customs that governed quality, exploiting the surplus of labor to drive down wages, and destroying the economic basis of a community that had organized itself around the production of quality goods at fair prices.
The knitters' targeting was exact. They destroyed the frames of hosiers who produced cut-up work. They did not destroy the frames of hosiers who maintained the customs of the trade. This selectivity required intelligence: knowledge of which employers had violated the norms and which had observed them. It required organization: the coordination of actions across communities, the maintenance of discipline in the ranks, the communication of targets through networks of trusted intermediaries. The knitters who broke into workshops at night knew the layout of the buildings, the habits of the watchmen, the best routes of approach and escape. They knew because they were members of the communities in which the workshops operated, because the information circulated through the dense networks of social relationship that sustained the trade, and because the collective intelligence of the community was applied to the problem of resistance with the same care and precision with which it had been applied to the problem of production.
In Yorkshire, the shearmen — also called croppers — faced a different threat and mounted a different resistance. The finishing of woolen cloth required the raising of the nap with teasel frames and the shearing of the raised fibers with heavy hand-shears to produce a smooth surface. The work demanded years of training and a specific kind of physical strength; the hand-shears weighed forty pounds and had to be wielded with a precision that could distinguish between fibers separated by fractions of an inch. The gig mill could raise the nap mechanically. The shearing frame could crop it mechanically. Together, they could replace a dozen shearmen with a single machine operator and a boy. The shearmen understood this arithmetic with perfect clarity, and their resistance was proportioned to the threat: more violent than in Nottinghamshire, because the threat was more total. A framework knitter whose work was undercut by cut-up work retained his skill and could, in principle, find employment with a fair-dealing hosier. A shearman replaced by a gig mill retained nothing. The skill that had defined his working life, that had organized his identity and his place in the community, was rendered valueless overnight.
In Lancashire, the handloom weavers attacked the power looms that were being introduced into the cotton factories. The weavers' situation was complicated by a historical irony that Thompson traces with characteristic care: the mechanization of spinning, a generation earlier, had created an insatiable demand for weavers, temporarily driving wages to unprecedented heights and drawing thousands of new entrants into the trade. By 1811, the golden age was over. The power loom was making handloom weaving obsolete, and the weavers who had entered the trade during the boom found themselves competing against a machine that could produce cloth faster, cheaper, and in quantities that no army of handloom weavers could match. Acemoglu and Johnson, applying Thompson's analysis to the present, note that the handloom weavers had possessed "considerable control over when and how hard they worked on particular days or during the year. All of this disappeared as tasks were taken over by weaving machines." The loss was not merely economic. It was the loss of autonomy — the loss of the worker's control over the conditions and the rhythm of her own labor.
What unites the three regional movements, and what Thompson's analysis makes visible against the standard narrative of irrational resistance, is the political sophistication of the participants. The Luddites did not simply destroy machines. They petitioned Parliament for the enforcement of existing statutes that regulated the trades — statutes that had been on the books for centuries and that the manufacturers were violating with impunity. They sent delegates to London to lobby for legislative protection. They organized meetings, drafted resolutions, circulated pamphlets. They operated, in other words, within the established channels of political redress for as long as those channels remained open. The machine-breaking began only after those channels had been systematically closed: the statutes repealed, the petitions ignored, the combination laws deployed to criminalize collective organization, the magistrates aligned with the manufacturers against the workers.
The government's response confirmed Thompson's central insight about the relationship between state power and class interest. Frame-breaking was made a capital offense in 1812, over the passionate objection of Lord Byron, whose maiden speech in the House of Lords remains one of the most eloquent defenses of working people's rights in the history of English parliamentary debate. Byron asked the House to consider whether the frame-breakers' desperation might have been produced by the conditions imposed upon them — whether men who saw their families starving while their employers grew rich might have legitimate cause for anger. The House was not persuaded. The penalty for breaking a stocking frame became death. The penalty for introducing a stocking frame under conditions that destroyed the livelihoods of entire communities remained nothing at all.
The asymmetry was structural. It reflected a legal system that protected property more zealously than it protected people, a political system in which the people most affected by the transformation had no representation, and an economic ideology — the emergent doctrine of laissez-faire — that treated the market as a natural force whose operations should not be impeded by considerations of justice, custom, or community welfare. Thompson recognized in this ideology not a neutral description of economic reality but a political instrument: a doctrine that naturalized the power of capital while delegitimizing the resistance of labor, that presented the market's outcomes as natural and inevitable while obscuring the political decisions — the repeal of protective statutes, the enforcement of combination laws, the deployment of military force — that produced those outcomes.
The Orange Pill reproduces a version of this dynamic, though not with the cruelty of the Regency state and not with the conscious intent to suppress. When Segal describes sitting across from twenty engineers in Trivandrum and demonstrating a tool that would transform their work, the scene is presented as a moment of revelation — the engineers' capabilities expanding, their potential unlocked. And the expansion was real. But the scene is also, inescapably, a scene of imposition. The transformation was decided before the engineers entered the room. The terms were set by the person who controlled the technology and the employment relationship. The engineers' role was to adapt — skillfully, creatively, with the support of a leader who chose investment over extraction — but to adapt to conditions they had not chosen and could not refuse.
Thompson would have recognized this structure. He would have noted the benevolence of the employer and the genuineness of the opportunity. He would also have noted that benevolence is not governance, that opportunity offered is not the same as choice exercised, and that the most generous imposition is still an imposition. The framework knitters, too, had employers who were decent men, who paid fair wages and maintained the customs of the trade. The existence of decent employers did not resolve the structural problem. The structural problem was that the workers' security depended on the continued decency of the employer, and decency, however genuine, is not a structure. It is a disposition. And dispositions are subject to pressures — the pressure of the market, the pressure of competitors, the pressure of investors — that structures are designed to withstand.
The Luddites were defeated. They were defeated by military force, legal repression, and the economic logic of a system that rewarded cheap production regardless of its social cost. Frame-breakers were tried at York in January 1813; some were hanged, others transported to the penal colonies. The trades they had defended were transformed in precisely the ways they had predicted. Within a generation, the framework knitting trade was a factory operation, the shearmen were a remnant, and the handloom weavers had been driven into a poverty so extreme that it became a scandal to a society that had a high tolerance for the suffering of the poor.
But the defeat of the Luddites was not the end of the questions they raised. The questions persisted — through the Chartist movement, through the formation of the trade unions, through the factory acts and the reform bills — because they were the right questions, asked by people who understood their situation with a clarity that their social superiors could not or would not match. The questions were: Who decides the terms of the transformation? Who bears the costs? And what institutional structures exist to ensure that the people who bear the costs have a voice in the decisions that produce them?
These questions are being asked again, now, by the workers displaced by artificial intelligence. The vocabulary has changed. The trades have changed. The technology has changed in every particular. But the structural dynamic — one class of people deciding the terms of a transformation, another class of people expected to adapt — persists with a consistency that the historian must name and that the contemporary discourse, trapped within its narrative of individual adaptation and inevitable progress, has not yet found the language to address.
---
The framework knitters of Nottinghamshire have been remembered as destroyers. The frames they broke have been counted, the costs calculated, the destruction measured in pounds sterling and lost production. This is how power remembers resistance: by tallying the damage and ignoring the cause. The historian's task is the reverse — to recover the cause from beneath the rubble of the consequences, to read the destruction not as evidence of irrationality but as the final vocabulary of people whose more articulate forms of expression had been systematically suppressed.
Thompson's recovery of the framework knitters' actual demands reveals a political program that was coherent, specific, and entirely rational. The knitters wanted three things. They wanted the enforcement of existing customs governing the quality of work — specifically, the prohibition of cut-up work, which degraded the product, undercut fair prices, and destroyed the market for quality goods. They wanted the regulation of apprenticeship, which controlled entry to the trade, ensured that those who entered it were properly trained, and prevented the flooding of the labor market with unskilled workers whose availability drove wages below the level at which a skilled knitter could maintain a household. And they wanted a voice in the decisions that affected the conditions of their labor — not a veto over technological change, but a negotiation over the terms of its introduction.
None of these demands required the abolition of the stocking frame. The frame had been the instrument of the knitters' trade for generations. They were not opposing the tool. They were opposing the uses to which the tool was being put by employers who had abandoned the customary obligations that had governed the relationship between hosier and knitter for as long as the trade had existed.
The customs were not quaint. They were functional. They had evolved through generations of practice as the community's mechanism for governing the tensions inherent in any productive relationship: the employer's interest in minimizing cost, the worker's interest in maintaining wages, the consumer's interest in quality, the community's interest in stability. The customs balanced these interests imperfectly but practically, and the balance was maintained not by law but by the mutual recognition that the relationship depended on the observance of norms that no single party could violate without consequence.
The destruction of these customs by employers who saw in the new technology an opportunity to escape the constraints of customary obligation was experienced by the knitters not as progress but as betrayal. The betrayal was specific: the implicit bargain — invest in skill, and the investment will be honored — had been broken by employers who discovered that unskilled labor, operating new machines, could produce goods that were inferior but cheap enough to capture the market. The knitters were not afraid of the future. They were angry about the present — angry that the rules under which they had organized their working lives were being discarded by people who had no intention of replacing them with anything except the naked operation of market power.
Thompson's concept of the moral economy describes this dynamic with analytical precision. The moral economy is not a sentimental attachment to the past. It is a set of customary norms governing economic relationships — norms that the community regards as legitimate, that carry the authority of long practice, and that the community is prepared to defend when they are violated. The food rioters of the eighteenth century, whom Thompson studied in his seminal 1971 essay "The Moral Economy of the English Crowd in the Eighteenth Century," did not simply steal bread. They enforced what they understood to be a just price: selling the grain at the price the community regarded as fair and returning the proceeds to the merchant. The action was disciplined, principled, and grounded in a shared understanding of economic justice that the rioters regarded as more legitimate than the market economy that the merchants championed.
The framework knitters were defending an analogous moral economy. Their customs — the regulation of quality, the control of apprenticeship, the expectation of fair wages — constituted a system of economic governance that the community had developed through generations of practice and that the community regarded as binding on all parties. The hosiers who violated these customs by producing cut-up work and flooding the trade with unskilled labor were not merely making business decisions. They were breaking a social contract. And the knitters' response — the targeted destruction of the offending frames — was not vandalism. It was enforcement. The enforcement of norms that the legal system refused to enforce and that the political system refused to protect.
The contemporary knowledge economy has its own moral economy, though the term is not used and the norms are not always articulated. The moral economy of professional expertise includes the expectation that years of training and dedication will be rewarded with stable employment, professional respect, and a meaningful degree of autonomy over the conditions of one's work. It includes the expectation that the practitioner's judgment will be valued — that the accumulated wisdom developed through years of practice carries an authority that cannot be replicated by speed or volume. It includes the expectation that the quality of the work matters, that there is a difference between work that is adequate and work that is excellent, and that the market will recognize and reward the difference.
The AI transition violates this moral economy with a thoroughness that Thompson's framework helps to name. The violation is not that the machine produces better work. In many cases, it does not. The violation is that the machine produces work that is adequate — good enough for most purposes, cheap enough to undercut the prices that skilled practitioners can command, and available at a speed that makes the slower, deeper work of the human practitioner appear not merely expensive but unnecessary. The market does not distinguish between adequate and excellent when adequate is available at a fraction of the cost. The framework knitters discovered this when cut-up stockings, an inferior product, captured the market from properly knitted ones. The contemporary designer, the writer, the programmer discovers the same thing when AI-generated output, competent but shallow, captures the market from work that is deeper but slower and more expensive to produce.
The violation extends to the mechanisms of transmission. Every profession, every craft, every skilled trade develops customs for the transmission of knowledge from experienced practitioners to novices. The apprenticeship, the mentorship, the clinical rotation, the articling period — each serves a function that extends beyond the transmission of technical skills. It transmits the profession's values. It develops the judgment that separates a competent technician from a wise practitioner. It creates the social relationships that sustain the profession as a community rather than a collection of isolated individuals competing for contracts.
The AI tool disrupts this transmission by reducing the novice's dependence on the experienced practitioner. The junior developer who can use a coding assistant to solve problems does not need to consult the senior developer as frequently. The consultation, however, was never merely about getting the answer. It was about watching an experienced mind approach a problem — observing how judgment is exercised under uncertainty, how the values of the profession are applied in practice, how the thousand small decisions that separate adequate work from excellent work are made by someone who has been making them for decades. The AI provides the answer. It does not model the process. And the process, the slow accumulation of practical wisdom through friction and failure and mentorship, is what the moral economy of the profession exists to protect.
The Orange Pill addresses this dynamic through what its author calls "ascending friction" — the observation that AI does not eliminate difficulty but relocates it to a higher cognitive floor. The observation is valuable. But the moral economy analysis reveals a dimension that the ascending friction thesis does not capture: the social dimension. The friction that is being relocated is not merely cognitive. It is relational. The struggle that builds professional judgment occurs not in isolation but within a community — in the exchanges between novice and mentor, in the informal conversations that transmit not just knowledge but standards, not just technique but taste. When the AI tool interposes itself between the novice and the community of practice, the cognitive friction may indeed ascend. The social friction — the friction that builds relationships, transmits values, and sustains the profession as a living institution — does not ascend. It disappears.
The framework knitters wanted the preservation of a system that worked. Not perfectly, not without friction, not without the inevitable tensions between employer and worker, between efficiency and quality, between individual interest and communal welfare. But a system that worked: that governed the trade's internal conflicts through established customs, that maintained quality through shared standards, that transmitted knowledge through apprenticeship, and that provided the workers with a voice, however informal and however imperfect, in the decisions that affected their livelihoods.
What they got instead was the market — the naked, unmediated market, stripped of every customary constraint, governed by the single principle of cheapest production. The market was efficient. It was also, for the communities that bore its costs, catastrophic. The quality of the product declined. The wages of the workers collapsed. The customs that had maintained a livable relationship between employer and worker were replaced by an arrangement in which the employer's only obligation was to pay the lowest wage the market would bear and the worker's only option was to accept it or starve.
The contemporary knowledge worker faces the same choice, dressed in different clothes. The moral economy of professional expertise — the expectations of quality, of autonomy, of voice — is being replaced by the logic of the platform, the gig, the contract, the prompt. The market does not care about the moral economy. The market cares about the cheapest adequate output. And the AI tool, like the wide stocking frame before it, provides that output at a price that the moral economy cannot match.
The framework knitters' demands were not met. Their moral economy was destroyed. But the questions they asked — what is owed to the people whose skill built the trade? what happens to quality when the market rewards only cheapness? who speaks for the community when the community's voice has been silenced? — persist. They persist because they are the right questions. They persist because every generation that confronts a transformation of its labor must answer them anew. And they persist because the answers, when they come, determine whether the transformation produces a future worth inhabiting or merely a more efficient version of exploitation.
---
Thompson introduced the concept of the moral economy to describe something that the political economists of the eighteenth century could not see and that the market fundamentalists of every subsequent century have refused to acknowledge: that economic relationships are governed not only by prices and contracts but by norms, customs, and shared expectations about what constitutes fair dealing. The moral economy is the set of rules by which a community governs its own economic life — rules that are not written in statute, that are not enforced by courts, but that carry the weight of long practice and shared conviction, and that the community is prepared to defend when they are violated.
The concept has proven remarkably portable. Originally developed to explain the food riots of eighteenth-century England — events that earlier historians had dismissed as "spasmodic" reactions to hunger but that Thompson demonstrated were disciplined assertions of a popular consensus about just prices and fair distribution — the moral economy framework has since been applied by scholars across political science, sociology, and anthropology to explain collective action in contexts ranging from peasant resistance in Southeast Asia to labor disputes in industrial Europe. In 2024, researchers applied the concept to the deployment of AI in Chinese elder-care programs, examining how moral appeals about filial duty were used to legitimize technological change while masking the unevenly distributed burdens that the change imposed. The concept travels because the phenomenon it describes — the existence of customary norms governing economic life, and the eruption of resistance when those norms are violated — is not historically specific. It is structural. It recurs wherever communities develop shared expectations about fair dealing and wherever those expectations are threatened by changes imposed from outside the community's control.
The moral economy of professional expertise is the most important application of Thompson's concept to the AI transition, and it is the application that the contemporary discourse has most systematically neglected. Every profession, every craft, every form of practiced expertise develops a moral economy. The norms are specific to the practice: the physician's duty to the patient, the engineer's duty to public safety, the lawyer's duty of confidentiality, the programmer's commitment to code review, the teacher's refusal to pass a student who has not learned, the journalist's obligation to verify before publishing. These norms are not merely professional standards in the bureaucratic sense — rules imposed by licensing bodies and enforced through sanctions. They are moral customs, expressions of a community's understanding of what it means to practice the work with integrity. They emerge from the accumulated experience of generations of practitioners who have confronted the specific tensions, temptations, and trade-offs that the practice involves, and they represent the community's collective wisdom about how those tensions should be navigated.
The moral economy of expertise includes at least four interrelated expectations, each of which the AI transition is violating in specific and identifiable ways.
The first is the expectation of honored investment. The implicit bargain of the knowledge economy says: invest in the development of expertise — years of training, thousands of hours of practice, the slow accumulation of judgment through failure and repetition — and the investment will be rewarded with stable employment, professional recognition, and compensation that reflects the difficulty and the duration of the preparation. This bargain is not contractual. No document guarantees it. But it is real — as real as any customary norm — and the professional communities that organize themselves around skilled work depend upon its continued validity. Parents advise their children on the basis of this bargain. Educational institutions are structured around it. Career decisions spanning decades are made in reliance upon it.
The AI transition demonstrates, with the blunt efficiency of a market correction, that the bargain can be broken. The expertise developed through years of patient immersion can be approximated by a machine in seconds. The approximation is not perfect — the Orange Pill is correct that human judgment, taste, and architectural intuition retain value — but it is adequate for an expanding range of purposes, and adequacy at low cost is what markets reward. The photographer whose eye for composition was developed through decades of practice now competes with anyone who can prompt an image generator. The writer whose command of language was built through years of reading and revision now competes with anyone who can edit an AI-generated draft. The programmer whose understanding of systems was deposited, layer by layer, through thousands of hours of debugging now competes with a tool that produces working code from natural language descriptions.
The violation is not that the machine is better. It is that the effort — the dedication, the years of patient immersion — is suddenly worth less in the marketplace, and the devaluation feels like a betrayal because it is a betrayal: a violation of the implicit bargain under which the investment was made.
The second expectation is the expectation of craft quality. The moral economy of every skilled practice includes standards about the quality of the work produced. These standards are not arbitrary. They represent the accumulated wisdom of the community about what constitutes work that is adequate to its purpose. The physician's thoroughness in diagnosis, the engineer's rigor in design, the lawyer's precision in analysis — these are not luxuries that can be dispensed with when a cheaper alternative becomes available. They are the minimum standards the community has determined, through generations of practice, to be necessary for the work to serve its function.
The AI tool introduces a competing standard: acceptable output at minimal cost. The AI-generated legal brief covers the relevant cases and makes the standard arguments. It does not exhibit the depth of analysis that an experienced lawyer would bring, but for routine matters, it is adequate. The AI-generated code compiles and runs. It may not exhibit the architectural elegance that a skilled engineer would produce, but it works. The AI-generated medical assessment covers the most probable diagnoses. It does not reflect the clinical intuition that years of patient contact develop, but for triage purposes, it serves.
The market, as Thompson and every subsequent analyst of capitalist dynamics has observed, selects for the cheapest adequate option. The framework knitters discovered this when cut-up stockings, inferior but cheap, captured the market from properly knitted goods. The market did not care about the difference in quality. The market cared about the difference in price. And the consumers who purchased the cheaper product were often unable to judge the quality difference, because the ability to assess quality requires exactly the kind of expertise that the market was in the process of devaluing.
The third expectation is the expectation of apprenticeship — the transmission of knowledge, values, and judgment from experienced practitioners to novices through sustained personal relationship. Thompson documented the centrality of apprenticeship to the moral economy of the trades: it controlled entry, maintained standards, and created the intergenerational bonds through which the community's practical wisdom was preserved and transmitted. The destruction of apprenticeship was not merely an economic event. It was a cultural catastrophe — the severing of the mechanism through which a community reproduced itself.
The AI tool disrupts apprenticeship by reducing the novice's dependence on the experienced practitioner. When the junior developer can get an immediate answer from an AI assistant, the occasion for consultation with the senior developer — the occasion that was never merely about the answer but about the modeling of judgment — occurs less frequently. The senior practitioner's role shifts from mentor to reviewer, from the person who teaches how to think to the person who checks whether the output is acceptable. The shift is subtle, and it may appear, from the outside, to be efficient. But the efficiency conceals a loss. The novice who learns by watching an experienced mind work develops capacities — the capacity for judgment under uncertainty, the capacity to recognize when a correct answer is nonetheless the wrong answer, the capacity to see a problem whole rather than in isolated components — that no amount of AI-generated correct answers can produce.
The fourth expectation is the expectation of voice — the right of the practitioner to participate in decisions that affect the conditions of the practice. The physician expects to be consulted about the implementation of diagnostic tools. The teacher expects to be consulted about the adoption of educational technologies. The engineer expects to be consulted about changes to the design process. The expectation is not that the practitioner will have a veto but that the practitioner's knowledge, developed through years of direct engagement with the work, will be represented in decisions that affect how the work is done.
The AI transition is proceeding, in most organizations and most industries, without meaningful practitioner voice. The decisions about which AI tools to adopt, how to deploy them, at what pace, under what conditions are made by managers, executives, and technology vendors whose understanding of the work is necessarily different from — and often shallower than — the understanding of the people who do the work. The practitioner is informed. She is trained. She is given the opportunity to adapt. She is not consulted about whether the adaptation serves the interests of the practice, the clients, the students, the patients — the people whom the practice exists to serve.
Acemoglu and Johnson's application of Thompson's framework to AI underscores the political content of these violations. They write that "whether workers gain or not depends on who has power. When political power is in the hands of a narrow elite and workers lack the ability to bargain collectively, their wages and working conditions may not improve" — even when the technology itself would permit improvement. The insight is Thompson's, translated into the vocabulary of contemporary economics: the distribution of technological gains is determined not by the technology but by the power relations within which the technology is deployed. The moral economy of expertise is being violated not because AI is inherently destructive of professional norms but because the power to determine the terms of AI's deployment is concentrated in hands that do not share, and are not structurally required to share, the practitioner's interest in maintaining those norms.
Thompson's moral economy framework reveals something that the discourse of individual adaptation cannot see: the violation of the moral economy is not a series of individual misfortunes. It is a structural event — the systematic destruction of the customary norms that governed the relationship between practitioners, their communities, and the society they serve. The destruction is proceeding under the banner of efficiency, productivity, and progress, the same banner under which the framework knitters' moral economy was destroyed two centuries ago. And the consequences — the erosion of quality standards, the degradation of mentorship, the silencing of practitioner voice, the concentration of gains in the hands of those who control the technology — are predictable, because they are the same consequences that Thompson documented in the original industrial transition.
The moral economy of expertise will not be preserved by nostalgia, by individual resistance, or by the hope that enlightened leaders will choose investment over extraction. It will be preserved, if it is preserved at all, by the creation of institutional structures through which practitioners can collectively articulate and enforce the norms that the market, left to its own devices, will destroy. The framework knitters' moral economy was destroyed because no such structures existed. The question for the present is whether the knowledge workers whose moral economy is now under assault will build those structures before the destruction is complete — or whether they will discover, as the framework knitters discovered, that the market's indifference to everything except price is a force that individual merit and individual adaptation cannot withstand.
The phrase "collective bargaining by riot" was coined to describe the political practice of people who possessed grievances but not institutions. The food rioters of eighteenth-century England did not riot because they were hungry, or not only because they were hungry. They rioted because the institutional channels through which their grievances might have been addressed — the magistrates' courts, the assize of bread, the paternalist regulation of the grain trade — had been dismantled or captured by the interests they were designed to constrain. The riot was not a breakdown of order. It was the assertion of an order — the moral economy of the crowd — by people who had been denied every legitimate mechanism for its enforcement. Thompson insisted on this distinction because the distinction was political: to characterize the riot as mere hunger was to deny the rioters their analysis, to reduce a political act to a biological reflex, and thereby to excuse the institutional failures that made the riot necessary.
The structural logic of collective bargaining by riot — the use of direct action by people excluded from formal negotiating mechanisms — illuminates a dynamic in the contemporary AI transition that the discourse has largely failed to name. The workers affected by AI displacement are developing their own forms of collective action, and these forms, while differing in every particular from the actions of the eighteenth-century crowd, serve the same structural function: the assertion of interests that formal governance mechanisms refuse to acknowledge.
The forms deserve specification, because their diversity is part of their significance.
In January 2023, a group of visual artists discovered that their published work — paintings, illustrations, photographs uploaded to portfolio sites and social media platforms — had been scraped without consent and used to train Stability AI's image generation model. The discovery did not produce a riot. It produced a class-action lawsuit, Andersen v. Stability AI, filed in the Northern District of California, in which three named plaintiffs — Sarah Andersen, Kelly McKernan, and Karla Ortiz — alleged copyright infringement on a scale that the existing legal framework was not designed to address. The suit was not filed by anonymous rioters under cover of darkness. It was filed by named individuals, publicly, through the formal legal system. But the structural function was identical to the function Thompson identified in the crowd actions of the eighteenth century: the assertion of a claim — our labor has been taken without consent and used to enrich others — through the only mechanism available when the customary norms governing the use of that labor had been violated and no alternative mechanism for redress existed.
In July 2023, the Screen Actors Guild–American Federation of Television and Radio Artists called a strike that shut down Hollywood production for 118 days. Among the central issues was the use of AI to replicate actors' likenesses — their faces, their voices, the embodied characteristics that constituted their professional identity — without consent and without compensation. The strike was, in Thompson's vocabulary, a classic instance of collective bargaining through organized withdrawal of labor. But it was also something more specific: a defense of the moral economy of performance against a technology that threatened to sever the connection between the performer's body and the performer's compensation. The actors were not opposing digital technology. They were opposing a specific deployment of digital technology that would permit studios to scan a background actor's likeness in a single session and use that likeness in perpetuity, in any production, without further payment. The targeting was as precise as the framework knitters' selection of offending frames.
In July 2023, the Authors Guild organized an open letter signed by more than ten thousand writers — Margaret Atwood, Jonathan Franzen, Jhumpa Lahiri, James Patterson, among thousands of others — demanding that AI companies obtain consent, provide credit, and offer compensation before using published work to train language models. The letter was a petition, the mildest form of collective action, the form that the framework knitters exhausted before turning to direct action. Its signatories were not smashing servers. They were asking, through the most polite available channel, for the recognition of a principle so basic that its contested status reveals the depth of the institutional failure: the principle that the use of a person's labor to enrich a corporation requires the person's consent.
Each of these actions — the lawsuit, the strike, the petition — represents a form of what might be called collective bargaining by code: the use of legal, economic, and communicative tools by workers in the knowledge economy to assert their interests in the governance of AI systems that affect their livelihoods. The term is not a metaphor. It describes a structural function. When formal mechanisms for the negotiation of technological change do not exist — when there is no table at which the affected workers sit, no process through which their concerns are heard, no institutional channel through which their interests are represented — the workers create their own mechanisms. The mechanisms are improvised from the materials available: the lawsuit, the strike, the petition, the viral social media campaign, the open-source alternative, the platform cooperative, the professional association's code of ethics.
The improvisation is both the strength and the weakness of the phenomenon. The strength is responsiveness: collective bargaining by code can emerge and scale with a speed that formal institutional processes cannot match. The Andersen lawsuit was filed within weeks of the discovery that the artists' work had been used without consent. The Authors Guild letter gathered ten thousand signatures in days. The SAG-AFTRA strike mobilized an entire industry within the timeframe of a traditional labor action but around a set of concerns — AI replication of human likeness — that no existing labor agreement had anticipated. The speed reflects the density of the networks through which contemporary workers communicate: digital networks that permit the rapid sharing of information, the rapid identification of common grievances, and the rapid coordination of collective responses.
The weakness is fragility. Collective bargaining by code lacks the institutional infrastructure that sustains traditional collective bargaining over time. A lawsuit can be settled, dismissed, or decided on narrow grounds that leave the broader question unresolved. A strike ends. A petition is received, acknowledged, and filed. The viral social media campaign captures attention for a news cycle and then yields to the next cycle's content. Each action compels a moment of attention. None creates the durable institutional structure through which the workers' interests are represented continuously, across the full range of decisions that affect their livelihoods, over the years and decades during which the AI transition unfolds.
Thompson understood this weakness, because he documented it in the movements he studied. The food rioters' enforcement of the just price was effective in the moment — the grain was sold at the price the crowd demanded — but the effectiveness did not persist. The next market day, the same merchants charged the same prices, and the same crowd had to assemble again. The framework knitters' destruction of offending frames compelled attention — the government deployed twelve thousand soldiers, which is attention of a particularly concentrated kind — but the attention did not produce institutional reform. It produced military repression, capital prosecution, and the acceleration of the very dynamics the frame-breakers had sought to resist. The actions were effective as signals. They were ineffective as structures.
The contemporary workers engaged in collective bargaining by code face the same structural limitation. The Andersen lawsuit has been partially dismissed and partially allowed to proceed; its ultimate resolution, years hence, will address copyright doctrine in a specific jurisdiction, not the global governance of AI training data. The SAG-AFTRA strike produced contract language protecting actors' likenesses, a genuine achievement, but the protection applies only to SAG-AFTRA members working under SAG-AFTRA contracts, a fraction of the creative workers affected by AI replication. The Authors Guild petition produced publicity but not legislation.
The signals are clear. The structures are not yet built.
The Orange Pill describes the emergence of practitioner communities around AI tools — communities that share techniques, report problems, develop best practices, and collectively negotiate the terms of their relationship with the technology. These communities are real, and their existence represents a form of collective intelligence that Thompson would have recognized: the distributed knowledge of a community of practice, developed through shared experience and mutual exchange. But the communities that The Orange Pill describes are communities of adopters — practitioners who are using the tools and refining their use. The communities that Thompson's framework identifies as politically necessary are communities of the affected — practitioners who are bearing the costs of the transition and who need institutional mechanisms through which to assert their interests in its governance.
The two communities overlap but are not identical. The adopter community develops best practices for the use of the tool. The affected community demands a voice in the governance of the tool's deployment. The first is a community of practice. The second is a political formation. And the political formation — the organized assertion of collective interests in the face of a structural transformation — is what the present moment most urgently requires and most conspicuously lacks.
Acemoglu and Johnson, in their application of Thompson's framework to AI, identify the political content of this absence with the directness of economists who have read enough history to know what it costs. They write that "whether workers gain or not depends on who has power" and that the current trajectory of AI deployment — heavy on automation and surveillance, light on the augmentation of workers' capabilities — reflects a power imbalance that will not self-correct through market mechanisms. The correction, they argue, requires deliberate institutional action: "Critically, this is a choice." The choice is political, not technological. It is a choice about who governs the transition and in whose interest the governance operates.
The framework knitters made their choice in the absence of institutions. They chose direct action because no alternative was available. Their action was brave, disciplined, and ultimately insufficient — not because their analysis was wrong but because the power arrayed against them was overwhelming and the institutional structures that might have given their analysis political force did not exist. The contemporary workers displaced by AI are in a structurally analogous position: their analysis is clear, their grievances are specific, their collective actions are inventive and responsive, but the institutional structures through which their interests might be represented in the ongoing governance of the AI transition — not for a single news cycle but continuously, durably, with the structural power to compel attention and negotiate outcomes — have not yet been built.
The building of those structures is the political project that collective bargaining by code points toward but cannot, on its own, accomplish. The lawsuit, the strike, the petition, the viral campaign — each is a signal that the structures are needed. None is the structure itself. The structure, when it comes, will be something new — adapted to the conditions of the digital economy, to the dispersion of the workforce across industries and jurisdictions, to the speed of technological change, to the specific forms of displacement and devaluation that AI produces. But it will serve the function that every effective structure of collective bargaining has served since the first trade unions organized in the wake of the Luddite defeat: the ongoing, institutional representation of workers' interests in the governance of the conditions of their labor.
The framework knitters fought without institutions and lost. The trade unionists who came after them built institutions and won — not everything, not permanently, but enough to establish the principle that the governance of technological change is a matter for negotiation, not imposition. The workers of the AI age are in the space between: fighting without institutions, winning moments of attention, losing the structural battle. The question is whether the institutions will be built in time — before the transition is complete, before the distribution of gains and costs has hardened into a new normal that the affected communities had no voice in creating.
Thompson spent his career demonstrating that institutions are not given. They are made — forged through struggle, sustained through solidarity, and defended against the continuous pressure of interests that would prefer the workers to remain unorganized, unrepresented, and silent. The making has begun. The lawsuits, the strikes, the petitions, the open letters are its earliest expressions. Whether the making continues — whether it produces the durable structures that the moment requires — depends on the choices of the workers themselves, and on whether the broader society recognizes, before it is too late, that the signals are not noise. They are the sound of a political necessity asserting itself against an institutional absence.
---
Class is not a thing. It is a happening. This formulation, which Thompson spent decades developing and defending, provides the essential analytical instrument for understanding the social dynamics of the AI transition — and for understanding why the discourse has so persistently avoided the class question, preferring the vocabulary of individual adaptation, personal responsibility, and skill development to the vocabulary of collective interest, structural power, and political economy.
Thompson insisted that class is not a category imposed by analysts on inert populations. It is a relationship that people make and experience in their daily interactions with the economic order. "Class is defined by men as they live their own history, and, in the end, this is its only definition." The definition is not vague. It is precise in its refusal of abstraction: class is what happens when people who share a common relationship to the means of production come to recognize that commonality, develop a shared understanding of their situation, and create the institutions through which their shared interests can be collectively articulated and pursued. The making of a class is a historical process — it happens over time, through experience, through the gradual accumulation of shared grievance and shared analysis — and it is an active process: the class makes itself through its own cultural practices, organizational innovations, and political choices.
The AI transition is producing a class relationship. The relationship is being produced not overnight but through the accumulated effect of millions of individual decisions — hiring decisions, deployment decisions, investment decisions, regulatory decisions — that are restructuring who directs the work and who is directed by it, who captures the gains and who absorbs the costs, who speaks and who is spoken about.
The emerging class structure has at least three tiers, and the tiers are defined not by income alone but by relationship to the AI systems that are reshaping the economy.
At the top are the owners and directors of AI capital. This class is small — a few thousand individuals worldwide whose decisions about the development and deployment of AI systems affect the livelihoods of hundreds of millions. It includes the founders and executives of the major AI companies, the venture capitalists and institutional investors whose funding decisions determine which AI capabilities are developed and at what pace, and the senior executives of the corporations that deploy AI systems at scale. Its interests are served by the rapid and unregulated deployment of AI technology, by the maximization of the productivity gains that AI generates, and by the capture of those gains in the form of profit and capital appreciation. This class does not require malice to produce harmful outcomes. It requires only the normal operation of the incentive structures within which it operates: the quarterly earnings report, the venture capital return cycle, the competitive pressure to deploy faster than the competitor.
In the middle are the skilled practitioners who work with AI tools — the people The Orange Pill primarily addresses. Software engineers, designers, researchers, professionals across every industry who are using AI to augment their capabilities. This class benefits from the productivity gains that AI provides. A single engineer, equipped with an AI coding assistant, can produce what formerly required a team. A designer can cross from her specialty into adjacent domains. The Orange Pill documents these gains with detailed specificity and genuine excitement.
But the middle tier's relationship to the AI transition is structurally ambiguous in a way that Thompson would have recognized immediately, because it mirrors the ambiguous position of the skilled artisans in the early industrial period — the very framework knitters and shearmen whose story he told. The skilled artisans benefited from the demand for their expertise. They also faced the constant threat that the mechanization of their skills would eliminate the market advantage on which their position depended. Their leverage was real but contingent — contingent on the continued value of skills that the technology was in the process of commoditizing.
The middle tier of the AI economy occupies the same contingent position. The software engineer whose judgment directs the AI tool is valuable today. But the capabilities of the tools are expanding with each model generation, and the boundary between the work that requires human judgment and the work that the machine can handle autonomously is not stable. It is moving. Acemoglu and Johnson's analysis of this dynamic is blunt: "AI may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed." The middle tier's current advantage — the possession of judgment that the tool cannot replicate — is a moving target, and the confidence that the target will remain within reach requires a faith in the permanence of human cognitive advantage that the trajectory of AI development does not obviously support.
At the bottom are the workers whose labor is being directly displaced — the customer service representatives replaced by chatbots, the content moderators whose work is being automated, the translators displaced by neural machine translation, the junior professionals across every industry whose entry-level positions are being eliminated or restructured. This class is the largest of the three and the least represented in the discourse. Its members do not write books about the AI transition. They do not speak at conferences. They do not have the platforms, the networks, or the cultural capital to make their experience visible in the spaces where the meaning of the transition is being negotiated.
Their experience is being negotiated about rather than spoken from. They appear in the discourse as statistics — the number of jobs projected to be displaced, the percentage of tasks that AI can automate — or as abstract categories requiring "reskilling" and "adaptation." They do not appear as Sarah, who spent eight years developing expertise in medical transcription and whose department was eliminated in a single quarterly reorganization. They do not appear as Marcus, who trained for three years as a paralegal and discovered, six months into his first position, that the research tasks he had been hired to perform were being transferred to an AI system that could produce comparable output in minutes. They do not appear as Priya, who built a freelance translation business over a decade and watched her client base evaporate as neural machine translation reached a quality threshold that made her rates, however justified by the nuance and cultural sensitivity of her work, commercially uncompetitive.
These are the people Thompson would have insisted on naming, on locating, on quoting at length — not as representative types but as specific individuals whose specific experiences constitute the evidence on which any honest analysis must rest. Their absence from the discourse is not an oversight. It is a structural feature of a conversation conducted by and for the people least affected by the transition — a conversation in which the displaced are objects of concern rather than subjects of their own experience.
The class dynamics of the AI transition are obscured by several features of the contemporary economy that did not apply to the industrial transition Thompson studied. The first is the geographic dispersion of the affected workers. The framework knitters of Nottinghamshire lived and worked in concentrated communities where shared experience could be recognized, discussed, and organized around. The workers displaced by AI are scattered across industries, across national boundaries, across employment categories that range from full-time corporate employment to gig work to independent freelancing. The geographic and occupational dispersion makes the recognition of common condition — the first and most essential step in class formation — extraordinarily difficult.
The second obscuring feature is the ideology of individual responsibility that pervades the knowledge economy. The worker who is displaced by AI is told to reskill, to adapt, to develop the judgment that the machine cannot replicate. The advice is not wrong in its content. But it is wrong in its framing, because it addresses a structural problem with an individual prescription. The structural problem is that the terms of the AI transition are being set by the owners of AI capital without the participation of the workers who bear the costs. The individual prescription — become more valuable, develop higher-order skills, climb the cognitive tower — does not address the structural problem. It accepts the structure as given and asks the individual to compete within it more effectively. Thompson spent his career demonstrating the political function of this kind of advice: it transforms a question about justice — is this arrangement fair? — into a question about competence — are you good enough to survive it? — and in doing so, it transfers the burden of adjustment from the system that produced the displacement to the individual who suffers it.
The third obscuring feature is the temporal compression of the AI transition. The industrial transition that Thompson studied unfolded over generations. The framework knitters' trade was not destroyed overnight; it was degraded over decades, through a slow accumulation of changes that eroded the customary norms and concentrated the gains. The AI transition is compressed into years. The graphic designer who was fully employed in 2022 is struggling for commissions in 2025. The journalist who was a staff writer in 2023 is a freelancer competing with algorithmic content in 2026. The speed of the displacement leaves no time for the gradual development of shared analysis, institutional innovation, and collective organization that Thompson documented in the making of the English working class.
And yet the making has begun. It has begun in the lawsuits and the strikes and the petitions documented in the preceding chapter. It has begun in the online forums where displaced workers share their experiences and discover that their individual difficulties are expressions of a common condition. It has begun in the professional associations that are developing standards for the use of AI in specific industries. It has begun in the labor organizing campaigns at technology companies, in the platform cooperative movements, in the emerging proposals for data dividends and AI taxation.
The process is in its earliest stages. The class that is being formed does not yet have a name, a coherent identity, or the institutional infrastructure that the English working class developed over a century of struggle. But the raw materials are present: the shared experience of displacement, the emerging recognition of common condition, the first tentative experiments in collective organization. Thompson's framework does not predict the outcome. It illuminates the process — the process by which individual grievance becomes collective analysis, collective analysis becomes organized action, and organized action produces the institutional structures through which a class asserts its interests in the governance of its own labor.
The making of the digital working class, if that is what it becomes, will follow its own path. The institutions it creates will be shaped by the specific conditions of the digital economy — the geographic dispersion, the occupational diversity, the speed of change — and they will look nothing like the trade unions and friendly societies that the English working class built in the nineteenth century. But they will serve the same function: the collective representation of workers' interests in the face of a technological transformation whose terms are being set without their participation. That function is not historically contingent. It is a structural requirement of any arrangement in which the power to determine the conditions of labor is concentrated in hands that do not belong to the people who do the laboring.
---
The artists who protested against AI image generators were not Luddites, though the label was applied to them with the reflexive contempt that the comfortable classes reserve for anyone who questions the terms of technological change. The writers who signed petitions against the use of their work to train large language models were not Luddites. The actors who struck over the use of their likenesses by AI systems were not Luddites. The software engineers who raised concerns about the pace and governance of AI deployment were not Luddites. Each of these groups had a specific, articulate, and entirely rational grievance, and the application of the Luddite label served the same function it has served for two centuries: the delegitimation of dissent through the imputation of ignorance.
The real grievance of each group was political, not technological. The specificity of the grievances matters, because precision is the antidote to the erasure that the label performs, and because the tendency to lump all resistance into a single category — "people who are afraid of AI" — conceals the diversity of the concerns and the legitimacy of each.
The artists' grievance is a property grievance and a consent grievance. Sarah Andersen draws a webcomic called Sarah's Scribbles. Her art is recognizable — a specific visual style, developed over years, that her audience identifies with her and that constitutes her professional identity and her livelihood. In 2022, she discovered that her published work had been scraped from the internet and used, without her knowledge or consent, to train Stable Diffusion, an AI image generator. Users of the system could now prompt it to produce images "in the style of Sarah Andersen" — images that approximated her visual signature closely enough to compete with her for the commissions and the audience that her years of work had built. The artist's labor had been taken, without consent, to build a tool that would devalue that labor. The parallel to the framework knitters' situation is exact in its structure: the knitters' skill was used to produce the very goods that undercut the value of that skill. The mechanism has changed — algorithmic training data rather than wide stocking frames — but the dynamic is the same.
The writers' grievance extends the artists' into the domain of language itself. The Books3 dataset, which was used to train several major language models, contained approximately 196,000 books — novels, nonfiction works, academic texts — obtained from a shadow library without the permission of the authors. The writers who discovered their work in the training data experienced the discovery as a specific kind of violation: not merely the unauthorized copying of their text, which would be a conventional copyright claim, but the transformation of their accumulated creative output into raw material for a system that could now produce text that competed with theirs. The violation was not that the AI system could write as well as any individual author. It was that the system could produce adequate text at a volume and a speed that no individual author could match, and that the adequacy of the output was built, literally, on the foundation of the authors' own work.
The actors' grievance reaches beyond the economic to the bodily. During the 2023 SAG-AFTRA negotiations, the Alliance of Motion Picture and Television Producers proposed contract language that would have permitted studios to scan a background performer's likeness in a single session and use that digital replica in any future production, in perpetuity, for a single day's pay. The proposal was not hypothetical. It described a capability that the technology already supported and that several productions had already experimented with. The actors' resistance was not resistance to digital effects or to the use of technology in filmmaking. It was resistance to the specific proposition that a person's physical appearance — the thing that is most irreducibly theirs — could be captured, owned, and deployed by a corporation without ongoing consent or compensation.
The software engineers' grievance is internal to the industry and therefore the most politically complex. Engineers at major AI companies — Google, OpenAI, Anthropic, Meta — have raised concerns about the pace of deployment, the adequacy of safety testing, and the governance of systems whose capabilities are expanding faster than the understanding of those capabilities. Several have left their positions to raise these concerns publicly, at significant personal and professional cost. Their grievance is not about displacement — they are, for the moment, among the primary beneficiaries of the AI transition. Their grievance is about governance: the concentration of decisions about the development and deployment of world-shaping technology in the hands of a small number of executives and investors whose competitive incentives are not aligned with the interests of the broader public.
What these four groups share is not a common technology or a common industry. What they share is a common political condition: the exclusion from the governance of decisions that affect their lives. The artists were not consulted about the use of their work in training data. The writers were not consulted about the scraping of their books. The actors were not consulted about the replication of their likenesses. The engineers who raised concerns found that the internal channels for the expression of those concerns were inadequate, captured, or ignored. In each case, the grievance was produced not by the technology itself but by the governance — or the absence of governance — under which the technology was deployed.
Thompson would have recognized the pattern immediately, because it is the pattern he documented at every stage of the Luddite movement. The framework knitters' grievance was produced not by the stocking frame but by the absence of governance over the frame's deployment. The frame had existed for generations without provoking resistance, because its use had been governed by customs that balanced the interests of employers and workers. The resistance began when the customs were violated — when the hosiers introduced cut-up work, flooded the trade with unskilled labor, and refused to negotiate — and when the institutional channels for the redress of the resulting grievance were closed. The machine-breaking was the consequence of institutional failure, not of technological fear.
The contemporary resistance follows the same logic. The artists did not protest AI image generation when the technology was in its research phase, generating abstract patterns and distorted faces. They protested when the technology was deployed commercially, using their work without consent, to produce output that competed with their livelihoods. The writers did not object to natural language processing as a field of research. They objected when their published books were scraped and used as training data without permission or compensation. The actors did not resist digital effects. They resisted a specific contractual proposition that would have permitted the commercial exploitation of their likenesses without ongoing consent. In each case, the resistance was triggered not by the technology's existence but by the terms of its deployment — terms that were set without the participation of the affected parties and that concentrated the benefits while externalizing the costs.
The Orange Pill engages with this dynamic more honestly than most of the technology discourse. Segal acknowledges the legitimacy of the resistance. He describes the senior engineer who felt "like a master calligrapher watching the printing press arrive" — a person whose grief was real, whose loss was genuine, whose expertise had been built through years of patient investment. But the engagement, for all its sympathy, stops short of the political analysis that Thompson's framework demands. The Orange Pill's response to the contemporary Luddites' grievance is a call to adaptation: climb higher, develop judgment, ascend to the cognitive floors where human value persists. The advice is sound as far as it goes. But it addresses the individual and ignores the structure. The master calligrapher does not need advice on how to develop new skills. She needs an institutional mechanism through which her expertise — her judgment about quality, her understanding of the craft, her knowledge of what the machine cannot yet do — is represented in the decisions about how the printing press is deployed.
The contemporary Luddites want what the original Luddites wanted. They want fair compensation for the use of their labor. They want protection against output that undercuts quality work by flooding the market with adequate alternatives. They want a voice in the decisions about how the technology is deployed in their industries. None of these demands requires opposing AI. All of them require institutional structures that do not currently exist — structures through which the affected communities can negotiate the terms of the transition rather than merely adapting to terms that were set without them.
The demands are not radical. They are, in the deepest sense, conservative — conserving the principle that the people affected by a decision have a right to participate in making it, that the costs of progress should be distributed rather than concentrated, and that the customs of a community deserve respect even when the technology makes it possible to override them. These are the principles that every legitimate governance arrangement in the democratic tradition has been built upon. The contemporary Luddites are not rejecting progress. They are demanding that progress include them.
---
The distribution of the benefits and costs of the AI transition is not a technical problem with a technical solution. It is a political problem that the technology discourse has systematically disguised as a technical one, and the disguise serves a specific function: it forecloses the political questions — Who decides? Who benefits? Who pays? — by presenting the distribution as the natural outcome of market processes that should not be interfered with. The disguise is old. It was worn by the doctrine of laissez-faire in the early nineteenth century, when the political economists argued that the market must be free, that interference with the market would produce worse outcomes than non-interference, and that the individual worker must adapt to the conditions the market created rather than attempting to govern the market through collective action. The framework knitters encountered this doctrine and rejected it, not because they were ignorant of economics but because they understood, from their own experience, that the market the economists described as natural was a political construction — a set of rules established and enforced by people with the power to make rules, and that the rules systematically favored the interests of capital over the interests of labor.
The most common disguise in the contemporary discourse is the trickle-down argument: the claim that the productivity gains generated by AI will eventually benefit everyone, because the increased wealth produced by the technology will diffuse through the economy via employment, investment, and consumption. The argument has a long pedigree and a poor historical record. The wealth generated by the factory system in the early nineteenth century did not trickle down to the workers of Manchester and Birmingham until political struggle — organized labor, factory legislation, the extension of the franchise — compelled the redistribution that the market, left to its own devices, did not produce. The interval between the generation of the wealth and its redistribution was measured in decades, and during those decades, the workers whose labor generated the wealth lived in conditions that even the most committed laissez-faire economist eventually found difficult to defend.
Acemoglu and Johnson, in their application of Thompson's framework, are precise about this dynamic. They observe that "wages are unlikely to rise when workers cannot push for their share of productivity growth" — a sentence that contains, in fifteen words, the essential political insight that the entire trickle-down argument is designed to obscure. Productivity growth and wage growth are not mechanically linked. They are linked only when workers possess the collective power to claim their share, and the possession of that power depends on institutional structures — unions, collective bargaining agreements, labor law, political representation — that the current economy has been systematically weakening for decades. The AI transition arrives in an economy where union density is at historic lows, where gig work and independent contracting have eroded the employment relationships through which collective bargaining traditionally operated, and where the political system's responsiveness to working people's concerns has been diminished by the influence of corporate lobbying and the Citizens United decision. The structural conditions for an equitable distribution of AI's gains are worse than they have been at any point since the early industrial period.
The Orange Pill engages with the distribution question with characteristic directness. Segal describes the arithmetic of the AI transition with an honesty that most technology leaders avoid: if five people can do the work of one hundred, the question of what happens to the ninety-five is not rhetorical. He describes the board conversation where the twenty-fold productivity number is on the table and the pressure to convert productivity gains into headcount reduction is structural, not personal. He chose to keep the team. He invested the gains in capability expansion rather than margin extraction. This was the right choice, and the integrity of the choice deserves acknowledgment.
But the structural analysis exposes what the individual choice cannot address. The choice was Segal's to make because he occupied the position of the person who makes such choices — the owner, the leader, the person at the top of the hierarchy. The engineers in Trivandrum did not make the choice. They received the choice that was made on their behalf, and they received it with the particular mixture of relief and dependency that characterizes every paternalist arrangement. The outcome was good. The process was not democratic. And the distinction between a good outcome and a democratic process is the distinction that the distribution analysis exists to insist upon, because good outcomes that depend on the character of individual leaders are structurally unstable — subject to the pressures of the next quarter, the next investor conversation, the next competitive threat — while democratic processes, however messy, produce outcomes that carry the legitimacy of the participants' consent.
The distribution of AI's benefits and costs has several dimensions that the discourse has not adequately distinguished.
The first is the distribution between capital and labor. The productivity gains generated by AI accrue, in the first instance, to the owners of AI capital — the companies that develop AI systems, the companies that deploy them, and the investors who fund both. Whether those gains are shared with the workers whose labor the AI augments or replaces depends on the balance of power between capital and labor. Acemoglu and Johnson write: "In line with Thompson's emphasis, even technological developments that favor labor are not sufficient to guarantee that workers will benefit. Whether workers gain or not depends on who has power." The statement is as true of 2026 as it was of 1812. The technology is different. The power dynamics are the same.
The second dimension is the distribution between high-skill and low-skill workers. The Orange Pill celebrates the raising of the floor — the expansion of who gets to build, the democratization of capability. The celebration is not wrong. The developer in Lagos who can now access coding leverage comparable to an engineer at a major technology company has gained something real. But the raising of the floor has consequences for the workers who were standing on the floor: the skilled practitioners whose market position depended on the scarcity of their skills now compete with a vastly larger pool of practitioners who can, with AI tools, produce comparable output. The net economic effect may be positive — more people producing more things. The distributional effect is regressive — the gains are captured by the newly enabled, the costs are borne by the formerly scarce, and the overall surplus flows to the owners of the tools that made the exchange possible.
The third dimension is temporal — the distribution between the present and the future. The productivity gains are captured now. The costs — the erosion of mentorship, the degradation of professional judgment, the hollowing out of the knowledge base from which future expertise would have been built — are borne over years and decades. The factory that pollutes the river profits today; the community downstream suffers tomorrow. The AI deployment that replaces the junior professional's research role saves money this quarter; the profession that no longer develops junior researchers through apprenticeship discovers, a decade hence, that the pipeline of experienced practitioners has dried up. Temporal displacement is the most insidious form of distributional injustice, because the beneficiaries and the cost-bearers are separated by time as well as class, and the cost-bearers, being future persons, have no voice in the present decisions that produce their disadvantage.
The political nature of the distribution question becomes visible when one examines the institutional structures — or the absence of structures — through which the distribution is governed. The technology companies that develop AI systems operate within regulatory environments shaped by their own lobbying. The workers displaced by those systems have no equivalent political infrastructure. They have no union of AI-displaced workers. They have no parliamentary caucus dedicated to their interests. They have, at most, the sympathy of commentators who acknowledge the problem while offering no structural solution.
Thompson understood that sympathy without structure is politically inert. The framework knitters had sympathy — Byron's maiden speech in the House of Lords was sympathy of the most eloquent and public kind — and the sympathy did not save them, because sympathy does not create the institutional mechanisms through which interests are represented and enforced. What the framework knitters needed was not a sympathetic lord but a political structure through which their interests could be asserted with the force of collective organization. What the workers displaced by AI need is the same: not the sympathy of technology leaders who acknowledge the difficulty of the transition, not the advice to reskill and adapt, not the reassurance that the future will produce new forms of employment to replace the ones being destroyed, but institutional structures through which their interests are represented in the decisions that are determining the terms of their displacement.
The creation of these structures is the central political challenge of the AI transition. It is a challenge that the technology discourse, focused as it is on individual capability and organizational efficiency, has not yet adequately addressed. The discourse asks: How can individuals adapt to the new economy? Thompson's framework asks the prior question: Who is building the new economy, and in whose interest?
The answer to the first question is personal. The answer to the second is political. And the distribution of AI's costs and benefits will be determined not by the personal answers — however wise, however conscientious — but by the political ones: the institutional structures that determine who has power, who has voice, and who decides the terms under which the most powerful technology in human history is deployed.
The most consequential political act in England between 1811 and 1816 was not performed in Parliament. It was performed in the dark, by men whose names were not recorded, in workshops they entered without permission, carrying hammers they had no legal right to wield. The framework knitters who broke the offending frames of Nottinghamshire, the shearmen who destroyed the gig mills of Yorkshire, the cotton weavers who attacked the power looms of Lancashire — each was engaged in an activity that the legal system classified as criminal destruction of property and that the government punished with military force, transportation, and hanging. Each was also engaged in an activity that Thompson recognized as something the legal classification was designed to obscure: a democratic practice.
The claim requires defense, because it sounds paradoxical. Democracy, in its conventional usage, refers to governance through legitimate institutions — elections, legislatures, courts. The Luddites operated outside these institutions. They broke laws rather than making them. They destroyed property rather than petitioning for its regulation. How can lawbreaking constitute democratic practice?
Thompson's answer was rooted not in abstract political theory but in historical evidence about the conditions under which the lawbreaking occurred. The framework knitters had petitioned Parliament for the enforcement of existing protective statutes. Parliament repealed the statutes instead. They had appealed to the magistrates for the enforcement of trade customs. The magistrates sided with the manufacturers. They had organized within the bounds of the law. The Combination Acts criminalized their organization. Every legitimate channel through which their interests might have been represented in the governance of their trade was closed, captured, or abolished — not accidentally, not through bureaucratic oversight, but deliberately, by a political system that represented the interests of capital and that treated the interests of labor as obstacles to be removed.
Under these conditions, Thompson argued, the riots were not a breakdown of the democratic order. They were a response to its absence. The rioters were asserting a right that the formal political system denied them: the right to participate in decisions that affected their livelihoods and their communities. The assertion was illegal. It was also democratic in the most fundamental sense — a claim to self-governance by people who had been excluded from every legitimate mechanism of self-governance.
The distinction between institutional democracy and democratic practice is Thompson's most radical contribution to political theory, and its application to the AI transition is immediate. Institutional democracy — the formal apparatus of elections, legislatures, regulatory agencies — has responded to the AI transition with a sluggishness that approaches paralysis. The European Union's AI Act, adopted in 2024, represents the most comprehensive regulatory response to date, and its provisions are primarily directed at the supply side — what AI companies may build and what they must disclose — rather than at the demand side: what protections exist for the workers, communities, and citizens who bear the consequences of AI deployment. In the United States, regulatory response has been fragmented across executive orders, agency guidance documents, and state-level initiatives that do not constitute a coherent governance framework. The institutional machinery of democratic governance is operating on a timeline measured in years and decades. The AI transition is operating on a timeline measured in months and quarters.
The gap between institutional time and technological time creates the conditions for what Thompson would have recognized as democratic practice outside institutional channels — not because the practitioners prefer illegitimate action to legitimate governance, but because the legitimate governance mechanisms are not operating at the speed or the scope that the situation requires.
The artists filing lawsuits, the actors striking, the writers petitioning, the engineers whistleblowing — each is engaged in democratic practice in Thompson's sense. Each is asserting the right to participate in decisions about AI deployment from which the formal institutional mechanisms have excluded them. The exclusion is not the product of malice. It is the product of structural mismatch: institutional democracy was designed for a pace of change that the AI transition has outstripped, and the people whose lives are being transformed are operating in the space between the change and the governance, improvising mechanisms of voice from the materials available.
The improvisation has limits. Thompson was unsentimental about the limits of extra-institutional action. The food riots enforced a just price for a day. The framework knitters compelled attention for a season. Neither created the durable institutional structures through which working people's interests could be continuously represented. The durable structures — the trade unions, the factory acts, the franchise extensions that eventually gave working people political voice — came later, built through decades of sustained organizing and political struggle, and they came because the extra-institutional actions had demonstrated, at significant human cost, that the institutional deficit was real and that the consequences of leaving it unaddressed were intolerable.
The contemporary resistance to the terms of AI deployment is performing the same demonstrative function. It is making visible the institutional deficit — the absence of mechanisms through which the affected communities participate in the governance of the transition. The visibility does not, on its own, create the mechanisms. But it creates the political pressure for their creation, and the political pressure, if sustained, produces institutional innovation.
The innovation, when it comes, will need to address a problem that the industrial-era institutions did not face: the speed and the geographic scope of the transformation. The trade union organized the workshop. The AI transition crosses every workshop, every industry, every national boundary simultaneously. The institutions that govern the AI transition must operate at a scale and a speed that the trade union model cannot accommodate — and they must do so while preserving the essential function that the trade union served: the structural representation of workers' interests in the governance of the conditions of their labor.
What might such institutions look like? The question is speculative, but the historical pattern provides constraints. Effective institutions for the governance of technological change have, in every previous case, included three elements: a mechanism for the articulation of workers' interests (the union, the professional association, the guild), a mechanism for the negotiation of those interests with the interests of capital (the collective bargaining agreement, the regulatory process, the legislative hearing), and a mechanism for the enforcement of the resulting arrangements (the labor law, the inspection regime, the strike as last resort).
The digital equivalents might include data trusts through which workers collectively govern the use of their creative output in AI training — a structural response to the consent violation that the artists, writers, and actors have identified. They might include sectoral bargaining arrangements that set minimum standards for AI deployment across industries, rather than relying on firm-by-firm negotiation that atomized workers cannot effectively conduct. They might include mandatory impact assessments before AI systems are deployed in ways that affect significant numbers of workers — assessments that include the testimony and the analysis of the affected communities, not merely the projections of the deploying organizations. They might include representation requirements on the governance boards of AI companies — a structural mechanism for ensuring that the perspectives of the affected communities are present in the rooms where decisions are made.
Each of these proposals is debatable. None is utopian. Each has precedents in existing institutional arrangements — data trusts in information governance, sectoral bargaining in European labor law, impact assessments in environmental regulation, worker representation in German codetermination. The question is not whether such mechanisms are conceivable but whether the political will exists to create them — and whether the political will is generated before the transition has hardened into arrangements that the affected communities had no part in shaping.
Thompson's framework insists that the political will is not given. It is made — made through the democratic practice of the affected communities, through the assertion of their right to participate, through the sustained pressure of collective action that compels the institutional response. The framework knitters' resistance did not produce the trade unions directly. It produced the conditions — the visible injustice, the public awareness, the political pressure — under which the trade unions could be built. The contemporary resistance to the terms of AI deployment may serve the same function: not resolving the institutional deficit directly, but creating the political conditions under which the deficit can be addressed.
The resistance is therefore not a problem to be solved. It is a signal to be heeded. It signals that the governance of the AI transition is inadequate, that the institutional mechanisms for legitimate participation are absent, and that the affected communities will continue to improvise mechanisms of voice — lawsuits, strikes, petitions, public advocacy, collective withdrawal — until the institutional mechanisms are built. The signal has been sent. The question, as Thompson would have framed it, is whether the institutions will be built by design or by crisis — whether the comfortable classes will recognize the democratic content of the resistance in time to respond with institutional innovation, or whether they will dismiss the resistance as Luddism and discover, as the comfortable classes of the early nineteenth century discovered, that dismissed grievances do not disappear. They accumulate. And the accumulation, if it is not addressed through legitimate channels, finds other outlets.
---
The framework knitters of Nottinghamshire did not know how their story would end. They could not have known that the customs they defended would be destroyed, that the trades they practiced would be transformed beyond recognition, that the communities they belonged to would be dispersed and reconstituted in forms they could not have imagined. They could not have known that the resistance they mounted, and lost, would contribute to a tradition that would produce, over the following century, the institutional framework — trade unions, factory acts, the extension of the franchise, the welfare state — through which working people's interests would be structurally protected for the first time in the history of industrial civilization.
They acted without the comfort of knowing the outcome. They acted because the alternative — passive acceptance of a transformation imposed without consent, under terms set by people who would not bear the costs — was intolerable to people who understood their own dignity and refused to relinquish their claim to a voice in the governance of their own labor. Thompson wrote about them with a tenderness that never softened into sentimentality, because he understood that the tenderness was owed to the courage of acting in the dark, of building structures that might not hold, of demanding justice from a system that had no structural reason to provide it.
The AI transition poses the question of the framework knitters to the present generation, and the question has not changed: What future is being built, by whom, and in whose interest?
The question is not rhetorical. The AI transition is not a natural force whose trajectory is determined by the physics of computation. It is a human enterprise, governed — or ungoverned — by human institutions, shaped by human decisions, subject to the same political dynamics that have shaped every previous technological transformation. The future it produces will be determined not by the capabilities of the technology but by the governance of the technology: by the decisions about who benefits and who bears the costs, who has a voice and who is silenced, whose labor is honored and whose is treated as raw material.
Two futures are possible, and the choice between them — Acemoglu and Johnson insist that "critically, this is a choice" — is being made now, in the specific decisions of specific institutions, by specific people who may or may not understand that the decisions they are making constitute a political choice about the kind of society they are building.
The first future is the future that the market, left to its own devices, will produce. It is a future in which the productivity gains of AI are captured primarily by the owners of AI capital, in which the costs of displacement are borne primarily by the workers whose labor is no longer needed or whose labor is needed only in degraded forms — monitored, managed, stripped of autonomy and judgment. It is the future that Thompson documented in the factory towns of the 1830s: productive, efficient, and intolerable to the majority of the people who lived in them. It is the future that produced Chartism, the trade union movement, and the political radicalism that shook the foundations of the English state — not because the radicals were irrational but because the arrangement was unjust, and the injustice, left unaddressed, generated the resistance that every unjust arrangement eventually produces.
The second future is the future that deliberate institutional action can produce. It is a future in which the governance of the AI transition reflects the interests of the affected communities as well as the interests of capital, in which the distribution of costs and benefits is negotiated rather than imposed, in which the moral economy of expertise — the customs of quality, mentorship, professional autonomy, and practitioner voice — is maintained not through nostalgia but through new institutional structures adapted to the conditions of the digital economy. It is a future in which the productivity gains of AI are real and are shared, in which the raising of the floor does not collapse the ceiling, in which the judgment and the craft and the accumulated wisdom of the practitioner are valued not because the market spontaneously values them but because institutional structures exist to ensure that they are valued.
The difference between the two futures is not technological. The technology is the same in both. The difference is political. It is the difference between a transition governed by the interests of the few and a transition governed by the participation of the many. It is the difference between a river that floods and a river that irrigates — not a difference in the river's power but in the structures that direct it.
The creation of those structures is not an abstraction. It is a specific set of institutional innovations that the present moment demands and that the historical tradition of collective action in the face of technological change provides the resources to imagine.
It means data governance frameworks that give workers collective control over the use of their creative and intellectual output in AI training — replacing the current arrangement, in which the default is extraction without consent, with an arrangement in which the default is negotiation. It means sectoral standards for AI deployment that are developed with the participation of the workers they affect — standards that include not merely technical requirements but protections for professional autonomy, quality, and the mentorship relationships through which expertise is transmitted. It means labor law adapted to the conditions of the digital economy — conditions in which the employment relationship that anchored industrial-era protections has been replaced by freelancing, gig work, and contract arrangements that provide none of the protections and all of the vulnerability. It means educational investment that prepares citizens not merely to use AI tools but to govern them — to participate as informed democratic actors in the decisions about how the most powerful technology in human history is deployed.
And it means, most fundamentally, a recognition that the displaced, the concerned, the resistant are not obstacles to progress. They are the democratic constituency whose participation in the governance of progress is the condition of progress's legitimacy. Their analysis deserves the same respect as the analysis of the people building the tools. Their interests deserve the same structural representation. Their voices deserve to be heard not as background noise to the triumphant narrative of technological advance but as essential contributions to the deliberation about what kind of advance is worth having.
Thompson spent his career insisting on this recognition. He insisted on it for the framework knitters of Nottinghamshire, for the handloom weavers of Lancashire, for the shearmen of Yorkshire, for the food rioters and the Chartists and the trade unionists and every generation of working people who demanded, against the enormous condescension of posterity, that their experience be taken seriously and their interests be represented in the governance of the forces that shaped their lives.
The insistence continues. It continues in the artists who demand consent. In the writers who demand compensation. In the actors who demand control of their own likenesses. In the engineers who demand responsible governance. In the workers who are beginning to recognize that their individual difficulties are expressions of a common condition, and that the common condition requires a collective response.
Thompson would have recognized them. He would have studied their methods, quoted their words, traced their networks of solidarity and their moments of courage. He would have insisted, against every attempt to dismiss them as backward or irrational, that their resistance was the sound of democracy asserting itself in the face of a transformation that democracy had not sanctioned.
The question is not whether the AI transition will proceed. It will. The question — the political question, the democratic question, the question that the framework knitters asked and that every generation since has asked in its own vocabulary — is on whose terms. The answer depends on whether the institutions are built, whether the voices are heard, and whether the society that produces the most powerful tools in human history proves capable of governing those tools in the interest of all the people they affect.
Thompson trusted the capacity of ordinary people to build the institutions they needed. He documented that capacity across centuries of evidence. He believed — not as a pious hope but as a historical conclusion drawn from the most rigorous examination of the most difficult evidence — that working people, given the space to organize and the institutional channels through which to act, would create arrangements more just than any arrangement created on their behalf by benevolent elites.
That trust is the foundation on which the future must be built. Not trust in the technology, which is indifferent to justice. Not trust in the market, which is indifferent to everything except price. Not trust in the benevolence of leaders, which is real but unstable. Trust in the capacity of the people whose lives are at stake to participate in the decisions that shape those lives, and to build, through their participation, a future worth inhabiting.
The making has begun. Whether it succeeds depends on the choices of the present generation — the generation that holds, in its hands, the most powerful tools ever created and the most urgent political question ever asked: Will we govern these tools, or will we be governed by those who own them?
---
Two hundred and fourteen years separate the framework knitters of Nottinghamshire from the engineers in my office in Trivandrum. The distance is enormous. The stocking frame and Claude Code share almost nothing in their mechanism, their capability, their place in the productive order. The men who gathered in the dark to break machines and the engineers who gathered in a conference room to learn one — there is no equivalence between their situations that survives close inspection. The danger of historical analogy is always the same: it flatters the analyst's pattern-seeking instinct while obscuring the specificities that matter most.
And yet Thompson's questions arrived in my life and would not leave.
Not the historical details. The questions beneath them. Who decides the terms of the transformation? Who bears the costs? What happens when the people who bear the costs have no voice in the decisions that produce them? These are not historical questions. They are structural ones. They recur because the dynamics that generate them — the concentration of technological power, the externalization of displacement costs, the silencing of dissent through the rhetoric of progress — are not confined to any century or any technology.
I wrote in The Orange Pill about the beaver and the river. The beaver builds dams that redirect the flow of technological change toward conditions that support life. I still believe in the beaver. I am still trying to be one. But Thompson forced me to confront the limitation I was not seeing: the beaver builds for the community, but the community did not choose the beaver. The dam is the beaver's design. The pool is the beaver's pool. And the creatures in the pool — the engineers, the designers, the writers, the translators, the customer service workers, the content moderators, the millions of people whose labor is being reshaped by a technology they had no part in creating — depend on the beaver's continued benevolence in a way that should make everyone, including the beaver, uncomfortable.
Benevolence is not governance. This is Thompson's sentence, though he never wrote it in exactly these words, and it is the sentence that changed how I think about my own work. I kept the team in Trivandrum. I invested the productivity gains in capability rather than extracting them as margin. I chose the pool over the channel. And I would make the same choice again. But the choice was mine. Not theirs. And the next leader who stands where I stood may not make the same choice, because the market does not reward the beaver's choice — it rewards the quarterly number, the headcount reduction, the margin expansion. The beaver swims against a current that the system generates, and if the beaver tires, the dam fails.
What Thompson taught me — what the framework knitters, through Thompson, taught me — is that the creatures in the pool need something more durable than a beaver. They need structures. Institutional mechanisms through which their interests are represented not at the discretion of the leader but as a matter of right. The trade union was one such structure. The factory act was another. The labor law, the collective bargaining agreement, the works council, the professional licensing board — each was created because some generation recognized that the interests of working people could not be left to the goodwill of employers, however genuine that goodwill might be.
I do not know what the equivalent structures look like for the AI transition. Nobody does yet. Data trusts, sectoral standards, updated labor law, mandatory impact assessments, worker representation on AI governance boards — these are possibilities, not blueprints. The blueprints will be drawn by the people who need the structures, through the same kind of collective experimentation and political struggle that produced every previous institutional innovation in the history of labor.
What I know is that the structures are needed. That the signals — the lawsuits, the strikes, the petitions, the quiet withdrawal of skilled practitioners who see no channel through which their expertise is valued — are real, and they are the sound of a democratic deficit asserting itself. That my role, as a builder who benefits from the transition, is not to dismiss the signals or to answer them with advice about adaptation, but to use whatever influence the position affords to advocate for the institutional response that the signals demand.
Thompson trusted ordinary people to build the institutions they needed. He trusted them because he had studied, across centuries of evidence, their capacity to organize, to analyze, to create structures of mutual aid and collective representation under conditions that would have crushed anyone's faith in human agency. I find that trust harder to summon than he did. The pace of the AI transition, the geographic dispersion of the affected workers, the erosion of the institutional infrastructure that previous generations built — all of it makes the task of collective organization more difficult than it has ever been.
But the difficulty does not diminish the necessity. And the necessity does not depend on my confidence in the outcome. The framework knitters acted in the dark. They built structures that did not hold. They demanded justice from a system that had no structural reason to provide it. And the tradition they inaugurated — the tradition of working people insisting on a voice in the governance of their own labor — produced, over the following century, the institutional framework that made industrial civilization tolerable.
The AI transition will produce its own institutions, or it will produce its own crisis. The choice, as Acemoglu and Johnson insist, is ours. I hope we choose the institutions. I hope we choose them before the crisis makes the choice for us.
---
The AI revolution is being narrated as a story of individual adaptation — reskill, climb higher, develop judgment the machines can't replicate. E.P. Thompson spent his life exposing what that narrative conceals. When the framework knitters of 1812 were dismissed as fearful machine-breakers, Thompson recovered what was actually happening: skilled workers with precise grievances, exhausting every legitimate channel of redress, and turning to direct action only after every institutional door had been closed in their faces. Their demands were not radical — fair compensation, quality standards, a voice in the decisions reshaping their trades. This book applies Thompson's framework to the AI transition and finds the same structural dynamics reproducing with unsettling precision: gains concentrated at the top, costs externalized to the displaced, and an institutional vacuum where democratic governance should be. The artists filing lawsuits, the actors striking, the writers petitioning — Thompson would have recognized them instantly. They are not Luddites. They are the democratic constituency whose participation in governing this transition is the condition of its legitimacy.

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events it uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →