By Edo Segal
The number that won't leave me alone is twelve thousand.
Twelve thousand British soldiers deployed to the Midlands in 1812. More troops than Wellington had fighting Napoleon on the Iberian Peninsula. Not aimed at a foreign army. Aimed at framework knitters with hammers who had done something the British state found more dangerous than any battlefield threat: they had asked, with surgical precision, who was capturing the gains from the new machines.
I wrote about the Luddites in *The Orange Pill*. I wrote about them with genuine respect — their diagnostic clarity, their accurate predictions, the legitimacy of their grief. But I was telling half the story. The half where individuals make wise choices. The half where the beaver builds a dam and the ecosystem flourishes.
Eric Hobsbawm told the other half. The half about what happens when the dam needs to be bigger than any single beaver can build.
Hobsbawm spent sixty years asking one question about every technological revolution he studied: Who benefits? Not in the abstract. Not eventually. Right now, in this transition, with these specific people bearing these specific costs — who captures the gains, and what institutions exist to ensure the rest aren't destroyed in the process?
The answer, across two centuries and four volumes of meticulous history, was remarkably consistent. The technology arrives. The gains concentrate. The displaced bear the cost. And the institutions that eventually redistribute those gains — the Factory Acts, the trade unions, the weekend itself — arrive a generation too late for the people who needed them most.
That gap between the technology's arrival and the institutional response is the gap I'm most afraid of right now. Not because I doubt AI's power. Because I've seen how fast the river moves and how slowly dams get built when they require collective action rather than individual will.
In *The Orange Pill*, I argued that we are beavers — that we can shape where the current flows. I still believe that. But Hobsbawm forced me to confront what I was avoiding: some dams cannot be built by one beaver, no matter how sharp the teeth. Some dams require movements. Institutions. The organized political will of people who refuse to let the distribution question be answered by default.
This book is that confrontation. It is not comfortable reading for a builder who prefers to act alone. But the framework knitters taught us — at a cost measured in generations — what happens when the people with legitimate grievances are dismissed as afraid of progress while the structural question goes unanswered.
The historians are already watching us. The question is what they'll find.
-- Edo Segal ^ Opus 4.6
Eric Hobsbawm (1917–2012) was a British historian widely regarded as one of the most influential historians of the twentieth century. Born in Alexandria, Egypt, and raised in Vienna and Berlin before settling in England, he spent the bulk of his academic career at Birkbeck College, University of London. His four-volume history of the modern world — *The Age of Revolution* (1962), *The Age of Capital* (1975), *The Age of Empire* (1987), and *The Age of Extremes* (1994) — remains a defining work of modern historiography, tracing the political, economic, and social transformations from the French Revolution through the collapse of the Soviet Union. His 1952 essay "The Machine Breakers" reframed the Luddite movement as rational collective bargaining rather than irrational technophobia, and his concept of "invented traditions" (developed with Terence Ranger) revealed how societies construct narratives of continuity to legitimize present arrangements. A lifelong Marxist who nonetheless maintained analytical independence from doctrinal orthodoxy, Hobsbawm insisted throughout his career that the central question of any economic transformation was distributional: not whether aggregate wealth increased, but who captured the gains and who bore the costs.
By the winter of 1811–12, twelve thousand British soldiers had marched into the textile counties of the English Midlands. This was a larger military force than the Duke of Wellington commanded at that moment on the Iberian Peninsula, where he was fighting Napoleon for control of Europe. The soldiers were not being deployed against a foreign enemy. They were being deployed against framework knitters in Nottinghamshire, croppers in Yorkshire, and handloom weavers in Lancashire — skilled British workers who had begun, under cover of darkness and with remarkable organizational discipline, to break machines.
The standard narrative of what happened next has been repeated so often and so carelessly that it has acquired the status of folklore. Ignorant workers, afraid of progress, smashed machines they did not understand. The word "Luddite" entered the language as a synonym for technophobic backwardness — a person who cannot see the future because they are too busy clinging to the past. When a contemporary software engineer resists adopting AI tools, or a writer objects to having their work ingested by a language model without consent, the dismissal arrives pre-formed: Don't be a Luddite.
Hobsbawm's 1952 essay "The Machine Breakers," published in the inaugural issue of *Past & Present*, demolished this narrative with the patience of a scholar who understood that the most dangerous historical errors are the ones that flatter the present. The essay was not long — barely fifteen pages — but its analytical precision was devastating. Hobsbawm demonstrated, through close reading of contemporary accounts, trial records, and parliamentary testimony, that the Luddites were not ignorant, not irrational, and not opposed to machinery as such. They were skilled artisans who understood the machines they attacked more intimately than the manufacturers who deployed them. They understood the tensile properties of the thread, the relationship between frame gauge and cloth quality, the difference between a stocking produced on a wide frame by an unskilled worker and one produced on a narrow frame by a craftsman who had spent years learning to regulate tension with his hands and feet simultaneously. The machines were not mysterious to them. The machines were familiar. What was new was the use to which the machines were being put.
The distinction Hobsbawm drew was between hostility to machines and hostility to the deployment of machines in ways that violated established norms of fair dealing. The Nottinghamshire framework knitters did not break every stocking frame in the county. They broke the frames owned by manufacturers who were using wide frames to produce cut-up stockings — an inferior product that could be made by unskilled workers at a fraction of the cost — and selling them as genuine articles. The frames belonging to manufacturers who continued to produce quality goods at established rates were left untouched. This selectivity was the essay's central revelation. The violence was not indiscriminate rage. It was targeted enforcement of trade standards, conducted by people who had exhausted every legal avenue available to them and found each one closed.
The Combination Acts of 1799 and 1800 had made collective organizing illegal. Parliament represented factory owners, not factory workers — the framework knitters had no vote, no representation, no institutional channel through which to register their grievances. The magistrates who administered local justice were frequently the employers themselves, or their social allies. Petitions to Parliament had been submitted, considered, and ignored. The legal framework of the early-nineteenth-century British state offered the framework knitters precisely this many channels for negotiating the terms of their displacement: none.
Hobsbawm's archival reconstruction of the movement's internal organization revealed something closer to a military campaign than a riot. The attacks were coordinated across counties. Intelligence networks identified which manufacturers were violating trade norms. The raids were conducted at night by organized groups with specific objectives and specific limits. Frames were broken; people were not harmed, at least in the movement's early phases. The discipline was maintained through oaths of secrecy and a command structure organized under the mythical figure of "Ned Ludd" — a name borrowed, most likely, from a Leicester apprentice who had broken frames years earlier, and now deployed as a kind of collective pseudonym that allowed coordinated action without identifiable leadership.
The British state's response confirmed Hobsbawm's analysis more eloquently than any archival document. A government that understood the movement as mere vandalism would not have deployed twelve thousand troops. A government that saw confused workers lashing out at machines they did not understand would not have passed the Frame Breaking Act of 1812, which made machine-breaking a capital offense punishable by death. The scale of the military and legislative response was proportional not to the physical damage — broken frames could be replaced — but to the political threat. The ruling class understood, even if subsequent historians did not, that the Luddites were mounting an organized challenge to the terms on which the new technology would be deployed, and that this challenge, if successful, would have imposed constraints on the factory owners' freedom to use machinery as they pleased.
The framework knitters were right about what the machines would do to them. This point requires emphasis because the subsequent triumphalist narrative depends on treating the Luddites' fears as exaggerated, as though the displacement they predicted was a phantasm born of ignorance rather than a precise forecast that history confirmed in every particular. Wages in the stocking trade collapsed. The wide-frame cut-ups that the Luddites had identified as inferior products became the industry standard. Skilled workers who had earned twenty shillings a week found themselves competing against unskilled factory operatives earning a fraction of that. The apprenticeship system that had trained craftsmen, maintained quality, and constituted a community of mutual support dissolved over two decades. The framework knitters' children did not become master stockingers. They became factory hands.
Hobsbawm's essay insisted that the accuracy of the Luddites' predictions mattered — that it was not sufficient to acknowledge the displacement as an unfortunate side effect and then move on to celebrate the aggregate productivity gains. The gains were real. The textile industry's output increased enormously after mechanization, and the price of cloth fell, and consumers benefited, and the aggregate wealth of the nation grew. None of this was in dispute. What Hobsbawm disputed was the analytical move that treated the aggregate as the whole story — the move that allowed the historian to write "the Industrial Revolution raised living standards" without adding "eventually, partially, and only after a century of political struggle during which the people who bore the costs of the transition were systematically excluded from its benefits."
The Luddites appear in Segal's *The Orange Pill* as a cautionary tale — figures who "understood their situation clearly and chose the wrong instrument." This reading treats the historical Luddites with considerably more respect than the standard narrative allows. Segal acknowledges the precision of their diagnosis, the legitimacy of their fears, and the reality of their losses. But Hobsbawm's framework pushes the analysis further than Segal takes it, in a direction that is uncomfortable for anyone whose primary commitment is to the forward motion of technological capability.
The Luddites did not merely understand their situation. They analyzed it. They identified the specific mechanisms by which the new technology was being used to concentrate gains and distribute costs. They distinguished between deployments of machinery that were consistent with the established norms of the trade and deployments that violated those norms. They developed a collective strategy for resisting the latter while accepting the former. And they executed that strategy with a discipline that, had it been displayed by an army rather than by workers, would have been admired as tactical competence.
What Hobsbawm's analysis reveals, when extended to the present, is that the contemporary habit of dismissing resistance to AI as irrational technophobia performs the same analytical error that historians performed for a century and a half before Hobsbawm corrected them. The senior software engineer who refuses to adopt Claude Code is not afraid of the tool. The senior software engineer understands the tool — in many cases more intimately than the enthusiasts who celebrate it — and has made a judgment about what the tool's deployment will cost, not in the abstract but in the specific: in degraded code quality, in atrophied skills, in the destruction of mentorship relationships, in the erosion of the professional standards that the engineering community built over decades. The engineer may be wrong about the magnitude of the cost. The engineer may be wrong about the available alternatives. But the engineer is not irrational, and treating the resistance as irrational is not analysis. It is politics — the politics of delegitimizing opposition so that the distribution question need not be addressed.
The twelve thousand soldiers deployed to the Midlands in 1812 represented a political judgment by the British state: that the framework knitters' challenge to the terms of mechanization was serious enough to require military suppression. The equivalent judgment in the contemporary AI landscape is not military — it is cultural. The cultural deployment against resistance takes the form of the word "Luddite" itself, weaponized as a dismissal, stripped of its historical content, emptied of the precision and rationality that Hobsbawm painstakingly documented. To call someone a Luddite in 2026 is to perform a rhetorical operation that Hobsbawm spent his career exposing: the conversion of a political question — who benefits from this technology, and who bears its costs? — into a psychological diagnosis — why are you afraid of progress?
The conversion is effective because it relieves everyone involved of the obligation to answer the political question. If the displaced are merely afraid, then the solution is reassurance, retraining, adaptation — individual remedies for what is in fact a structural problem. If the displaced have a legitimate political claim — if the distribution of gains and costs is genuinely unjust, if the institutions that might redistribute the gains do not exist or are inadequate — then the solution is institutional construction, collective action, political struggle. The latter is harder, more expensive, and more threatening to the people who currently capture the gains. The psychological diagnosis is therefore preferred, not because it is more accurate, but because it is more convenient.
Hobsbawm understood convenience as a category of historical analysis. The convenient interpretation is the one that serves the powerful. The accurate interpretation is the one that survives the archives. He spent his career demonstrating that the two rarely coincide, and that the gap between them is where the real history lives — in the space between what the powerful prefer to believe and what the evidence actually shows.
The evidence shows that in 1811, skilled workers in the English Midlands identified, with diagnostic precision, the mechanisms by which a new technology was being deployed to destroy their livelihoods, their communities, and their craft. The evidence shows that they exhausted every legal channel available to them before turning to direct action. The evidence shows that their direct action was targeted, disciplined, and strategically coherent. The evidence shows that the British state treated their challenge as a serious political threat requiring military suppression. And the evidence shows that they were right about the consequences: the wages collapsed, the skills were devalued, the communities dissolved, the apprenticeship system died, and the aggregate gains of mechanization were captured by factory owners for the better part of a century before political struggle redirected them toward broader distribution.
The standard narrative — ignorant workers, afraid of progress, smashing machines — is not a historical account. It is an invented tradition, constructed after the fact to legitimate the distribution that actually occurred. Hobsbawm saw through it in 1952. The question for the present is whether the same invented tradition, now dressed in the vocabulary of "innovation" and "disruption" rather than "progress" and "improvement," will be allowed to close the conversation about AI's distribution before the conversation has properly begun.
---
Hobsbawm's phrase "collective bargaining by riot" was not a euphemism and not a provocation. It was an analytical category — a precise description of the function that machine-breaking served in the political economy of early-nineteenth-century England. The phrase worked by taking two concepts that the conventional wisdom held apart — "collective bargaining," which implied legitimate negotiation between parties of roughly equal standing, and "riot," which implied irrational violence — and demonstrating that they described the same phenomenon, viewed from different positions in the class structure.
The mechanism was straightforward. In any economic system, the terms of exchange between capital and labor are negotiated. The negotiation may be formal — through unions, contracts, arbitration — or informal — through customs, norms, tacit understandings about what constitutes fair dealing. When the formal channels are closed, the informal ones become more important. When both are closed, the negotiation does not cease. It finds other channels. The framework knitters of Nottinghamshire and the croppers of Yorkshire had no unions, no vote, no parliamentary representation, and no legal right to organize. The Combination Acts had criminalized collective action. The petitions they had submitted to Parliament had been received and ignored. The magistrates who adjudicated local disputes were, in many cases, the same men who owned the offending frames or depended socially on those who did.
Every channel was closed. The negotiation continued.
Hobsbawm's conceptual innovation was to recognize that machine-breaking was not a breakdown of negotiation but a form of it — the last form available when all others had been suppressed. The framework knitters were not expressing rage. They were applying pressure. The destruction of a manufacturer's frames imposed a cost — financial, reputational, and operational — that was calibrated to exceed the cost of meeting the workers' demands. The calculus was explicit: it was cheaper for a manufacturer to pay fair wages and maintain quality standards than to have his frames broken repeatedly, his production disrupted, and his property placed under perpetual threat. The machine-breaking worked, in many documented cases, precisely because the manufacturers recognized it as negotiation and responded accordingly — adjusting wages, reverting to established practices, reaching accommodations that, while informal and unenforceable, restored something approximating the prior terms of trade.
Hobsbawm distinguished this sharply from what he called "hostility to machines as such" — the categorical rejection of new technology that the standard narrative attributed to the Luddites but that the evidence did not support. The first sort of machine-breaking, he wrote, "implies no special hostility to machines as such, but is, under certain conditions, a normal means of putting pressure on employers." The Nottinghamshire, Leicestershire, and Derbyshire Luddites "were using attacks upon machinery as a means of coercing their employers into granting them concessions with regard to wages and other matters." The targeting was the proof: machines operated within the established norms were left untouched, because the grievance was not with the machine but with the terms of its deployment.
This distinction — between hostility to a technology and hostility to specific deployments of a technology — is the single most important analytical tool that Hobsbawm's framework offers the contemporary AI debate. Apply the distinction and the blanket dismissal of AI's critics collapses, because nearly all contemporary resistance to AI falls into Hobsbawm's first category rather than the second. The resistance is not to AI as such. It is to specific deployments of AI that violate established norms — norms about fair compensation for expertise, fair attribution of creative work, fair treatment of workers displaced by automation, fair distribution of the gains that the technology produces.
Consider the open-source developers who have restricted the licensing of their code to prevent its use as training data for large language models. The restriction is not directed at language models in the abstract. It is directed at a specific deployment: the extraction of value from publicly shared code by companies that profit from the extraction without compensating the developers who created the training data. The grievance is precise. The developers contributed their code to a commons governed by specific norms — norms about attribution, reciprocity, and the shared benefit of collaborative development. The AI companies used that commons as raw material for commercial products that generated billions of dollars in revenue, returning nothing to the developers whose work made the products possible. The licensing restriction is collective bargaining by copyright — the deployment of the one legal instrument available to impose a cost on the party that violated the norms of fair dealing.
Consider the Hollywood writers who struck in 2023 over, among other issues, the use of AI to generate scripts or to use AI-generated material as a basis for human rewriting. The writers were not opposed to word processors, or to screenwriting software, or to the computational tools that had been part of their workflow for decades. They were opposed to a specific deployment of a specific technology that threatened to reduce their compensation, eliminate their credit, and destroy the economic model that sustained a professional community of screenwriters. The targeting was precise. The demands were specific. The mechanism — the withdrawal of labor until the terms of deployment were renegotiated — was collective bargaining in its most traditional form, adapted to a threat that Hobsbawm would have recognized instantly as structurally identical to the one his framework knitters faced.
Consider the engineers whom Segal describes as "running for the woods" — the senior software developers who, facing the AI disruption of their profession, have chosen withdrawal rather than adaptation. Segal frames this as a fight-or-flight response, mapping it onto a primal survival instinct. Hobsbawm's framework offers a more precise reading: withdrawal is itself a form of collective bargaining. The framework knitters who moved to rural areas to continue their trade beyond the reach of factories were not fleeing. They were refusing to participate in a system whose terms they had not set and could not negotiate. The refusal imposed a cost — the loss of their skill, their experience, their institutional knowledge — that the system would eventually feel, even if it did not feel it immediately. The senior engineers who are leaving the technology industry in 2026 are performing the same function. Their departure is a withdrawal of expertise that the industry cannot replace with junior developers and AI tools alone, however much the triumphalist narrative insists otherwise. The cost of the withdrawal will be measured not in broken frames but in degraded systems, in subtle architectural failures, in the slow erosion of quality that becomes visible only after the people who maintained it are gone.
Hobsbawm would also have recognized the forms of voice that characterize the contemporary resistance. The authors, artists, and journalists who have spoken publicly about the costs of AI adoption — who have documented the displacement, named the companies, and made the human consequences visible — are performing the function that Friedrich Engels performed in *The Condition of the Working Class in England* (1845), that Elizabeth Gaskell performed in *Mary Barton* (1848), that Charles Dickens performed in *Hard Times* (1854). They are witnessing. Making visible what the beneficiaries of the transition prefer not to see. Segal's *The Orange Pill* performs this function with unusual honesty for a technology book — his compassion for the displaced is genuine, his documentation of their experience is specific, and his acknowledgment that real loss is occurring is not a rhetorical gesture but a structural commitment that runs through the entire text.
But witnessing, in Hobsbawm's framework, is necessary but not sufficient. Visibility creates the conditions for political action. It does not constitute political action. The displaced workers of the Industrial Revolution became visible through the witness literature of the 1840s and 1850s, and that visibility contributed to the reform movements that eventually produced factory legislation. But the contribution was indirect, and the timeline was measured in decades. The framework knitters whose suffering Dickens fictionalized in 1854 had been displaced since the 1810s. Two generations bore the cost before the institutions that might have mitigated it were constructed.
The question that Hobsbawm's concept of collective bargaining by riot raises for the contemporary AI moment is not whether resistance is occurring — it manifestly is, in multiple forms, across multiple professions and multiple jurisdictions — but whether the channels available for the resistance are adequate to the scale of the transformation. The framework knitters' channels were inadequate: the Combination Acts had closed the legal ones, and the machine-breaking that remained was ultimately suppressed by military force. The contemporary channels — copyright restriction, labor organizing, regulatory advocacy, public voice — are wider, more numerous, and more legally protected. But the question of adequacy is not answered by counting channels. It is answered by measuring the power differential between the parties and the speed of the transformation relative to the speed of the institutional response.
On both counts, the contemporary situation is cause for concern. The power differential between the companies that build and deploy AI and the workers displaced by it is enormous — comparable, in Hobsbawm's framework, to the power differential between the factory owners who controlled the new machinery and the framework knitters who possessed only their skill and their willingness to act collectively. The speed of the AI transformation is far greater than the speed of the Industrial Revolution: what took decades in textiles is taking months in software. And the institutional response — the EU AI Act, the American executive orders, the emerging regulatory frameworks — is moving at the pace of institutional deliberation, which is to say, at a pace that is rational and careful and comprehensively inadequate relative to the speed of the transformation it is attempting to govern.
Hobsbawm's framework does not offer a solution to this mismatch. It offers something more valuable: a precise description of what happens when the mismatch persists. When the channels for negotiation are inadequate and the power differential is large and the speed of transformation outpaces the speed of institutional response, the outcome is determined not by the technology and not by the market but by the balance of political forces. The framework knitters lost that balance decisively. Their resistance was suppressed, their communities were destroyed, and the gains of mechanization were captured by capital for the better part of a century. The broader distribution that eventually followed — the Factory Acts, universal education, social insurance — was achieved not by the framework knitters but by their grandchildren, organized into trade unions and political parties that had the institutional power the framework knitters lacked.
The lesson is not that resistance is futile. The lesson is that resistance, to be effective, requires institutional form. Individual acts of withdrawal, copyright restriction, and public voice are the beginning of collective bargaining, not its fulfillment. The fulfillment requires the construction of institutions — unions, professional associations, regulatory bodies, democratic movements — that can negotiate the terms of the transformation at a scale commensurate with the transformation itself. Hobsbawm spent his career documenting the construction of such institutions in the aftermath of the Industrial Revolution. The question for the present is whether the construction can happen proactively, before the displacement is complete, or whether it will happen reactively, after a generation has already borne the cost.
History, as Hobsbawm would have observed with the practiced melancholy of a scholar who had read this story in the archives of every century since the sixteenth, suggests the latter.
---
The political economy of technological displacement is simple to state and excruciating to confront. A new technology increases aggregate productivity. The increase generates aggregate wealth. The aggregate wealth is distributed. The distribution is unequal. The inequality is not a byproduct of the technology. It is a product of the institutions — the ownership structures, the legal frameworks, the bargaining arrangements, the political systems — that determine who captures the gains and who bears the costs. The technology determines the magnitude of the gains. The politics determine the distribution.
Hobsbawm documented this dynamic across four volumes of modern history — *The Age of Revolution* (1962), *The Age of Capital* (1975), *The Age of Empire* (1987), and *The Age of Extremes* (1994) — with a consistency that amounted to a law of political economy: every major technological transition produces aggregate gains that are initially captured by the owners of the new technology and distributed more broadly only through the deliberate construction of institutions designed to redistribute them. The construction is never automatic. It is always political. It is always resisted by the beneficiaries of the initial distribution. And it always takes longer than the displaced can afford.
The Industrial Revolution is the paradigmatic case because Hobsbawm studied it more closely than anyone else and because the evidence is comprehensive enough to trace the distribution with precision. The mechanization of the textile industry produced enormous productivity gains. Output increased. Prices fell. Consumers benefited. The aggregate wealth of the nation grew. These are the facts that the triumphalist narrative foregrounds, and they are accurate. But the same evidence, examined from the position of the people who bore the costs, tells a different story — one that Hobsbawm insisted was not a supplement to the triumphalist narrative but a corrective to it.
The framework knitters of Nottinghamshire earned approximately twenty shillings a week in the first decade of the nineteenth century — a wage that sustained families, supported apprenticeships, and maintained a standard of living that, while modest by later standards, provided economic security and social dignity. By the 1830s, the same workers, competing against machine-produced goods, earned a fraction of that. The wage collapse was not gradual. It was catastrophic — a transformation of economic circumstances within a single generation that destroyed not merely individual livelihoods but the entire economic ecosystem of artisan production. The guilds that had regulated the trade dissolved. The apprenticeship system that had trained the next generation of craftsmen became economically unviable — why would a young person spend seven years learning a skill that the market no longer valued? The communities that had been sustained by the trade's prosperity contracted and, in many cases, disappeared entirely.
The aggregate gains were real. They were also captured almost entirely by factory owners for the better part of a century. Real wages for industrial workers did not begin to rise meaningfully until the 1850s — four decades after mechanization had devastated the artisan trades. The improvement, when it came, was not a natural consequence of economic growth. It was a political achievement, won through the construction of institutions that the framework knitters did not live to see: trade unions (legalized in 1824), factory legislation (beginning with the Factory Act of 1833), public education (the Elementary Education Act of 1870), and eventually the apparatus of the welfare state that distributed the gains of industrial capitalism more broadly than the market, left to its own devices, ever would.
Hobsbawm's insistence on this timeline — the decades-long gap between the technology's arrival and the institutional response that eventually distributed its gains — is the aspect of his analysis most directly relevant to the AI moment that Segal describes in The Orange Pill. Segal acknowledges the gap. His discussion of the historical Luddites concludes with the observation that "the Luddites teach us what it costs to choose otherwise" — to refuse engagement rather than to build the institutional structures that might redirect the transformation's gains. Hobsbawm would recognize the concern and then push it toward the structural question that compassion alone cannot address: the institutional structures that eventually redistributed the Industrial Revolution's gains were not built by individual builders exercising wise judgment. They were built by movements — labor movements, reform movements, democratic movements — that organized millions of people around shared demands and applied collective pressure sufficient to compel the redistribution of gains that the owners of capital had no incentive to redistribute voluntarily.
The distinction between individual wisdom and collective power is not a minor qualification. It is the central analytical difference between Hobsbawm's framework and the individualist frameworks that dominate the contemporary AI discourse. Segal's book operates within an individualist framework — the builder, the beaver, the person who takes the orange pill and makes wise choices about how to direct the tool's capabilities. This framework is not wrong. Individual wisdom matters. The choices that individual builders, leaders, and parents make will shape the texture of the transition in ways that matter to the people immediately affected by those choices. But individual wisdom operates within a structure, and the structure is not set by individuals. It is set by institutions — by the legal frameworks, the ownership arrangements, the regulatory bodies, the collective bargaining structures, and the political systems that determine the parameters within which individual choices are made.
The framework knitters' tragedy was not a failure of individual wisdom. Many of them were wise — wise enough to see the future clearly, wise enough to analyze the mechanisms of their displacement with precision, wise enough to develop a collective strategy for resisting the most destructive deployments of the new technology. Their tragedy was structural: the institutions that might have redirected the transition toward broader benefit did not exist, and the political system that might have built them was controlled by the people who benefited from the existing distribution.
The contemporary AI transition operates within a comparable structural context, though the specific features differ in important ways. The ownership of AI tools is concentrated among a small number of companies — Anthropic, OpenAI, Google DeepMind, Meta — whose combined market capitalization exceeds the GDP of most nations. The workers displaced by AI are not factory operatives but knowledge workers — software engineers, designers, writers, translators, analysts, lawyers — whose professional communities, while considerable, lack the organizational density and collective bargaining power of industrial trade unions at their peak. The regulatory frameworks that might govern the transition's distribution are in their earliest stages — the EU AI Act addresses the supply side of AI governance but leaves the demand side, the question of what citizens, workers, and communities need to navigate the transition, almost entirely unaddressed.
Segal observes in The Orange Pill that the productivity gains from AI are real and measurable — a twenty-fold multiplier for a team of engineers in Trivandrum, working software produced from natural-language descriptions in hours rather than months. Hobsbawm would not dispute the reality of these gains. The textile industry's productivity gains were equally real. The question Hobsbawm would pose is the one he posed across four volumes of modern history: twenty-fold productivity for five engineers means the work of a hundred is now done by five. What institutional mechanism ensures that the ninety-five whose labor is no longer required participate in the gains their displacement made possible?
Segal addresses this question directly in one crucial passage: he chose to keep his team and expand its ambitions rather than reduce headcount. Hobsbawm would credit the choice and then observe that it was available to Segal because of his position in the ownership structure — he controlled the organization, and his control gave him the discretion to choose broad benefit over narrow efficiency. The choice was admirable. It was also exceptional. The vast majority of workers whose employers face the same arithmetic will not be led by people who make the same choice, because the market rewards efficiency more reliably than it rewards generosity, and because the shareholders who own the companies are not the workers who staff them.
This is the structural problem that individual wisdom cannot solve. However many individual leaders make Segal's choice, the aggregate outcome will be determined by the structural incentives that govern the majority of choices — incentives that, in the absence of institutional intervention, point toward concentration rather than distribution. The factory owners of the early Industrial Revolution were not, as a class, unusually cruel or unusually greedy. They were operating within a structure that rewarded the capture of productivity gains and imposed no cost on the displacement of workers. The structure, not the individuals, produced the outcome. The institutional construction that eventually altered the structure — trade unions, factory legislation, universal education — was the work of generations, achieved through political struggle that was neither inevitable nor pleasant nor fast enough for the people who bore the cost of the transition while the institutions were being built.
Hobsbawm's prediction, made in one of his final published works, that "social distribution and not growth would dominate the politics of the new millennium" acquires, in the context of the AI revolution, a prophetic quality that even Hobsbawm might not have anticipated. The growth is extraordinary. The distribution question is urgent. And the institutional mechanisms that might answer the distribution question are, at present, inadequate by every measure that Hobsbawm's historical analysis would recognize as meaningful.
The question is whether this generation will build the institutions before the displacement is complete — or whether, as in every previous transition Hobsbawm documented, the institutions will arrive a generation too late, purchased at a cost that was entirely avoidable and borne entirely by the people who could least afford it.
---
Hobsbawm learned from Marx, though his Marxism was never doctrinaire, that the first question to ask about any economic arrangement is not whether it is efficient but who benefits from it. Efficiency is a measure of the relationship between inputs and outputs. It says nothing about the distribution of the outputs. A factory that produces twice the cloth at half the cost is more efficient than the workshop it replaced. It is not, on that basis alone, better — because "better" is a distributional judgment that efficiency cannot make. Better for whom? Better by what measure? Better at what cost to the people who bore the transition?
These are the questions that Hobsbawm asked across his entire career, and the answers he documented were remarkably consistent across two centuries and multiple technological transitions. The first beneficiaries of a new technology are almost always its owners. The second beneficiaries are its consumers, who receive cheaper or better goods. The last beneficiaries — if they become beneficiaries at all — are the workers whose prior skills the technology has rendered obsolete. The timeline between the first beneficiaries and the last is measured in decades. The suffering that fills those decades is measured in individual lives.
The power loom is the clearest case because Hobsbawm documented it most thoroughly, but the pattern repeats with structural fidelity in every subsequent transition. The railway benefited railway companies and their investors, who captured the enormous returns of a new transportation monopoly. It benefited consumers, who received faster, cheaper transport. It devastated the canal system — the canal companies, the bargemen, the communities along the canal routes that depended on canal traffic for their livelihood. The devastation was not incidental. It was structural: the railway did not merely outcompete the canal. It destroyed the entire economic ecosystem that the canal had sustained. The bargemen did not become railway workers. They became unemployed, or they took work at lower wages in occupations that bore no relationship to the skills they had spent their lives developing.
The automobile followed the same pattern. It benefited automobile manufacturers and their investors. It benefited consumers. It devastated the horse-drawn economy — the farriers, the stable-keepers, the harness-makers, the coachmen, the fodder merchants, the vast infrastructure of animal-powered transportation that had sustained millions of livelihoods across centuries. The automobile did not merely replace the horse. It destroyed the world the horse had sustained.
In each case, the aggregate gains were enormous and the aggregate welfare of society, measured over a sufficiently long time horizon, improved. In each case, the immediate distribution of those gains was radically unequal. In each case, the broader distribution required institutional construction — legal frameworks, regulatory bodies, collective bargaining arrangements, welfare systems — that took decades to build and was achieved only through political struggle by the people who had been excluded from the initial distribution.
The AI revolution of 2025-2026 follows the same pattern with a precision that Hobsbawm's framework predicts, though the specific features of the current transition differ from its predecessors in ways that matter for the analysis.
The first beneficiaries are identifiable. The companies that build and deploy large language models — Anthropic, OpenAI, Google DeepMind, Meta, and the smaller firms that compete for position in an industry whose aggregate valuation is climbing toward the trillions — capture the primary gains. Their revenue models convert the aggregate productivity increase into corporate revenue through subscription fees, API pricing, and enterprise licensing. The investors who finance these companies capture the secondary gains through equity appreciation. The small number of workers whose skills complement the technology — the researchers who build the models, the engineers who maintain the infrastructure, the executives who direct the strategy — capture a tertiary share through compensation packages that reflect their scarcity value in an industry where the demand for their specific expertise outstrips the supply.
The second beneficiaries are the users who gain new capabilities. This is the population that Segal celebrates in The Orange Pill — the builders whose imagination-to-artifact ratio has collapsed, the engineers in Trivandrum who achieved a twenty-fold productivity multiplier, the designer who built complete features without prior coding experience, the solo entrepreneur who shipped a revenue-generating product in a year without writing a line of code by hand. The gains for this population are real, measurable, and genuinely transformative of what individuals can accomplish. Hobsbawm's framework does not deny these gains. It asks what they cost, and to whom.
The cost-bearers are the professionals whose expertise the technology commoditizes. A Hobsbawm-informed analysis identifies them with the same precision he brought to the framework knitters: they are the senior software engineers whose decade-long investment in deep architectural knowledge is being devalued by a tool that produces architecturally competent code without the investment. They are the specialized designers whose visual and interactive skills, built through years of practice and critique, are now accessible to anyone who can describe what they want in natural language. They are the translators, the technical writers, the legal researchers, the financial analysts, the radiologists — the entire class of knowledge workers whose economic value was predicated on the scarcity of their expertise, and whose expertise is becoming less scarce with every improvement in the models' capabilities.
The standard response to this analysis — the one that pervades the technology industry's discourse and appears, in sophisticated form, in Segal's book — is that the displaced workers will ascend. The mechanical work disappears; the judgment work remains and becomes more valuable. The framework knitters' grandchildren became factory managers, educators, engineers. The contemporary equivalent: the software engineer who loses the coding work gains the architectural work, the strategic work, the work of deciding what should be built rather than merely building it. Segal calls this "ascending friction" — the difficulty does not disappear but climbs to a higher cognitive floor.
Hobsbawm would examine this argument with characteristic precision and find it structurally familiar. It is the same argument that was made about every previous transition, and it has the same peculiar property: it is true in the aggregate and over the long term, and it is false for the specific individuals who bear the cost of the transition in the short and medium term.
The framework knitters did not become factory managers. Their grandchildren did — some of them. The handloom weavers did not become industrial designers. The bargemen did not become railway engineers. The coachmen did not become automobile mechanics. In each case, the ascending-friction argument described a real phenomenon — the creation of new, higher-level work that required new, higher-level skills — and simultaneously elided the biographical fact that the people who possessed the old skills were not, in general, the people who filled the new roles. The transition was generational, not individual. The generation that bore the cost did not capture the gain. Their children did, or their grandchildren did, if the institutional conditions were favorable enough for those children and grandchildren to access the education, the training, and the opportunities that the new economy required.
This is the biographical reality that the ascending-friction argument obscures when it is deployed without a distributional analysis. The argument is true as a description of the economy's trajectory. It is false as a description of the individual worker's experience. And the gap between the economy's trajectory and the individual's experience is the space in which the distribution question lives — the space that Hobsbawm spent his career insisting that historians, economists, and policymakers could not afford to ignore.
The author of The Orange Pill grasps this more clearly than most technology writers, and the passage in which he describes the senior engineer whose "years of deep knowledge about systems architecture" became "the judgment layer that directed the tool" is honest about the transformation's human texture. But Hobsbawm would push the analysis toward the question that individual stories cannot answer: what is the aggregate distributional outcome? How many senior engineers become judgment layers, and how many become redundant? What institutions exist to support the ones who become redundant, and are those institutions adequate?
These are not questions that can be answered by individual acts of wisdom, however commendable. They are structural questions that require structural answers — institutional mechanisms that redistribute the gains of the technology widely enough that the transition does not destroy the lives of the people who bear its costs. The Factory Acts, the trade unions, the welfare state — these were the structural answers to the Industrial Revolution's distribution question. They were not built by benevolent factory owners. They were built by political movements that had the power to compel redistribution. The AI transition will require structural answers of comparable ambition. Whether the political will to construct them exists — and whether it can be mobilized before the displacement is complete — is the question that Hobsbawm's framework places at the center of the analysis, where it belongs.
The most revealing detail in the archive of Luddite activity is not the violence. It is the restraint.
On the night of March 11, 1811, a group of framework knitters in the village of Arnold, just north of Nottingham, broke into a workshop and destroyed sixty-three stocking frames. The workshop next door, owned by a different manufacturer, contained frames of comparable value. Those frames were left untouched. The manufacturer whose frames were destroyed had been producing cut-up stockings on wide frames using unskilled labor — a practice that violated the established customs of the trade and undercut the wages of skilled workers who produced full-fashioned goods. The manufacturer whose frames were spared had continued to operate within the customary framework, paying fair wages for skilled work and producing goods of established quality.
This selectivity was not an exception. It was the rule. Hobsbawm's archival research, extended and deepened by E.P. Thompson in The Making of the English Working Class and by subsequent historians who followed the trail Hobsbawm opened, demonstrated that Luddite attacks across all three affected counties — Nottinghamshire, Yorkshire, and Lancashire — displayed a consistency of targeting that is explicable only as the product of intelligence, analysis, and collective deliberation. The attackers knew which manufacturers were offending. They knew which machines were being used for which purposes. They distinguished between the same model of frame operated by different employers for different ends. The distinction required local knowledge of extraordinary granularity — knowledge that could only have been produced by workers who were embedded in the trade, who understood its economics from the inside, and who had been watching the specific deployments that threatened their position with the diagnostic attention of people whose survival depended on getting the analysis right.
Hobsbawm drew from this evidence an analytical conclusion of considerable power: the precision of the targeting was itself the strongest argument against the standard narrative. Ignorant workers lashing out in blind fear do not distinguish between frames owned by fair employers and frames owned by exploitative ones. Enraged mobs do not spare the workshop next door. The selectivity implied analysis — a collective judgment about which deployments of the new technology violated the norms of fair dealing and which did not — and the execution of that judgment with a discipline that the British Army, deploying twelve thousand troops to suppress the movement, struggled to match.
The contemporary resistance to artificial intelligence displays a comparable precision, though the instruments of resistance are legal and cultural rather than physical. The resistance is not directed at AI in the abstract, in the way that the Luddites were not directed at machinery in the abstract. It is directed at specific deployments — specific uses of specific tools by specific companies under specific conditions that violate established norms of professional practice, creative attribution, and economic fair dealing.
The open-source licensing movement offers the clearest parallel. When developers began modifying software licenses to prohibit the use of their code as training data for large language models, the modification was targeted with a specificity that Hobsbawm would have recognized immediately. The developers did not withdraw their code from public access. They did not abandon the principle of open-source collaboration. They modified the terms of access to exclude a specific use — the commercial extraction of value from collectively produced code by companies that contributed nothing to the commons from which they extracted. The rest of the open-source ecosystem continued to function under its established norms. Only the specific deployment that violated those norms was targeted.
The precision reveals the analysis. The developers had examined the economics of large language model training and identified the specific mechanism by which their collective labor was being converted into corporate revenue without reciprocity. The AI companies ingested publicly available code repositories — millions of lines of code contributed by hundreds of thousands of developers under licenses that assumed collaborative use — and used that code to train models that generated billions of dollars in subscription and licensing revenue. The developers who wrote the training data received nothing: no compensation, no attribution, no share of the revenue their work had made possible. The licensing modification was a response to this specific extraction — collective bargaining by copyright, targeted at the specific deployment that violated the established norms of the commons.
The Hollywood writers' strike of 2023 displayed the same analytical precision. The Writers Guild of America did not demand the prohibition of AI tools in the entertainment industry. The Guild's demands were specific: that AI-generated material could not be used as source material that writers would be asked to rewrite at reduced compensation, that writers would not be required to use AI tools as a condition of employment, and that the use of writers' existing work to train AI models would require negotiation and compensation. Each demand addressed a specific deployment mechanism by which the studios could use AI to reduce writers' compensation, credit, and professional standing. The targeting was as precise as the framework knitters' distinction between fair employers and exploitative ones — and it emerged from the same analytical process: workers embedded in the trade, who understood its economics from the inside, identified the specific mechanisms of extraction and directed their resistance at those mechanisms rather than at the technology as such.
The visual artists who have filed copyright claims against AI image generators display comparable specificity. The claims do not assert that AI-generated images should not exist. They assert that AI-generated images that were trained on copyrighted work without consent or compensation constitute a specific form of extraction — the appropriation of artistic labor through a mechanism that the existing copyright framework was not designed to address. The targeting is directed at the gap between the technology's capability and the legal framework's coverage — a gap that the AI companies exploit, and that the artists, lacking the political power to close through legislation, are attempting to close through litigation.
In each case, the contemporary resistance mirrors the Luddite pattern that Hobsbawm identified: the targeting is precise, the analysis is specific, and the resistance is directed not at the technology but at the deployment. This precision matters because it demolishes the narrative that frames resistance as irrational technophobia — the same narrative that Hobsbawm demolished for the historical Luddites and that subsequent generations of commentators reconstructed, in slightly different vocabulary, for each new technological transition.
Hobsbawm understood that the narrative of irrational resistance served a political function that was independent of its accuracy. If the Luddites were merely afraid of machines, then their resistance required no political response — only reassurance, education, the passage of time. If the Luddites were conducting a rational analysis of who benefited from specific deployments of specific machines and targeting their resistance accordingly, then their resistance demanded a political response: an examination of the distribution question, a negotiation over the terms of deployment, the construction of institutional mechanisms that ensured the gains of the new technology were not captured entirely by its owners.
The same political function is served by the contemporary narrative of AI resistance as technophobia. If the developers, writers, artists, and engineers who resist specific deployments of AI are merely Luddites in the pejorative sense — afraid, backward, unable to adapt — then their resistance requires no political response. Retrain them. Reassure them. Wait for them to catch up. But if they are conducting a rational analysis of who benefits from specific deployments and directing their resistance at the mechanisms of extraction, then the political response is different in kind: it requires examining the distribution, negotiating the terms, and constructing the institutions that the market, left to its own devices, will not build.
The precision of the targeting is the evidence that determines which narrative is accurate. And the evidence, in the contemporary case as in the historical one, points decisively toward rational analysis rather than irrational fear. The developers who modified their licenses understood the economics of model training. The writers who struck understood the economics of script production. The artists who filed claims understood the economics of image generation. The senior engineers who withdrew from the industry understood the economics of their own displacement. In each case, the understanding preceded and produced the resistance. The resistance was not a reaction to incomprehension. It was a product of comprehension — comprehension that was, in many cases, more precise than that of the enthusiasts who celebrated the technology without examining its distributional consequences.
Segal writes in The Orange Pill that the Luddites "could not see what would grow in the space the machines opened." Hobsbawm would observe that the framework knitters could see with painful clarity what was dying in the space the machines were closing, and that the inability to see what would eventually grow does not invalidate the analysis of what was being destroyed. The framework knitters' inability to foresee the Factory Acts, universal education, and the welfare state — institutions that would not be built for decades — does not make their analysis of the immediate destruction any less accurate. Similarly, the contemporary workers' inability to foresee the institutional responses to AI that may eventually redistribute its gains does not make their analysis of the immediate extraction any less precise.
The targeting is the proof. It has always been the proof. Hobsbawm saw it in the archives of 1811. It is visible in the licensing disputes, the labor actions, the copyright claims, and the professional withdrawals of 2025 and 2026. The framework knitters broke specific frames. The contemporary resisters target specific deployments. The specificity reveals not fear but analysis — not incomprehension but a comprehension so thorough that it can identify, with diagnostic precision, the exact mechanisms by which a new technology is being deployed to concentrate gains and distribute costs.
Dismissing this precision as technophobia is not an intellectual judgment. It is a political act — the same political act that Hobsbawm identified in the standard narrative of the historical Luddites, and that serves the same political function: the conversion of a distributional question into a psychological diagnosis, so that the distributional question need not be addressed.
The question will be addressed regardless. The only variable is whether it will be addressed proactively, through institutional construction and political negotiation, or reactively, through the accumulation of grievances that history shows will eventually find expression — in whatever channel remains available when the others have been closed.
---
The forms of resistance to technological displacement have always been shaped by the channels available for their expression. When legal channels for negotiation exist, resistance takes legal form: collective bargaining, regulatory advocacy, legislative lobbying. When legal channels are inadequate or closed, resistance finds other forms — some dramatic, some quiet, all diagnostic of the relationship between the displaced and the institutions that are supposed to represent them.
Hobsbawm catalogued the forms of pre-industrial labor resistance with the exhaustiveness of a naturalist classifying species. Machine-breaking was the most visible form, but it was not the only one. There were food riots — collective actions to enforce customary prices when merchants attempted to sell grain above the level that communities considered just. There were turnouts — organized work stoppages that predated the formal trade union. There were arson campaigns against the property of employers who violated established norms. There were anonymous letters — threatening communications, often signed with pseudonyms like "Ned Ludd" or "Captain Swing," that functioned as collective ultimatums delivered without identifiable authorship. Each form was adapted to the specific circumstances of the community that employed it, the specific grievance that motivated it, and the specific channels that were open or closed at the time.
Thompson extended this analysis in his concept of the "moral economy" — the framework of customary norms and mutual obligations within which pre-industrial communities judged economic arrangements. Food riots, Thompson demonstrated, were not eruptions of hunger-driven chaos. They were disciplined enforcement actions directed at merchants who violated the community's sense of just price. The crowds did not loot indiscriminately. They seized grain, sold it at the customary price, and returned the proceeds to the merchant. The action enforced the moral economy against the encroachment of the market economy. It was, in Hobsbawm's language, collective bargaining by riot — negotiation conducted through the only channel available.
The taxonomy of contemporary resistance to AI follows a recognizable pattern, though the forms have evolved with the available channels. Four principal forms are visible in the landscape of 2025 and 2026, each with its historical precedent and each revealing something about the relationship between the displaced and the institutions that are supposed to represent them.
The first form is withdrawal. Senior software engineers, the ones Segal describes as "running for the woods," are leaving the technology industry. Some are retiring early. Some are moving to adjacent fields where AI's impact is less immediate. Some are relocating to lower-cost areas, reducing their expenses to match the reduced income they anticipate in a market that no longer values their expertise at its former level. This withdrawal is not flight in the sense of panic. It is a calculated response to a market signal — the recognition that the return on decades of specialized investment is declining, and that the rational response to a declining asset is to cut losses rather than to continue investing.
Hobsbawm would have recognized this pattern from the handloom weavers of the 1820s and 1830s, who retreated to rural areas where the cost of living was lower and where the factory system had not yet penetrated. The retreat sustained them for a time — some handloom weavers continued to practice their trade into the 1850s, serving niche markets that valued the quality of handwoven cloth. But the retreat was a holding action, not a solution. The factory system eventually reached the countryside. The niche markets contracted. The handloom weavers' children did not become handloom weavers.
The contemporary withdrawal displays the same structural limitations. The engineers who leave the industry preserve their dignity and their savings. They do not preserve their profession. Their departure removes their expertise from the ecosystem — expertise that, as Hobsbawm's analysis of social infrastructure would predict, cannot be replaced by junior developers armed with AI tools, however productive those tools may be. The cost of the withdrawal will be felt not by the withdrawing engineers but by the systems they leave behind — systems whose long-term maintainability depends on the kind of deep architectural understanding that withdrawal removes from the pool of available knowledge.
The second form is collective bargaining by copyright. The open-source licensing modifications, the copyright claims by visual artists, the contractual demands of the Writers Guild — each uses the legal framework of intellectual property to impose costs on AI deployments that extract value without compensation. This form of resistance is available because copyright law, unlike the Combination Acts that suppressed the Luddites' organizing, provides a legal channel for asserting ownership claims. The channel is imperfect — copyright law was not designed for the specific extraction mechanisms that AI training employs, and the legal questions are genuinely novel — but it is open, and the resistance that flows through it is producing real effects: settlements, licensing agreements, contractual protections that did not exist before the resistance created the pressure for them.
The third form is collective bargaining by politics — the organized effort to produce regulatory frameworks that govern the terms of AI deployment. The EU AI Act, the American executive orders, the emerging frameworks in Singapore, Brazil, Japan, and elsewhere represent the institutional response that Hobsbawm's historical analysis identifies as the eventual (and always delayed) consequence of technological displacement. The regulatory efforts are moving at the pace of democratic deliberation — slowly, carefully, comprehensively — and the gap between the speed of the regulation and the speed of the transformation it seeks to govern is wide and widening. But the efforts are real, and they represent the construction of institutional channels that, if adequately designed and adequately enforced, could redirect the distribution of AI's gains in ways that the market alone will not.
The fourth form is voice — the public articulation of the human costs of AI adoption by the people who bear them and by the witnesses who document their experience. Brian Merchant's Blood in the Machine, published in 2023 and described by Wired as "the most important book to read about the AI boom," builds directly on Hobsbawm's scholarship, connecting the Luddite movement to the contemporary resistance against Big Tech with a historical specificity that popular technology writing rarely achieves. The journalists, scholars, and displaced professionals who document the costs of AI adoption perform the function that the industrial novelists performed in the 1840s — making visible what the beneficiaries of the transition prefer not to see.
Segal's own book performs this function with an honesty that Hobsbawm's framework would credit. The Orange Pill does not hide the costs. The engineers who feel vertigo, the professionals who face obsolescence, the parents who lie awake wondering what world their children will inherit — these are not peripheral figures in Segal's narrative. They are central to it. The book's willingness to hold the gains and the costs in the same account, without resolving the tension between them, is its most valuable analytical feature. The voice that says "something precious is dying" alongside the voice that says "something extraordinary is being born" is the voice that Hobsbawm would recognize as historically literate — the voice that understands that both statements are true and that the tension between them is not a flaw in the analysis but the analysis itself.
Each form of contemporary resistance corresponds to a historical precedent, and each is shaped by the channels available. Withdrawal echoes the rural retreat of displaced artisans. Copyright bargaining echoes the legal strategies that labor movements developed when direct action was suppressed. Political regulation echoes the factory legislation that eventually constrained the worst excesses of industrial capitalism. Voice echoes the witness literature that made the costs of industrialization visible to a public that preferred comfort to comprehension.
The question that Hobsbawm's framework raises is not whether these forms of resistance are adequate — they are not, by historical standards, adequate to the scale of the transformation — but whether they will develop into the institutional forms that the scale requires. The Luddites' machine-breaking did not produce institutional change directly. It produced repression: the Frame Breaking Act, the military deployments, the executions and transportations that destroyed the movement as an organized force. But the energy that the movement expressed — the conviction that the terms of technological deployment were unjust and the determination to challenge those terms — did not disappear with the movement's suppression. It was absorbed into the reform movements of the 1830s and 1840s, the trade unions of the 1850s and 1860s, the political parties of the late nineteenth century. The institutional construction that eventually redistributed the Industrial Revolution's gains was built, in part, on the foundations that the Luddites laid — not because they succeeded, but because their failure demonstrated the necessity of institutional channels adequate to the scale of the problem.
The contemporary resistance to AI is in its earliest phase — comparable, in Hobsbawm's periodization, to the machine-breaking that preceded the construction of durable institutions. Whether the copyright claims, the regulatory frameworks, the professional organizing, and the public voice that characterize the current resistance will develop into institutional forms of comparable durability is the question that will determine the AI transition's distributional outcome. History provides the pattern. The present provides the test.
---
A framework knitter's expertise was not a private possession. It was the visible surface of an invisible structure — a social system built over generations that produced, maintained, and transmitted the knowledge on which the trade depended. The apprenticeship lasted seven years: seven years during which a young person learned not only the mechanical operation of the stocking frame but the properties of different fibers, the relationship between tension and drape, the judgment calls that separated competent production from excellent production, and the professional norms — about quality, about fair dealing, about mutual obligation between masters and apprentices — that constituted the trade's moral and practical infrastructure.
The seven-year investment was not made by the apprentice alone. It was made by the master who trained him, by the guild that regulated the terms of apprenticeship, by the community that sustained the guild, and by the customers who paid the premium that quality production commanded and that, in turn, financed the entire system of training and quality maintenance. The investment was social in the precise sense that it required the coordinated participation of multiple parties over extended time, that no single party could have made it alone, and that the value it produced — skilled workers, quality goods, professional standards, community identity — was distributed across the entire network of participants rather than captured by any one of them.
When wide frames producing cheap "cut-up" stockings destroyed the market for the full-fashioned hosiery of the traditional trade, the displacement did not merely eliminate the jobs of individual framework knitters. It destroyed the entire institutional ecology that the social investment had built. The apprenticeship system became economically unviable — why would a master invest seven years in training an apprentice when the trade the apprentice was being trained for would not sustain a livelihood by the time the apprenticeship was complete? The guilds that had regulated the trade lost their economic foundation — the fees that sustained them depended on a trade that was contracting. The quality standards that the guilds had maintained became irrelevant — the market no longer rewarded quality when machine-produced goods were cheaper. The community identity that had been built around the trade dissolved as the trade itself dissolved.
Hobsbawm documented this cascading destruction with the specificity of a historian who understood that aggregate statistics conceal the texture of what they measure. The decline of the framework knitting trade was not a line on a graph. It was the dissolution of communities — specific communities, in specific places, with specific relationships and specific traditions that had been sustained for generations by the economic foundation that the trade provided. When the foundation was removed, the communities did not adapt. They disintegrated, over years and decades, in a process that was invisible in the aggregate data but devastating at the level of individual lives.
The contemporary parallel operates at a different scale but follows the same structural logic. The software engineering profession has built, over approximately four decades, a social infrastructure of considerable depth and complexity. The infrastructure includes formal institutions — university computer science programs, professional certifications, industry conferences, standards bodies — and informal ones — mentorship networks, open-source communities, code review practices, career ladders that define the expected trajectory from junior developer to senior engineer to technical lead to architect. These institutions did not arise spontaneously. They were built through sustained collective investment by the people who participated in them — investment of time, expertise, reputation, and the kind of institutional knowledge that accumulates only through years of engaged practice.
The mentorship relationship is perhaps the most fragile component of this infrastructure and the one most directly threatened by AI's intervention in the workflow. A senior engineer mentoring a junior engineer transmits not only technical knowledge but professional judgment — the accumulated wisdom about what works and what breaks, about when to trust the elegant solution and when to suspect it, about how to read a codebase the way a diagnostician reads symptoms, finding the pathology that the surface presentation conceals. This transmission requires proximity: the mentor and the junior working on the same problems, encountering the same failures, discussing the same trade-offs. The transmission is slow. It is inefficient by any metric that measures output per hour. And it is the mechanism by which the profession reproduces its deepest capabilities across generations.
When AI tools allow junior developers to produce competent output without the mentorship that would have developed their judgment, the immediate effect is positive: more output, faster delivery, expanded capability. The downstream effect, visible only over years, is the attenuation of the transmission mechanism. The junior developer who prompted their way to a working solution has a working solution. They do not have the understanding that would have developed through the struggle of arriving at the solution without assistance — the debugging, the architectural reasoning, the encounter with failure that deposits layers of intuition the way sedimentation deposits geological strata.
Segal describes this dynamic with a specificity that Hobsbawm's framework would credit. The engineer in Trivandrum who lost "the ten minutes of formative struggle buried in four hours of plumbing" is a case study in the mechanism by which efficiency destroys the social investment in depth. The four hours of plumbing were tedious. They were also the medium through which architectural intuition was built — not through the tedium itself but through the rare moments, buried unpredictably in the tedium, when something unexpected happened and forced the engineer to understand a connection she had not previously seen. The AI tool removed the tedium and the moments simultaneously. The tedium was not missed. The moments were not noticed until their absence produced a deficit — a deficit in confidence, in architectural judgment, in the specific kind of understanding that only friction produces.
The code review process illustrates the social dimension of the infrastructure's fragility. Code review is not merely a quality-assurance mechanism. It is a social practice through which professional standards are maintained, transmitted, and negotiated. When a senior engineer reviews a junior engineer's code, the review serves multiple functions simultaneously: it catches errors, it teaches the junior engineer about the senior's standards and reasoning, it maintains the shared understanding of what constitutes acceptable code within the team, and it reproduces the professional norms that distinguish competent engineering from mere functionality. The review is slow, it is expensive in terms of senior engineer time, and it is irreplaceable — because the functions it serves cannot be disaggregated and assigned to separate mechanisms without destroying the integration that makes the practice valuable.
When AI-generated code enters the review process, the dynamics shift in ways that are subtle but structurally significant. AI-generated code is syntactically competent, logically consistent, and stylistically uniform. It does not contain the idiosyncratic errors that characterize human-written code and that, paradoxically, are the most educationally valuable targets of the review process. A junior engineer's characteristic mistakes — the misunderstood abstraction, the naively optimized function, the architectural choice that reveals a gap in understanding — are diagnostic opportunities. They reveal what the junior engineer does not yet understand, and the review conversation that addresses them is the mechanism through which understanding develops. AI-generated code does not produce these diagnostic opportunities. It produces code that works, that follows established patterns, that gives the reviewer nothing to correct because it was generated by a system that has already internalized the patterns the reviewer would enforce.
The review process, conducted on AI-generated code, becomes a verification exercise rather than an educational one. The senior engineer confirms that the code works. The confirmation takes less time. The team moves faster. And the mechanism by which professional standards were transmitted from one generation to the next — the slow, friction-rich, educationally dense conversation between a senior engineer who has seen the failure modes and a junior engineer who has not yet encountered them — atrophies from disuse.
Hobsbawm would recognize this pattern from every transition he documented. The social infrastructure of a skilled trade is always the most fragile casualty of technological displacement, because the infrastructure is invisible in the metrics that measure the transition's impact. Output is measured. Revenue is measured. Headcount is measured. Time to delivery is measured. The quality of mentorship, the density of professional community, the robustness of knowledge transmission, the resilience of shared standards — these are not measured, because they are not measurable in the same way, and because the people who would measure them are the same people whose attention is consumed by the measurable metrics.
The framework knitters' guilds did not disappear in a single event. They contracted over decades, as the economic foundation that sustained them eroded and as the young people who would have entered the trade chose other paths. The contraction was invisible in the aggregate productivity statistics, which showed only the rising output of the factory system. The loss — of training capacity, of quality standards, of community identity, of the specific kind of knowledge that could only be built through years of embedded practice — appeared in the statistics as absence, which is to say it did not appear at all.
The same invisibility characterizes the contemporary erosion. The mentorship that does not happen, the code review that becomes perfunctory, the career ladder that shortens as the middle rungs are automated away, the professional community that fragments as the shared experience of struggle is replaced by the shared experience of prompting — none of these losses will appear in the metrics that technology companies use to assess the AI transition's impact. The metrics will show rising productivity, faster delivery, expanded capability. The loss will appear later, when the systems built without deep understanding begin to fail in ways that the people maintaining them do not understand, because the transmission mechanism that would have developed their understanding was optimized away years earlier.
Hobsbawm's insistence on documenting what the metrics did not measure — the specific, granular, biographical reality of what displacement cost the people who experienced it — is the analytical commitment that the AI transition most urgently requires. The metrics are not wrong. They measure real phenomena. But they measure the phenomena that matter to the people who capture the gains, not the phenomena that matter to the people who bear the costs. The social investment that displacement destroys is always in the second category, and it is always invisible until its absence produces consequences that the first category cannot explain.
---
In 1971, E.P. Thompson published an essay that would reshape the study of pre-industrial protest. "The Moral Economy of the English Crowd in the Eighteenth Century" demonstrated that the food riots that erupted periodically across England were not hunger-driven spasms but disciplined enforcement actions conducted within a framework of shared norms about just prices, fair dealing, and the mutual obligations between merchants, millers, bakers, and their communities. The pattern of the riot — grain seized, sold at the customary price, proceeds returned to the merchant — enforced the community's understanding of fair exchange against the merchant's attempt to maximize profit by selling at whatever price the market would bear.
Thompson's concept of the "moral economy" — the unwritten but deeply felt framework of norms, obligations, and expectations within which a community judges economic arrangements — was built on and extended Hobsbawm's earlier work on pre-industrial labor protest. Where Hobsbawm had demonstrated the rationality of machine-breaking as collective bargaining, Thompson demonstrated the rationality of food riots as price enforcement. Both scholars insisted on the same analytical point: that pre-industrial workers possessed a coherent analysis of their economic situation, that their actions followed logically from that analysis, and that dismissing those actions as irrational served the political interests of the people who benefited from the arrangements that the workers were challenging.
The moral economy is not a contract. It is not written down, and its terms are not enforceable through legal mechanisms. It is an implicit consensus about what constitutes fair dealing — a consensus that develops over years of shared practice and that functions, when the consensus holds, as a regulatory framework more effective than law, because it is enforced not by external authority but by the community's collective judgment. The moral economy defines the boundaries of acceptable behavior: what a fair wage looks like, what a fair price looks like, what obligations employers owe to workers and workers owe to employers, what constitutes quality and what constitutes exploitation. When a market actor violates the moral economy — when a merchant charges above the just price, when a manufacturer produces shoddy goods, when an employer drives wages below the level of subsistence — the community responds. The response may be formal or informal, peaceful or violent, but it is always structured by the moral economy's norms rather than by raw economic interest.
The software engineering profession possesses a moral economy of considerable complexity, though it is rarely described in those terms. The moral economy of the coder defines, through implicit consensus rather than formal contract, the norms of professional practice that the community expects its members to observe. These norms include expectations about compensation — that expertise developed over years of practice should command a premium, that the premium should reflect the difficulty and scarcity of the expertise, that the erosion of the premium through the deployment of cheaper alternatives is a violation of the implicit understanding between the profession and the market it serves.

They include expectations about quality — that code should not merely function but should be maintainable, readable, and architecturally sound, that the pressure to ship fast should not override the obligation to ship well, that the distinction between code that works and code that is good is a moral distinction as well as a technical one.

They include expectations about attribution — that the person who wrote the code should be credited, that the person who designed the architecture should be recognized, that the contribution of expertise to a collective product should be acknowledged rather than rendered invisible. And they include expectations about mentorship — that senior engineers have an obligation to develop junior engineers, that this obligation is not merely a corporate investment in human capital but a professional duty, that the transmission of knowledge across generations is part of what it means to be a member of the profession rather than merely an employee of a company.
These norms are not universal. They vary across organizations, across sub-disciplines, across national cultures of software development. They are not always observed. Like the moral economy of the pre-industrial crowd, the coder's moral economy is an ideal rather than a description — a set of norms against which actual practice is judged, often found wanting, and sometimes enforced through the collective judgment of the professional community. A company that burns out its engineers, ships code it knows to be defective, or fails to credit the people who built its products is not violating a law. It is violating the moral economy. The community's response — attrition, public criticism, the quiet withdrawal of the best practitioners to organizations that observe the norms — is the enforcement mechanism.
AI disrupts this moral economy in multiple dimensions simultaneously, and the disruption explains, in Thompson's and Hobsbawm's framework, why the resistance to AI is so deeply felt by so many practitioners who are not, by any reasonable measure, afraid of technology.
The compensation norm is violated when AI tools allow unskilled or lightly skilled users to produce output that was previously the province of deeply trained professionals. The violation is not that the output exists — the moral economy does not claim a monopoly on production — but that the output devalues the expertise without compensating the experts whose work built the foundation on which the AI operates. The large language models that generate code were trained on the accumulated output of millions of developers. The developers are not compensated for this use of their work. The companies that deploy the trained models capture the revenue. The market for the developers' expertise contracts as the models improve. The moral economy — which holds that expertise should command a premium and that the erosion of the premium through extraction is unjust — is violated at every point in the chain.
The quality norm is violated when the pressure to ship fast, amplified by AI's capacity to produce code at unprecedented speed, overrides the obligation to ship well. The moral economy of the coder holds that quality is a professional obligation — not merely a business consideration but a matter of craft integrity. Code that works but is unmaintainable, architecturally unsound, or incomprehensible to the next developer who must work with it violates this norm. AI-generated code frequently meets the functional standard while violating the craft standard — it works, but no human understands why it works in the specific way it does, and the architectural choices embedded in the generated code may not be the choices that a human engineer, reasoning about the system's long-term maintainability, would have made. The moral economy says this matters. The market says it does not, as long as the code ships and the tests pass. The tension between the moral economy and the market is the tension that the practitioners feel, and that the resistance expresses.
The attribution norm is violated when AI-generated output is presented without disclosure, when the contribution of the tool is rendered invisible, when the line between human and machine production is blurred in ways that obscure who actually did what. Segal addresses this directly in The Orange Pill — the book is transparent about its collaborative authorship, and the transparency is itself an argument about the moral economy of creative production. But the transparency is exceptional. The vast majority of AI-assisted work is presented without disclosure, and the non-disclosure serves the interests of the people who benefit from the ambiguity: the employers who can claim the output as their employees' work, the professionals who can claim the output as their own, the companies that can present AI-generated content as human-produced without the stigma that still, in many contexts, attaches to machine production.
The mentorship norm is violated when AI tools substitute for the human relationship through which professional knowledge is transmitted across generations. Thompson's moral economy depended on the transmission of norms — the older generation teaching the younger not only how to produce but how to judge production, what standards to apply, what obligations to observe. The mentorship relationship in software engineering serves the same function: it transmits not only technical skill but professional judgment, the capacity to evaluate code not merely for functionality but for quality, maintainability, and architectural integrity. When AI tools allow junior developers to bypass the mentorship relationship, the immediate effect is efficiency. The downstream effect is the attenuation of the transmission mechanism, and with it, the erosion of the professional norms that the mentorship was transmitting.
Hobsbawm and Thompson would have recognized the contemporary resistance to AI as, in substantial part, a defense of the moral economy against the encroachment of a market logic that recognizes no value beyond efficiency. The defenders are not technophobes. They are practitioners who understand, from inside the profession, that the norms being violated — compensation for expertise, commitment to quality, attribution of contribution, obligation of mentorship — are not arbitrary preferences but the structural foundations on which professional practice is built. The violation of these norms does not merely offend the practitioners' sense of fairness. It undermines the institutional conditions on which the profession's long-term capacity to produce quality work depends.
The moral economy of the framework knitters did not survive mechanization. The norms of fair wages, quality production, and mutual obligation between masters and apprentices were replaced by the market economy of the factory system, in which the only standard was profitability and the only regulation was supply and demand. The replacement produced cheaper goods and higher aggregate output. It also produced child labor, sixteen-hour shifts, industrial disease, and a generation of workers whose lives were consumed by a system that recognized no obligation to them beyond the payment of the lowest wage the market would bear.
The moral economy of the coder may or may not survive the AI transition. The outcome depends not on the technology — which is, as always, indifferent to the norms of the communities it disrupts — but on the institutional structures that the profession and the broader society build to protect the norms that matter. The Factory Acts eventually imposed legal constraints on the factory system's worst violations of the moral economy. The trade unions negotiated terms that restored, in modified form, some of the obligations that the moral economy had previously enforced through custom. The construction was slow, contested, and incomplete. But it was real, and it demonstrated that the moral economy, while vulnerable to market disruption, is not helpless against it — provided that the communities whose norms are threatened possess the organizational capacity and the political power to defend them.
Whether the software engineering community possesses that capacity and that power is an open question. The profession is large but organizationally diffuse. It lacks the union density that gave industrial workers collective bargaining power. Its practitioners are distributed across jurisdictions, across companies, across sub-disciplines that share professional identity but not organizational structure. The channels available for collective action — licensing, litigation, regulation, voice — are wider than those available to the framework knitters but narrower than those available to organized industrial labor at its peak.
The moral economy is not a nostalgic fantasy. It is a description of the norms that make professional practice possible — norms that the market, left to its own devices, will erode whenever erosion is profitable. The defense of those norms is not resistance to progress. It is the insistence that progress, to deserve the name, must meet standards that the market alone cannot set.
Every technological revolution produces two histories. The first is written while the revolution is underway, by the people who benefit from it, in the vocabulary of progress, innovation, and the forward march of human capability. The second is written afterward, sometimes decades afterward, by historians who have access to the evidence that the first history omitted — the testimony of the displaced, the records of the communities that dissolved, the statistical trails of wages that collapsed and livelihoods that disappeared. The first history celebrates. The second corrects. The gap between them is not an accident. It is a structural feature of how technological transitions are narrated, and understanding the structure is essential to understanding why the AI discourse of 2025 and 2026 takes the shape it does.
Hobsbawm spent his career writing the second history. His four-volume narrative of the modern world — from the dual revolution of the late eighteenth century through the catastrophes of the twentieth — was built on the conviction that the standard narrative of progress, the one taught in schools and repeated in boardrooms and assumed in policy documents, systematically excluded the experience of the people who bore the costs of the transformations it celebrated. The exclusion was not a conspiracy. It did not require coordination or malice. It was the natural product of a narrative structure in which the aggregate outcome — rising productivity, falling prices, expanding capability — was treated as the whole story, and the distributional outcome — who captured the gains, who was destroyed in the process — was treated as a footnote, a regrettable but temporary cost of the progress that the aggregate measured.
The erasure operates through several mechanisms that Hobsbawm identified across multiple transitions and that are visible, with structural fidelity, in the contemporary AI discourse.
The first mechanism is temporal displacement. The suffering of the displaced is acknowledged but placed in a different temporal frame than the gains of the beneficiaries. The gains are measured in the present: productivity today, revenue this quarter, capability this year. The suffering is placed in the past — the costs of the transition, regrettable but behind us — or in the future — the displaced will eventually benefit, through retraining, adaptation, the creation of new categories of work. The temporal displacement allows the triumphalist to acknowledge the cost without confronting it, because the cost is always somewhere else in time: already over, or not yet arrived, but never here, never now, never simultaneous with the gains being celebrated.
Segal's *The Orange Pill* is more honest about this simultaneity than most technology writing. The engineer who oscillates between excitement and terror, the parent who lies awake, the professional who cannot tell whether the ground is rising or falling — these are descriptions of people experiencing the gain and the cost at the same time, in the same body, without the comfort of temporal displacement. But even Segal's narrative tends, in its structural arc, toward resolution — toward the sunrise on the roof of the tower, toward the assertion that the trajectory bends toward expansion, toward the historically grounded but distributionally incomplete claim that previous transitions eventually produced broader prosperity. The "eventually" is doing enormous work in that sentence, and the work it does is precisely the temporal displacement that Hobsbawm spent his career exposing.
The framework knitters of 1811 did not experience "eventually." They experienced the collapse of their wages, the dissolution of their communities, and the destruction of the social infrastructure that had sustained their families and their craft. Their grandchildren may have experienced broader prosperity — some of them, in some places, under some institutional conditions. But the framework knitters themselves did not. And the narrative that subsumes their experience under the category of "the transition was eventually beneficial" performs a specific political operation: it converts a distributional injustice that demands institutional response into a temporal inconvenience that demands only patience.
The second mechanism of erasure is the conversion of structural problems into individual diagnoses. When a skilled worker is displaced by a new technology, the structural explanation is that the technology has altered the distribution of economic value in ways that devalue the worker's expertise. The individual diagnosis is that the worker has failed to adapt — failed to retrain, failed to update skills, failed to recognize the direction of the market and position accordingly. The structural explanation demands institutional response: regulation, redistribution, collective support. The individual diagnosis demands individual response: personal responsibility, lifelong learning, resilience. The shift from structural to individual is not merely an analytical preference. It is a political operation that relieves institutions of the obligation to act by transferring that obligation to the individuals who are least equipped to bear it.
The contemporary AI discourse is saturated with individual diagnoses. The engineer who resists AI tools has failed to adapt. The professional who expresses anxiety about displacement lacks resilience. The parent who worries about a child's future is insufficiently optimistic. Each diagnosis locates the problem in the individual rather than in the structure, and each thereby converts a political question — how should the gains and costs of AI be distributed? — into a therapeutic one — how can you personally become more adaptable?
Hobsbawm would have recognized this conversion as a sophisticated version of the same operation that was performed on the framework knitters. The standard narrative did not deny that the framework knitters suffered. It attributed their suffering to their failure to adapt rather than to the structure that distributed the gains of mechanization to factory owners and the costs to the workers whose skills the machines replaced. The attribution is politically convenient because it absolves the beneficiaries of responsibility for the distribution. If the displaced failed to adapt, then the displacement is not the beneficiaries' problem. It is the displaced workers' problem, individually, one by one, and the solution is individual — each displaced worker must find their own path to the new economy, without collective support and without institutional intervention.
The third mechanism is the narrative of inevitability. The technology was coming regardless. Resistance is futile. The transition is as natural and unstoppable as a river — a metaphor that Segal deploys explicitly in *The Orange Pill*, and that Hobsbawm would have examined with the suspicion he brought to all claims of historical inevitability. The inevitability narrative serves the same political function as the temporal displacement and the individual diagnosis: it removes the distribution question from the agenda. If the technology is inevitable, then the question of how it is deployed — who captures the gains, who bears the costs, what institutional structures govern the distribution — is not a question at all. It is a fait accompli. The only available response is adaptation, and adaptation is, once again, an individual rather than a collective matter.
Hobsbawm was professionally allergic to inevitability. The historian who documents what actually happened, in specific places, at specific times, to specific people, develops an instinctive resistance to narratives that treat outcomes as foreordained. The Industrial Revolution was not inevitable. Mechanization was not inevitable. The specific form that industrialization took — the factory system, the displacement of artisans, the concentration of gains in the hands of factory owners — was the product of specific political choices, specific institutional arrangements, specific distributions of power. Different choices would have produced different outcomes. The technology would still have arrived, but the terms of its deployment, the distribution of its gains, and the institutional structures that governed its integration into the social fabric were all contingent — products of human decision rather than natural law.
The same contingency applies to the AI transition. The technology exists. Its capabilities are real. The direction of improvement is clear. None of this is in dispute. What is contingent — what is the product of human decision rather than technological determinism — is the distribution. Who captures the productivity gains? What institutional mechanisms redistribute those gains? What protections exist for the displaced? What obligations do the beneficiaries owe to the communities whose expertise built the foundation on which the technology operates? These are questions that the inevitability narrative forecloses, and foreclosing them serves the interests of the people who benefit from the current distribution — the same people whose interests were served by the inevitability narrative of the Industrial Revolution, and the railway revolution, and the automobile revolution, and every other transition in which the narrative of progress was deployed to close the conversation about distribution before it had properly begun.
Hobsbawm made the displaced visible not as an act of sentimentality but as an act of analytical necessity. The aggregate statistics that measured the Industrial Revolution's impact — rising output, falling prices, growing national wealth — told a true story. They did not tell the whole story. The whole story included the framework knitters whose wages collapsed, the handloom weavers who starved, the children who worked fourteen-hour shifts in the mills, the communities that dissolved when the trade that sustained them was destroyed. These were not footnotes to the story of progress. They were the distributional reality that the story of progress concealed.
The AI transition's distributional reality is being concealed by the same mechanisms, and the concealment is more effective now than it was in the nineteenth century, for a reason that Hobsbawm would have appreciated: the displaced, in the contemporary case, are knowledge workers — educated, articulate, economically comfortable relative to the framework knitters — and their displacement does not produce the visible, photogenic suffering that galvanized reform movements in the industrial era. A software engineer who loses professional status and economic security does not look like a child in a mill. The suffering is internal, invisible, distributed across thousands of individual experiences that do not aggregate into a compelling visual narrative. The quietness of the suffering makes it easier to erase, and the erasure is more complete because the displaced themselves, having internalized the individual diagnosis, are reluctant to name their experience as structural rather than personal.
Segal's *The Orange Pill* resists the erasure with a determination that Hobsbawm's framework would credit. The book's willingness to document the vertigo, the loss, the ambivalence — to hold the gains and the costs in the same account without allowing the gains to subsume the costs — is an act of analytical integrity that most technology writing does not attempt. Hobsbawm would acknowledge this and then push further: visibility is necessary. It is not sufficient. Making the displaced visible creates the conditions for political action. It does not constitute political action. The distributional question — who benefits, who bears the costs, what institutions govern the allocation — remains unanswered until the political will to answer it is organized, mobilized, and applied with sufficient force to alter the structure.
The triumphalists erase the losers not because the triumphalists are cruel but because the erasure is convenient. It allows the celebration to proceed without the discomfort of confronting its distributional consequences. It allows the beneficiaries to enjoy their gains without the obligation of acknowledging who financed them. It allows the narrative of progress to remain simple, clean, and uplifting, unburdened by the complexity that distributional analysis introduces.
Hobsbawm's career was a sustained argument against this convenience. The convenience is the enemy of historical accuracy, of analytical honesty, and of the institutional construction that distributional justice requires. The displaced deserve visibility not because their suffering is inherently more important than the beneficiaries' gains, but because the distribution cannot be assessed, much less corrected, if one side of the ledger is invisible.
The historians of 2075, looking back at this moment, will judge the AI transition by the completeness of the account — by whether the story that survives includes both the builders and the displaced, both the sunrise and the people who were already drowning when the sun came up.
---
The historian's discipline is retrospection — the examination of the past from a position of sufficient distance that the patterns invisible to the participants become visible to the observer. Hobsbawm practiced this discipline across the entirety of the modern era, from the French Revolution to the collapse of the Soviet Union, and the patterns he identified were remarkably consistent despite the enormous variation in the specific events he examined. Technologies arrive. They concentrate productive capacity. The gains are captured by the owners of the technology. The displaced bear the costs. Institutional construction, achieved through decades of political struggle, eventually redistributes the gains — partially, imperfectly, and always too late for the generation that bore the cost.
The exercise of imagining what future historians will say about the present is not prediction. It is analytical discipline — a technique for examining the present from outside the assumptions that make the present feel natural and inevitable. Hobsbawm used this technique sparingly but effectively, most notably in *The Age of Extremes*, where he examined the twentieth century's catastrophes from the perspective of a historian who had lived through them and who could see, with the clarity that proximity sometimes permits, the gap between what the participants believed they were doing and what they were actually accomplishing.
The historians of 2075 — working from archives that will include not only the corporate records, the policy documents, and the economic statistics that historians have traditionally relied upon, but also the digital record of the AI transition in its entirety: the social media posts, the internal communications, the code repositories, the prompt logs, the training data controversies, and the real-time documentation of displacement and adaptation that no previous transition has produced — will examine the period from 2025 to 2035 with access to evidence of unprecedented granularity. What they find will depend on what happens in the intervening years. But Hobsbawm's framework, applied with the rigor that the evidence will demand, permits the identification of the trajectories that are already visible.
The first trajectory is concentration. In this scenario, the gains of AI are captured primarily by the companies that build and deploy the technology and by the investors who finance them. The productivity gains that Segal documents — the twenty-fold multipliers, the collapsed imagination-to-artifact ratios, the solo entrepreneurs shipping products that would have required teams — are real, but they flow predominantly to the owners of the platforms through which the gains are realized. The subscription fees, the API charges, the enterprise licenses, the equity appreciation — these are the mechanisms of capture, and they concentrate wealth in a small number of firms whose market power resembles, in Hobsbawm's framework, the monopolistic position of the factory owners in the early Industrial Revolution. The displaced — the professionals whose expertise is commoditized, the communities whose social infrastructure dissolves, the workers whose bargaining position deteriorates — bear the costs without capturing a proportional share of the gains.
The historians who examine this trajectory will write a social history of AI displacement comparable in tone and analytical structure to the social histories of the Industrial Revolution that Hobsbawm and Thompson produced. They will document the specific communities that were destroyed: the professional networks that fragmented, the mentorship relationships that atrophied, the career ladders that collapsed when the middle rungs were automated away. They will trace the biographical trajectories of specific individuals — the senior engineers who withdrew, the junior developers who never developed deep expertise because the friction that would have built it was optimized away, the translators and legal researchers and financial analysts whose specialized knowledge became economically worthless within the span of a few years. They will note, as Hobsbawm always noted, that the aggregate statistics — rising GDP, falling costs, expanding capability — told a true story that was not the whole story, and that the distributional reality concealed by the aggregate was a reality of extraordinary unevenness.
The second trajectory is distribution. In this scenario, the institutional construction that Hobsbawm identified as the eventual (and politically achieved) response to every previous technological transition occurs — and occurs, this time, with sufficient speed and ambition that the generation bearing the cost of the transition also captures a meaningful share of its gains. The regulatory frameworks that are currently in their earliest stages — the EU AI Act, the emerging frameworks in other jurisdictions — develop into genuine governance structures that constrain the most destructive deployments and redirect the gains toward broader benefit. New institutional forms emerge — new kinds of professional associations, new models of collective bargaining adapted to the specific features of AI displacement, new social insurance mechanisms that protect workers during the transition — and these institutions are built proactively rather than reactively, through democratic deliberation rather than through the crisis and conflict that preceded the institutional construction of the Industrial Revolution.
The historians who examine this trajectory will write a political history of institutional innovation — the story of how democratic societies, confronted with a technological transformation of unprecedented speed and scale, constructed institutional responses of comparable ambition. They will document the debates, the compromises, the political struggles that produced the new institutions, and they will note that the construction was neither easy nor automatic — that it required sustained political effort, collective organizing, and the willingness of democratic polities to constrain the market's tendency to concentrate gains in the hands of the technologically powerful.
The third trajectory — and the one that Hobsbawm's analysis of every previous transition identifies as the most probable — is a combination of the first two, unevenly distributed across sectors, jurisdictions, and social classes. In some domains and some countries, the institutional construction will be adequate. In others, it will be delayed, captured by the interests of the technologically powerful, or simply absent. The distributional outcome will be mixed — better than the worst-case scenario of pure concentration, worse than the best-case scenario of proactive distribution, and characterized by the same geographical and social unevenness that marked the distributional outcome of every previous technological revolution Hobsbawm documented.
The historians who examine this third trajectory will face the same analytical challenge that Hobsbawm faced throughout his career: the challenge of writing a history that holds the gains and the costs in the same account without allowing either to subsume the other. They will be tempted, as historians are always tempted, to resolve the tension — to write a triumphalist narrative in which the costs are acknowledged as regrettable but ultimately justified by the gains, or to write a declinist narrative in which the costs overwhelm the gains and the transition is judged a catastrophe. The best among them will resist both temptations and will produce, instead, a history that preserves the tension — that documents the builder who shipped a product in a weekend and the engineer who lost a career that had defined a life, the parent who marveled at a child's capabilities and the parent who worried about a child's depth, the society that gained extraordinary productive power and the society that lost something in the gaining that it did not have a name for until it was gone.
Hobsbawm's analytical commitment — the insistence on documenting both the aggregate and the distributional, both the celebrated and the erased, both the factory's output and the worker's experience — is the commitment that the present moment demands of everyone who attempts to write honestly about the AI transition. The commitment is difficult because the tension is uncomfortable. It is easier to be a triumphalist, celebrating the gains and treating the costs as temporary inconveniences. It is easier to be a declinist, mourning the losses and treating the gains as illusory. The difficult position — the position that Hobsbawm occupied for half a century — is the one that insists on holding both, without resolution, because the tension is not a flaw in the analysis but the analysis itself.
The question that the historians of 2075 will answer is not whether AI was good or bad. It is how the gains were distributed — whether the institutional construction was adequate, whether the displaced were supported, whether the political systems that governed the transition served the broad public or the narrow interests of the technologically powerful. The technology will be judged not by its capabilities but by its consequences — not by what it could do but by what was done with it, and by whom, and for whose benefit.
Hobsbawm spent his career insisting that this was the right question to ask about every technological transformation in modern history. The answer, in every case he examined, was mixed — gains and costs, expansion and destruction, progress and suffering, distributed unevenly across classes, communities, and generations. The present transition will produce its own mixed answer. The quality of the mixture — the ratio of expansion to destruction, of broadly shared gain to narrowly concentrated capture — is the variable that the present generation's choices will determine.
History does not repeat. But the patterns that Hobsbawm identified — the concentration of initial gains, the destruction of social infrastructure, the erasure of the displaced, the eventual and politically achieved institutional construction — are consistent enough across two centuries and multiple transitions to constitute something very close to a law of political economy. The law does not determine the outcome. It identifies the forces that will shape it. The institutional construction that redirected the Industrial Revolution's gains took a century. The AI transition is moving faster, and the window for institutional construction is correspondingly narrower. Whether the construction will happen in time — or whether, as in every previous case Hobsbawm documented, it will arrive a generation too late — is the question that the present is in the process of answering.
The answer will be written not in code but in institutions. Not in the language of technology but in the language of politics. Not by the machines that produce the gains but by the people who determine their distribution.
That is what the historians will say about us. The question is which historians — the triumphalists who celebrate the aggregate, or the social historians who insist on the distributional account. The best historians will be the ones who refuse to choose, who hold both accounts in the same narrative, who insist that the full story of the AI revolution cannot be told without both the builders and the displaced, both the productivity gains and the communities that dissolved, both the extraordinary expansion of human capability and the specific, granular, biographical cost that the expansion imposed on the people whose prior capabilities it rendered obsolete.
Hobsbawm would have written that history. He would have written it from the archives of displacement and the records of institutional construction, with the patience of a scholar who understood that the most important historical evidence is always the evidence that the powerful prefer to leave out of the account. He did not live to see the AI revolution. But the analytical tools he built — the insistence on the distributional question, the rationality of resistance, the destruction of social infrastructure, the political economy of who benefits — are the tools that the moment most urgently requires.
The archives are being built in real time. The evidence is accumulating. The historians are watching. The question of what they will find when they examine this moment is being determined, right now, by the choices of the people who live inside it.
---
The twelve thousand soldiers are the detail I cannot put down.
Twelve thousand soldiers — more than Wellington took to the Peninsula — marched into the Midlands to suppress framework knitters. Not a foreign army. Not revolutionaries with cannons. Skilled workers with hammers, who had identified, with a precision that the British state found more threatening than Napoleon, exactly which machines were being used to destroy their lives.
That ratio — the disproportionate force deployed against a rational grievance — is the ratio that keeps showing up. Not in military terms. In rhetorical ones. The intensity with which the technology industry dismisses anyone who asks "who benefits?" is not proportional to the naivety of the question. It is proportional to the question's danger. You deploy twelve thousand soldiers against a threat you take seriously.
In *The Orange Pill*, I wrote about the Luddites with compassion. Hobsbawm's framework taught me that compassion, while necessary, is the beginning of the analysis, not its conclusion. I described the framework knitters as people who "were correct, with a precision that bordered on the prophetic, about exactly what the power looms would do to them." I still believe that. But what Hobsbawm added was the structural question I was not yet asking: What institutions existed to redirect the gains? Who was building them? And what happened to the people who bore the cost while the institutions were under construction?
The answer, in 1812, was: nothing. Nothing institutional existed. The Factory Acts were twenty years away. The trade unions were decades away. The welfare state was a century away. The framework knitters occupied a gap — the gap between the technology's arrival and the institutional response — and the gap consumed them.
That gap is the thing I think about most. Not because I fear that software engineers will share the framework knitters' specific fate — the channels available today are wider, the political context different, the speed of institutional response somewhat faster. But because Hobsbawm demonstrated, across four volumes and two centuries, that the gap always exists, that it is always wider than the optimists predict, and that the people inside it always pay a price that the people outside it do not see.
I chose to keep my team. I wrote about that choice as though it were a moral achievement. Hobsbawm's analysis clarified something I was reluctant to see: the choice was available to me because I controlled the organization. The ninety-five out of a hundred whose labor the twenty-fold multiplier rendered redundant — in companies led by people who will not make my choice, because the market does not reward my choice — do not have the option of keeping themselves.
The moral economy of the coder was the chapter that shook me most. Not because the concept was new — I had always understood, intuitively, that the engineering community operated within a set of unwritten norms about quality, attribution, mentorship, and fair dealing. But because Hobsbawm and Thompson gave those norms a name and a history, and the name made visible something I had been watching erode without knowing what to call it. The pressure to ship fast, amplified by tools that can produce code at unprecedented speed, is not just a business decision. It is a violation of something the community built over decades — something that made the community worth belonging to.
I do not have Hobsbawm's answer. He did not leave a prescription, because historians do not prescribe. They diagnose. And his diagnosis is that the distribution question — who captures the gains, who bears the costs, what institutions mediate between them — is the question that determines whether a technological revolution becomes broadly beneficial or narrowly catastrophic. The technology does not decide. The politics decide. The institutions decide. The choices of the people who build the dams decide.
What I take from this journey through Hobsbawm's framework is a single, uncomfortable addition to the argument I made in *The Orange Pill*: individual wisdom is necessary and insufficient. The beaver building alone cannot construct dams at the scale the river requires. The dams that redirected the Industrial Revolution were built by movements — collective institutions with the power to compel redistribution. The AI transition will require dams of comparable ambition, and comparable ambition requires collective construction.
The historians are watching. The archives are accumulating. The question of what they will find is being answered right now, by every choice about distribution that is made or deferred, by every institution that is built or left unbuilt, by every voice that insists the displaced be counted in the same ledger as the gains.
I intend to build. But I intend to build as though the framework knitters are watching too.
-- Edo Segal
Every technological revolution produces two histories. The first is written by the winners, in the language of progress. The second is written decades later, by historians who finally count the cost. Eric Hobsbawm spent sixty years writing the second kind -- and the patterns he found across two centuries of industrial disruption map onto the AI revolution with uncomfortable precision.
This book applies Hobsbawm's framework to the central evasion of the AI discourse: the assumption that aggregate productivity gains automatically translate into broadly shared benefit. They never have. From the framework knitters of 1812 to the knowledge workers of 2026, the gap between a technology's arrival and the institutional response that distributes its gains has been measured in generations -- and filled with human wreckage the triumphalists prefer not to see.
The question is not whether AI expands capability. It does. The question Hobsbawm would ask -- the question this book insists on -- is who captures the expansion, who bears the transition, and whether we are building institutions fast enough to close the gap before another generation pays the price.

A reading-companion catalog of the 10 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Eric Hobsbawm — On AI* uses as stepping stones for thinking through the AI revolution.