In early 1812, a British Army officer named Sir Thomas Maitland faced a peculiar assignment. He had been deployed not to fight Napoleon, not to defend a coastline, but to protect a textile mill in Nottinghamshire from the people who lived nearby.
The mill contained power looms. The people were skilled stocking-frame knitters who had spent years mastering their craft, who had built their identities and their livelihoods around a specific, hard-won expertise, and who had watched that expertise become economically worthless in the space of a few years.
They were angry. They were organized. And on certain nights, under cover of darkness, they broke machines.
We call them Luddites now, and we use the word as an insult. A Luddite is a person who is afraid of technology, who resists progress, who cannot adapt. The word has become shorthand for a specific kind of stubborn obsolescence, the clinging to the old way out of fear or sentiment or simple failure of imagination. When someone today refuses to use a smartphone, or resists adopting a new piece of software, or expresses anxiety about artificial intelligence, the dismissive response is ready-made: Don't be such a Luddite.
This is a profound misreading of history. And it is a misreading that is actively dangerous right now, because the real story of the Luddites – not the cartoon but the flesh-and-blood historical movement – contains lessons we cannot afford to miss at this particular moment.
Let me tell you what actually happened.
The original Luddites were not afraid of technology in the abstract. They were not philosophically opposed to change. They were skilled workers from various geographies and backgrounds – framework knitters in Leicestershire, hand-loom weavers in Lancashire, croppers and shearers in Yorkshire – who had spent years, sometimes decades, developing craft expertise that the market now rewarded handsomely. They had apprenticed. They had practiced. They had built guilds and communities and economies around the specific kind of knowledge that lived in their hands.
And they were correct, with a precision that bordered on the prophetic, about exactly what the power looms would do to them.
Not to progress. Not to the economy in aggregate. To them.
To their wages, their status, their communities, their children's futures.
The machines did exactly what the Luddites said they would. Skilled weavers earning twenty shillings a week found themselves competing against unskilled factory workers earning just a few. The earnings gap closed by collapsing downward. The expertise that had taken a lifetime to build became a rounding error in the factory owner's cost structure.
The Luddites were not wrong about the facts. They were wrong, fatally, about their options. And they were wrong about something even more fundamental: They could not see what would grow in the space the machines opened.
This is the pattern that repeats. The fear is accurate. And the long arc bends in a direction the fearful cannot see from where they're standing.
I want to be careful here, because there is a version of this argument that is easy and wrong. The easy version goes: The Luddites were afraid of progress, progress came anyway, everything was fine, therefore everyone who worries about technological disruption is foolish and should relax. This is the argument that Silicon Valley has been making, with varying degrees of subtlety, for forty years. It is the argument that says the displaced textile worker of 1812 was just being dramatic, because his grandchildren would eventually get factory jobs, and his great-grandchildren would eventually get office jobs, and the aggregate productivity of the economy increased substantially over the following century, so what exactly was the problem?
The problem was the transition. The problem was the generation that bore the cost. The problem was that "the long arc bends toward expansion" is cold comfort to the skilled craftsman watching his guild dissolve, his wages collapse, and his children grow up in conditions that Charles Dickens would struggle to invent.
The Luddites were not wrong to be angry. They were wrong to think that breaking machines was the response that the situation required. And that distinction between the legitimacy of the fear and the inadequacy of the response is precisely where our current moment demands the most honest reckoning.
Because there are Luddites in the age of artificial intelligence. They are not breaking machines. They do not gather under cover of darkness to destroy data centers (not yet, at least). Their refusal is quieter, more socially acceptable, and in some ways more understandable. But the underlying structure is the same: The fear is partly right, the cost is very real, and the response – refusal, avoidance, the insistence that the old expertise must still be worth what it used to be worth – will not produce the outcome they are hoping for.
I meet them regularly. They are not bad people. Many of them are brilliant people. They are experienced professionals who have spent careers building expertise in domains that AI can now enter at a competitive level in minutes, and their response to this fact is a version of what the Nottinghamshire weavers felt in 1812: This cannot be right. This cannot be legitimate. This cannot be allowed to simply happen.
Some of them argue that AI-generated work is fundamentally inferior, a claim that is getting harder to sustain with each passing month.
Some of them argue that using AI is a form of cheating, a moral position that, examined carefully, reveals itself to be about professional identity more than ethics.
Some of them argue that the adoption of these tools will lead to atrophied skills and shallow practitioners, an argument I take seriously, because it contains genuine truth. But taking it seriously does not mean allowing it to function as a reason for wholesale refusal.
The contemporary Luddite is often the most skilled person in the room. That is precisely the problem. The investment has been made. The identity has been formed. The prospect of starting again, of being a beginner in a new landscape, of having decades of hard-won knowledge suddenly worth less than what a graduate student with a subscription and a good eye for prompts can produce, is not merely inconvenient. It is existentially threatening.
I understand this. I have felt versions of it myself, in domains where I had built what I thought was durable expertise and watched the terrain shift beneath my feet. But understanding the feeling does not mean endorsing the response.
The Luddites understood their situation clearly and chose the wrong instrument. Breaking the loom did not save the trade. It accelerated the social hostility toward the movement, justified the deployment of soldiers, and produced a legal framework that made machine-breaking a capital offense. The machines were not stopped. The craftsmen were criminalized. The transition happened anyway, on terms that were worse for the resisters than if they had engaged.
There is a second Luddite error that is less discussed and more relevant to where we are now. Call it the expertise trap.
The framework knitters were afraid of losing the thing that made their income meaningful: the skill itself, the years of practice, the mastery of a craft that was difficult and therefore valuable and therefore identity-defining. When they looked at the power loom, they did not just see a machine that could produce more cloth than they could. They saw a machine that made their expertise irrelevant. And they were right; the power loom did not need to understand the tensile properties of different fibers, or the relationship between thread count and drape, or the thousand small adjustments that a master weaver made by feel.
It just needed to run.
The expertise that the craftsmen had built was not the wrong thing to have built. It was genuinely valuable, genuinely hard to acquire, genuinely the product of years of intelligent, attentive practice. The problem was that the expertise had been built to solve a problem that the machine could now solve without it, and no amount of expertise in solving the old problem provided automatic leverage in the new landscape.
This is the trap. The expertise can be real. The investment can be rational. The mastery can be genuinely hard to achieve. And none of that can protect you from the fact that the problem can change entirely.
The Luddites could not have foreseen that the industrial economy they were fighting would eventually create entirely new categories of expertise, new forms of mastery, new ways to be skilled that did not exist before the machines arrived. And even if they could, why would they welcome it? Their present was crumbling, and that was most pressing.
I am watching the same trap spring in real time. Developers who spent years mastering the lower floors of the stack – the syntax, the frameworks, the specific languages and tools that were the gatekeepers of the profession – are now watching those floors fill with AI. And some of them are responding the way the framework knitters responded: with a combination of denial and defeatism, a refusal to accept that the gatekeeping function of their expertise has changed, an insistence that the lower floors are where the real work happens and anyone who builds on top of them without fully understanding them is a fraud. Some of them are running from the arena into the woods to avoid the hard work of reinventing themselves.
Here is what I want to say to those people, directly and with genuine respect for what they have built: You are right that something is being lost. The loss is real. The friction of learning the lower floors was truly formative. Skills built through difficulty compound in ways that skills acquired easily do not. I will spend the majority of Part Three examining this, because a philosopher I admire has examined the argument more rigorously than I could. The loss deserves grief, not dismissal.
But grief is not a strategy. And the Luddites teach us, at enormous cost, what happens when grief becomes the primary response to a structural change that cannot be stopped by grief.
There is a third Luddite lesson, the one I find most uncomfortable, because it implicates not the resisters but the builders.
The actual historical Luddites were not simply afraid. They were also, in significant part, right about the distribution of the gains. The power looms did not make everyone richer. They made factory owners richer. The productivity gains of the industrial revolution took generations to translate into broadly distributed improvements in living standards, and the translation was not automatic: It required labor movements, legislation, decades of political struggle, the explicit construction of institutions that did not exist at the time of the first power loom.
The technology did not determine the outcome. The dams that were built around it did.
This is the lesson that the triumphalists in every technology cycle miss. The question is never simply, "Will this technology expand capability?" Almost all powerful technologies do that. The question is, "Who captures the expansion, and who bears the cost of the transition?" Those are political, societal questions, not technical ones. They are answered by institutions, by norms, by the quality of the social conversation during the period when the new technology is reshaping the landscape but before the new landscape has fully settled.
We are in that period now. The AI expansion is real. The productivity gains are measurable and accelerating. And the question of who captures those gains, whether they flow broadly, as the democratization of capability suggests they might, or narrow, as the historical pattern of technological revolutions suggests they often do, is not yet answered. It is being answered, right now, by the choices of the people who build these tools, the people who deploy them, the people who regulate them, and the people who refuse to engage with them at all.
The Luddites who refused to engage left the answer to others. That is the deepest lesson of 1812. Not that resistance is foolish; sometimes resistance is necessary and right. But that disengagement is never neutral. When the people with legitimate grievances about who bears the cost of technological transition remove themselves from the conversation about how the transition unfolds, the conversation happens without them. And the dams that get built are built by the people who stayed in the room.
The Luddites who broke machines spent their energy on a gesture that was emotionally satisfying and strategically catastrophic. The ones who survived the transition with their dignity intact were the ones who found ways to apply their knowledge of materials, drape, quality, and design to new problems that the machines created but could not solve. They did not pretend the machines hadn't arrived. They asked what questions the machines could not answer.
They climbed to the next floor of the building, and they built something there.
The framework knitters of Nottingham would have understood a senior Python developer's angst completely. They too had seen the machine clearly. They too had understood, with genuine sophistication, what it would cost. The machines did not, could not, notice. They just did what they were asked to do. And that is what will happen now, and next time, and the next.
The Luddites experienced the disappearance of their trades as total loss. They could not see that what remained, be it the understanding of materials, the knowledge of quality, or the ability to envision and evaluate, was the thing of lasting value. Because no one had told them that. Because there was no forum in which that insight could be developed and shared. Because the dams that would have redirected the transition toward their flourishing were not built in time.
We have the opportunity to build those dams now.
The ground is moving, and you are standing on it, whether you have acknowledged it or not. The question is what you will do as the earth shifts – what structures you’ll reshape, what practices you’ll adjust, what norms you’ll set, what dams you’ll build. That question requires your presence, your expertise, and your willingness to engage with what you fear.
The capacity — demanded by the expanded economy of research — to perceive the logical relationships among lines of inquiry and allocate scarce investigative resources across them.
The Zen quality of approaching each moment with openness and fresh perception — 'In the beginner's mind there are many possibilities; in the expert's mind there are few.'
The contemporary equivalent of Hobsbawm's collective bargaining by riot — the use of lawsuits, strikes, petitions, and viral campaigns by workers in the knowledge economy to assert interests that…
Hobsbawm's term for the use of direct action by people possessing grievances but lacking institutions — the structural function the framework knitters served, and the function contemporary lawsuits,…
The specific isolation risk structure of the AI discourse — simultaneous threat of exclusion from both triumphalist and catastrophist communities, producing systematic silencing of the nuanced middle…
The rational, strategically sophisticated opposition by skilled workers to technological reorganization threatening their autonomy, knowledge, and bargaining power—dismissed as 'Luddism' by…
The Mannheimian reading of AI democratization claims — attending not to whether access is expanded (it is) but to whose interests are served by the specific form the expansion takes and what the…
The deliberate engineering of stopping points, session boundaries, and reflective pauses into absorbing interfaces—tested in Norway's Multix system, absent from AI tools.
The systematic reduction of worker skill requirements through technological design — not a side effect of automation but frequently its central purpose, documented by Noble across industrial…
The transformation of complex judgment-work into routine supervision—not simplification but a qualitative change in what 'skill' means.
Professional associations adapted to the AI-augmented knowledge economy — bringing together practitioners across employment boundaries for mutual support, standards-setting, and collective advocacy.
Escobar's analytical distinction — central to the postdevelopment reading of AI — between the distribution of a product and the democratization of power, and the demonstration that conflating the two…
Shklar's interpretive principle that the fear of the vulnerable is not a psychological failing to be remedied through motivational intervention but accurate diagnostic information about the adequacy…
The rational withdrawal of experienced practitioners from AI discourse and transformation — not irrationality but the structural response to a collective action problem they cannot solve…
Bandura's most powerful source of self-efficacy — direct, personal experience of succeeding at a genuinely challenging task — and the specific developmental currency that AI's output-without-process…
The application of Scott's moral economy framework to professional labor — the customary norms of training, craft quality, and reciprocity that AI deployment is violating in ways structurally…
The false belief that technologies are passive instruments whose effects depend entirely on users' choices—a myth concealing that design decisions embed institutional values before users encounter…
The communal and individual dissolution that occurs when AI renders the jurisdiction on which a professional identity was built less defensible, forcing practitioners through a grief trajectory…
Those who see what the powerful cannot and speak it in terms the powerful cannot dismiss—critics, displaced workers, researchers documenting costs metrics miss.
The cluster of beliefs and dispositions that constitutes the self-understanding of the American technology industry — the ideological water in which Andreessen has operated for three decades and…
The structural observation that skeptics and enthusiasts employ identical cognitive operations — mirror-image dissonance reduction, asymmetric scrutiny, and social reinforcement — producing opposite…
The vast, inarticulate substrate of understanding that operates beneath conscious awareness and cannot be captured in any specification, no matter how detailed—Polanyi's foundational insight that "we…
The rhetorical operation by which capability distribution is marketed as democratization while governance power remains concentrated — the sequel to the internet delusion, replaying its notes at…
The structural challenge that AI creates by eliminating the bodily engagement through which expertise was historically developed and transmitted between generations.
Moles's name for the receiver's difficulty in determining the human contribution to AI-collaborative output — a channel problem of signal authentication, not a moral problem of attribution.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes…
The psychological dislocation experienced by super-creative workers when AI democratizes the verb I build — eroding the singularity around which professional identity was organized without…
The Engels Simulation's structural analogy between Manchester's mill children and the attention-economy children of the AI age — not a comparison of magnitude but of externalized developmental costs…
The artists, writers, actors, and engineers who have raised specific, articulate grievances about AI deployment — and whose dismissal as Luddites performs the same delegitimating function the…
The disproportionate burden borne by the people least positioned to absorb it — the structural pattern that has characterized every technological transition in the archival record.
The structural tension between genuine capability expansion AI delivers to individual builders and the concentrating institutional architecture through which that expansion is delivered — both…
The distance between what a practitioner understands about a system and what the system requires her to understand when it fails — a gap that abstraction widens invisibly, that AI-generated code has…
Every communication regime has its gatekeepers, and every communication revolution displaces them — monastic scribes by printers, QA departments by solo builders — creating a dangerous gap between…
The Engels Simulation's central political question: not whether AI increases total wealth, which is not disputed, but how the increase is distributed among those who produce it.
The specific psychological prison produced when decades of fixed-mindset reinforcement fuse professional identity with a specific technical competence that subsequently becomes obsolete — The Orange…
The specific calculation that governs every deployment decision in a competitive market — if five workers can do the work of one hundred, why not just have five? — and the structural reason moral…
Ellul's hardest claim: that resistance to technique at the individual level, however morally admirable, is structurally insufficient against a force that operates at the level of institutions,…
Hobsbawm's core prediction: the gap between a technology's arrival and the institutional response that redistributes its gains is always wider than optimists predict, and the people inside the gap…
Brooks's closing meditation in The Mythical Man-Month — the pleasures of making things, the fascination of complex structures, weighed against the obligation to meet others' specifications and the…
The figure at the technological culmination of Fukuyama's worst fear — the Nietzschean Last Man updated for the AI era, who outsources not only physical labor but cognition, creativity, and decision…
The Winner volume's recovery of the Luddite as a political actor making legitimate democratic demands — rather than a psychological casualty of progress.
The structural obligation a new tradition incurs to the practitioners whose specific knowledge and specific lives the transition consumed — a debt that aggregate prosperity cannot discharge, only…
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant…
Mannheim's reframing of machine-breaking as class consciousness — the recognition that the Luddites possessed situated knowledge the factory owners could not access, knowledge that has been…
The paradigmatic case of Young's political diagnosis — victims of structural injustice whose justified rage translated into the strategic catastrophe of withdrawal from the institutions that were…
Honneth's reframing of the 1812 machine-breakers not as strategic error but as a recognition demand denied institutional channel — the pattern recurring in every subsequent technological displacement.
The four-stage loop — performance, failure, feedback, reflection — that produces deep expertise through thousands of iterations, and whose interruption at any stage thins the learning from every…
The unwritten but deeply felt framework of professional norms—about compensation, quality, attribution, mentorship, and fair dealing—that governs the software engineering community, and whose…
The Myrdalian completion of Segal's beaver metaphor — every institutional intervention is a political act that benefits some interests and constrains others, and dams are built only when political…
The claim that AI-generated work is fundamentally inferior to human-produced work — the first weapon in the contemporary Luddite's arsenal, operating as technical observation and moral contestation…
The economically rational retreat of senior knowledge workers to lower-cost locations and simpler lives when the expected return on their specific human capital collapses — portfolio rebalancing, not…
Coyle's framework for the systematic underestimation of AI transition costs — the visible tool price (one hundred dollars per month) conceals submerged mass in organizational restructuring, human…
Landes's comparative framework applied to AI: every transformative technology produces a gap between capability and institutions, and who bears the cost of the gap is determined politically.
The single question that Ehrenreich's framework insists on asking about every technological transition, and that the AI discourse systematically avoids — the question whose answer determines whether…
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
British historian and political activist (1924–1993) whose rescue of working people from the enormous condescension of posterity reshaped historical method — and whose analytical tools, independently…
The skilled textile workers whose 1811–1816 destruction of wide stocking frames became the founding Luddite event — and whose ontological error, Ellul's framework suggests, was believing they faced a…
Winner's 1980 essay — the most cited work in the history of science and technology studies — that established the hard claim: the design of a technology is itself a political act.
Hobsbawm's fifteen-page essay in the inaugural issue of Past & Present that demolished the standard narrative of the Luddites as ignorant technophobes and reframed machine-breaking as collective…
Heidegger's 1954 essay arguing that the essence of technology is nothing technological — and that the essence is a mode of revealing that culminates in enframing.