By Edo Segal
The job I am most proud of eliminating was my own.
Not literally — I still show up, I still build, I still lose sleep over product decisions. But the version of my job that involved sitting in status meetings, reviewing progress reports, translating between departments that spoke different institutional languages, managing the choreography of people who already knew what needed doing — that job is gone. Claude Code killed it. And when it died, I felt something I did not expect: relief so profound it was almost grief. Relief because the real work, the judgment, the creative direction, the hard calls about what deserves to exist, had been buried under all that coordination for years. Grief because I had to ask how much of my career had been the burial.
That question led me to David Graeber.
Graeber was an anthropologist who spent his final decade asking a question that the technology industry has never been brave enough to face: What if a significant portion of the modern economy is not just inefficient but pointless? Not underpaid, not unpleasant — pointless. What if the meetings, the reports, the layers of management, the compliance theater exist not because the work requires them but because the institutions require them? What if the entire apparatus of corporate coordination is, in significant part, a mechanism for distributing salaries rather than producing value?
The technology discourse talks about disruption. Graeber talks about what the disruption reveals. And what it reveals is uncomfortable: that much of what we called "work" was scaffolding around nothing. The AI revolution did not create this emptiness. It exposed it, the way draining a lake exposes the junk people threw in when they thought nobody would see.
Every other lens in this series examines what AI can build. Graeber forces you to ask what it should stop pretending to need. His taxonomy of pointless work — the flunkies, the goons, the duct-tapers, the box-tickers, the taskmasters — is not a relic of pre-AI sociology. It is a diagnostic manual for the organizations deploying AI right now, today, as you read this. Because an amplifier fed institutional dysfunction produces dysfunction at scale. And the dysfunction was always larger than any of us wanted to admit.
This is not a comfortable book. Graeber will make you look at your own calendar and wonder which meetings are load-bearing and which are decorative. He will make you ask whether the headcount you manage exists because the work demands it or because your authority demands it. He will make you wonder about the thirty-seven percent.
That wondering is the point. You cannot build the right dams if you do not understand what the river has been carrying.
— Edo Segal × Opus 4.6
David Graeber (1961–2020) was an American anthropologist, activist, and author whose work reshaped how millions of people think about work, debt, and institutional power. Born in New York to a working-class family — his father fought in the Spanish Civil War, his mother was a garment worker and labor organizer — Graeber earned his doctorate at the University of Chicago and held academic positions at Yale and the London School of Economics. He is best known for Debt: The First 5,000 Years (2011), which reframed the history of money and obligation, and Bullshit Jobs: A Theory (2018), which argued that a vast proportion of modern employment is recognized as pointless by the very people performing it. A self-described anarchist, Graeber was instrumental in the Occupy Wall Street movement and is widely credited with popularizing its slogan, "We are the 99 percent." His posthumous work The Dawn of Everything (2021), co-authored with archaeologist David Wengrow, challenged conventional narratives about the origins of social inequality. He died unexpectedly in Venice at the age of fifty-nine, leaving behind a body of work that continues to provoke debate about the nature of work, value, and human possibility.
In 2013, David Graeber published a short essay in Strike! magazine and accidentally detonated a bomb in the middle of the global professional class. The essay was titled "On the Phenomenon of Bullshit Jobs," and within weeks it had been translated into dozens of languages and shared millions of times. The response was not curiosity. It was confession. Workers from every continent wrote to say that Graeber had named something they had been feeling for years but could not articulate without risking their livelihoods: the gnawing, corrosive suspicion that their jobs were pointless. Not unpleasant. Not underpaid. Pointless. That if their positions were eliminated tomorrow, the world would continue to turn without noticing.
The volume of confession told Graeber something the labor statistics could not. He had not discovered an obscure sociological curiosity. He had touched a nerve that ran through the entire body of the modern economy. And when he expanded the essay into a book five years later, he did what anthropologists do best: he held the familiar up to the light until it became strange enough to actually see.
Graeber's taxonomy identified five species of pointless work, each illuminating a different pathology of the modern workplace. Flunkies exist to make someone else look important — the personal assistant with nothing to assist, the receptionist at a firm that receives no visitors, the retinue of subordinates whose primary function is to signal the status of the person they orbit. Goons exist because competitors have them, creating arms races of mutual antagonism in which each side must employ people simply because the other side does — lobbyists, telemarketers, certain categories of corporate lawyers whose work is aggressive but whose absence, if all sides disarmed simultaneously, would cost society nothing. Duct-tapers apply temporary fixes to problems that should not exist — the employee who spends eight hours manually entering data between systems that were never designed to communicate. Box-tickers demonstrate that processes have been followed regardless of whether those processes accomplish anything — the compliance officer generating reports that no one reads. And taskmasters supervise workers who do not need supervision, assigning tasks to people who already know what needs to be done, creating work for subordinates to justify the taskmaster's own existence.
Five species. One ecosystem. An economy that generates pointless work with the same reliability that a forest generates leaves.
Now consider what happens when artificial intelligence enters that ecosystem.
The technology that emerged in the winter of 2025 — described in detail in The Orange Pill — represents the most powerful tool for eliminating bullshit that has ever been built. A single engineer with Claude Code could produce in a weekend what a coordinated team had previously required months to deliver. The distance from imagination to artifact collapsed to near zero. The translation layer between human intention and machine execution — the layer that had justified entire categories of intermediary work — dissolved in a matter of weeks.
Apply the taxonomy. Start with flunkies. A flunky's function is to make a powerful person feel powerful. The doorman at a building that could function with a buzzer system. The executive assistant whose calendar could be managed by software. AI eliminates the functional need for these positions with brutal efficiency. But it does not eliminate the psychological need they serve. A CEO who replaces a team of human assistants with an AI system may discover that the absence of visible subordinates diminishes the felt experience of authority. The solution, historically, has not been to dispense with the signaling but to update its vocabulary. The "Chief AI Strategy Officer" whose real function is to sit in meetings and demonstrate that the CEO takes AI seriously is a flunky adapted to the new technological landscape. The title changes. The function persists.
Goons present a similar dynamic. Corporate lobbying does not disappear when AI automates the research that lobbyists previously performed. It intensifies. AI makes it possible to generate more sophisticated lobbying materials at greater speed, which means the other side must generate equally sophisticated counter-materials, which means both sides need more people managing the AI systems that produce the materials that neither side would need if both sides stopped simultaneously. The arms race does not end. It accelerates. Each side acquires more powerful weapons, and the human headcount required to operate those weapons may actually increase.
Duct-tapers, however, present a categorically different case — and this is where the analysis becomes genuinely interesting. Duct-taping exists because organizations tolerate dysfunction. The employee who manually transfers data between incompatible systems exists because no one fixed the systems. AI can fix the systems. When it costs less to integrate two databases than to pay a human to bridge the gap between them, the economic argument for duct-taping collapses. This is not merely faster patching. It is the elimination of the need for patches. The dysfunction itself becomes economically indefensible.
The question becomes: what happens to the duct-taper? The Orange Pill proposes an answer it calls the ascending friction thesis — that when AI eliminates low-level friction, humans ascend to higher-level problem-solving. The duct-taper who spent years manually reconciling systems becomes the architect who redesigns them. The skills developed through years of patching — the intimate knowledge of where the seams are and why they leak — become the foundation for higher-order work.
Graeber's framework suggests skepticism. Not because the ascending friction thesis is wrong in principle, but because it assumes that the structures governing the modern workplace will permit the elevation to occur. Those structures did not generate duct-taping by accident. They generated it because dysfunction creates employment, and employment is how the system distributes income. The duct-taper's job was simultaneously a symptom of organizational failure and a solution to a political problem: how to give a person a salary. When AI eliminates the symptom, the political problem remains.
Box-tickers face the most ironic fate. AI can automate compliance reporting with an efficiency that should, in principle, eliminate the need for the army of humans who currently populate compliance departments. AI can generate the reports, fill the forms, track the metrics, produce the dashboards. But automation does not eliminate box-ticking. It generates new box-ticking at a higher level of abstraction. The AI system that automates compliance must itself be audited. The audit generates its own documentation requirements. The documentation generates its own compliance frameworks. And suddenly a new cadre of workers has been hired to tick the new boxes — boxes that did not exist before the technology that was supposed to eliminate box-ticking was deployed.
Stuart Mills and David Spencer, writing in the Journal of Business Research, coined a term for this phenomenon that Graeber would have savored: "efficient inefficiency." AI performing bullshit tasks faster is not the elimination of bullshit. It is bullshit at scale. One study they examined found that programmers using an AI co-pilot wrote more code but also increased code churn — code rewritten, reverted, or discarded shortly after being written. The tool appeared to make programmers more productive. Factoring in the churn, it became unclear whether AI was improving efficiency or simply doing something inefficient more efficiently. Writing more bad code, faster.
Finally, taskmasters. If AI enables self-directed work — if individuals can direct AI tools to accomplish complex tasks without requiring coordination by managers — then the raison d'être of the taskmaster dissolves. But taskmasters do not dissolve voluntarily. They reinvent themselves. The manager who once supervised a team of developers becomes the manager who supervises a team of AI-augmented developers, and the manager's new function is to ensure that the AI tools are being used correctly, that the outputs meet quality standards, that the workflows comply with organizational policy. The supervision persists. The question of whether it adds value is no less pressing than before.
The deeper pattern that emerges from applying Graeber's taxonomy to AI is not a story about technology at all. It is a story about institutional adaptation — about the remarkable capacity of organizations to absorb technological change without altering the power structures that the change should, by rights, disrupt. Previous commentators framed the AI question as "Will AI create or destroy jobs?" Graeber's framework reframes it: Will AI eliminate bullshit specifically, or will it generate new forms of bullshit to replace the old?
The evidence so far is mixed and troubling. The Berkeley researchers who studied AI's impact on a 200-person technology company found that AI did not reduce work. It intensified it. Workers took on more tasks, expanded into areas previously outside their domain, and filled every minute that AI freed with additional activity. The boundaries between roles blurred — but the blurring did not produce the liberation that the ascending friction thesis predicts. It produced exhaustion. The bullshit did not disappear. It shape-shifted.
Alan Blackwell, at Cambridge's Department of Computer Science, made the connection between Graeber's taxonomy and AI's outputs with disarming directness. AI systems like ChatGPT are trained on text from social media, forums, and other vast archives of human communication — archives that, by the measure of Graeber's own research, contain an enormous quantity of bullshit. The output is, in Harry Frankfurt's philosophical sense, literally bullshit: language produced without regard for its truth value, optimized for plausibility rather than accuracy. Graeber cited polling that found over thirty percent of British workers believed their own jobs contributed nothing of value. Blackwell's implication was that every part of those jobs could easily be performed by a large language model — because the work was already bullshit, and generating bullshit is precisely what the model excels at.
This is the paradox that sits at the center of the AI-and-work discourse: the technology that is most celebrated for its potential to eliminate pointless work is also the technology best suited to automate it. The same tool that could free the duct-taper to become an architect could equally well be deployed to generate more elaborate duct tape, faster, in more languages, with better formatting. The tool does not choose. The institution chooses. And institutions, as Graeber spent his career demonstrating, choose to preserve themselves.
The five categories of bullshit are not merely a diagnostic tool. They are a warning. Each category represents a different mechanism through which organizations generate pointless activity, and each mechanism will adapt to the presence of AI with the same ingenuity that it has adapted to every previous technology. The question is not whether AI is powerful enough to eliminate bullshit. It manifestly is. The question is whether the people who control the deployment of AI want to eliminate it — or whether they need the bullshit, because the bullshit serves functions that have nothing to do with productivity and everything to do with the distribution of income, the maintenance of hierarchy, and the preservation of the social order that depends on both.
That is a political question. And political questions are not resolved by technology, no matter how powerful. They are resolved by the choices of the people who control the technology — and by the willingness of everyone else to demand that those choices serve human needs rather than institutional self-preservation.
The most striking feature of Graeber's analysis is not the taxonomy. It is the explanation he offered for why bullshit jobs exist at all — an explanation that overturns one of the deepest assumptions of mainstream economics.
The assumption is this: markets eliminate inefficiency. If bullshit jobs are genuinely unproductive — if the organizations employing bullshit workers could function identically without them — then competitive pressure should have eliminated them decades ago. The fact that bullshit jobs not only persist but proliferate in the most competitive economies on earth suggests that something other than market efficiency is at work.
Graeber's answer is disarmingly simple. The modern economy distributes income through employment. Employment requires jobs. When the economy cannot generate enough genuinely productive jobs to employ the population, it generates unproductive ones instead. The alternative — distributing income through mechanisms that do not require employment, such as a universal basic income — is politically unacceptable in most societies because employment has become a moral imperative, not merely an economic arrangement. People should work for their income. This is presented not as a proposition subject to empirical testing but as an axiom. To receive income without working is to be a parasite. It does not matter whether the work produces anything. It matters that the work exists.
The axiom is so deeply embedded that it persists even when the evidence flatly contradicts it. A YouGov poll found that thirty-seven percent of British workers reported that their job made no meaningful contribution to the world. A Dutch study produced similar figures. These are workers who are, by their own assessment, receiving income in exchange for performing tasks that serve no genuine social purpose. The moral framework that insists employment is virtuous is maintained by a collective agreement to pretend that all employment is productive — an agreement that the workers themselves, in private, do not believe.
Now examine where bullshit jobs are concentrated. Not, as one might expect, in government bureaucracies, though bureaucratic bloat is real. They are concentrated in the private sector — specifically, in the administrative and managerial layers of large corporations. The growth of administrative employment over the past half-century has been staggering. In American universities, the ratio of administrators to faculty has inverted: many institutions now employ more administrators than professors. In healthcare, administrative staff have multiplied at rates that far outpace clinical staff. In the corporate sector, the number of managers, supervisors, coordinators, analysts, and specialists has grown at rates that dwarf the growth of the front-line workers they nominally support.
The growth cannot be explained by increased complexity of the underlying work. A hospital does not need five times as many administrators as it did fifty years ago because medicine has become five times more complex. The administrative apparatus has expanded to fill the space that institutional logic creates for it, generating requirements that generate positions that generate requirements — an ever-expanding cycle that has nothing to do with patient care and everything to do with the self-perpetuating dynamics of organizational bureaucracy.
The pattern is particularly visible in the technology sector, where the contrast between the productive work of engineers and the administrative overhead of the supporting apparatus has become a source of dark humor and genuine frustration. Companies that began as lean, product-focused organizations have evolved into bureaucratic hierarchies that would not look out of place in a nineteenth-century government ministry. The ratio of managers to engineers has grown relentlessly. The managers have generated the meetings, processes, and reporting requirements that justify their existence while consuming the time and energy of the engineers who are supposed to be building things. Engineers commonly report spending less than half their time on engineering. The rest goes to administrative overhead — overhead that is, in Graeber's terms, a measure of the bullshit content of their nominally genuine jobs.
AI enters this analysis at a critical juncture. When a single engineer with AI tools can produce what twenty engineers without those tools previously produced, the question of why the twenty existed becomes impossible to avoid. Was the team necessary because the work required it? Or was it necessary because the organization required it — because budgets correlate with headcount, because managerial authority is measured by team size, because the project plan needed complexity to justify the timeline that justified the budget?
The historical pattern does not inspire confidence. The mechanization of agriculture eliminated most agricultural labor. The displaced farm workers were absorbed into manufacturing. When manufacturing was automated, the displaced factory workers were absorbed into services. But the crucial question is one that most economists glide past: Were the service jobs that absorbed displaced workers genuinely productive? Some were — healthcare, education, scientific research. But a staggering proportion consisted of precisely the administrative, managerial, and clerical positions that Graeber's taxonomy identifies as bullshit. The economy did not fail to generate new jobs. It failed to generate new meaningful jobs. Displaced workers were absorbed into bureaucratic apparatuses that gave them salaries while contributing nothing to the world.
If this is the historical pattern — technological displacement leading not to unemployment but to the generation of new bullshit — then AI may break the pattern in one of two ways. The optimistic break: AI eliminates bullshit so efficiently that the institutional structures cannot regenerate it fast enough. The pessimistic break: AI eliminates bullshit jobs faster than new ones can be generated, producing not new bullshit but actual mass displacement without a replacement mechanism for income distribution.
The speed of the AI transition makes the pessimistic scenario more plausible than in any previous technological revolution. Agricultural mechanization displaced workers over decades. Industrial automation displaced them over years. The AI displacement documented in The Orange Pill — a Google engineer's year of work replicated in an hour — operates on a timeline measured in weeks. The institutional mechanisms for absorbing displaced workers — retraining programs, new industry creation, gradual career transition — were designed for a world in which displacement occurs slowly enough for societies to muddle through. The AI displacement is too fast for muddling through.
Moreover, the nature of the work being eliminated is different. Previous transitions primarily displaced physical labor — the manual work of farming and assembly lines. AI displaces cognitive labor — the analytical, communicative, administrative work that constitutes the bulk of modern employment. The bullshit jobs that were generated to absorb displaced physical workers were predominantly cognitive. If AI eliminates cognitive bullshit, what category of work will be generated to replace it? The question has no obvious answer.
But there are also reasons to expect the historical pattern to hold. The political forces that generate bullshit are not technological but institutional. They operate regardless of the specific technology available. As long as societies insist that income must be earned through employment, and as long as employment requires jobs, the system will generate jobs — productive or otherwise. The specific form of the bullshit may change. The duct-taper of 2020 may become the AI auditor of 2027. But the function persists: providing employment that justifies income that maintains social stability.
Graeber drew an important distinction that popular discussions of his work frequently miss. There is work that is bullshit because the job itself serves no purpose, and there is work that contains bullshit — genuine jobs loaded with pointless tasks, unnecessary bureaucratic requirements, and time-consuming overhead that prevents the worker from doing the genuinely valuable part of the job. A teacher whose job is to educate children is not performing bullshit work. A teacher who spends sixty percent of her time on paperwork, compliance documentation, and standardized test preparation rather than actually teaching has a job that is half genuine and half parasitic.
AI has clear potential to eliminate the parasitic bullshit within genuine jobs. The teacher who uses AI to handle paperwork can spend more time teaching. The doctor who uses AI for electronic health records can spend more time with patients. This is the optimistic scenario — AI as the liberator of genuine workers from the barnacles that have accumulated on their practice.
But Graeber's framework demands a caveat. The parasitic bullshit did not accumulate accidentally. It was generated by institutional demand — the requirement for accountability without trust, for documentation that proves compliance rather than ensures quality, for evidence that managers are managing. If AI eliminates the current parasitic bullshit, the institutions that generated it will respond by generating new parasitic bullshit adapted to the AI-enabled workplace. The teacher who saves ten hours a week through AI may find those hours filled by AI ethics training, AI-generated learning analytics that must be reviewed, AI audit processes that document how AI was used in the classroom, and mandatory participation in AI integration committees. The hours saved by technology are recolonized by the institutional logic that filled them in the first place.
This is the fundamental problem: not a technological failure but a political one. The question is not whether AI can eliminate bullshit. It can. The question is whether the political and institutional structures that produce bullshit will permit AI to eliminate it — or whether they will adapt, as they have adapted to every previous technology, to generate new bullshit in the spaces that AI clears.
The answer depends on whether societies are willing to confront the moral axiom that underlies the entire edifice: the insistence that income must be earned through work, regardless of whether the work produces anything of value. If that axiom holds, the system will continue to generate employment — bullshit or otherwise — because the alternative is to acknowledge that millions of people could live perfectly well without working, and that acknowledgment is politically and morally intolerable within the current framework.
Universal basic income represents the most direct challenge to this axiom. If income is distributed unconditionally — if every citizen receives enough to cover basic needs regardless of employment status — then the compulsion to accept bullshit jobs evaporates. Workers can refuse positions they know to be pointless. Employers who want to fill positions must make them genuinely attractive. The labor market stops functioning as a mechanism for distributing income and starts functioning as a mechanism for matching people with work they find meaningful.
Finland's basic income experiment provided suggestive evidence: recipients reported higher well-being, lower stress, and — contrary to the moral objection that people will not work if they do not have to — were slightly more likely to find employment than the control group. People want to work. They want to contribute, create, participate. What they do not want to do is bullshit. The elimination of the compulsion to accept bullshit is not a social harm. It is a precondition for genuine work.
The AI moment makes this question urgent in a way it has never been before. The productive capacity is extraordinary. The surplus is real. The technology exists to free millions from meaningless labor. What does not yet exist is the political will to distribute the surplus in a way that does not require meaningless labor as its vehicle. And until that will is summoned, AI will be deployed within the existing political economy — an economy that needs bullshit, that generates bullshit, and that will use the most powerful anti-bullshit technology ever invented to produce new bullshit at unprecedented scale and sophistication.
Graeber had a gift for analogies that illuminated by provoking, and among his most provocative was his comparison of the modern corporation to a feudal estate. The comparison was not metaphorical. Graeber was an anthropologist, and he deployed the feudal analogy with anthropological precision. He was describing a structural homology — a pattern of social organization in which status is conferred through the number of subordinates one commands, resources flow upward through chains of obligation, and the primary function of each layer in the hierarchy is to justify its own existence by generating work for the layers below it and reports for the layers above it.
A medieval lord did not need forty retainers to manage a household that five could have run. The retainers existed to enact the lord's importance in visible form. The size of the retinue measured not the work to be done but the lord's standing in the social hierarchy. Graeber's argument was that the modern corporation operates on identical logic. A vice president does not need seven direct reports who each manage five people because the work requires thirty-five bodies. The team exists because the vice president's organizational status is measured by its size, because the budget is justified by the headcount, and because the org chart requires the layers that thirty-five people fill.
This is not a claim that all management is bullshit. Graeber was careful to distinguish genuine coordination — the triage nurse, the attending physician, the surgical team whose activities must be synchronized because patients' lives depend on it — from the managerial feudalism that generates coordination requirements in order to justify the existence of coordinators. A hospital emergency room requires real management. A corporate marketing department's seven layers of approval may or may not. Graeber's research suggested that in many cases, the coordination was not a response to complexity but a generator of it. Meetings begot meetings. Reports begot reports. The coordination overhead became the primary activity, and the work it was supposed to facilitate receded into background noise.
The feudal hierarchy of the modern corporation is sustained by information friction — the cost of transmitting knowledge between people, departments, and organizational levels. The product manager exists because the engineer and the business stakeholder speak different languages and need a translator. The project manager exists because the work of multiple engineers must be coordinated, and coordination requires someone tracking who is doing what. The scrum master exists because the coordination process itself requires process management. Each layer is justified by the friction involved in connecting the layers below it.
AI eliminates that friction. Not all of it — friction ascends to higher levels, as The Orange Pill argues — but enough to expose the degree to which the hierarchy was sustained by friction rather than by productive necessity. When the friction disappears, the layers lose their justification. The product manager who translated between business and engineering is no longer needed when AI translates directly. The project manager who coordinated between engineers is no longer needed when each engineer handles a project from conception to deployment. The scrum master who facilitated team processes is no longer needed when there is no team.
The experience described in the Trivandium room — engineers discovering that AI enabled them to work autonomously across boundaries that had previously required specialists and coordinators — is the exposure of managerial feudalism in real time. The specialist silos that had seemed structural turned out to be artifacts of translation cost. When the cost dropped to the price of a conversation, the silos dissolved. The org chart did not change. The actual flow of contribution changed beneath it, like water finding new channels under a frozen surface.
But feudal structures do not dismantle voluntarily. This is Graeber's most important insight for the AI era, and the one most frequently overlooked by technological optimists. The managers whose positions are threatened will not accept obsolescence quietly. They will resist, and their resistance will take forms that are predictable, sophisticated, and difficult to counter — because the managers control the organizational structures within which the resistance plays out.
The first form of resistance is the generation of new complexity. When AI eliminates the need for coordination between engineers, the managers who previously coordinated them do not concede that their function was unnecessary. They generate new coordination requirements: AI governance protocols, AI ethics review processes, AI integration strategies, AI risk assessments. The complexity is real in the sense that it involves genuine activities, but it is feudal in the sense that its primary function is to justify the continued existence of the management layer that oversees it. The substance may be partially genuine — AI does raise questions that deserve institutional attention — but the volume of the complexity will be determined not by genuine need but by the political need to maintain the hierarchy.
The second form of resistance is the redefinition of value. When individual engineers can produce what teams previously produced, the obvious conclusion is that teams are less necessary. The managerial response is to redefine value in terms that only teams can produce. Team cohesion becomes a value. Collaborative culture becomes a value. Cross-functional alignment becomes a value. These concepts are not empty — genuine collaboration produces outcomes individuals cannot — but their elevation to organizational imperatives serves a political function alongside the productive one. It preserves the team structure that justifies the management hierarchy, even when the productive work no longer requires it.
The third form of resistance is the weaponization of risk. AI introduces genuine risks — errors, biases, security vulnerabilities, intellectual property concerns — and managers who feel their positions threatened will emphasize these risks in ways that slow AI adoption and preserve the status quo. The emphasis may be proportionate to the actual risk, or it may be disproportionate, and the difficulty of telling which is which plays in the managers' favor. When a middle manager argues that AI-generated code must be reviewed by a human team before deployment, the argument may reflect genuine concern for quality — or it may reflect the manager's interest in preserving the team that the manager manages. The two motivations are difficult to disentangle, and the difficulty is itself a feature of the system.
Graeber's historical analysis illuminates what happens after the feudal structure is exposed but before it is dismantled. There is a period — sometimes lasting decades — in which the hierarchy persists through institutional inertia and active political maneuvering even after the productive justification for it has evaporated. The feudal structures of medieval Europe persisted for centuries after the economic conditions that originally justified them had changed, precisely because the people who benefited from those structures controlled the political mechanisms that could have dismantled them. The same dynamic applies to managerial feudalism in the age of AI. The managers whose positions are threatened control the organizational processes through which AI is deployed. They have every incentive to deploy AI in ways that preserve their positions rather than in ways that expose those positions as unnecessary.
Graeber also distinguished between what he called "bureaucratic technologies" and "poetic technologies" — a distinction that maps directly onto the AI landscape. Bureaucratic technologies are tools of surveillance, control, and administration. Poetic technologies are tools of imaginative liberation — technologies that expand what human beings can create, explore, and become. The printing press was a poetic technology. The time clock was a bureaucratic one. The internet was conceived as a poetic technology and has been substantially captured by bureaucratic logic. AI could go either way.
When AI is deployed as a bureaucratic technology — to monitor employee productivity, to enforce compliance protocols, to generate the documentation that demonstrates institutional responsibility — it reinforces the feudal hierarchy rather than dismantling it. The manager who tracks AI-generated productivity metrics for each team member is not coordinating productive activity. The manager is administering surveillance. The technology has changed. The power relationship has not.
When AI is deployed as a poetic technology — to enable individual creative production, to collapse the distance between imagination and artifact, to free workers from mechanical drudgery and liberate them for judgment and care — it genuinely threatens the feudal structure, because it removes the informational friction on which the structure depends.
The outcome is not determined by the technology. It is determined by who controls the deployment. And Graeber would note, with the mordant humor that characterized his best work, that the people who control AI deployment in most organizations are precisely the feudal lords whose positions the technology threatens. Asking them to deploy AI in ways that dismantle their own authority is like asking medieval lords to abolish serfdom out of a commitment to economic efficiency.
Some will. The leaders who restructure their organizations around AI's capabilities — who flatten hierarchies, empower individual contributors, and measure value by outcomes rather than headcount — will build organizations that are more productive, more innovative, and more humane. The leaders who deploy AI within the existing feudal structure — adding AI to the toolkit while preserving every layer of the hierarchy — will produce organizations that are more efficient at generating bullshit.
The competitive pressure between these two organizational types will, over time, favor the former. But "over time" can mean decades, and in the interim, millions of workers will live inside feudal structures that AI has made more productive without making more meaningful. They will work harder, produce more, and continue to attend meetings that could have been emails — meetings that are now, thanks to AI, documented, transcribed, summarized, and filed with an efficiency that the medieval scribe could not have imagined and that Graeber would have found hilarious.
The question is whether the dismantlement of managerial feudalism will be driven by competitive pressure alone — a slow, painful, generation-long process — or whether it will be accelerated by deliberate institutional design. Cooperative governance structures, democratic workplaces, outcome-based evaluation, the reduction of management to its genuinely necessary minimum — these are not utopian fantasies. They are organizational experiments that have been conducted, documented, and shown to work. AI makes them more viable than ever by handling the coordination that previously justified hierarchy. The question is whether the political will exists to implement them at scale, or whether the feudal lords will succeed in absorbing the most powerful anti-feudal technology in history into the service of the feudal order.
Graeber's research uncovered a pattern so consistent that he elevated it to something approaching a sociological law: the inverse relationship between the social value of work and its compensation. The people who do the work that matters most — nurses who hold the hands of dying patients, teachers who spend decades coaxing understanding from reluctant minds, elder-care workers who perform the intimate, exhausting labor of keeping fragile human beings alive and dignified — are among the most poorly compensated workers in advanced economies. Meanwhile, the people whose work contributes least to the common good — financial engineers who design derivatives that extract value without creating it, management consultants who produce reports that are never implemented, lobbyists who distort the political process on behalf of the already powerful — are among the most handsomely rewarded.
This is not a market failure that a more efficient market would correct. It is a structural feature of an economic system that measures value in terms of what someone will pay rather than what a community needs. The market does not reward social value. It rewards economic value, which is a fundamentally different thing. Economic value is determined by the willingness and ability of a buyer to pay for a good or service. Social value is determined by the contribution that a good or service makes to human well-being. The two diverge systematically, and nowhere more visibly than in the compensation of care workers versus the compensation of bullshit workers.
Graeber put the point with characteristic bluntness in a 2018 interview: "These things will become ever more important as automation makes caring labor more important — especially because these are the areas we would not want to automate. We wouldn't want a robot talking down drunks or comforting lost children. We need to see the value in the sort of labor we would only really want humans to do."
AI intensifies the divergence in ways that Graeber did not live to observe but predicted with grim accuracy. The democratization of building that The Orange Pill celebrates — the capacity to give individuals the productive power of entire teams — is a democratization of a specific kind of work: cognitive, technical, creative work that was already relatively well compensated. The engineer whose productivity is multiplied twentyfold was not a minimum-wage worker before the multiplication. The designer who can now ship features end-to-end was already earning a professional salary.
The work that AI cannot democratize is the work that matters most in Graeber's analysis. The nurse who bathes an elderly patient and notices the early signs of a pressure ulcer. The teacher who recognizes that a student's disruptive behavior is anxiety rather than defiance and adjusts her approach accordingly. The social worker who builds trust with a family in crisis over months of patient, unglamorous interaction. This work is embodied, relational, emotionally complex, and stubbornly resistant to technological substitution.
AI can assist care workers in specific and valuable ways. It can handle paperwork that consumes a nurse's time. It can generate individualized lesson plans so a teacher can focus on interaction. It can analyze data patterns that help a social worker identify families at risk. These contributions are genuine and should not be dismissed. But they do not touch the core of care work — the human presence, the embodied attention, the emotional labor of being with another person in their vulnerability. AI cannot bathe a patient. It cannot hold a dying person's hand. It cannot sit with a frightened child until the fear subsides.
The economic implication is stark. AI amplifies the productivity of workers whose work is already valued by the market while leaving largely untouched the productivity of workers whose work is undervalued. The engineer who uses AI to multiply output by twenty can reasonably expect rising market value. The nurse who uses AI to halve her paperwork burden cannot reasonably expect a proportionate increase in compensation, because the market does not value nursing in proportion to its social importance, and AI does not change the market's valuation.
The result is a widening of the gap between economic returns to cognitive-technical work and economic returns to care work. This widening is not a temporary distortion that market forces will correct. It is the structural consequence of an economy that rewards scalability above all other virtues. Care work is labor-intensive and resistant to the productivity gains that drive profitability in other sectors. A nurse caring for a patient in 2026 cannot care for that patient significantly faster than a nurse in 1976. The care requires the time it requires. There is no efficiency gain to capture, no margin to expand, no scalability to achieve.
Economists document a "care penalty" — the measurable reduction in wages that workers experience when they move into care occupations, controlling for education, experience, and skill level. A woman with a college degree who enters nursing earns significantly less over her career than an identically credentialed woman who enters corporate management. The penalty is not explained by lower skill requirements. Nursing demands extensive education and grueling practical training. It is not explained by lower productivity. Nurses contribute directly and measurably to patient outcomes. It is explained by the cultural devaluation of care itself — the assumption that care work is a natural extension of domestic labor, motivated by love or duty rather than professional commitment, and therefore undeserving of the compensation accorded to work motivated by ambition or profit.
AI does nothing to correct this devaluation and may worsen it. The AI-enabled economy rewards the skills that AI amplifies: analytical reasoning, creative design, strategic planning, system architecture. These are the skills that produce scalable, market-valued output. The skills that AI does not amplify — emotional intelligence, physical gentleness, relational patience, the capacity to be present with another person in their suffering — are the skills that define care. As the economy rewards AI-amplified skills more generously and care skills no more generously than before, the relative position of care workers deteriorates further.
The implications for who enters care work are already visible. Nursing shortages have reached crisis proportions across advanced economies — not because people have stopped caring about caring, but because the opportunity cost of a career in care has risen relative to every alternative that AI amplifies. A young person choosing between software engineering and nursing faces a widening gap in compensation, prestige, and working conditions. AI widens that gap further. Every year the gap grows, fewer people choose care, and the societies that can build anything find themselves unable to care for their own.
The gender dimension compounds the problem. Care work has historically been performed disproportionately by women, both in paid employment and in the unpaid domestic sphere. The devaluation of care work is, in significant part, a manifestation of the devaluation of women's work — a pattern that feminist economists have documented for generations. AI threatens to deepen this pattern by widening the compensation gap between care work, performed disproportionately by women, and AI-augmented technical work, performed disproportionately by men. If the AI economy fails to address the care penalty, it will not merely perpetuate but intensify the gender architecture of economic inequality.
The Orange Pill argues that "caring is what makes us human" — a moral claim of the highest importance. Graeber would have endorsed it passionately. He spent the final years of his career arguing that a society's treatment of its care workers is the truest measure of its values. Not GDP, not technological sophistication, not military power. The willingness to recognize, compensate, and honor the people who do the unglamorous, indispensable work of caring for other human beings.
But Graeber would have insisted, with equal passion, that the moral claim must be accompanied by an economic and political one. It is not enough to say caring is what makes us human. That is sentiment. What is needed is institutional machinery — wages commensurate with social contribution, working conditions that allow care workers to provide genuine care rather than processing patients through assembly-line efficiency metrics, public investment in care infrastructure that treats care as essential social provision rather than as a market commodity to be optimized.
The AI moment makes these demands more urgent, not because AI threatens care workers directly — it does not, not in the near term — but because AI threatens to render the economic position of care work permanently subordinate. If AI doubles the productivity of engineers and their compensation rises accordingly while leaving the productivity and compensation of nurses unchanged, the relative economic position of nurses deteriorates even though their social contribution remains as essential as ever. Over time, the widening gap makes care work economically irrational for anyone who has alternatives. The result is a society with extraordinary technological capability and deteriorating care for its most vulnerable members.
The recognition that care cannot be automated is both its limitation and its dignity. In a world where everything else can be scaled, accelerated, and optimized, the irreducibility of human care represents something essential. The nurse who sits with a dying patient for an hour — not because there is a billable procedure to perform but because the patient should not die alone — is performing an act that no algorithm can replicate, not because the algorithm lacks sophistication but because the act is constituted by human presence. Remove the human, and the act ceases to exist. There is nothing left to optimize.
The societies that pass the test of the AI era will be those that use the surplus generated by AI-driven productivity to invest in care — to raise wages, improve conditions, reduce administrative burden, and accord care workers the recognition their work deserves. The societies that fail will use the surplus to further enrich the already enriched, to generate new forms of bullshit employment for displaced cognitive workers, and to allow the care economy to deteriorate while celebrating the technological marvels that the remaining economy produces. The technology creates the surplus. The politics determines where it flows. And the question of whether it flows toward care or away from it is the most consequential political question of the AI era — a question that no algorithm, however powerful, can answer on our behalf.
Among the most disturbing findings in Graeber's research was not an economic pattern but a psychological one — a form of suffering so specific and so pervasive that he reached for language usually reserved for physical harm. He called it spiritual violence: the damage inflicted on a human being who is forced to pretend, day after day, that meaningless activity is meaningful.
The term was not hyperbole. Graeber grounded it in testimony from hundreds of workers across industries, cultures, and continents, and the consistency of what they described was itself a finding. These were not people complaining about low wages or long hours or difficult bosses — the ordinary grievances of working life that most adults learn to endure. They were describing something more specific and more corrosive: the experience of knowing, with certainty, that their work contributed nothing to anyone, combined with the social requirement to perform as though it did. The performance was the violence. Not the boredom. Not the tedium. The pretense.
A strategic vision coordinator at a major corporation described spending entire days producing PowerPoint presentations that were never opened, attending meetings in which nothing was decided, and generating memos that were filed without being read. The distress arose not from the tedium of the tasks — tedium can be endured — but from the knowledge that they served no purpose, combined with the impossibility of saying so. The admission would have been professionally fatal and socially humiliating, because the moral framework governing work equates employment with usefulness. To say "I spent eight hours doing nothing useful" is to confess a kind of existential fraud — even when the fraud was designed not by the worker but by the institution.
A Spanish financial consultant described the experience in explicitly spiritual terms. His soul, he said, was being consumed — not by overwork or exploitation in the traditional sense but by the systematic meaninglessness of his activities. He was not tired. He was not stressed. He was hollowed out. Graeber noted the parallel to descriptions of acedia in medieval monastic literature — the torpor of the soul, the inability to pray, the sense that the routines of daily life have been drained of all significance. The monks who suffered acedia were not lazy. They were trapped in rituals that had lost connection to the purpose that originally justified them. The bullshit worker is in the same position: trapped in routines of employment that have lost connection to any productive purpose that employment is supposed to serve.
The suffering has a specific mechanism that distinguishes it from ordinary work dissatisfaction. Human beings are meaning-making creatures. The capacity to find or create meaning in one's activities is not an optional enhancement of the human experience. It is a fundamental psychological need — as essential to mental health as food is to physical health. When that need is systematically denied, when a person must spend the majority of waking hours performing activities known to be meaningless, the result is not mere unhappiness but a distinctive form of psychological erosion that existing diagnostic categories capture poorly. It is not depression, though it produces depressive symptoms. It is not anxiety, though anxious symptoms follow. It is closer to what existential psychologists call anomie — a pervasive disconnection from meaning that seeps into every corner of lived experience.
The bullshit worker goes home and cannot explain to family or friends what was accomplished, because nothing was accomplished. The construction worker who comes home exhausted knows that a building stands because of the day's labor. The nurse who comes home drained knows that patients received care. The bullshit worker knows nothing of the sort. The day was consumed. The hours were filled. The world was not changed by a single atom.
The cruelest dimension of this violence is what Graeber identified as its self-reinforcing paradox. The same society that insists on the moral necessity of employment — that treats unemployment as personal failure and idleness as sin — generates millions of jobs that are, by the workers' own assessment, devoid of social value. The worker is told that employment is virtuous, that the employed person is contributing to society, while simultaneously experiencing the daily reality that the work contributes nothing. This forces a permanent state of cognitive dissonance. Either the moral narrative is wrong and employment is not inherently virtuous, or the worker's perception is wrong and the work somehow contributes something invisible. Neither option is comfortable. The oscillation between them produces the specific anguish that Graeber documented — not the sharp pain of a wound but the dull erosion of a self slowly separated from its own capacity for meaning.
The opposite of this suffering — the state in which work produces not anguish but genuine satisfaction — is what Mihaly Csikszentmihalyi described as flow: total absorption in an activity that is intrinsically rewarding, where challenge matches skill and the boundaries between self and activity dissolve. The Orange Pill describes engineers entering flow states as they built with AI tools — losing track of time, forgetting to eat, experiencing creative production at a pace they had never achieved. This is the antithesis of spiritual violence. Work so meaningful, so engaging, so connected to the worker's sense of purpose that it generates not suffering but joy.
The question Graeber's analysis poses for the AI era is stark: will artificial intelligence move workers from spiritual violence toward flow, or from whatever imperfect equilibrium they currently inhabit toward new forms of administered meaninglessness?
The optimistic case is that AI eliminates the meaningless components of work and frees humans for the genuinely meaningful ones — the judgment, the creativity, the care, the architectural thinking that constitutes the irreducibly human contribution. In this scenario, AI liberates workers from the tyranny of pointless tasks and restores them to the experience of genuine work.
The pessimistic case, which Graeber's research supports with uncomfortable weight, is that the liberation does not occur because the structures producing meaningless work are adaptive. The duct-taper who no longer reconciles data between systems is assigned to manage the AI system that reconciles the data, to audit its outputs, to generate reports on its performance, and to sit on committees that govern its deployment. The box-ticker who no longer fills compliance forms is assigned to verify that the AI fills them correctly, to document the verification process, and to submit reports on the documentation. The hours saved by technology are recolonized by new forms of pointless activity. The spiritual violence continues under new management.
There is also a third possibility that neither optimists nor pessimists fully address. AI may eliminate bullshit jobs and replace them with nothing — not with new bullshit, and not with meaningful work, but with unemployment, underemployment, or the precarious gig economy. In this scenario, the worker who previously suffered the spiritual violence of meaningless work now suffers the spiritual violence of meaninglessness itself — the absence of any work, meaningful or otherwise, around which to organize a sense of purpose.
This third possibility raises questions that transcend economics. Studies of unemployment consistently find that the loss of work structure is as damaging as the loss of income, and sometimes more so. The alarm clock, the commute, the colleagues, the end of the day — these mundane elements of working life constitute a framework of meaning that, however shallow, holds psychological weight. Unemployed workers report not only financial stress but pervasive purposelessness, loss of identity, and social isolation that the restoration of income alone does not repair.
Graeber did not romanticize bullshit work. The spiritual violence is real. But the alternative to meaningless work is not no work. It is meaningful work. And the distance between eliminating bullshit and creating meaning is the distance between demolition and architecture — between tearing down a building and constructing one worth inhabiting. AI is spectacularly good at demolition. Whether it contributes to architecture depends entirely on the institutional context of its deployment.
The Berkeley researchers who studied AI's impact on a working organization found results that map onto Graeber's analysis with disquieting precision. AI intensified work. It expanded scope. It colonized pauses. Workers were more productive — and more exhausted, less empathetic, more prone to the flat affect of a nervous system that has been running too hot for too long. The spiritual violence had not been eliminated. It had been transformed — from the violence of meaninglessness into the violence of relentless, boundary-less productivity that provides no space for the worker to determine whether the work is worth doing.
This transformation illuminates something important about the relationship between meaninglessness and intensity. Graeber documented the suffering of too little genuine work. The Berkeley data documents the suffering of too much. The common element is the absence of worker control — the inability of the worker to determine the pace, the scope, and the purpose of the work. In the bullshit job, the worker cannot choose to do less because the employment contract demands presence regardless of whether there is anything to do. In the AI-augmented job, the worker cannot choose to do less because the tool makes more always possible and the internalized imperative converts possibility into obligation.
The resolution requires institutional design that protects what Graeber would have called the worker's sovereignty over time — the capacity to decide not only what to work on but when to stop, when to reflect, when to allow the mind to wander into the unstructured space where genuine thinking develops. This is not laziness. It is the ecological requirement of a mind that generates meaning through cycles of engagement and withdrawal, intensity and rest, production and contemplation.
The spiritual violence of meaningless work will not be cured by AI. It will be cured, if it is cured at all, by institutions that grant workers the authority to refuse meaningless activity — whether that activity takes the form of bullshit jobs or the relentless task-filling that AI-augmented productivity makes possible. The technology provides the productive surplus that could support such institutions. The politics determines whether it will.
Graeber died in September 2020, before the winter of 2025, before Claude Code, before the orange pill moment. He did not live to see the technology that could serve as the most rigorous test of his thesis. But the thesis does not require his presence to be tested. The test is happening now, in every organization that deploys AI and confronts the question of what to do with the humans whose pretense of purpose the technology has stripped away. The answer to that question will determine whether the spiritual violence documented in Graeber's research was a diagnosis — a condition that can be treated — or a prophecy of the permanent condition of work in a system that cannot distinguish between employment and meaning.
Of all five species in Graeber's taxonomy, the duct-taper is the one most directly in AI's crosshairs — and the one whose fate most clearly illuminates the political choices that the technology forces into the open.
Duct-tapers apply temporary fixes to problems that exist because of institutional dysfunction. They are the human bridges between incompatible software systems, the manual translators between departments that speak different institutional languages, the patient processors of paperwork that exists only because some previous administrator created it. Their work is often genuinely skilled, frequently exhausting, and universally recognized by those who perform it as addressing symptoms rather than causes. The duct-taper's job is simultaneously real and absurd — real because the patching requires actual competence, absurd because the patching would be unnecessary if anyone had fixed the underlying problem.
AI can automate duct-taping in two fundamentally different ways, and the difference between them determines whether the outcome is liberation or merely more sophisticated dysfunction.
The first way is to automate the patching itself — to use AI to perform the manual data transfer, the translation between systems, the reconciliation of incompatible formats, more quickly and consistently than any human. This is the straightforward application: same task, different performer. The AI patches the system the way the human patched it, only faster. Most organizations reach for this application first because it requires the least institutional change. The dysfunction persists. The patch persists. Only the patcher has changed.
The second way is more radical. Use AI to fix the underlying dysfunction — to integrate the systems that should never have been separate, to create the common language that makes translation unnecessary, to eliminate the paperwork that should never have existed. This approach does not merely automate the duct-taper's job. It eliminates the reason for the job's existence. It addresses causes rather than symptoms.
The distinction matters because the two approaches produce entirely different institutional outcomes. Automating the patch preserves the organizational structure that created the dysfunction. The systems remain separate. The departments remain siloed. The paperwork remains — it is simply processed by machine rather than by hand. The duct-taper is displaced, but the duct-tape remains. Fixing the dysfunction dismantles part of the organizational structure itself. Systems are integrated. Silos are dissolved. Paperwork is eliminated rather than accelerated. The result is not merely a more efficient organization but a differently structured one — an organization that no longer needs certain categories of coordination because the things being coordinated have been unified.
The Orange Pill describes instances of both approaches, with emphasis on the second. The engineer who builds a complete feature in two days using AI is not patching an existing system. The engineer is creating something that renders old patches unnecessary. The Trivandium experience — engineers working across boundaries that had previously required specialists and coordinators — represents the elimination of the institutional friction that generated the need for duct-taping. When one person can handle frontend, backend, database, and deployment, the seams between specializations that duct-tapers previously patched cease to exist.
Graeber's framework, however, introduces a complication that the technological optimist must confront. Duct-taping serves a social function that extends beyond its productive function. Duct-taping creates employment. In an economy that distributes income through employment, the jobs created by institutional dysfunction are not merely costs — they are mechanisms for giving people salaries. When AI eliminates the dysfunction, it eliminates the employment the dysfunction created. The person who spent years manually transferring data must either find genuinely productive work or become a casualty of efficiency.
The ascending friction thesis predicts the first outcome: the duct-taper ascends. Freed from manual patching, the former duct-taper becomes the systems architect — the person who designs integration rather than performing reconciliation. The intimate knowledge developed through years of patching — where the seams are, why they leak, what breaks when you fix one thing without fixing another — becomes the foundation for higher-order work. The duct-taper's knowledge was always architectural knowledge viewed from below. AI lifts the vantage point.
Graeber's analysis predicts that this outcome, while possible for some, will not be the norm. The reason is structural. The skills of duct-taping are skills of accommodation, not transformation. The duct-taper knows how to work around problems, not how to redesign the systems that create them. The mechanic who has spent twenty years keeping a decrepit machine running is not automatically equipped to design a new one, even though the mechanic's knowledge of the old machine's failures is intimate and invaluable. The cognitive distance between patching and architecture is real. It can be crossed, but crossing it requires support — retraining, mentorship, institutional permission to operate at a new level — that most organizations do not provide.
Moreover, the transition requires a different relationship to organizational power. Duct-tapers occupy a peculiar position in institutional hierarchies: essential to the organization's functioning but invisible to its leadership. They possess deep knowledge of how things actually work — as opposed to how they are supposed to work — but they lack the authority to change the official version. The transition from patching to architecture requires visibility, authority, and political capital that duct-tapers typically do not possess. The people who have authority to redesign systems are not the people who have been patching them. The knowledge flows in one direction. The power does not flow at all.
The speed of the transition compounds every difficulty. Agricultural mechanization displaced workers over generations. Industrial automation over decades. AI displaces duct-tapers in months. The engineer's year of work replicated in an hour is an extreme case, but the pattern holds broadly: processes that required teams of duct-tapers for years are being automated in weeks. The social safety net was designed for gradual transitions. The career guidance infrastructure assumes that workers have time to retrain, relocate, and reorient. AI's timeline permits none of this.
The duct-taper's institutional knowledge deserves particular attention because it represents an asset that organizations routinely undervalue until it is gone. The worker who has spent years patching a dysfunctional system has developed understanding of the system's hidden dependencies, undocumented workarounds, and failure modes that official documentation does not capture — because official documentation describes the system as it is supposed to work, not as it actually works. When the duct-taper is automated out of existence, this knowledge departs with the person. The organization may discover only later, when the AI encounters a failure mode the duct-taper would have anticipated, that the knowledge was irreplaceable. The patch was visible. The understanding was not.
AI introduces one genuinely novel possibility into this dynamic. It may give the duct-taper the tools to perform the redesign without institutional permission. If a duct-taper can use AI to build the integrated system that renders manual data transfer unnecessary, the duct-taper does not need authorization from the organizational hierarchy. The duct-taper can build the solution and present it as accomplished fact. This is productive anarchy — the liberation of individual capability from institutional constraint that The Orange Pill celebrates.
But productive anarchy has limits. The duct-taper who builds an integrated system without authorization faces the same obstacles that innovators within hierarchies have always faced: resistance from managers who did not approve the innovation, skepticism from colleagues invested in the status quo, and the organizational inertia that makes continuing to patch easier than adopting a new system — even when the new system is demonstrably superior. Innovation from below is technically possible and institutionally improbable. The feudal lords do not reward vassals who redesign the estate without permission.
When duct-tapers are displaced, the human cost falls disproportionately on workers least equipped to absorb it. Duct-tapers are typically mid-level — not senior enough to have accumulated significant savings or professional networks, not junior enough to have the career flexibility to pivot easily. They are often middle-aged, with mortgages, families, and community ties that constrain geographic mobility. Their skills are organization-specific rather than industry-portable. The duct-taper who spent fifteen years patching the systems of a healthcare company possesses knowledge that is specific to those systems and those processes. When AI renders the patching unnecessary, the knowledge becomes worthless in the labor market, and the worker must acquire entirely new knowledge — on a timeline that the speed of AI displacement does not accommodate.
The duct-taper embodies the ambiguity of the AI moment more completely than any other figure in Graeber's taxonomy. The work is both genuinely skilled and genuinely absurd. The obsolescence is both a liberation and a threat. The future is open to unprecedented possibility and vulnerable to predictable neglect. What happens to the duct-taper is, in miniature, what happens to the broader working population as AI transforms the landscape of productive activity. Whether that fate is liberation or abandonment depends not on what the technology can do — it can do almost anything — but on whether institutions are redesigned to support the humans whose patch-shaped careers the technology has rendered obsolete.
There is a particular species of institutional activity that Graeber described with analytical precision and barely contained fury: the production of documentation, reports, metrics, and processes that exist not to serve any genuine need but to demonstrate that requirements have been satisfied. The box-ticker does not produce outcomes. The box-ticker produces evidence of process. The distinction is the hinge on which the entire relationship between AI and institutional governance turns.
Box-ticking proliferates across every sector of advanced economies with a consistency that suggests not accident but structural necessity. In universities, faculty spend increasing proportions of their time producing documents that demonstrate teaching and research are being conducted according to approved methodologies — time that would otherwise be spent improving the teaching and conducting the research. In hospitals, clinicians spend it documenting that patient care is delivered according to approved protocols — time that would otherwise be spent on the care itself. In corporations, managers spend it producing reports that demonstrate productive work is being accomplished — time that would otherwise be spent accomplishing it.
The irony that Graeber relished: box-ticking actively undermines the goals it purports to serve. The quality assurance process that requires faculty to document their methods consumes the time they would have spent improving those methods. The accreditation process that requires compliance demonstration consumes the time that would have been spent on care. Box-ticking is not merely wasteful. It is counterproductive — a parasite that weakens the host it claims to protect.
AI appears to offer a clean solution. If producing compliance documentation is the problem, automate the production. AI can generate reports, fill forms, track metrics, produce dashboards, and satisfy regulatory requirements with efficiency no human can match. The hours clinicians spend on documentation return to patient care. The hours teachers spend on quality assurance paperwork return to teaching.
But Graeber's analysis reveals a dynamic that the clean solution fails to address. Box-ticking is not a fixed quantity of work that can be automated once and permanently eliminated. It is the output of a generative process — an institutional logic that produces compliance requirements in response to institutional anxieties and that responds to the automation of existing requirements by generating new ones. When that process encounters AI in the workplace, it generates new compliance requirements specifically tailored to AI.
The evidence is already abundant. AI governance frameworks currently under development in corporations, governments, and international organizations represent an entirely new category of compliance that did not exist before AI. AI ethics review boards. Algorithmic impact assessments. AI transparency reports. AI bias audits. AI safety certifications. AI deployment approval processes. Each generates its own bureaucratic apparatus — consultancies, certification bodies, legal advisors, internal compliance teams — that bears every hallmark of Graeber's box-ticking taxonomy.
Some of this governance is genuinely necessary. High-risk AI applications in healthcare, law enforcement, and critical infrastructure require oversight that market forces alone will not provide. But the history of institutional governance — which Graeber documented with darkly comic thoroughness — suggests that the volume of governance activity will far exceed genuine need. The reason is that governance serves a political function independent of its productive function: the demonstration of institutional responsibility. The organization must be seen to be governing AI responsibly. The documentation that demonstrates responsible governance is valued independently of whether responsible governance is actually occurring.
Stuart Mills and David Spencer's concept of "efficient inefficiency" captures the dynamic precisely. AI performing compliance tasks faster is not the elimination of compliance overhead. It is compliance overhead at scale. The AI system that automates a hospital's clinical documentation triggers a cascade of new requirements. The AI-generated documentation must be reviewed by a human clinician for accuracy — creating a new category of work that did not previously exist. The review process must itself be documented — creating a meta-documentation requirement. The organization must demonstrate to regulators that the AI system was validated for clinical use — creating a certification requirement. The certification must be periodically renewed, each renewal requiring performance review, which generates its own data collection and reporting requirements. The regulatory framework governing AI in healthcare is itself evolving, requiring a dedicated team to track changes and update compliance processes accordingly.
The net effect may be that automating clinical documentation generates a new bureaucratic apparatus consuming as many person-hours as the old one. The clinicians spend fewer hours writing notes and more hours reviewing AI-generated notes, approving them, and documenting their review and approval. The hours saved by technology are consumed by the governance of technology.
This recursive generation of compliance is not an AI failure. It is an institutional feature. The institutions that generate box-ticking are adaptive organisms. They respond to technological change not by eliminating pointless activity but by evolving it. The box-ticking changes form while its function remains constant: to distribute employment, to justify hierarchies, to produce the appearance of oversight where the substance of oversight may or may not exist.
The parallel with financial regulation is instructive. The Sarbanes-Oxley Act, enacted in 2002 to improve corporate financial disclosure, produced genuinely more accurate reporting in some respects. It also generated enormous compliance costs, a significant portion consisting of box-ticking activities that improved the appearance of accountability without improving accountability itself. The risk is that AI regulation follows the same trajectory: genuine improvements accompanied by enormous expansion of compliance bureaucracy that generates employment while providing the appearance of oversight.
The European Union's AI Act, the most comprehensive regulatory framework yet enacted, exemplifies both necessity and risk. Risk categories for AI systems, conformity assessments for high-risk applications, transparency obligations, enforcement mechanisms with substantial penalties — much of this addresses genuine concerns. But implementation is already generating a compliance industry that would be instantly recognizable to Graeber: consultancies specializing in AI Act compliance, certification bodies issuing conformity assessments, legal firms advising on regulatory interpretation, internal compliance teams producing documentation to demonstrate conformity. The productive work of building responsible AI systems is being accompanied — and in some organizations, overwhelmed — by the box-ticking work of demonstrating compliance.
The deeper issue is cultural. Box-ticking persists not only because of institutional incentives but because of a pervasive societal anxiety about trust. The underlying logic is the logic of distrust: the assumption that people will not do the right thing unless required to document that they have done it, and that documentation can substitute for trust. AI could, in principle, address this anxiety directly — by monitoring outcomes rather than processes. Track whether patients recover rather than whether clinicians filled the right forms. Measure whether students learn rather than whether teachers followed approved curricula. Assess whether software works rather than whether developers followed approved methodology.
Outcome-based governance would make process-based box-ticking unnecessary by replacing "Did you follow the process?" with "Did you achieve the result?" But the transition requires a cultural shift from distrust to accountability — from the assumption that people must be monitored to the recognition that they can be evaluated. The distinction is political and moral. No technology resolves it unilaterally.
Graeber would have observed, with the dark humor that characterized his treatment of these phenomena, that the box-ticking apparatus is one of the few institutional structures that AI is genuinely incapable of threatening — because the apparatus feeds on the very technology that is supposed to starve it. Every AI capability generates new questions about governance, accountability, and risk. Every question generates new documentation requirements. Every requirement generates new positions for the people who produce, review, and file the documentation. The box-ticker is not merely surviving the AI revolution. The box-ticker is thriving in it — multiplying at every level of institutional complexity, generating new forms of administered meaninglessness with a creativity that the AI systems themselves might envy.
The governance frameworks being designed now will establish institutional patterns that persist for decades. If those frameworks embed genuine engagement — sunset clauses that force periodic reassessment, outcome-based criteria that measure effectiveness rather than documentation volume, institutional cultures that reward judgment over compliance — then the box-ticking apparatus can be contained. If they embed procedural compliance as a substitute for substantive governance, the apparatus will expand without limit, consuming the time and energy that AI was supposed to liberate, and producing the appearance of responsibility without its substance.
The window for the design choice is narrow. Governance frameworks calcify quickly. Once established, the bureaucratic interests that form around them — the consultancies, the compliance teams, the certification bodies — become constituencies that resist reform. The box-ticker's revenge is not merely the generation of new compliance requirements. It is the creation of a permanent institutional ecosystem that depends on compliance for its survival and that will resist any attempt to replace compliance with genuine accountability.
The AI era's most quietly devastating outcome may not be mass unemployment or existential risk or any of the catastrophes that dominate public discourse. It may be the generation of an entirely new stratum of pointless institutional activity — a compliance layer so vast, so elaborate, and so deeply embedded in organizational life that it consumes the productivity gains that AI provides, leaving the human experience of work no more meaningful than it was before the technology arrived. The tools change. The ticking continues.
Graeber observed a phenomenon that management consultants noticed but consistently misinterpreted: workers in bullshit jobs spending significant portions of their time on personal activities. Browsing the internet, writing personal emails, reading novels concealed within spreadsheets, planning vacations, conducting side businesses — the repertoire of workplace time theft is extensive, creative, and remarkably consistent across industries and cultures. Management literature treats this as a discipline problem. Employees who steal time are lazy, unmotivated, or lacking professional ethics. The prescribed solution is some combination of surveillance, incentive alignment, and motivational intervention.
Graeber inverted the analysis. Time theft is not a symptom of worker laziness. It is a rational response to institutional absurdity. A worker in a bullshit job — a job known to be pointless — is required to maintain the appearance of productive activity for eight hours despite the actual work requiring perhaps two or three. The remaining hours must be filled. Since the worker cannot leave or openly acknowledge that the work is done, personal activities disguised as work fill the gap. What management calls theft, Graeber called recovery — the worker reclaiming hours that the institution stole first by demanding presence without productive justification.
The moral framework governing time theft is deeply confused, and Graeber took visible pleasure in exposing the confusion. An employer who demands eight hours of presence from a worker whose job requires three hours of effort is, in a meaningful sense, appropriating five hours of the worker's life each day. The worker who uses those five hours for personal purposes is recovering what was taken. But the moral discourse inverts this entirely: the worker who uses work time for personal purposes is the thief, while the employer who demands unproductive hours is exercising a legitimate contractual right. The powerful define the moral vocabulary. The vocabulary consistently favors the powerful.
AI reconfigures this dynamic with a symmetry that is both illuminating and disturbing. Where time theft involves workers reclaiming hours from the employer, what the Berkeley researchers documented as "task seepage" involves the employer reclaiming hours from the worker.
In the pre-AI workplace, bullshit jobs created a surplus of empty hours that workers filled with personal activity. The time was not genuinely productive, but it was partially under the worker's control. The worker could read, think, pursue interests — could experience a form of autonomy within the employment relationship's constraints. Graeber described it as the freedom of the prisoner who has found a way to smuggle novels into the cell. Imperfect freedom, but freedom.
AI eliminates this imperfect freedom by eliminating the empty hours that made it possible. When AI automates the bullshit tasks that consumed the day, the worker does not gain five hours of leisure. The worker gains five hours of availability — and organizational logic, which abhors unoccupied workers the way nature abhors a vacuum, fills those hours with new demands. The Berkeley researchers documented the process in real time: AI freed minutes and hours that were immediately colonized by additional tasks. Workers prompted the AI during lunch breaks, squeezed requests into gaps of a minute or two, filled every interval that had previously served, invisibly, as cognitive rest.
The result is paradoxical: an improvement in productive terms and a deterioration in human ones. The worker now does real work for eight hours instead of bullshit for three and disguised leisure for five. Output rises. The work is more meaningful. The employer captures more value. But the worker has lost the breathing room — the pockets of unstructured time that made the workday psychologically survivable. Greater productivity. Greater intensity. Greater exhaustion. Greater difficulty establishing boundaries between work and the rest of life.
The distribution of productivity gains follows a historical pattern that Graeber documented across centuries. When technology increases output, the gains must go somewhere — to workers in shorter hours or higher wages, to employers in higher output at constant cost, to consumers in lower prices, or to some combination. The history of post-industrial economies is overwhelmingly a history of gains captured by capital rather than shared with labor. Despite dramatic increases in per-worker productivity over fifty years, working hours in most advanced economies have not meaningfully decreased. The gains have flowed to profits and executive compensation rather than to shorter workweeks.
AI follows the same distributional logic. The engineer whose productivity is multiplied twentyfold does not, as a consequence, work one-twentieth of the hours. The engineer works the same hours — or more — producing twenty times the output. The gain is captured entirely by the organization. The worker's contribution to the bargain is an intensification of effort that leaves no room for the time theft that was the bullshit worker's last form of resistance.
The loss of unstructured time carries consequences that the productivity literature tends to ignore but that neuroscience and creativity research illuminate with uncomfortable clarity. Research on insight — the psychology of the "aha moment" — suggests that creative breakthroughs frequently occur during apparently unproductive activity: during walks, showers, daydreams, and the aimless mental wandering that psychologists associate with the default mode network. The brain in its default mode is not idle. It is consolidating memories, making associative connections, testing hypothetical scenarios — performing the cognitive background processing that surfaces as insight when the conscious mind stops trying to force solutions.
The bullshit worker who spent hours browsing the internet was, inadvertently, giving the brain the unstructured time that the default mode network converts into novel connections. When every hour of the workday is consumed by goal-directed activity, the opportunity for this incubation is eliminated. The most productive workday may not be the most creative workday. The elimination of apparently wasted time may carry costs that become visible only when the creative capacity of the workforce begins to decline — when the problems that require genuine insight go unsolved because no one has the unstructured cognitive space to solve them.
Graeber would have connected this to a broader philosophical tradition. Josef Pieper argued in Leisure, the Basis of Culture that genuine intellectual and creative work requires a foundation of unproductive contemplation — time that is not directed toward any goal, that exists for its own sake, that allows the mind to encounter itself without the mediation of task and deadline. The Greeks called it scholē — the root of the English word "school" — and considered it the precondition for all higher thought. The bullshit worker's covert leisure, parasitic and concealed as it was, preserved a shadow of this tradition within the modern workplace. AI threatens to eliminate even the shadow.
The institutional challenge is to design workplaces that capture AI's productivity benefits while protecting the unstructured time that human cognition requires. This means recognizing that the time AI frees is not automatically available for reallocation to more tasks. Some of it must be protected as genuinely free — time the worker controls, time explicitly shielded from organizational demands, time that exists for the worker's cognitive and psychological benefit rather than the employer's productive benefit.
This recognition runs counter to every organizational instinct. Organizations seek to maximize utilization of employees' time. Every unproductive hour represents waste. The concept of protected free time within the workday strikes most managers as indulgent or absurd. But the alternative — the total colonization of the workday by productive demands — is not merely inhumane. It is, Graeber's analysis suggests, counterproductive in the long run. Workers with no breathing room burn out faster, produce lower-quality work over time, and lose the capacity for the creative thinking and independent judgment that organizations claim to value most.
The labor movements of the nineteenth and twentieth centuries fought for and won institutional protections of non-work time: the eight-hour day, the weekend, paid vacation. These victories were not achieved by technology. They were achieved by political organization. The same political forces could, in principle, win further reductions — could use AI-driven productivity gains as justification for a shorter workweek. If AI enables a worker to produce in twenty hours what previously required forty, the gain could be shared: the employer gets equivalent output, the worker gets twenty hours of genuine leisure. Not covert leisure smuggled into the workday. Not "personal development" or "recovery" rebranded as productivity optimization. Actual free time — time that belongs to the person living it.
Whether this happens depends on power. The history of productivity gains suggests that employers will capture the surplus unless workers organize to claim their share. AI does not change this dynamic. It intensifies it — creating larger surpluses to fight over at faster speeds, in institutional contexts where the balance of power has shifted decisively toward capital over the past four decades.
The reclamation of time is ultimately a question about what the workday is for. If it is for producing maximum output, then AI's elimination of empty hours is an unqualified good, and every freed minute should be filled with productive activity. If it is for something more complex — for the exercise of human capability in a way that includes but is not exhausted by production — then the reclamation must be genuine. The worker must gain actual time. Not a reallocation from one kind of work to another. Actual time, belonging to the worker, protected by institutional design and political will.
The bullshit worker's time theft was a symptom of a broken arrangement — hours demanded without productive purpose, reclaimed by stealth because they could not be reclaimed by right. AI has the power to fix the arrangement. Whether it does depends on whether societies treat the productivity surplus as an opportunity to liberate time or merely as an opportunity to fill it differently. Graeber's life work suggests which outcome the institutions will choose if left to their own devices. The question is whether anyone will choose differently.
Graeber spent the final decade of his career documenting pathology. The bullshit jobs, the managerial feudalism, the spiritual violence, the recursive compliance — all of it diagnostic. He was the doctor who could name the disease with devastating precision. The question he left unfinished, the question that his death in September 2020 prevented him from answering in the context of generative AI, is the constructive one: What does genuine work look like in an economy that has the technological capacity to eliminate bullshit but may lack the institutional will to do so?
The answer requires defining terms that the modern economy has systematically confused. Work is not the same as employment. Employment is an institutional arrangement — a contract between a person and an organization that exchanges time for money. Work is something older, more fundamental, and more varied: the application of human effort and attention to the transformation of the world. A parent raising a child is working. A volunteer rebuilding a flood-damaged house is working. An artist spending years on a novel that may never sell is working. None of them are employed in the sense that labor statistics recognize, and the failure of institutions to recognize their work as work is not a semantic oversight. It is a political choice — a choice to value only those forms of human effort that generate measurable economic output and to treat everything else as leisure, hobby, or personal indulgence.
Graeber's anthropological work — particularly the research assembled in The Dawn of Everything, his posthumous collaboration with David Wengrow — demonstrated that this confusion is historically anomalous. Most human societies throughout the tens of thousands of years of recorded and reconstructed human history did not organize productive activity through employment. They organized it through kinship obligations, communal labor, seasonal rhythms, ceremonial requirements, and informal reciprocity. The idea that a person should spend the majority of waking hours performing tasks assigned by an institution in exchange for tokens redeemable for goods is not a natural arrangement. It is a specific historical invention — one that has produced extraordinary material abundance and extraordinary spiritual impoverishment in roughly equal measure.
AI creates the possibility — not the certainty, but the possibility — of reorganizing productive activity along lines that are closer to the anthropological norm than to the industrial anomaly. If a single person with AI tools can produce what previously required a team, then the institutional apparatus that organized the team — the management hierarchy, the coordination mechanisms, the reporting requirements, the entire scaffolding of corporate employment — becomes optional rather than necessary. What remains necessary is the work itself: the judgment about what should be built, the care for the people it serves, the creative vision that gives the building purpose.
The transition from bullshit to genuine work requires dismantling three barriers simultaneously, each corresponding to one of the forces that Graeber identified as generators of pointless employment.
The first barrier is institutional. The organizational structures that produce bullshit — hierarchies that require headcount, compliance frameworks that generate documentation, management systems that reward team size over team output — must be redesigned. AI provides the tools. The coordination functions that justified hierarchy can be handled by systems rather than by people. The compliance functions that generated documentation can be automated or, better, replaced by outcome-based accountability that measures results rather than process adherence. The management functions that rewarded empire-building can be replaced by structures that reward impact.
This redesign is technically straightforward. It is politically treacherous. The people who control organizational structures benefit from those structures. The vice president whose authority derives from managing seven teams will not voluntarily reduce the teams to two, even when AI makes five of them unnecessary. The compliance department whose budget depends on the volume of compliance activity will not voluntarily adopt outcome-based measurement that could demonstrate that most of the activity is theater. Institutional redesign requires either competitive pressure — organizations that redesign outperform those that do not, eventually forcing the laggards to follow — or leadership willing to dismantle the structures from which their own authority derives. Both forces are real but slow. The competitive pressure takes years to manifest. The visionary leadership is rare.
The second barrier is economic. The current system distributes income through employment. If genuine work requires fewer employed people than bullshit work — as it almost certainly does, since much of current employment exists to manage the dysfunction that AI eliminates — then fewer people will receive income through employment. The remainder need alternative mechanisms. Universal basic income is the most widely discussed, but it is not the only possibility. Public investment in care infrastructure — paying people to perform the essential, embodied, relational work that AI cannot do and that the market chronically undervalues — is another. Cooperative ownership structures that distribute the gains of AI-augmented productivity to all contributors rather than concentrating them in the hands of capital owners represent a third. Graeber advocated for all of these at various points in his career, and the AI moment makes each more urgent and more feasible than when he first proposed them.
The economic barrier is the one most resistant to political solution because it requires confronting the moral axiom that income must be earned through labor. Finland's basic income experiment, Kenya's GiveDirectly program, and smaller-scale pilots across multiple countries have consistently shown that unconditional income does not produce the mass idleness that critics predict. People who receive basic income work — often more than before, because the security of a guaranteed floor enables risk-taking, education, and the pursuit of work that is meaningful rather than merely available. The evidence is suggestive rather than conclusive, because no experiment has been conducted at the scale and duration that a genuine national program would require. But the evidence consistently contradicts the assumption on which the moral objection rests.
The third barrier is cultural. The belief that employment is inherently virtuous — that people should work for their income regardless of whether the work produces anything — is not merely an economic arrangement but a moral conviction embedded in the deepest structures of identity and self-worth. Challenging it is not a policy matter. It is a cultural transformation, and cultural transformations are slow, contested, and uncertain.
Graeber's anthropological evidence demonstrates that the work-virtue equation is historically contingent — a product of the Protestant Reformation, the industrial revolution, and the specific political arrangements of early capitalism, not a universal feature of human psychology. Most human societies did not equate worth with labor. Many explicitly valued leisure — the Greek scholē, the Roman otium — as the precondition for the highest forms of human activity: philosophy, art, political participation, contemplation. The elevation of busyness to moral virtue is a recent and culturally specific development, and one that Graeber spent his career attempting to denaturalize.
What does genuine work look like once the barriers are addressed? Graeber never provided a systematic answer — his gifts were diagnostic rather than prescriptive — but the outline can be assembled from his scattered remarks, his anthropological observations, and the evidence that The Orange Pill provides about work in the AI era.
Genuine work has three characteristics that distinguish it from bullshit. First, the worker can identify the contribution the work makes. Not to a quarterly report or a process metric, but to a human being or a community. The nurse knows that the patient received care. The engineer knows that the system works. The teacher knows that the student understood. The contribution is legible — visible to the worker as a real change in the real world, not buried under layers of institutional mediation that obscure the connection between effort and effect.
Second, genuine work engages the worker's judgment. Not merely the worker's time, not merely the worker's compliance with a process, but the worker's active discernment about what should be done, how it should be done, and whether it has been done well. Judgment is the cognitive activity that cannot be reduced to procedure — the assessment that requires experience, context, values, and the willingness to be wrong. AI amplifies judgment by handling execution. It does not replace judgment, because judgment is constituted by the stakes that the judge bears. The engineer who decides what to build bears the consequence of a wrong decision. The AI that executes the decision does not.
Third, genuine work allows the worker sovereignty over pace. Not unlimited sovereignty — deadlines exist, and some work genuinely requires sustained intensity. But the capacity to determine when to push and when to pause, when to engage and when to withdraw, when to produce and when to contemplate. The bullshit worker lacked sovereignty because the institution demanded presence regardless of productive need. The AI-augmented worker risks losing sovereignty because the tool makes more always possible and the internalized imperative converts possibility into obligation. Genuine work requires the institutional protection of the worker's right to stop — not because stopping is easy or always desirable, but because the capacity to stop is what distinguishes voluntary engagement from compulsion.
These three characteristics — legible contribution, engaged judgment, sovereign pace — are not utopian aspirations. They describe the lived experience of workers in roles that Graeber identified as genuinely valuable: care workers who can see the effect of their care, skilled tradespeople who exercise craft judgment, researchers who set their own intellectual agenda, artists who create on their own timeline. The characteristics are present, intermittently and imperfectly, in many existing jobs. What makes the AI moment distinctive is the possibility that they could become the norm rather than the exception — that the elimination of bullshit could free the majority of workers for work that meets all three criteria.
Whether that possibility is realized depends on the final chapter of this analysis: the political imagination required to build institutions that channel AI's extraordinary productive capacity toward genuine human work rather than toward new and more sophisticated forms of administered pointlessness.
Graeber ended his career with a book that surprised many of his readers. The Dawn of Everything, co-authored with the archaeologist David Wengrow and published posthumously in 2021, was not about bullshit jobs or debt or the failures of capitalism. It was about the full range of social arrangements that human beings have devised throughout tens of thousands of years of history — a range vastly more varied, more creative, and more instructive than the narrow set of arrangements that contemporary societies recognize as possible.
The book's central argument was that the apparent inevitability of current institutions is an illusion produced by a truncated understanding of human history. Human societies have experimented constantly with different ways of organizing collective life. They have built cities without kings. They have managed economies without money. They have distributed resources without markets. These are not hypothetical possibilities. They are documented realities — attested by extensive archaeological and anthropological evidence, practiced by societies that were in many respects as complex and sophisticated as our own.
The relevance to the AI moment is not that pre-modern arrangements should be replicated in the twenty-first century. It is that the range of possible arrangements is enormously wider than contemporary political discourse acknowledges. The constraints on institutional design are political, not natural. The claim that income must be distributed through employment is a political choice. The claim that hierarchical management is necessary for productive activity is an assumption shaped by centuries of practice, not an empirical finding. The claim that markets are the best mechanism for determining value is a cultural commitment reflecting the interests of those who benefit from market-based valuation, not a universal truth.
AI forces these claims into the open by creating conditions under which they no longer function. When technology makes employment unnecessary for a growing share of productive output, the insistence that income requires employment becomes a constraint rather than a principle. When AI enables individuals to produce what previously required organizations, the insistence that hierarchy is necessary becomes an impediment rather than a support. When AI reveals that the market systematically undervalues care, judgment, and wisdom — the forms of human activity that matter most in an economy of abundant execution — the insistence that markets are the best measure of value becomes an obstacle rather than a guide.
The political imagination required for the AI era is the capacity to think beyond these constraints. To envision institutional arrangements that distribute income without requiring employment. That organize production without hierarchical management. That determine value without exclusive reliance on markets. This is not utopian dreaming. It is institutional design — informed by historical evidence, tested by practical experimentation, guided by a clear understanding of the values the institutions are meant to serve.
Graeber distinguished between what he called "bureaucratic technologies" and "poetic technologies." Bureaucratic technologies are tools of surveillance, control, and administration — technologies that help institutions manage populations. Poetic technologies are tools of imaginative liberation — technologies that expand what human beings can create, explore, and become. The printing press was a poetic technology. The surveillance camera is a bureaucratic one. The internet was conceived as poetic and has been substantially captured by bureaucratic logic.
The distinction maps directly onto the AI landscape. AI deployed for compliance monitoring, productivity surveillance, algorithmic management, and automated performance evaluation is bureaucratic technology — technology that serves institutional control regardless of its effect on the humans being controlled. AI deployed to enable creative production, to collapse the distance between imagination and artifact, to democratize the capacity to build, to free workers from mechanical drudgery — this is poetic technology. The same underlying capability serves either function. The function it serves depends on who controls the deployment and what they want from it.
Graeber was blunt about why poetic technologies are suppressed. In a Guardian interview, he described a historical pattern: "The ruling class had a freak out about robots replacing all the workers. There was a general feeling that 'My God, if it's bad now with the hippies, imagine what it'll be like if the entire working class becomes unemployed.' You never know how conscious it was but decisions were made about research priorities." The decisions channeled funding toward bureaucratic technologies — military applications, surveillance systems, administrative automation — and away from the poetic technologies that would have liberated labor. AI that replaces workers is bureaucratic. AI that empowers workers is poetic. The allocation of AI development resources between these functions is not a technical decision. It is a political one — determined by the interests of the people who control the resources.
The rhetoric of AI inevitability serves the same political function that Graeber identified in every institutional arrangement he studied: it converts political choices into technical necessities. "We must adopt AI or fall behind" converts a choice about how to organize productive activity into a competitive imperative that forecloses debate. "AI will eliminate these jobs" converts a set of decisions about deployment into a natural law that no one is responsible for. As the Collective Futures analysis of Graeber's legacy puts it: "When power presents itself as a technical necessity, we must look closely at who benefits from treating political choices as engineering problems."
Every dependency of the AI system represents a political choice. Copyright law could require compensation for training data but does not. Energy policy could price the carbon cost of computation but does not. Labor law could extend protections to the annotation workers and content moderators whose invisible labor teaches the systems to function but does not. Each absence is a choice — a choice made by specific actors with specific interests, a choice that could be made differently.
The Luddites — those skilled artisans of early nineteenth-century England who understood exactly what the power looms would do to their communities — lacked the political imagination to envision institutions that could distribute the gains of mechanization equitably. They saw the machines destroying their livelihoods and destroyed the machines. They did not envision the labor protections, the social insurance, the public education systems that would eventually be created — too late and too incompletely to prevent decades of immiseration, but eventually. The lesson is not that resistance was futile. Some of the Luddites' specific proposals — minimum wages, regulation of working conditions — were sound. The lesson is that resistance without constructive institutional vision is self-defeating. Breaking the machine does not change the system. Only building a better system changes the system.
The AI moment faces the same choice at vastly accelerated speed. The machines cannot be broken — they are software, distributed globally, evolving faster than any regulatory framework can track. What can be built are the institutional structures that determine whether AI's extraordinary productive capacity flows toward genuine human work or toward new forms of administered meaninglessness.
Several specific institutional designs deserve consideration, each addressing one of the barriers identified in the previous chapter.
For the economic barrier: mechanisms that decouple income from employment. Universal basic income is the most discussed but not the only option. Public investment in care infrastructure — paying people to perform essential relational work at wages commensurate with its social value — addresses both the income-distribution problem and the care-economy crisis simultaneously. Cooperative ownership structures that distribute AI-augmented productivity gains to all contributors rather than concentrating them in capital owners represent a third path. The Mondragon cooperative corporation in Spain — worker-owned, democratically governed, operating successfully for over sixty years — demonstrates that such structures are viable at scale and produce higher worker satisfaction than conventional corporate forms.
For the institutional barrier: organizational designs that flatten hierarchy and measure value by outcome rather than process. AI makes these designs more feasible by handling the coordination that previously justified management layers. When the coordination is automated, the management becomes optional — and the organizations that make it optional gain competitive advantage through speed, innovation, and the liberation of human energy from bureaucratic overhead. The competitive pressure is real, but it operates on a timeline measured in years, and the interim is filled with organizations deploying AI within feudal structures that absorb the technology without changing the power relationships.
For the cultural barrier: the hardest and most important transformation. Changing the moral framework that equates employment with virtue requires not policy but narrative — stories about what human beings are for that go beyond production and consumption. Graeber spent his career providing such narratives, drawing on anthropological evidence of societies that organized life around care, ceremony, artistic production, and communal celebration rather than around labor discipline and productivity metrics. AI, paradoxically, may help by making the current narrative's inadequacy impossible to ignore. When machines can produce everything, the question of what humans are for cannot be answered by pointing to production. A different answer is required — and the search for that answer is itself a form of the political imagination that Graeber advocated.
The window for institutional design is narrow. Technological transitions have tipping points beyond which institutional arrangements calcify and resist change. The early decades of the industrial revolution were a period of institutional fluidity — the relationships between capital and labor, the role of the state, the forms of social protection were all genuinely open questions. By the late nineteenth century, the arrangements had hardened into patterns that persisted for over a century, modifiable at the margins but structurally stable. The AI era is in its early period of fluidity now. The choices being made — in corporate boardrooms, in legislative chambers, in the design of AI systems and the policies governing deployment — will establish patterns that may persist for generations.
Graeber would not have been optimistic. His career was a sustained demonstration that the people who control institutions use that control to preserve their advantages, that political imagination is actively suppressed by the structures it would dismantle, and that the arc of institutional history does not bend naturally toward justice but toward the interests of the powerful. The Dawn of Everything was, among other things, a reminder that human societies have gone backward as well as forward — that societies achieving remarkable equality have been succeeded by societies of brutal hierarchy, and that there is no guarantee that the extraordinary technological capability of the present moment will produce a more humane world than the one it replaces.
But Graeber was also an anarchist — a person who believed, against considerable evidence, that human beings are capable of organizing themselves without coercion and that the hierarchical structures of state and corporation are impositions rather than necessities. His anarchism was not naivete. It was grounded in the same anthropological evidence that he used to diagnose the present: evidence that human beings have, in fact, built societies organized around care rather than extraction, around communal flourishing rather than individual accumulation, around the satisfaction of genuine work rather than the spiritual violence of administered meaninglessness. If they did it before, they can do it again. The tools are different. The political task is the same.
The tools have never been more powerful. The surplus has never been larger. The possibility of genuine work — work with legible contribution, engaged judgment, and sovereign pace — has never been more technically achievable. What is missing is what was always missing: the political will to build institutions worthy of the technological capability. Graeber's life work was an attempt to summon that will — to demonstrate that the institutional arrangements governing work are not natural laws but political choices, and that better choices are possible because human beings have already made them, in other times and other places, under conditions that were in many respects more constrained than our own.
The question is not whether AI can eliminate bullshit. The question is whether we will permit it to — and what we will build in the space it clears.
That is the number that stopped me. Not any of the technology metrics, not the adoption curves or the revenue milestones or the productivity multipliers that I spend my professional life tracking. Thirty-seven percent of British workers, surveyed by YouGov, reported that their job made no meaningful contribution to the world. A Dutch study produced similar figures. Extrapolate conservatively across advanced economies and you arrive at a staggering conclusion: hundreds of millions of people spend the majority of their waking hours performing activities that they themselves recognize as pointless.
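The extrapolation is simple enough to show. A back-of-envelope sketch in Python follows; the 650 million labor-force figure is my assumed round number for the advanced economies taken together, not a sourced statistic, and the point survives even if you halve it:

```python
# Rough extrapolation of the YouGov finding. The 37% share is from the
# survey cited above; the labor-force figure is an assumed round number
# for the advanced economies, used only for illustration.

ADVANCED_ECONOMY_WORKERS = 650_000_000  # assumption, not a sourced statistic
POINTLESS_SHARE = 0.37                  # YouGov: jobs their holders call meaningless

self_reported_pointless = ADVANCED_ECONOMY_WORKERS * POINTLESS_SHARE
print(f"~{self_reported_pointless / 1_000_000:.0f} million workers")
# roughly 240 million; even at half the assumed workforce, well over 100 million
```

Cut the assumed workforce in half and the count still clears a hundred million. That is the force of the "hundreds of millions" claim: it does not depend on the precision of any single input.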
I have been building things for thirty years. I have watched the commercial internet arrive, watched mobile reshape everything, watched streaming upend the music industry from inside the company that was both its first casualty and its proof of concept. I understand technological disruption at a visceral level — the exhilaration of the frontier, the vertigo of the ground shifting underfoot. But Graeber's thirty-seven percent forced me to confront something that technology discourse almost never addresses: the possibility that the most important thing AI could do is not build faster but stop pretending.
Stop pretending that every job is necessary. Stop pretending that organizational hierarchies exist because the work requires them. Stop pretending that the meetings, the reports, the compliance documentation, the layers of coordination are there because the world would fall apart without them. Graeber's contribution — the thing I cannot put down, the thing that has reshaped how I think about every organizational decision I make — is the insistence that we look at the machinery honestly and ask which parts are load-bearing and which are decorative.
I recognized the managerial feudalism he described. I have lived inside it for decades. I have been the lord with too many retainers, justifying headcount to justify budgets to justify authority. I have sat in rooms where the meeting existed to justify the existence of the people in the meeting. And I have watched AI strip the pretense away with a speed that is both liberating and terrifying — liberating because the genuine work stands revealed, terrifying because the genuine work turns out to be a much smaller fraction of total activity than any of us wanted to admit.
The care work argument hit hardest. When I wrote in The Orange Pill that caring is what makes us human, I believed it. I still believe it. But Graeber's framework exposed the gap between the moral claim and the economic reality. A society that declares caring to be the essence of humanity while paying its care workers poverty wages is a society that has made a claim it refuses to honor. AI widens the gap by amplifying the compensation of cognitive-technical work while leaving care compensation untouched. The nurse, the teacher, the elder-care aide — the people doing the work that no algorithm can perform — fall further behind with every productivity multiplier that benefits the already-privileged. The moral claim means nothing without institutional machinery to back it up.
What changed my thinking was the concept of efficient inefficiency — AI performing bullshit tasks faster, generating more sophisticated nonsense at greater scale. I had assumed, without examining the assumption, that giving people powerful tools would naturally lead to better outcomes. Graeber's framework forced me to see that powerful tools deployed within dysfunctional institutions produce dysfunction at scale. The twenty-fold productivity multiplier I witnessed in Trivandrum is not automatically a force for good. It is an amplifier. And an amplifier, as I wrote, does not care what signal you feed it. Feed it the institutional logic that generates bullshit, and it will generate bullshit twenty times faster.
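The amplifier point can be made with a toy model. The 30/70 split between genuine and pointless activity below is an assumed figure, chosen only for illustration; the conclusion holds for any split:

```python
# Toy model of the amplifier argument: a uniform productivity multiplier
# scales genuine and pointless output alike, so the mix is unchanged.
# The 30/70 split is an assumed figure, chosen only for illustration.

MULTIPLIER = 20           # the twenty-fold gain described above
genuine_share = 0.30      # assumed fraction of activity that is genuine work
bullshit_share = 0.70     # assumed fraction that is institutional make-work

genuine_output = genuine_share * MULTIPLIER    # 6x the old total output
bullshit_output = bullshit_share * MULTIPLIER  # 14x the old total output

ratio_before = bullshit_share / genuine_share
ratio_after = bullshit_output / genuine_output
print(f"bullshit-to-genuine ratio: {ratio_before:.2f} before, {ratio_after:.2f} after")
# The ratio is identical: the amplifier does not care what signal it is fed.
```

Only a change in the shares themselves, which is to say an institutional change, moves that ratio. The multiplier never will.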
The political imagination Graeber called for is not something that comes naturally to builders. We are wired to solve the problem in front of us, to ship the product, to optimize the system. The idea that the system itself might need to be redesigned — not optimized, not accelerated, but fundamentally restructured — sits uneasily with the builder's temperament. But Graeber's evidence is too strong to dismiss. The institutions governing work are producing spiritual violence on a massive scale, and the technology I have spent my life building is as likely to intensify that violence as to relieve it, unless the institutions change.
I do not have Graeber's answers. I am not an anarchist, though I find his anarchism more intellectually serious than I once assumed. I am not ready to advocate for the abolition of employment, though the evidence that employment and meaningful work are increasingly divergent categories is difficult to ignore. What I am ready to say — what working through Graeber's ideas has forced me to say — is that the question "What should we build?" must now be accompanied by a harder question: "What institutional structures will ensure that what we build serves genuine human needs rather than generating new forms of administered pointlessness?"
Graeber died before he could see the tools that might have given his vision practical form. He complained in 2012 that we did not have computers we could have interesting conversations with. We do now. The technology he said capitalism was suppressing arrived — but in exactly the form he would have critiqued, deployed as often for bureaucratic control as for imaginative liberation. The question he left us is not about the technology. It is about the choices we make with it. And those choices, as he spent his life demonstrating, are always political — always about who controls the tools, who benefits from their deployment, and whose needs are served or ignored by the institutions that govern their use.
The thirty-seven percent deserve better than bullshit. AI gives us the means to offer it. What remains is the will.
— Edo Segal
Thirty-seven percent of workers believe their jobs contribute nothing meaningful to the world. David Graeber spent a decade proving they were right — documenting the flunkies, goons, duct-tapers, box-tickers, and taskmasters whose roles exist not because the work demands them but because the institutions do. Now AI has arrived with the power to eliminate every category of pointless work Graeber identified. The question his framework forces is whether it will — or whether organizations will absorb the most powerful anti-bullshit technology ever invented and use it to generate new bullshit at unprecedented scale.
This book applies Graeber's anthropological lens to the AI revolution, examining what happens when a twenty-fold productivity multiplier meets an economy that distributes income through employment regardless of whether the employment produces anything. The answer reshapes everything we thought we knew about automation, hierarchy, and what genuine work actually looks like.
A reading-companion catalog of the 30 Orange Pill Wiki entries linked from David Graeber — On AI: the people, ideas, works, and events the book uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →