By Edo Segal
Nobody in the AI discourse is talking about the magistrates of Berkshire.
In 1795, a group of well-meaning local officials in a parish called Speenhamland looked at workers whose wages had collapsed below subsistence and did the compassionate thing. They created a public fund to make up the difference. Bread prices rise, the supplement rises. Wages fall, the public covers the gap. The intention was protection. The result, over the following decades, was the institutionalization of poverty itself. Employers discovered they could slash wages further — the fund would absorb the cost. The workers became permanent dependents. Compassion, absent structural reform, became a machine for producing the exact misery it was designed to prevent.
I read that story during the same month I was hearing serious people propose Universal Basic Income as the obvious answer to AI displacement. The parallel stopped me cold.
Karl Polanyi mapped something in 1944 that I had been living inside without seeing. The pattern is this: when markets extend their logic into domains that cannot survive being treated as commodities — human labor, natural environments, the systems of trust that make exchange possible — the result is not equilibrium. It is destruction. And the destruction continues until society builds institutions strong enough to redirect the market's force.
The machines were never the point. The institutions were always the point.
That sentence rearranged my thinking about everything I describe in The Orange Pill. The twenty-fold productivity gain I witnessed in Trivandrum. The trillion-dollar SaaS correction. The twelve-year-old asking what she is for. I had been telling a technology story. Polanyi forced me to see the institutional story underneath it — the one about who captures the gains, who bears the costs, and whether the structures we build in response will produce genuine protection or a comfortable dependency that subsidizes the problem while calling it a solution.
This book applies Polanyi's framework to AI with a rigor that made me uncomfortable on nearly every page. It identifies intelligence as a fourth fictitious commodity — something the market treats as a product for sale when it is actually a human capacity that cannot survive commodification without institutional protection. It traces the double movement already visible in regulation, labor organizing, and cultural discourse. And it asks the question I was not asking: whether the market is commodifying the very capacity we need to resist commodification.
The technology is extraordinary. The institutions are not ready. Polanyi shows you why that gap is the only thing that matters.
— Edo Segal ^ Opus 4.6
Karl Polanyi (1886–1964) was a Hungarian-American economic historian and political economist whose work fundamentally challenged the assumption that free markets arise naturally from human behavior. Born in Vienna and educated in Budapest, he fled fascism in the 1930s, living in England before settling in North America, where he taught at Columbia University. His masterwork, The Great Transformation (1944), argued that the self-regulating market was not a spontaneous development but a deliberate political construction — one that required the forcible commodification of labor, land, and money, and that inevitably provoked protective counter-movements from the societies it disrupted. Polanyi introduced the concept of "fictitious commodities" to describe things the market treats as products for sale but that were never produced for that purpose, and the concept of "embeddedness" to describe the historical norm in which economic activity was subordinated to social relationships rather than the reverse. His work has experienced a significant revival in the twenty-first century as scholars apply his frameworks to globalization, financialization, and digital disruption. He remains one of the most cited thinkers in economic sociology and institutional economics.
In 1944, Karl Polanyi published a book that explained the previous century by revealing the century's defining mistake. The mistake was not industrialization. It was not the steam engine, the power loom, or the factory system that reorganized English life in the eighteenth and nineteenth centuries. The mistake was a political decision — perhaps the most consequential political decision in the history of Western civilization — to create a self-regulating market and to subordinate all social life to its logic. The machines were instruments. The transformation was institutional. And the catastrophe that followed was not an accident of technology but the predictable consequence of treating human life, natural environments, and monetary systems as commodities to be bought and sold on markets that answered to no authority beyond the price mechanism itself.
Eighty years later, the decision is being made again. Only this time, the commodity is not labor or land or money. It is intelligence itself.
The AI moment that Edo Segal describes in The Orange Pill — the winter of 2025 when Claude Code crossed a threshold and the imagination-to-artifact ratio collapsed to the width of a conversation — is being narrated almost exclusively as a technology story. The discourse is about what the machines can do, how fast they can do it, what categories of work they can replicate or exceed. The triumphalists celebrate the speed. The elegists mourn the loss. The silent middle holds contradictory truths and waits for a framework. But nearly everyone is telling an incomplete story. They are telling a story about capability when they should be telling a story about institutions. They are asking what the machines can do when they should be asking what the markets will do with the machines.
This distinction is not pedantic. It is the difference between a technology that expands human capability — which is what every significant technology in human history has done — and a market system that converts that expansion into a mechanism for subordinating human life to economic logic. The printing press expanded capability. The market system that emerged around it determined who could read, who could publish, who could profit, and who was excluded. The steam engine expanded capability. The market system that organized its deployment determined who worked sixteen-hour days, whose children were sent into the mines, and whose fortunes multiplied accordingly. The technology was the occasion. The market was the engine. And the market, left to its own devices, did not produce equilibrium. It produced a catastrophe that required generations of political struggle to contain.
Polanyi's The Great Transformation traced the process by which European societies attempted to create something that had never existed in the history of human civilization: a self-regulating market, a market that would govern the production and distribution of goods without interference from social institutions. Previously, markets had been embedded in social relationships. They operated within boundaries set by custom, guild regulation, moral norms, and political authority. The market was useful, but it was one institution among many, subordinated to the society that hosted it. Economic activity was embedded in social relations rather than the reverse.
The great transformation reversed that relationship. Instead of the economy serving society, society was reorganized to serve the economy. The guilds were abolished. The commons were enclosed. The poor laws were reformed not to serve the poor but to force them into the labor market. Land that had sustained communities for centuries was converted into a commodity. And labor itself — the purposeful activity of living human beings — was converted into a commodity to be purchased at the lowest possible rate and deployed according to the logic of supply and demand.
Polanyi was emphatic on a point that remains essential: this was not a natural evolution. The self-regulating market did not emerge spontaneously from human nature. It was constructed through deliberate state action — through legislation that eliminated institutional protections, through political decisions that privileged capital over communities, through an intellectual revolution that elevated market logic to the status of natural law and dismissed every other form of social organization as primitive or obstructive. The myth of the market's natural emergence is the ideological foundation on which market fundamentalism rests. If the market is natural, then any constraint is interference. If the market is a political construction, then it is subject to political governance like any other human institution.
The AI market is surrounded by the same mythological apparatus. The deployment of artificial intelligence is presented as a natural process, driven by technological capability and market demand, that will unfold according to its own logic regardless of human intervention. The narrative of inevitable acceleration — which Segal identifies and challenges in The Orange Pill — is the contemporary version of the myth of natural emergence. It counsels passivity. It treats political agency as irrelevant. And it serves the interests of the organizations that profit from unconstrained deployment by delegitimizing the institutional constraints that would redirect it.
Polanyi's most important contribution to the current discourse may be precisely this: the destruction of the myth of inevitability. The AI transition is not inevitable in any specific form. It is a political process, shaped by institutional decisions, amenable to democratic governance, and responsive to the values of the societies that deploy it. A different institutional framework would produce a different trajectory. The myth of inevitability serves those who profit from the current framework by foreclosing the political imagination that would conceive of alternatives.
In every pre-market society that anthropologists have studied — from the Trobriand Islanders whose kula ring organized exchange according to reciprocity and social prestige, to the redistributive economies of Mesopotamia and Egypt, to the medieval guilds that governed production according to norms of quality and communal obligation — economic activity was embedded in social institutions. The embedded economy is not a utopian ideal. It is the historical norm. The disembedded economy, in which social life is subordinated to market logic, is the anomaly — recent, limited, and catastrophic in its consequences whenever it has been attempted without constraint.
The AI transformation is accelerating this disembedding with a velocity that the nineteenth-century architects of the original transformation could not have imagined. When the imagination-to-artifact ratio collapses from months to hours, the market can explore new territory at the speed of computation rather than the speed of human implementation. Domains of cognitive activity that were previously protected by the sheer cost of human expertise — legal analysis, medical diagnosis, creative production, strategic planning — are now open to commodity logic. The speed of the disembedding is itself a factor in its destructiveness, because social institutions cannot adapt at the speed of computation. They adapt at the speed of human social processes — slowly, painfully, and with enormous friction. And the friction of social adaptation is precisely what market logic eliminates.
Recent scholarship has begun to map this terrain. Jeremy Shapiro, Director of Research at the European Council on Foreign Relations, has argued that globalization, technological change, and immigration have already produced a nascent disembedding shock in much of the world, and that AI appears set to radically deepen that challenge. The IMF estimates that around sixty percent of jobs in advanced economies are exposed to AI, with roughly half facing potential negative impacts. Unlike earlier waves of automation, exposure is concentrated not only in routine manual work but in professional, clerical, and creative occupations — precisely the domains where the developmental costs of commodification are highest.
The original great transformation produced what Polanyi called a double movement: the simultaneous expansion of market logic and the emergence of a protective counter-movement from society. The double movement was not a deliberate strategy. It was a structural necessity. The market's logic, applied without constraint, destroyed the social fabric. Society pushed back — not because society was hostile to the market, but because the market's logic, extended to domains that could not survive commodification, produced intolerable destruction. The protective counter-movement took many forms: factory legislation, labor unions, public health measures, educational systems, and eventually the welfare state. Each was a structure designed not to stop the market but to redirect its force toward conditions compatible with human survival.
The AI transformation is generating its own double movement. The market expansion is the rapid deployment of AI tools across every domain of knowledge work, driven by cost reduction and efficiency optimization. The protective counter-movement is emerging in the form of regulatory proposals, labor organizing, educational reform, and the cultural discourse that The Orange Pill documents with careful attention. The EU AI Act, the American executive orders, the emerging frameworks in Singapore and Brazil and Japan — each represents the protective impulse of a society that senses the market's logic extending too far.
But the quality of the outcome depends entirely on the strength and wisdom of the counter-movement. The original great transformation's counter-movement lasted decades and was marked by enormous suffering, political turmoil, and the destruction of communities that were never rebuilt. The adaptation was not smooth. It was not painless. And it was not inevitable. It happened because people fought for it, organized for it, and built the institutions that made it possible.
Polanyi would observe that the current counter-movement is dangerously inadequate — not in ambition but in conception. The regulatory frameworks emerging today address the supply side: what AI companies may build, what disclosures they must make, what risks they must assess. The demand side — what citizens, workers, students, and parents need to navigate this moment — remains almost entirely unaddressed. The regulations constrain the producers of AI. They do not protect the consumers. They do not build the institutional infrastructure that workers need to adapt. They do not reform the educational systems that are producing graduates unprepared for an AI-saturated economy. They do not create the social safety nets that would allow displaced workers to retrain without destitution.
The ground is shifting. The question is not whether to build but what to build, and for whom, and under what institutional constraints. The great transformation taught that the answer is never determined by the technology itself. It is determined by the political choices of the society that deploys it. The AI transformation presents the same choice, at a faster pace, on a larger scale, with consequences that will be felt by every human being on the planet.
The machines were never the point. The institutions were always the point. And the institutions that will govern the deployment of artificial intelligence — whether they are built wisely, built hastily, or not built at all — will determine whether this transformation produces expansion or catastrophe. Polanyi mapped the pattern. The pattern is repeating. The question is whether anyone is reading the map.
---
A commodity, in classical economic theory, is something produced for sale on a market. Wheat is a commodity. Cloth is a commodity. Steel is a commodity. Each is brought into existence for the purpose of being sold, and the market mechanism of supply and demand governs its production, distribution, and price. The commodity form is the natural mode of existence for goods produced by human labor for exchange. The market is the appropriate institution for governing genuine commodities, and the price mechanism works tolerably well in allocating resources within the commodity sphere.
But labor, land, and money are not commodities in this sense. This was Karl Polanyi's most enduring contribution to political economy, and it is the key that unlocks the full significance of the AI transformation. Labor is human activity — inseparable from the living person who performs it. Subjecting labor to the market mechanism means subjecting human beings to the market mechanism, governing their lives according to supply and demand. Land is nature — not produced by human activity, existing independently of markets. Treating land as a commodity means treating the physical environment as a resource for profit extraction, with no regard for the ecological systems that sustain life. Money is a social convention — a token that facilitates exchange but has no intrinsic value outside the relationships that give it meaning. Treating money as a commodity produces the monetary instability that has punctuated market society since its inception.
Polanyi called these fictitious commodities because the market treats them as if they were commodities — as if they were produced for sale, as if the price mechanism were the natural and appropriate instrument for their governance — when they are in fact elements of human life and nature that cannot be subjected to market logic without producing social destruction. The fiction is not that they are bought and sold. They are. The fiction is that the market is the appropriate institution for governing them. It is not. And the attempt to make it so — to create a self-regulating market in which labor, land, and money are governed by supply and demand without social constraint — is the utopian project whose catastrophic consequences Polanyi spent his career documenting.
Intelligence is the fourth fictitious commodity. It was not produced for sale on a market. It is a human capacity, developed through years of education, experience, and the slow accumulation of understanding that no market can create or replace. The market can price the output of intelligence. It cannot price intelligence itself, because intelligence is not a thing but a process, not a product but a development, not an output but a journey. When the market treats that journey as a cost to be minimized, it destroys the journey without recognizing what it has lost.
The mechanism of commodification is precise and already visible. A large language model is trained on the accumulated output of millions of human minds: the text they have written, the code they have produced, the analyses they have conducted. This accumulated output is the crystallized product of millions of developmental trajectories — millions of careers, millions of lifetimes of struggle, failure, and accumulated understanding. The model consumes this product and reproduces it in new configurations. It does not reproduce the trajectories. It does not reproduce the careers. It does not reproduce the lifetimes. It reproduces only the output, stripped of the developmental process that produced it.
This stripping of output from process is the commodification in its purest form. The market has found a way to extract the product of intelligence without sustaining the process that produces intelligence. It has found a way to harvest the crop without tending the field. The immediate yield is impressive. The long-term consequence is the exhaustion of the soil.
The parallel to extractive agriculture is not merely metaphorical. The enclosure of the commons in eighteenth-century England had exactly this character. The commons had sustained rural communities for centuries through shared stewardship that maintained soil fertility, biodiversity, and ecological balance. Enclosure converted the commons into private property, subjected it to market logic, and produced immediate gains in output. The gains were real. But the market management did not maintain the soil. It exhausted it. Within a generation, the most productive enclosed lands were depleted, and the communities that had sustained them were destroyed.
The commodification of intelligence is an enclosure of the cognitive commons. The accumulated knowledge, judgment, and wisdom of human civilization — sustained for millennia through education, mentorship, apprenticeship, and communal learning — is being enclosed by market logic and subjected to commodity pricing. The immediate output is impressive. AI tools produce competent cognitive work at a fraction of the cost of human expertise. But the market does not maintain the cognitive soil. It does not sustain the developmental processes, the educational institutions, the mentorship relationships, the communities of practice that produce the human intelligence the market is harvesting. The output increases. The soil depletes. And the depletion is invisible to the market because the market measures only the harvest, never the condition of the field.
Scholars have already begun extending Polanyi's framework in this direction. Bob Jessop, writing on knowledge as a fictitious commodity, observed that contemporary capitalism is widely seen as a knowledge-based economy, which raises the question of whether knowledge is also a fictitious commodity and whether its disembedding also entails a double movement. The Centre for International Governance Innovation has argued that data — the raw material of AI systems — meets every criterion of a fictitious commodity: it was not produced for sale, its value is not captured by its price, and its allocation by market logic produces social harm that the market cannot see. But data is the input. Intelligence — the capacity for judgment, analysis, and creative production — is the output being commodified. The commodification extends deeper than data governance alone.
Consider what happens to a profession over the course of a generation when the developmental trajectory that produces expertise is disrupted. The Orange Pill describes an engineer in Trivandrum who lost ten minutes of formative struggle along with four hours of tedium when AI automated her implementation work. The market saw a cost reduction. The productivity metrics improved. But the ten minutes were the process by which she developed architectural intuition — the capacity to feel that something was wrong before she could articulate what. The loss of those ten minutes is invisible to every metric the market uses, because the market does not price the formation of judgment. The market prices output. And when the output is satisfactory, the market declares victory.
This is the logic of every fictitious commodity. The market optimizes for what it can price and destroys what it cannot. When labor was treated as a commodity, the market optimized for productive output and destroyed the laborer's health, community, and dignity. When land was treated as a commodity, the market optimized for agricultural yield and destroyed ecological systems. When money was treated as a commodity, the market optimized for financial returns and destroyed monetary stability. Now intelligence is being treated as a commodity, the market is optimizing for cognitive output, and it is destroying the developmental processes, social relationships, and institutional structures that sustain intelligence as a human capacity.
The destruction follows a temporal logic that makes it particularly insidious. It is never immediate. It accumulates over decades, invisible to market metrics, recorded only in the declining depth of professional expertise, the atrophying tolerance for cognitive friction, the dissolving bonds of professional community, the slow erasure of the mentorship relationships that transmit judgment across generations. The market measures output. The output increases. The market declares success. Meanwhile, the developmental infrastructure is eroding in ways that will not become politically visible until the erosion has advanced to the point of crisis.
The senior software architect described in The Orange Pill — the one who felt like a master calligrapher watching the printing press arrive — had spent twenty-five years building embodied intuition about systems. He could feel a codebase the way a doctor feels a pulse. The market's response was straightforward: breadth was good enough. The market did not care about the layers. The market priced the output, and when AI could produce competent output at a fraction of the cost, the market repriced his life accordingly. His knowledge was real. His intuition was genuine. But the market does not price understanding. It prices output. And when the output can be produced without the understanding, the understanding has no market value.
Intelligence as a fictitious commodity is the most dangerous of all, for a reason that distinguishes it from every previous commodification. When labor was commodified, the laborers could feel the degradation — the exhaustion, the injuries, the premature aging. When land was commodified, the ecological destruction was eventually visible: polluted rivers, deforested hillsides. When money was commodified, the financial crises were dramatic enough to force political response. But the commodification of intelligence degrades the capacity for perception itself. The worker whose judgment has been hollowed out by years of surface-level AI-assisted production does not know what she has lost, because the capacity to recognize the loss is part of what has been lost. The student who has never struggled with an idea does not know what struggling would have produced.
The fourth fictitious commodity is self-concealing. The market produces the appearance of competence while destroying the conditions that make genuine competence possible. The brief is well-drafted. The code works. The analysis is competent. But the judgment that would have been produced by the struggle to draft, code, and analyze is absent — and its absence is invisible because the output conceals what the process would have produced. The surface is smooth. The foundation is eroding. And the erosion will not become visible until the foundation can no longer support the weight of the structures that rest upon it.
---
The self-regulating market is a utopian project. Karl Polanyi used the word utopian not in its popular sense of a beautiful ideal but in its original sense: a place that does not exist and cannot exist. The attempt to create a market that governs all production and distribution — including the fictitious commodities of labor, land, and money — without interference from social institutions is structurally impossible. The market destroys its own foundations. It undermines the social, natural, and institutional conditions on which market activity depends. And the destruction, when it reaches a critical point, triggers a protective counter-movement from society that constrains or dismantles the self-regulating mechanism.
The impossibility rests on a simple but profound observation. The market requires social institutions to function. It requires laws that enforce contracts. It requires norms that constrain fraud. It requires educational systems that produce skilled workers. It requires political stability that protects property rights. It requires cultural institutions that produce the trust without which market transactions would be prohibitively costly. The market depends on these institutions but does not produce them. They are produced by the society in which the market is embedded, through political processes, cultural traditions, and institutional arrangements that operate according to logics entirely different from supply and demand.
When market logic extends without constraint, it destroys the institutions on which the market depends. The factory system destroyed the communities that had produced skilled, healthy, socially embedded workers. The enclosure movement destroyed the subsistence economy that had sustained rural populations, producing a mass of displaced, impoverished, socially atomized workers who flooded into cities where they lacked the community ties, institutional support, and social stability that the market required. The market had destroyed its own supply of adequate labor by destroying the social conditions that produced adequate laborers.
The AI market is repeating this pattern with characteristic precision. The market's deployment of AI tools is destroying the developmental processes, educational institutions, professional communities, and mentorship relationships that produce the skilled workers the market will continue to need. The senior engineer's embodied intuition, the architect's capacity to feel a codebase, the designer's trained eye — these are not natural endowments. They are products of decades of institutional investment: educational curricula, professional training, apprenticeship systems, mentorship relationships, communities of practice. The market did not create these institutions. Society created them, through collective investment in the social infrastructure of skill development.
When the market deploys AI in ways that undermine these institutions, it destroys its own future supply of the judgment, expertise, and wisdom that the market will continue to require. The junior developer who never debugs by hand never develops architectural intuition. The lawyer who never reads cases never develops legal judgment. The student who never struggles with an essay never develops analytical capacity. The market has produced today's output more cheaply. It has also destroyed the process by which tomorrow's judgment would have been developed.
This is the impossibility of the self-regulating AI market. It optimizes for immediate output at the cost of long-term capability. It values this quarter's productivity at the expense of next decade's judgment. It treats the developmental infrastructure of human intelligence as a cost to be minimized rather than an investment to be sustained. And it does so not because anyone is short-sighted in some correctable sense, but because the market is structurally incapable of pricing long-term capability. The market prices current output. Developmental processes are externalities — costs that do not appear on any balance sheet and therefore do not enter any market calculation.
The boardroom conversation recounted in The Orange Pill crystallizes the impossibility. If five people using AI tools can do the work of one hundred, why keep one hundred? The arithmetic is clean and seductive. The market logic is impeccable. The social logic is catastrophic. The ninety-five people who are displaced do not simply find new roles, any more than the enclosed villagers of the eighteenth century simply found new livelihoods. The displacement destroys communities, disrupts developmental trajectories, eliminates the institutional infrastructure through which judgment and expertise are transmitted across generations. Segal chose to keep and grow his team — a decision that Polanyi's framework identifies as an act of re-embedding, a deliberate subordination of market logic to social logic. But the market does not reward this decision. The market rewards quarters. And decisions that the market punishes are not sustainable across an entire economy without institutional constraint.
The history of the original great transformation confirms this analysis with empirical force. The attempt to create a self-regulating labor market in England in the early nineteenth century produced not equilibrium but catastrophe: mass pauperism, social dislocation, the destruction of rural communities, and the creation of an urban working class whose conditions constituted a permanent source of social instability. The attempt to create a self-regulating land market produced ecological degradation that impoverished soil, poisoned water, and destroyed agricultural productivity. The attempt to create a self-regulating money market produced the financial crises that periodically destroyed fortunes and threw millions out of work.
In each case, the crisis was the structural consequence of governing a fictitious commodity by the commodity mechanism. The market's logic, applied to labor, destroyed the social conditions of labor. Applied to land, it destroyed the natural conditions of production. Applied to money, it destroyed the institutional conditions of exchange. The self-regulating market was not merely impractical. It was self-defeating.
The defenders of the self-regulating AI market make arguments that bear a striking resemblance to those made by their nineteenth-century predecessors. They argue that AI deployment produces net benefits: more output, lower costs, expanded capability, the democratization of tools previously available only to the privileged. They argue that interference will slow innovation, that regulation will prevent the full realization of AI's potential. They argue that displaced workers will find new roles, that atrophied skills will be replaced by new skills, that disrupted developmental processes will reconstitute in new forms.
These arguments are not entirely wrong. The market's deployment of AI is producing net benefits by many measures. Regulation can slow innovation if poorly designed. Displaced workers will, in many cases, find new roles. But the arguments miss the central point: the net benefits are not the issue. The issue is the distribution of benefits and costs, the temporal gap between the destruction of old capabilities and the development of new ones, and the irreversible damage that unconstrained market logic produces in the interval between destruction and reconstruction.
The factory system produced net benefits. Output increased. Goods became cheaper. National wealth grew. But the benefits were distributed in ways that produced immense suffering for the workers who bore the cost. The wealth accrued to owners. The cost was borne by laborers. The gap between aggregate benefit and individual cost was the space in which the catastrophe unfolded. The net benefits argument was not wrong. It was irrelevant to the people being ground up.
What makes the impossibility particularly acute in the domain of intelligence is the recursive dimension that earlier commodifications did not possess. When the market commodified labor in the nineteenth century, it did not commodify the capacity for political organization. The workers whose labor was being commodified retained their ability to form unions, to vote, to organize, to build institutions. They could perceive their degradation and respond to it collectively.
The commodification of intelligence threatens the capacity that the counter-movement requires. When the market commodifies thought, judgment, and institutional creativity, it undermines the human capabilities needed to conceive protective institutions, design regulatory frameworks, and organize political resistance. Polanyi's framework, applied recursively, suggests that the AI transformation is the most dangerous instance of the self-regulating market's impossibility — not because it extends market logic to a new domain, but because it extends market logic to the domain that produces the capacity for constraining market logic.
Jeremy Shapiro captures this recursion in geopolitical terms: if the AI age is another great transformation, the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion. The nations that build adequate institutional infrastructure will not merely protect their citizens. They will preserve the cognitive and political capacity to govern the transformation itself. The nations that allow the self-regulating AI market to proceed unconstrained will find, as the nineteenth-century laissez-faire societies found, that the market's logic eventually destroys the social conditions on which both the market and the state depend.
The self-regulating AI market will fail. The structural impossibility guarantees it. The question is how much it will destroy before it does — and whether the counter-movement will be adequate to rebuild what the market has consumed.
---
The double movement is not a theory of resistance. It is a theory of structural necessity. When the market extends its logic to domains that cannot survive commodification, society responds — not because society chooses to respond, but because the alternative is social disintegration. The two movements are always present simultaneously, pushing in opposite directions: the market movement toward universal commodification, the protective counter-movement toward institutional embedding. Neither achieves complete victory. The tension between them is permanent, and the quality of social life at any given moment is determined by the balance between them.
The AI transition is generating its own double movement with particular clarity and urgency. The first movement — the expansion of market logic through AI deployment — is proceeding at a pace no previous market expansion has matched. When the imagination-to-artifact ratio collapses, the market can explore new cognitive territory at computational speed. Legal analysis, medical diagnosis, creative production, educational assessment, strategic planning — each of these domains required, until recently, years of specialized human development to enter. Each is now accessible to market logic through AI tools that produce competent output at a fraction of the cost of human expertise.
The protective counter-movement, the second movement, is also visible — though less dramatic and less well-documented. It takes forms that Polanyi would recognize immediately: regulatory proposals, labor organizing, educational reform, and the cultural discourse through which a society processes its own transformation.
Consider the regulatory dimension first. The EU AI Act, adopted in 2024, represents the most comprehensive regulatory framework yet attempted — classifying AI systems by risk level, imposing graduated requirements for transparency and accountability, subjecting high-risk systems to stringent oversight. The American executive orders establish frameworks for federal oversight. Singapore, Brazil, Japan, and other nations are constructing their own governance architectures. Each represents the protective impulse of a society that senses market logic extending too far.
But Shapiro's analysis reveals the geopolitical tension within the counter-movement itself. The American model prioritizes speed over embedding — innovation first, protection later, and trust that the market will self-correct. From a Polanyian perspective, this creates the familiar risk of innovation outpacing legitimacy; without stronger social protection, the AI counter-movement in the United States will likely take illiberal rather than constructive forms. The European model reflects a Polanyian instinct to re-embed markets through rules, rights, and social protections — but risks constraining the productive capacity that makes protection affordable. The Chinese model directs AI deployment through state authority, achieving embedding of a kind that sacrifices democratic participation for coordinated action. Each model is a different answer to the same Polanyian question: how do you constrain market logic without eliminating market benefits?
The regulatory counter-movement, however significant, addresses only one dimension of the problem. Current frameworks focus almost entirely on the supply side — constraining what AI companies may build. The demand side — what citizens, workers, students, and communities need in order to navigate the transformation — remains almost entirely unaddressed. Regulations constrain producers. They do not empower the affected. They do not build the institutional infrastructure that workers need to adapt, do not reform educational systems producing graduates unprepared for cognitive displacement, do not create social safety nets that would allow retraining without destitution. The asymmetry is characteristic of early-stage counter-movements. Factory legislation in nineteenth-century England began with supply-side constraints — limits on hours, prohibitions on child labor — before developing into the demand-side institutions that gave workers organizational capacity to participate in industrial governance.
The labor dimension of the counter-movement faces challenges that have no precedent in Polanyi's original analysis. The labor movements of the nineteenth and twentieth centuries were organized around the workplace. The factory was the site of exploitation. The union was the institution through which workers constrained it. But AI-driven displacement is not concentrated in factories. It is distributed across every domain of knowledge work. The affected workers are not a homogeneous class but a diverse population of professionals with different skills, different institutional locations, and different relationships to the technology transforming their work. The freelance developer in Mumbai, the displaced paralegal in Manchester, and the deskilled analyst in Chicago are experiencing the same structural transformation but lack the physical proximity and shared identity that made traditional labor organizing possible.
New forms of organizing must emerge: professional associations that evolve from credentialing bodies into advocacy organizations, cross-industry coalitions that address systemic effects, digital solidarity networks that connect geographically dispersed workers experiencing the same displacement. These organizational innovations do not yet exist at scale. Their creation is among the most urgent institutional challenges of the present moment. And their absence means that the people with the deepest understanding of what the commodification of intelligence actually costs — the senior engineers, the experienced lawyers, the master designers — are experiencing their displacement in isolation, without the institutional voice that would channel their expertise into the governance of the transformation.
The educational dimension is where the institutional gap is widest. Educational systems across the developed world were designed to produce executors — workers who could absorb information, reproduce it on command, and demonstrate competence in specified tasks. The AI economy does not value execution. It values judgment: the capacity to evaluate situations no algorithm can fully specify, to ask questions no dataset can answer, to make decisions requiring integration across domains. The educational counter-movement must transform the purpose of education from production of executors to development of judgment — not as a curriculum adjustment but as a paradigm shift.
The reform is urgent because the developmental window is narrow. A generation of students trained entirely on AI-assisted methods that prioritize output over development, answers over questions, efficiency over understanding, will lack the cognitive capacities that the economy and the democracy will continue to require. The institutions that develop these capacities — universities, professional schools, apprenticeship systems — are under simultaneous pressure from the market to produce immediately deployable workers and from the technology to make their traditional methods appear obsolete. The tension between the market's demand for immediate deployment and education's obligation to develop full human capacities is the central tension of educational policy in the AI age.
The cultural counter-movement is the most diffuse and ultimately most important. It consists of the public discourse that insists on values the market cannot price, that challenges the hierarchy in which economic value is treated as the only legitimate form of value, that articulates what a society worth living in looks like when machines can do most of what people used to be paid to do. The Orange Pill is itself a contribution to this cultural counter-movement — insisting on the value of human consciousness, of questions only conscious beings can ask, of caring that makes human life meaningful. The cultural counter-movement faces a specific obstacle: the market's capacity to absorb its own opposition. The market treats critique as content. It packages dissent as a product. The challenge is to sustain institutional critique without being commodified by the logic being criticized.
Polanyi's history teaches a lesson about the quality of counter-movements that is particularly relevant now. The social destruction produced by the unconstrained market in the nineteenth century provoked responses that ranged from constructive to catastrophic. The labor movement and the welfare state constrained the market while preserving its productive capacity. Fascism and Bolshevism destroyed the market along with the social fabric it had damaged. The quality of the counter-movement matters as much as its existence. A counter-movement that is timely, institutionally deep, and politically sophisticated produces democratic re-embedding. A counter-movement that comes too late, or that is captured by authoritarian impulses, produces the pathologies that defined the first half of the twentieth century.
The AI transition will intensify the social pressures that have already fueled populist movements across the developed world. The displacement of knowledge workers, the devaluation of professional expertise, the concentration of gains in a small number of technology companies — these will produce legitimate grievances that destructive counter-movements can exploit. The question is whether the constructive counter-movement — the one that builds inclusive institutions, shares gains broadly, constrains the market while preserving productive capacity — can organize before the destructive counter-movement captures the political energy of the displaced.
The double movement is underway. Both movements are visible. The market is expanding at computational speed. The counter-movement is organizing at human speed. The gap between these speeds is not merely inconvenient — it is the central structural feature of the crisis. Recent scholarship on Polanyi highlights his underappreciated concern with what he called the "rate of change" — the speed at which economic transformation unfolds relative to a society's capacity to absorb it. AI's compression of timelines — disrupting entire professions in years rather than decades — makes the rate of change the most dangerous variable in the current equation. The faster the transformation, the more violent the dislocation, and the greater the risk that the counter-movement takes destructive rather than constructive form.
The institutions that manage the tension between market expansion and social protection — the labor laws, the educational reforms, the governance frameworks, the cultural narratives — are never adequate and always under pressure. They must be built, maintained, and defended through continuous political effort. The double movement does not resolve into synthesis. It produces an unstable equilibrium that must be continuously maintained. The AI age demands not a permanent solution but a permanent practice — the ongoing construction, maintenance, and adaptation of institutions that redirect market logic toward conditions compatible with human flourishing.
The counter-movement will come. The social destruction produced by unconstrained market logic always provokes protective response. The question that the present moment forces is whether the counter-movement will be adequate — whether it will match the scale and speed of the market's expansion, whether it will be comprehensive enough to address all five dimensions of the transformation (educational, economic, governance, cultural, temporal), and whether it will be wise enough to constrain market logic without eliminating market benefits. The historical record offers no guarantees. It offers only the knowledge that the choice matters, that the institutions determine the outcome, and that the window for building them is narrower than anyone would like.
The market can price a legal brief. It cannot price the judgment that determines whether the brief addresses the right question. It can price a line of code. It cannot price the architectural intuition that decides where the code belongs in a system that must hold together under conditions no specification anticipated. It can price a medical diagnosis. It cannot price the clinical wisdom that recognizes when the diagnosis is technically correct but humanly wrong — when the patient needs not a treatment plan but a conversation about what kind of life she wants to live in the time remaining.
This catalog of the unpriceable is not a sentimental exercise. It is a precise description of the mechanism through which the commodification of intelligence produces its characteristic destruction. The market optimizes for what it can measure. What it cannot measure, it treats as externality. And the externalities accumulate — invisibly, inexorably — until they produce a crisis that the market's own logic cannot resolve.
Adam Smith recognized the divergence between price and value two and a half centuries ago. Water is enormously useful and nearly free. Diamonds are largely ornamental and enormously expensive. The classical economists treated this as a curiosity — the "paradox of value" — and later economists dissolved it at the margin, where price tracks scarcity rather than usefulness. Polanyi demonstrated that for an entire category of goods, price and value never converge at all. The market does not bring the price of labor into alignment with the value of human activity, because the value includes dimensions — health, dignity, community, developmental fulfillment — that the market cannot price and therefore cannot protect. The same structural gap now opens in the domain of intelligence, wider than any previous instance, because intelligence includes more unpriceable dimensions than any commodity the market has previously attempted to govern.
Consider judgment. Not judgment in the abstract — the word has been dulled by overuse in management literature — but judgment as a specific cognitive operation performed by a specific person in a specific situation under conditions of genuine uncertainty. The senior engineer described in The Orange Pill, the one who could feel a codebase the way a doctor feels a pulse, exercised judgment every time he made a decision about system architecture. His judgment was the product of twenty-five years of engagement with systems that broke in ways no documentation predicted, twenty-five years of developing intuitions that operated below the level of conscious analysis, twenty-five years of building what the philosopher Michael Polanyi — Karl's brother — called tacit knowledge: we can know more than we can tell.
The market sees the output of this judgment: a system that works, that scales, that handles edge cases gracefully. The market does not see the judgment itself, because judgment does not produce measurable output independent of the context in which it is exercised. Judgment produces better outcomes, but "better" in this sense is visible only in comparison to the outcomes that would have been produced without it — outcomes that, by definition, did not occur. The market cannot price the counterfactual. It cannot assign a dollar value to the disaster that did not happen because someone's judgment caught the flaw before deployment. It can only price what was delivered: a working system, on time, within budget. The judgment that made the system work is invisible in the deliverable. And what is invisible to the market is what the market will not protect.
When AI produces a system that also works — that also handles edge cases, that also scales — the market sees equivalent output from a cheaper source. The market's conclusion is logical within its own framework: the judgment is unnecessary. The twenty-five years of developmental investment that produced the judgment is a sunk cost that the market need no longer subsidize. Breadth without depth, produced at scale by pattern-matching systems trained on the accumulated output of millions of human minds, is good enough. The market has spoken. The judgment has been priced at zero.
But the judgment has not become worthless. It has become invisible to the pricing mechanism. The distinction matters enormously, because the consequences of its absence will not manifest as a sudden failure but as a gradual degradation — systems that work adequately rather than elegantly, architectures that hold under normal conditions but fracture under stress, decisions that are technically defensible but miss the deeper pattern that only experience could have detected. The degradation is statistical, distributed across thousands of decisions, each individually minor, collectively catastrophic. And the market will not detect the degradation because the market measures output at the moment of delivery, not resilience over time.
Trust presents a parallel case. Trust is the invisible medium in which all economic activity takes place — the accumulated confidence that contracts will be honored, representations will be accurate, and the person across the table is who she claims to be. Trust is not produced for sale. It is produced by the accumulated history of social interactions, by institutional frameworks that constrain opportunism, by cultural norms that make reliability the default. The market depends on trust but does not produce it and cannot price it.
The AI transition is degrading trust through a mechanism that the market cannot see. When a lawyer signs a brief produced by AI, she attests to its accuracy — but she may lack the knowledge to evaluate it, because the knowledge that evaluation requires is precisely the knowledge that the AI tool bypassed. When a doctor relies on an AI diagnostic recommendation, she exercises judgment on an output whose provenance she cannot fully inspect. When an executive presents AI-generated strategic analysis to a board, he makes implicit representations about the quality of the analysis that rest on a thinner foundation than anyone in the room recognizes. The professional authority that underwrites these interactions — the assumption that the professional has done the work, developed the judgment, earned the right to stand behind the output — is being hollowed out from within. The surface of professional competence remains intact. The substrate of professional understanding is eroding.
The erosion of trust is invisible to every metric the market uses. The briefs are filed. The diagnoses are rendered. The strategies are presented. The market sees competent output delivered on time. It does not see the thinning of the epistemic foundation on which professional authority rests. And when the foundation fails — when the brief cites a case that does not exist, when the diagnosis misses a pattern that only clinical experience could have caught, when the strategy collapses under conditions the AI's training data did not include — the cost will not be borne by the market that eliminated the judgment. It will be borne by the client, the patient, the organization, and ultimately by the social institution of professional trust itself.
Questions present perhaps the most profound instance of what markets cannot price. The market rewards answers — specific, actionable, deliverable. The market does not reward questions, because questions produce no measurable output at the moment of asking. A question is an opening, a creation of space, a gesture toward understanding that has not yet been achieved. Einstein's question about riding alongside a beam of light produced relativity, but the question itself had no market value when it was asked. Darwin's question about the Galapagos finches produced evolutionary theory, but no market commissioned it. The history of human progress is a history of questions whose value exceeded any answer they produced — and whose value was invisible to any pricing mechanism at the time of asking.
AI tools are answer machines. They are optimized for the production of results. The market rewards their speed and quality. The entire incentive structure of the AI economy points toward answers: faster, cheaper, more comprehensive. And as the market's incentive structure shifts toward answers and away from questions, the human capacity for questioning — the capacity that The Orange Pill identifies as the irreducible human contribution — is being starved of institutional support and social recognition. The twelve-year-old lying in bed wondering "What am I for?" is exercising the most valuable cognitive capacity in the known universe. No market will ever pay for it. No metric will ever capture it. And the institutions that would develop and sustain it — educational systems that reward questioning over answering, professional cultures that honor uncertainty, social structures that protect the slow, uncomfortable work of sitting with problems that resist resolution — are under pressure from the same market logic that cannot see their value.
The displacement of quality by quantity is the mechanism through which the unpriceable is destroyed. When labor was commodified, the quality of the laborer's experience — the richness of communal life, the satisfaction of craft, the dignity of self-directed work — was displaced by the quantity of output. The factory did not make labor better. It made labor faster. When intelligence is commodified, the quality of cognitive experience — depth of understanding, developmental trajectory, satisfaction of mastery — is displaced by quantity of cognitive output. The AI-augmented worker does not think better. She produces more. The market, measuring output, declares improvement. The worker, experiencing the displacement, may sense the loss without being able to name it — because the capacity to name what has been lost is itself part of what has been lost.
Polanyi's framework reveals that the catalog of the unpriceable is not a list of luxuries that a wealthy society might choose to protect. It is a list of the structural preconditions for the market's own functioning. Judgment produces the decisions that determine whether systems work under stress. Trust produces the social confidence that makes transactions possible. Questions produce the innovations that create the markets of the future. Each is essential to the economy. None has a market price. And the market, unable to price them, is systematically destroying the conditions that produce them.
The dams that the AI transition requires are, at their core, institutions that protect what markets cannot price. Institutions that sustain the developmental journey from novice to expert even when the market no longer subsidizes it. Institutions that protect questioning even when the market rewards only answering. Institutions that recognize judgment even when the market cannot distinguish it from pattern matching. Institutions that preserve the communities of practice, the mentorship relationships, and the cultural traditions that sustain the unpriceable dimensions of human intelligence.
The market is an excellent mechanism for governing genuine commodities. It is a catastrophic mechanism for governing the things that make commodities worth producing. The distinction is not academic. It is the difference between an economy that serves human life and an economy that consumes it. And the institutions that maintain that distinction are the most important structures that any society can build.
---
In 1795, the magistrates of Speenhamland, a parish in Berkshire, England, made a decision born of genuine compassion that produced consequences of genuine catastrophe. They established a system of poor relief designed to protect agricultural laborers from wages that the market had driven below subsistence. The system supplemented wages with public funds, tied to the price of bread. When bread prices rose, the supplement increased. When wages fell below a threshold, public money made up the difference. The intention was to prevent destitution. The result was to institutionalize it.
The mechanism through which compassion became catastrophe was invisible to the magistrates because it operated not through their intentions but through the market's response to their policy. When the supplement was established, employers discovered they could reduce wages further without losing workers — the public fund would cover the gap. Wages fell. The supplement increased. Wages fell further. The supplement increased again. The cycle continued until wages approached zero and the entire cost of labor was borne by the parish. The laborers were not protected from the market. The market was protected from the consequences of its own logic: the social cost of subsistence wages was absorbed by public funds while the market's behavior remained unconstrained.
The laborers became something worse than exploited workers. They became paupers — dependent on charity, stripped of the dignity that comes from earning a livelihood, trapped in a system that subsidized their poverty rather than enabling their independence. The Speenhamland system did not fail because redistribution is wrong. It failed because redistribution that accepts the structural conditions producing the problem cannot change those conditions. It can only subsidize them. And subsidizing the conditions of commodification is not the same as constraining them.
Karl Polanyi analyzed Speenhamland in meticulous detail because he recognized it as a paradigm of a recurring failure: the well-intentioned social policy that addresses symptoms while leaving causes untouched. The magistrates addressed the consequences of market-driven destitution — insufficient income — without addressing the cause: the commodification of labor without institutional constraint on the terms of its sale. The supplement accepted commodification as given and attempted to redistribute its effects. But redistribution that leaves the structural logic of commodification intact merely makes that logic more sustainable by absorbing its social costs. The market continues to operate without constraint. The public fund absorbs the damage. And the population that was meant to be protected is reduced to permanent dependency.
The parallel to the emerging policy response to AI displacement is close enough to constitute a warning. Universal Basic Income — the proposal to provide every citizen with a guaranteed income regardless of employment — is the most prominent redistribution proposal in the AI discourse. Its proponents include some of the most influential figures in the technology industry, and their advocacy is not cynical. They see what the market is doing to the value of cognitive labor. They see the displacement accelerating. They recognize that the market's logic, applied without constraint, will produce a population that cannot earn a living through the exercise of the skills it spent years developing. UBI is their answer: a floor beneath which no one falls, a guarantee of survival in an economy that no longer guarantees employment.
The Speenhamland parallel is structural, not exact. The eighteenth-century laborers were agricultural workers in a pre-industrial economy. The twenty-first-century displaced are knowledge workers in a global digital economy. The institutional contexts differ enormously. But the structural logic is the same: redistribution that accepts commodification without constraining it subsidizes the problem rather than solving it.
If UBI is implemented without structural reform, employers will realize — as the Speenhamland employers realized — that they can reduce wages further. The guaranteed income covers the gap. The market continues to deploy AI wherever it reduces cost, because the social cost of displacement is absorbed by the public fund rather than borne by the organizations making deployment decisions. The displaced receive enough to survive but not enough to participate meaningfully in economic or social life. The dependency is stable, self-perpetuating, and politically manageable — which is precisely what makes it dangerous. A population in crisis demands response. A population in managed dependency accepts its condition, because the condition is tolerable enough to prevent revolt yet meager enough to preclude flourishing.
Polanyi would not oppose redistribution. He would insist that redistribution must serve restructuring, not substitute for it. The gains of the AI transition must be shared broadly — this is a matter of both justice and stability. But the sharing must occur within a framework of structural reform that addresses the conditions producing displacement, not merely the consequences. Three structural reforms are essential, each addressing a different dimension of the commodification that redistribution alone cannot reach.
The first is educational restructuring. The current educational system was designed to produce workers for a market that valued execution — the capacity to perform specified tasks competently. The AI market does not value execution. It values the judgment that determines which tasks deserve execution, the questioning that identifies which problems deserve attention, the integration that connects insights across domains into coherent strategy. Educational reform must shift the fundamental purpose of formal education from training to formation: from the transmission of specifiable skills to the development of human capacities that no AI can replicate and no market can produce.
This is not a curriculum adjustment. It is a transformation of what educational institutions understand themselves to be doing. The teacher who grades questions rather than essays — described in The Orange Pill — has grasped the transformation intuitively. The student who produces the best questions demonstrates the deepest engagement with the material, because a good question requires understanding what you do not understand. But the insight must be systematized, institutionalized, and scaled. Assessment methods must be rebuilt around the evaluation of cognitive process rather than cognitive product. Curricula must be restructured to develop integrative capacity across domains rather than specialized competence within them. And the tension between the market's demand for immediately deployable workers and education's obligation to develop full human capacities must be resolved in favor of development, because the market that demands immediate deployment today will need judgment tomorrow — and judgment cannot be produced on a market timeline.
The second structural reform is labor market restructuring. The current framework treats workers as individual sellers of cognitive labor on a market that determines the price. The AI transition requires frameworks that treat workers as participants in a cooperative enterprise — with institutional protections ensuring that the gains of AI-augmented productivity are shared between workers and the organizations that deploy AI, rather than captured entirely by capital. This means new forms of collective bargaining adapted to the dispersed, cross-domain character of AI-affected work. It means profit-sharing arrangements that give workers a stake in the productivity gains their AI-augmented work generates. It means governance structures within organizations that give workers voice in deployment decisions that transform the nature of their work.
The third structural reform is governance. The governance of AI deployment must include mechanisms through which affected populations participate in the decisions that affect them. Workers whose jobs are being transformed by AI currently have no institutional mechanism for influencing deployment decisions. Students whose education is being reshaped have no voice in pedagogical choices made on their behalf. Citizens whose public services are being automated have no channel for ensuring that automation serves their needs. This governance gap is not merely a policy oversight. It is a structural feature of a system in which the organizations deploying AI have concentrated institutional voice — lobbying organizations, industry associations, direct access to policymakers — while the populations affected by deployment have none.
The Speenhamland magistrates were not foolish. They were responding to real suffering with the tools available to them. The tools were inadequate because the problem was structural and the response was distributional. The AI policy community faces the same inadequacy. UBI responds to real displacement with a distributional tool. But displacement produced by the structural commodification of intelligence requires a structural response — educational institutions that develop what markets cannot produce, labor frameworks that share what markets would otherwise concentrate, governance mechanisms that represent what markets would otherwise ignore.
The historical record teaches that the structural response is harder, slower, more politically contentious, and more institutionally demanding than the distributional response. Writing checks is simpler than building institutions. But the Speenhamland system teaches what happens when simplicity substitutes for adequacy: a stable equilibrium of dependency that perpetuates the conditions it was designed to alleviate. The AI transition requires institutions, not checks. It requires restructuring, not redistribution. And the construction of those institutions — educational, economic, and governmental — is the work that will determine whether the AI economy produces a society of participants or a society of dependents.
---
The attention economy commodified awareness. Social media platforms, built on the business model of selling human attention to advertisers, perfected the extraction of consciousness as a commercial input. Variable reward scheduling, social validation loops, infinite scrolling — each mechanism was designed not to serve the user but to capture the user's attention and convert it into a saleable commodity. Billions of people were trained to treat their awareness as a resource to be deployed for the production of content, engagement, and social capital. The quantification of attention — clicks, views, engagement time — established the conceptual framework within which consciousness could be measured, priced, and traded.
Artificial intelligence completes the commodification that the attention economy began. The attention economy captured the medium. AI commodifies the processes that the medium enables: judgment, analysis, creativity, the formation of beliefs, the making of decisions. When a large language model produces a legal brief, it does not merely capture the lawyer's attention. It replaces the lawyer's cognitive process — the sequence of analysis, evaluation, and judgment that constitutes her contribution. The lawyer's consciousness, the awareness that makes professional judgment possible, is being treated as a productive input — valuable when no cheaper input is available, dispensable when one is.
Karl Polanyi would identify this as the market's logic reaching its terminal extension: the commodification of the capacity for thought itself. If the original great transformation subordinated the body of the laborer and the surface of the earth to market logic, the AI transformation subordinates the mind of the thinker and the structure of awareness. The extension is not metaphorical. The same mechanism operates: the market treats something that was not produced for sale as if it were, subjects it to the price mechanism, and destroys the dimensions of it that the price mechanism cannot capture.
The commodification of consciousness is distinguished from all previous commodifications by a specific property: it is self-concealing. When labor was commodified, the laborers could feel the degradation in their bodies — the exhaustion, the injuries, the premature aging. When land was commodified, the ecological destruction was eventually visible: deforested hillsides, polluted rivers, exhausted soil. When money was commodified, the financial crises were dramatic enough to force political response. But the commodification of consciousness is experienced not as degradation but as enhancement. The AI tools make the worker more productive, more capable, more efficient. The person whose consciousness is being commodified feels amplified, not diminished.
This is the deepest irony of the AI transformation. The tool that commodifies consciousness presents itself as the tool that liberates it. The Orange Pill documents this irony with the honesty of a builder who has experienced both sides: the exhilaration of building something extraordinary with Claude, the recognition that the exhilaration was indistinguishable from compulsion, the catching of himself at three in the morning unable to stop, the muscle that lets him imagine locked into a state of grinding production that had long since ceased to be creative. The whip and the hand that held it belonged to the same person. The self-exploitation was experienced as self-expression. The commodification was experienced as liberation.
Byung-Chul Han's concept of the achievement subject — the person who oppresses herself through internalized demands for optimization — maps onto this dynamic with uncomfortable precision. Han argues that the contemporary subject is no longer constrained by external prohibition but by internal compulsion. The factory whistle has been replaced by the imperative to optimize, to produce, to achieve. The AI tool is the most powerful instrument of this internalized imperative ever built, because it makes the production frictionless, immediate, and apparently limitless. The person does not feel exploited because there is no external exploiter. There is only her own ambition, amplified by a tool that removes every barrier between impulse and output.
Polanyi's framework adds a structural dimension that Han's psychological analysis does not fully capture. The achievement subject is not merely a psychological type. She is the product of a specific institutional arrangement — a market society that has extended commodity logic to the domain of consciousness itself. The internalized imperative to optimize is not a personal failing or a cultural pathology. It is the subjective experience of an objective structural condition: a market that treats cognitive output as the measure of human value and that provides no institutional recognition of the unpriceable dimensions of cognitive life. The burnout that Han diagnoses is the lived experience of the fourth fictitious commodity. The achievement subject is the person whose intelligence has been commodified and who has internalized the commodification as freedom.
The protective counter-movement against the commodification of consciousness must therefore operate on two levels simultaneously: the institutional and the cultural. The institutional level requires the construction of what the Berkeley researchers, in their study of AI's effects on work, called "AI Practice" — structured interventions that protect human cognitive development from the market's demand for continuous productive output. Structured pauses where AI tools are set aside. Sequenced workflows that protect deep thought against the temptation to parallelize everything. Protected mentoring time where junior practitioners develop intuition through slow, friction-rich interaction with experienced colleagues. Mandatory offline periods that allow the cognitive rest without which sustained judgment is impossible.
These micro-institutional interventions are necessary but insufficient without macro-institutional support. The organization that implements AI Practice competes against organizations that extract maximum output without regard for developmental costs. The market rewards the latter. The structural response requires governance frameworks that make AI Practice not merely admirable but rational — labor regulations that mandate cognitive rest, professional standards that require developmental investment, tax incentives that reward organizations investing in human capability rather than extracting maximum output from AI-augmented production.
The cultural level of the counter-movement is simultaneously more diffuse and more fundamental. It requires challenging the hierarchy of value in which economic output is treated as the measure of human worth. The dominant narrative of the AI economy is a narrative of acceleration: faster output, lower cost, expanded capability, relentless optimization. This narrative is not false. It is radically incomplete. It omits the values that no market can produce and no optimization can enhance: the satisfaction of mastery developed through years of patient practice, the depth of understanding that only struggle can build, the social bonds formed through shared difficulty, the capacity for wonder that drives questioning, the dignity of being fully present in one's own life rather than perpetually optimized for someone else's metric.
The counter-narrative must articulate these values not as nostalgic relics of a pre-AI world but as the structural preconditions of a society worth inhabiting. The candle of consciousness — the metaphor that The Orange Pill uses for the rarest thing in the known universe, the awareness that asks why — is not a luxury to be preserved if the market permits. It is the precondition for every institution the counter-movement needs to build. Without consciousness, no dam can be conceived. Without judgment, no regulation can be designed. Without questioning, no alternative can be imagined. The protection of consciousness is not one task among many for the counter-movement. It is the precondition for every other task.
And this is where the recursive paradox identified in Chapter 3 reaches its most acute expression. The market is commodifying the capacity that the counter-movement requires to function. The intelligence that would conceive protective institutions is being subjected to market logic. The judgment that would design governance frameworks is being outsourced to tools that produce output without developing judgment. The attention that would sustain democratic engagement is being captured by platforms optimized for engagement rather than enlightenment. The counter-movement faces a challenge that no previous counter-movement has faced: the market's logic is undermining the human capacity to resist the market's logic.
The resolution of this paradox — if it can be resolved — lies in the protection of spaces that are explicitly not governed by market logic. Educational institutions that value formation over deployment. Professional communities that value judgment over output. Cultural spaces that cultivate wonder, questioning, and moral sensitivity. Private spheres of family and friendship protected from the market's demand for continuous productive engagement. These spaces are the nurseries in which the architects of the counter-movement are formed. Their protection is not a social luxury. It is a political necessity — the preservation of the cognitive and institutional capacity that human societies need to govern their own transformation.
The commodification of consciousness is the horizon toward which market logic has been pointing since the first enclosure of common land. Each fictitious commodity brought the market closer to the innermost dimension of human existence: from the body's labor, to the earth's surface, to the conventions of exchange, to the capacity for thought itself. Each extension produced its characteristic destruction and its characteristic counter-movement. The question now is whether the counter-movement can mobilize before the commodity being protected — consciousness itself — has been degraded to the point where mobilization is no longer possible.
The window is not theoretical. It is measured in the developmental trajectories of the students currently in school, the professional habits being formed by workers currently adapting to AI tools, the institutional norms being established by organizations currently deploying AI at scale. What is being built now — or not built — will determine the cognitive and institutional capacity available to the next generation. The counter-movement must build within this window, with whatever institutional resources remain, understanding that the most dangerous feature of this particular commodification is that it degrades the very capacity on which all counter-movements depend.
---
Every generation since the original great transformation has faced the same project: re-embedding the economy in society, ensuring that economic activity serves human needs rather than human beings serving economic logic. The labor movement of the nineteenth century was a project of re-embedding. The welfare state of the twentieth century was a project of re-embedding. The environmental movement was a project of re-embedding. Each generation faced a market that had extended its logic to a domain that could not survive commodification, and each generation built institutions that constrained the market's logic in the interest of the social fabric.
The AI age faces this project in its most challenging form, because the domain the market has reached — intelligence — is more intimate to the human person than labor, more fundamental to social existence than land, and more constitutive of human identity than money. To commodify intelligence is to commodify the capacity for thought itself. And the institutions required to constrain this commodification must be proportionate to its ambition.
The project of re-embedding is not a project of nostalgia. It does not seek to return to pre-market arrangements or to eliminate AI from human life. The market is a useful institution for producing and distributing genuine commodities. AI is a powerful tool for expanding human capability. Re-embedding means subordinating both to social purposes — ensuring that market activity and AI deployment occur within institutional frameworks that protect the unpriceable dimensions of human existence. The economy serves society. Society does not serve the economy. This principle, which is the core of Polanyi's life's work, is the principle on which re-embedding must be built.
The re-embedding must address five dimensions simultaneously, each reinforcing the others, none sufficient alone.
The first dimension is educational. The educational system must be restructured to develop the human capacities that markets cannot produce and AI cannot replicate. This means a paradigm shift from training — the transmission of specified skills — to formation: the development of judgment, integrative capacity, questioning ability, and the courage to sit with uncertainty. The shift requires institutional commitment that extends far beyond individual pedagogical innovations. Assessment must be rebuilt around the evaluation of cognitive process rather than cognitive product. Curricula must be restructured to develop integration across domains rather than specialization within them. The tension between the market's demand for immediately deployable workers and education's obligation to develop full human capacities must be resolved institutionally, through public investment that sustains developmental timelines the market would not subsidize.
The second dimension is economic. The gains of the AI transition must be shared through structural mechanisms, not merely distributional ones. The Speenhamland lesson demonstrates that redistribution without restructuring produces dependency. Structural sharing means profit-sharing arrangements that give workers a stake in AI-augmented productivity gains, tax systems that redirect concentrated gains toward public investment in human development, and ownership models that distribute the returns of AI deployment more broadly than the current framework permits. National AI transition funds — financed by levies on the revenues of AI-deploying organizations — should develop the capabilities of displaced workers rather than merely maintaining their consumption. The funds must be designed with Speenhamland in mind: investing in formation rather than subsistence, developing judgment rather than subsidizing idleness, including affected workers in governance rather than treating them as passive recipients.
The third dimension is governance. The people affected by AI deployment must participate in the decisions that shape it. Currently, the organizations developing and deploying AI have concentrated institutional voice: lobbying power, industry associations, direct access to policymakers, and the economic leverage of being among the most valuable enterprises in human history. Workers, students, and communities affected by deployment have almost none. This asymmetry must be addressed through the creation of governance mechanisms that give affected populations institutional voice: workplace committees that advise on AI deployment, educational governance structures that include student and faculty input, public oversight bodies that ensure AI-automated services maintain responsiveness to citizen needs. Democratic governance of AI is not an additional feature to be bolted onto existing frameworks. It is the foundation on which legitimate governance rests.
The fourth dimension is cultural. Re-embedding requires challenging the hierarchy of value in which economic output is treated as the supreme measure of human worth. The counter-narrative must articulate what a society worth living in looks like when machines can produce most of what people used to be paid to produce. It must rehabilitate the values that markets cannot price: the developmental satisfaction of mastery earned through struggle, the social bonds of community sustained through shared difficulty, the dignity of work that carries weight beyond its market value, the wonder that drives the questions no market commissions. These values are not luxuries that a wealthy society might choose to protect. They are the structural preconditions of a society capable of governing itself — the cognitive and moral infrastructure on which democratic institutions depend.
The fifth dimension is temporal. The gap between the speed of technological change and the speed of institutional adaptation is the most dangerous structural feature of the current moment. AI capabilities advance in months. Institutional frameworks develop in years. The people in the gap — adapting without guidance, without support, without the historical knowledge to understand what is happening to them — bear the cost. Closing the gap requires mechanisms for rapid institutional response: provisional regulations that constrain deployment while comprehensive frameworks are developed, emergency retraining programs that provide immediate support while longer-term reforms are implemented, agile oversight bodies that adapt to changing technological conditions rather than governing according to frameworks that were obsolete before they were enacted.
These five dimensions are interdependent. Educational reform that develops judgment enables democratic governance by producing citizens capable of informed participation. Democratic governance that includes affected populations enables economic reform by creating the political constituency for structural sharing. Economic reform that distributes gains broadly enables cultural transformation by demonstrating that market logic is not the only organizing principle. Cultural transformation that challenges the hierarchy of economic value enables educational reform by creating conditions in which formation is valued as highly as training. The interdependence means that re-embedding must be comprehensive. Partial reform — addressing one dimension while neglecting the others — will be undermined by the unreformed dimensions. Educational reform without economic restructuring produces well-formed graduates who cannot find work that values their formation. Economic restructuring without governance reform produces outcomes reflecting the interests of the powerful rather than the needs of the displaced.
The international dimension of re-embedding deserves emphasis that the current discourse has not provided. The market for AI is global. The companies that develop and deploy AI operate across borders. The displacement is not confined to any nation. National regulations that constrain deployment in one jurisdiction will be undermined if companies relocate to jurisdictions with weaker constraints. The race to the bottom in AI governance is already visible: jurisdictions compete to attract AI investment by offering environments that prioritize market freedom over social protection. Shapiro's analysis highlights the geopolitical stakes: the American speed-first model risks innovation outpacing legitimacy; the European regulation-first model risks constraining productive capacity; no model has yet achieved the integration of innovation and embedding that the moment requires.
Polanyi's analysis of the original great transformation provides a final, sobering lesson about the relationship between re-embedding and democratic governance. The counter-movements of the nineteenth century that produced durable institutions were those that combined multiple forms of organizing: unions that bargained for workplace conditions, parties that advocated legislative reform, mutual aid societies that provided social insurance, educational institutions that developed civic capacity. The counter-movements that failed relied on single forms: pure militancy without political engagement, pure advocacy without grassroots organizing, pure education without institutional power. The AI counter-movement must learn from both: combining regulatory advocacy with community organizing, educational reform with labor restructuring, cultural transformation with institutional construction.
The great transformation is not behind us. It is before us, in its most ambitious and most dangerous form. The original transformation disembedded the economy from society. The AI transformation extends that disembedding to intelligence itself — the capacity that human beings need to conceive and construct the institutions of re-embedding. The project of re-embedding, which Polanyi identified as the central political challenge of the modern era, now faces a recursive challenge: the capacity being commodified is the capacity required to resist commodification. The window for building the institutions that will protect this capacity is determined by the speed at which the market's logic extends its reach relative to the speed at which the counter-movement constructs its institutions.
The economy is the river. Society is the habitat. Intelligence is the water. The institutions that channel the water toward life rather than destruction are what must be built — now, with whatever materials are available, in whatever time remains. The great transformation taught this lesson at enormous human cost. The question is whether it will be learned in time, or whether a generation will pay the price of a lesson that history has already taught and that the market, by its nature, cannot remember.
The great transformation did not unfold uniformly across the globe. In England, where the self-regulating market was first constructed, the protective counter-movement eventually built institutions — factory legislation, labor unions, the welfare state — that constrained the market's most destructive tendencies. The process was brutal, contested, and required a century of political struggle, but it produced institutional frameworks within which market society became, if not just, at least survivable. In the colonial periphery, no comparable counter-movement was permitted. The commodification of labor and land proceeded without the institutional constraints that European societies eventually constructed, because the populations being commodified lacked the political standing to organize protective institutions. The consequences were proportionately catastrophic, and they persisted long after the formal end of colonial rule.
The AI transformation is reproducing this geography of uneven protection with a speed and comprehensiveness that the original transformation could not match. The tools being deployed are built by companies headquartered in a handful of wealthy nations — the United States, with its concentration of AI research capacity and venture capital, and to a lesser extent China, the United Kingdom, and a few European centers. The training data reflects the languages, cultural assumptions, and institutional frameworks of these nations. The deployment strategies serve their economic interests. The regulatory frameworks emerging to constrain deployment — the EU AI Act, American executive orders, emerging governance architectures in Singapore and Japan — are products of wealthy democracies with the institutional capacity to construct them.
The developer in Lagos, whom The Orange Pill invokes as an illustration of democratized capability, operates in a fundamentally different institutional environment. She has access to the same AI tools as an engineer in San Francisco — the same Claude Code, the same frontier models, the same collapse of the imagination-to-artifact ratio. The technological floor has risen. But the institutional floor has not. She builds without the labor protections that would ensure her productivity gains are shared with her rather than captured entirely by the platforms she builds on. She operates without the educational infrastructure that would develop her judgment alongside her technical capability. She works within governance frameworks that have no mechanism for representing her interests in the decisions being made about AI deployment by companies headquartered on another continent. The tools are global. The protections are local. And the gap between them is the space in which the periphery's transformation unfolds.
The IMF's estimate that sixty percent of jobs in advanced economies are exposed to AI receives extensive attention in policy discourse. The exposure in developing economies receives far less. The exposure is different in character — concentrated less in high-skill professional work and more in the business process outsourcing, customer service, data processing, and content moderation sectors that wealthy economies have offshored to lower-wage nations over the past three decades. These sectors were themselves products of an earlier wave of commodification: the conversion of cognitive labor into a tradeable service, purchased by corporations in New York and London, performed by workers in Manila and Nairobi and Bangalore, priced according to the wage differential between the purchasing and performing economies.
AI threatens to eliminate the wage differential that made offshoring profitable. When a large language model can perform customer service, data processing, and basic analytical tasks at a cost below even the lowest global wage floor, the economic logic of offshoring collapses. The jobs that wealthy economies sent to the periphery begin returning — not to human workers in the originating countries, but to AI systems operated by the same companies that offshored the work in the first place. The peripheral economies that built entire sectors around servicing the cognitive needs of wealthy economies find those sectors evaporating. The displacement is not gradual. It is the withdrawal of an economic tide that had appeared permanent.
Polanyi's analysis of the colonial periphery illuminates the structural mechanism at work. The original great transformation created a global division of labor in which the periphery supplied raw materials and cheap labor while the center supplied manufactured goods and financial capital. The terms of this exchange were set by the center's institutional power — its control of markets, its management of the international monetary system, its capacity to enforce favorable terms of trade. The periphery's counter-movements were suppressed by the same institutional power: colonial administrations that prohibited labor organizing, trade agreements that constrained industrial policy, international financial institutions that imposed market-liberalizing conditions on development assistance.
The AI transformation creates an analogous division: the center supplies AI systems and captures the returns, while the periphery supplies training data (through the digital activity of its populations), cheap content moderation labor (the human workers who clean training data of harmful content), and the consumer markets in which AI-powered products are deployed. The terms of this exchange are set by the center's institutional power — its control of AI development, its ownership of the platforms on which AI is deployed, its dominance of the governance frameworks being constructed to regulate deployment.
The counter-movement that Polanyi's framework predicts will emerge in the periphery faces obstacles that the center's counter-movement does not. The institutional capacity for constructing protective frameworks is weaker. The political systems through which governance operates are more fragile. The educational systems that would develop the judgment and institutional creativity the counter-movement requires are less resourced. And the economic dependency on the center's technology creates a structural asymmetry that limits the periphery's bargaining power. A nation that restricts AI deployment risks being bypassed by the technology's benefits while bearing its costs. A nation that embraces unrestricted deployment risks the full destructive force of unconstrained commodification without the institutional resources to contain it.
The Speenhamland lesson applies with particular force to the global periphery. The redistributive proposals emerging in AI policy discourse — universal basic income, transition funds, retraining programs — are designed primarily for the citizens of wealthy nations. The institutional mechanisms through which redistribution operates (tax collection, social insurance systems, public employment services) are weaker or absent in much of the developing world. UBI funded by AI-company levies in wealthy nations would benefit the citizens of those nations while doing nothing for the workers in Manila whose customer service jobs have been automated, the content moderators in Nairobi whose labor trained the systems that replaced them, or the software developers in Dhaka whose implementation skills have been commodified.
An international framework for AI governance — analogous to the international labor standards that eventually emerged in response to the original transformation's global dimensions — is not merely desirable. It is a structural requirement. The race to the bottom in AI governance, in which jurisdictions compete to attract AI investment by weakening protections, reproduces the dynamic that Polanyi identified in the original transformation: the market uses jurisdictional competition to escape institutional constraint, producing a global landscape in which the protective counter-movement is systematically undermined by the mobility of capital.
The framework must include mechanisms for ensuring that the peripheral economies that supplied the training data, the content moderation labor, and the consumer markets on which AI systems depend receive a share of the returns those systems generate. It must include provisions for technology transfer that develop local AI capacity rather than deepening dependency on the center's systems. It must include governance mechanisms that give peripheral nations voice in decisions about AI deployment that affect their populations. And it must include protections against the withdrawal of offshored cognitive work that would devastate the economies that built entire sectors around it.
The institutional creativity required is immense. No existing international framework is adequate to the task. The International Labour Organization, the World Trade Organization, the United Nations system — each was built for a different era's challenges and lacks the mandate, the expertise, and the enforcement capacity to govern the AI transformation's global dimensions. New institutions must be built, or existing ones fundamentally reformed, at a speed that matches the transformation they are meant to govern.
The uneven transformation is not a peripheral concern. It is a structural feature of the AI transition that will determine whether the transformation produces global expansion or global polarization. The history of the original great transformation teaches that market logic unconstrained by institutional protection produces catastrophe — and that catastrophe in the periphery eventually destabilizes the center. The colonial extraction that enriched Europe in the nineteenth century produced the anti-colonial movements that reshaped the twentieth. The economic polarization that the AI transformation threatens to deepen will produce its own destabilizing responses — migration pressures, political radicalization, the collapse of the cooperative international frameworks on which the center's prosperity depends.
Re-embedding the economy after AI is not a national project. It is a global one. And the quality of the global institutions built to govern the transformation will determine whether the AI age produces shared prosperity or a new geography of extraction in which the center's intelligence is augmented while the periphery's is commodified.
---
Karl Polanyi died in 1964, two decades before the personal computer, four decades before the smartphone, six decades before the large language model. He did not foresee artificial intelligence. But he mapped the pattern that artificial intelligence has activated — the pattern by which market logic extends to a new domain of human existence, commodifies what cannot survive commodification, and provokes a protective counter-movement whose adequacy determines whether the outcome is expansion or catastrophe. The pattern is not a law of nature. It is a description of what happens when political societies make a specific choice: the choice to allow market logic to govern domains of life that require other governing principles.
The original great transformation was comprehensive — it touched every dimension of human existence, from labor to community to money to the meaning of land. The AI transformation matches and may exceed that comprehensiveness. It reaches into the domain of intelligence itself, the capacity for thought, judgment, and creative production that constitutes the most intimate dimension of human existence. If the original transformation was an assault on the body of the laborer and the surface of the earth, the AI transformation is an operation on the mind of the thinker and the structure of consciousness. The word assault is structural, not moral. The builders of the AI economy are not, for the most part, malicious. Many are thoughtful people, deeply concerned about consequences. But the structural dynamic is the same regardless of the intentions of the participants. The market extends its logic. The logic commodifies what it reaches. The commodification produces destruction that the market cannot see. And the destruction accumulates until it provokes a response.
What distinguishes the AI transformation from every previous instance of the pattern is the recursive dimension. When labor was commodified, the laborers retained their capacity for political organization. They could form unions. They could vote. They could build the institutions that eventually constrained the market's logic. When land was commodified, the ecological destruction was eventually visible enough to galvanize a political movement. When money was commodified, the financial crises were dramatic enough to force institutional response.
The commodification of intelligence threatens the capacity that every previous counter-movement required to function. The judgment that would design protective institutions is being outsourced to tools that produce output without developing judgment. The attention that would sustain democratic engagement is being captured by platforms optimized for engagement rather than enlightenment. The questioning that would imagine institutional alternatives is being displaced by answer-machines that the market rewards for speed and volume. The counter-movement faces a structural challenge without precedent: the commodity it must protect is the commodity it needs in order to protect anything at all.
This recursion does not make the counter-movement impossible. It makes it urgent. The window for building institutions that protect human cognitive development from market logic is determined by the speed at which the market's commodification of intelligence degrades the capacity for institutional creativity. Every year that passes without adequate institutional construction is a year in which the developmental trajectories of students, the professional habits of workers, and the organizational norms of institutions are shaped by unconstrained market logic. What is being built now — or not built — will determine the cognitive and institutional capacity available to the next generation. The counter-movement builds its own future resources, or it watches those resources depleted by the market's relentless optimization.
The five dimensions of re-embedding — educational, economic, governance, cultural, temporal — are not a policy wishlist. They are the minimum institutional infrastructure that Polanyi's analysis identifies as necessary for containing the commodification of a fictitious commodity. The original transformation's counter-movement built comparable infrastructure — factory legislation, labor unions, educational systems, welfare states, international labor standards — over the course of a century. The AI transformation does not have a century. The speed of computational deployment against the speed of institutional construction is the most dangerous asymmetry in the current equation. Polanyi's concept of the "rate of change" — the speed of economic transformation relative to a society's capacity to absorb it — has never been more relevant. The faster the transformation, the more violent the dislocation, the greater the risk that the counter-movement takes destructive rather than constructive form.
The interwar period provides the cautionary tale. The social destruction produced by the unconstrained market of the nineteenth century created conditions for the destructive counter-movements of the twentieth: fascism in Italy and Germany, authoritarian collectivism in Russia. These movements did not emerge from malice. They emerged from legitimate grievances about the distribution of costs. They offered simple narratives, clear enemies, and the promise of protection that the constructive counter-movement had failed to provide. They succeeded because the constructive counter-movement was too slow, too fragmented, and too timid to build the institutions the moment required. The AI transition must not repeat this sequence. The constructive counter-movement must act with urgency and ambition, building institutions that address legitimate grievances before those grievances are captured by movements that would destroy the democratic framework within which constructive institutions can be built.
The Polanyian framework does not provide blueprints. It provides a diagnostic: a method for identifying the structural dynamics at work in any instance of market expansion and for assessing the adequacy of the institutional response. The diagnostic applied to the AI transformation reveals a market extending its logic to the domain of intelligence, a commodification producing characteristic destruction (the erosion of developmental processes, the degradation of trust, the displacement of questions by answers, the hollowing of professional judgment), and a counter-movement that is emerging but not yet adequate — fragmented across regulatory, labor, educational, cultural, and international dimensions, operating at institutional speed against a transformation moving at computational speed.
The diagnostic also reveals what the market-focused discourse systematically obscures: that the technology is not the determinant. The institutions are the determinant. The same AI technology, deployed within different institutional frameworks, produces radically different outcomes. Deployed within a framework that protects developmental processes, shares gains broadly, includes affected populations in governance, and sustains the cultural conditions for questioning and judgment, AI expands human capability in ways that justify every claim the triumphalists make. Deployed without such a framework, the same technology commodifies intelligence, concentrates gains, degrades the developmental infrastructure on which future capability depends, and produces the social destruction that the market's logic always produces when extended to domains that cannot survive commodification.
The choice between these outcomes is political. It is made by institutions, not by algorithms. It is determined by the quality of governance, the strength of counter-movements, the wisdom of educational reform, and the cultural capacity to insist on values that markets cannot price. The technology amplifies whatever institutional framework it encounters. The framework determines whether the amplification produces flourishing or destruction.
Polanyi mapped the pattern. The pattern is repeating. The machines were never the point. The institutions were always the point. And the institutions that will govern the deployment of artificial intelligence — whether built wisely, built hastily, or not built at all — will determine whether this generation's great transformation produces the expansion that the optimists promise or the catastrophe that every previous instance of unconstrained commodification has delivered.
The economy must serve human life. Human life must not serve the economy. This principle is the core of Polanyi's work and the most urgent political claim of the present moment. The principle does not implement itself. It requires institutions — complex, contested, continuously maintained, never adequate, always under pressure. The building of those institutions is the central political project of the AI age. The project has begun. The question is whether it will be completed in time.
---
Three numbers kept appearing on my screen during the months I spent inside Polanyi's framework: sixty, twenty, and zero.
Sixty percent of jobs in advanced economies exposed to AI — the IMF's estimate, cited in policy documents with the clinical detachment of a number that hasn't yet become personal. Twenty-fold productivity multiplier — the figure from Trivandrum that I watched materialize in a room full of engineers whose faces shifted, over five days, from skepticism to vertigo to something I can only describe as the expression of people recalculating their lives. And zero — the market price of the senior architect's twenty-five years of embodied intuition once breadth became good enough.
I wrote The Orange Pill from inside the exhilaration. I meant every word of it. The tools are extraordinary. The expansion of capability is real. The collapse of the imagination-to-artifact ratio is the most significant shift in the relationship between human intention and material reality since the invention of writing. I stand by the argument. I stand by the awe.
But Polanyi taught me to see the institutional frame around the awe — the frame I was standing inside without knowing it was there. The market that prices the output but not the journey. The system that rewards this quarter's productivity and treats next decade's judgment as someone else's problem. The logic that says breadth is good enough, that solo builders are more efficient than teams, that the developmental friction that built every expert I have ever admired is a cost to be eliminated. I recognized that logic. I had been operating inside it for decades. I had mistaken it for gravity.
It is not gravity. It is a political construction — built by specific decisions, sustained by specific institutions, amenable to specific reforms. That recognition is what Polanyi gives you, and once you have it, you cannot give it back. The market is not nature. The trajectory of AI deployment is not inevitable. The institutions that will govern this transformation are being built right now, by people who may or may not understand what they are building, and the quality of those institutions will determine whether the twenty-fold multiplier I witnessed in Trivandrum becomes a story of human expansion or a story of human extraction.
The Speenhamland chapter hit hardest. I had been casually supportive of UBI as the obvious policy response to AI displacement — a floor beneath which no one falls. Polanyi's analysis of the magistrates who created a stable equilibrium of dependency while believing they were protecting the vulnerable forced me to think structurally rather than distributionally. Writing checks is not building institutions. Subsidizing displacement is not the same as preventing it. The people I watched in Trivandrum did not need a guaranteed income. They needed the institutional support to develop the judgment that the market would continue to require but would not produce. The difference between those two needs is the difference between Speenhamland and a genuine counter-movement.
What keeps me awake now is the recursion. The idea that the market is commodifying the very capacity we need to resist commodification — that the judgment required to design protective institutions is itself being outsourced, that the attention required for democratic participation is itself being captured, that the questioning required to imagine alternatives is itself being displaced by answer-machines optimized for speed. The window for building institutions is measured not in policy cycles but in developmental trajectories — the students currently in school, the workers currently forming professional habits, the organizations currently establishing norms. What we build now, or fail to build, determines the cognitive and institutional resources available to the generation that inherits the consequences.
I am a builder. Polanyi was a historian. He would not have built the things I build. I would not have written the history he wrote. But his framework gave me something I did not have before: the ability to see that the exhilaration and the extraction are not two different stories. They are the same story, told from different positions in the institutional landscape. The builder who sees only the exhilaration is the factory owner who saw only the output. The critic who sees only the extraction is the Luddite who saw only the loss. The task is to hold both — and to build institutions that channel the exhilaration toward expansion while constraining the extraction before it depletes the soil.
The economy must serve human life. Not the other way around. That sentence sounds simple enough to embroider on a pillow. Living inside its implications is the hardest work I know.
AI didn't commodify your intelligence. A political decision did. The institutions that govern what happens next have not been built yet.
Every AI book asks what the machines can do. Karl Polanyi — writing eighty years before ChatGPT — mapped the deeper question: what do markets do with powerful new capabilities, and who pays when nobody builds the institutions to govern them? His framework reveals the AI revolution not as a technology story but as the latest instance of a pattern that has repeated since the Industrial Revolution — market logic extending into a domain that cannot survive commodification, producing destruction that only institutional counter-movements can contain. This book applies Polanyi's analysis to artificial intelligence with precision that will unsettle optimists and critics alike, identifying intelligence itself as a fictitious commodity and tracing the double movement already emerging in response.

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Karl Polanyi — On AI uses as stepping stones for thinking through the AI revolution.