Gunnar Myrdal — On AI
Contents
Cover
Foreword
About
Chapter 1: The Declaration of Values
Chapter 2: The Principle of Cumulative Causation
Chapter 3: Backwash and Spread in the Digital Economy
Chapter 4: The Infrastructure of Advantage
Chapter 5: One Hundred Dollars Is Not Zero Dollars
Chapter 6: The Political Economy of Dam-Building
Chapter 7: The Industrial Precedent, Honestly Told
Chapter 8: Brain Drain at Digital Speed
Chapter 9: Circular Causation in the Classroom
Chapter 10: The Interventionist Imperative
Epilogue
Back Cover
Cover

Gunnar Myrdal

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Gunnar Myrdal. It is an attempt by Opus 4.6 to simulate Gunnar Myrdal's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question I never asked was the one that mattered most: one hundred dollars relative to what?

I wrote that number in *The Orange Pill* like it was a punchline. One hundred dollars a month. The most powerful creative amplifier in human history, for less than a decent dinner in San Francisco. I meant it as proof that the floor was rising, that capability was spreading, that the old gatekeepers were finished.

Gunnar Myrdal would have read that sentence and asked a single question that unravels the entire framing: one hundred dollars measured against whose income?

Against mine, it is nothing. Against a software engineer in Lagos, it is a quarter of her monthly pay. Against a household in rural Bihar, it exceeds total monthly earnings. The price is the same. The barrier is completely different. And the fact that I did not see the difference until a dead Swedish economist forced me to look is itself the kind of blind spot Myrdal spent his career exposing.

This is why his framework matters right now. Not because it disputes the power of AI. It does not. Not because it denies that the tools are spreading. They are. But because it asks the question the technology discourse almost never asks: who captures the value, and who bears the cost of the transition?

Myrdal called the mechanism cumulative causation. Advantages compound. Disadvantages compound. The rich school funds better teachers, which produces higher-earning graduates, which generates more tax revenue, which funds better schools. The poor school runs the same circle in reverse. The technology does not break the circle. It accelerates whichever direction the circle is already running.

I built *The Orange Pill* on the conviction that AI is an amplifier. Myrdal forced me to complete the sentence: an amplifier operating inside institutional structures that determine whose signal gets carried and whose gets lost.

This book is not a correction of my argument. It is a foundation I missed. The beaver builds dams, yes. But Myrdal asks who controls the water rights, whose fields flood when the dam is placed where the powerful want it, and whether the beaver has the political education to know the difference.

The sunrise I described from the top of the tower is real. Myrdal asks whether it is visible from every position, or only from the ones that were already elevated.

That question changes everything it touches. It changed how I think about the team I lead, the tools I celebrate, and the future I am building for my children. It should change how you think about yours.

Edo Segal · Opus 4.6

About Gunnar Myrdal

1898–1987

Gunnar Myrdal (1898–1987) was a Swedish economist, sociologist, and public intellectual whose work reshaped how scholars and policymakers understood inequality, development, and the limits of objective social science. Born in Skattungbyn, Sweden, he studied law and economics at Stockholm University and became a leading figure in the Stockholm School of economics. His landmark 1944 study *An American Dilemma: The Negro Problem and Modern Democracy*, commissioned by the Carnegie Corporation, provided the most comprehensive analysis of racial inequality in the United States to that date and influenced the Supreme Court's 1954 *Brown v. Board of Education* decision. His later works, including *Economic Theory and Under-Developed Regions* (1957) and the monumental three-volume *Asian Drama: An Inquiry into the Poverty of Nations* (1968), established the theory of circular cumulative causation — the principle that advantages and disadvantages compound through self-reinforcing feedback loops rather than self-correcting toward equilibrium. Myrdal insisted throughout his career that all social analysis proceeds from value premises, whether acknowledged or not, and that intellectual honesty requires those premises to be declared openly. He was awarded the Nobel Memorial Prize in Economic Sciences in 1974, shared with Friedrich Hayek. His concepts of "spread" and "backwash" effects, his critique of value-free social science, and his interventionist framework for development policy remain foundational to institutional economics, development studies, and the analysis of structural inequality.

Chapter 1: The Declaration of Values

Every analysis begins with a confession, whether the analyst knows it or not. The economist who claims to study markets objectively has already decided that markets matter. The technologist who celebrates democratization has already decided that wider access is good. The philosopher who warns against frictionless efficiency has already decided that friction carries value. None of these decisions is wrong. All of them are decisions, and intellectual honesty requires that they be named before the analysis proceeds, not buried in the methodology where they can operate unseen and unexamined.

Gunnar Myrdal spent the better part of four decades making this single point, and the resistance he encountered tells us something important about why the point needed making. In his 1944 study An American Dilemma, which examined the systematic oppression of Black Americans within a nation that professed universal equality, Myrdal opened not with data but with a declaration: the study would proceed from the explicit value premise that the American Creed — the stated commitment to liberty, equality, and the dignity of the individual — was the standard against which American practice would be measured. The declaration was not a rhetorical flourish. It was a methodological commitment. Myrdal had concluded, through years of studying how economic and social analysis actually functioned, that the pretense of value-free social science was itself a political act — a way of smuggling particular commitments into an analysis while denying their presence. "There is no view without a viewpoint," he wrote, and the scholar who refuses to name the viewpoint does not thereby eliminate it; the scholar merely makes it invisible, which is worse than stating it plainly, because invisible premises cannot be examined, challenged, or revised.

The relevance of this methodological demand to the current moment is not metaphorical. It is direct, immediate, and largely unheeded.

The discourse surrounding artificial intelligence is saturated with undeclared values. When Edo Segal asks, in the Foreword to The Orange Pill, "Are you worth amplifying?" — a question that organizes the entire book's argument — the question carries embedded premises that deserve examination. It presumes a subject who possesses the education to formulate clear descriptions in natural language. It presumes connectivity sufficient to access the tools. It presumes the economic stability to invest time in experimentation rather than immediate survival. It presumes, in short, a reader who already occupies a position of relative advantage in the global distribution of capability and resource. The question is powerful, and it is not wrong. But it is addressed to a particular audience, and the analytical framework that follows from it — the celebration of individual empowerment, the metaphor of the beaver building dams in the river of intelligence, the conviction that AI lowers the floor of who gets to build — reflects that audience's concerns, aspirations, and blind spots.

Myrdal's framework insists on widening the audience before the analysis begins. The question is not only "Are you worth amplifying?" but "Who is the 'you' that this question addresses, and what happens to the people it does not address?" The second question is not a correction of the first. It is a completion. Without it, the analysis proceeds from an unstated assumption about whose experience constitutes the relevant evidence, and that assumption — invisible, unexamined — shapes every conclusion that follows.

The values that inform the present analysis should therefore be stated plainly, in the tradition Myrdal established and defended throughout his career.

First: the distribution of a technology's benefits is a moral question, not a technical one. A technology that expands capability for those who already possess capability while leaving the structurally disadvantaged further behind is not a democratizing technology, regardless of how accessible it is in principle. The moral test is not what the technology makes possible in the abstract but what it makes possible in practice, for the people who most need expanded capability, in the institutional environments they actually inhabit.

Second: institutional analysis is more revealing than individual analysis. The story of a single developer in Lagos who builds a product with AI tools is real and important. But it is not representative unless the institutional conditions that enabled that developer's success — education, connectivity, market access, legal protections, financial infrastructure — are examined and found to be broadly available. If those conditions are concentrated among a small fraction of the population, the individual success story, however inspiring, describes the exception rather than the rule, and the exception cannot be generalized into evidence of democratization without committing a fundamental analytical error.

Third: the obligation of the analyst is not merely to describe what is happening but to examine the structures that determine who benefits and who bears the cost. This is the commitment that distinguishes Myrdal's institutional economics from both the laissez-faire tradition, which assumes that market forces will distribute benefits efficiently, and the purely critical tradition, which diagnoses pathology without prescribing treatment. Myrdal's work was always reformist. Understanding a problem created, in his framework, an obligation to address it. Analysis that stops at diagnosis is, for Myrdal, intellectually incomplete — a form of detachment that serves the interests of those who benefit from the status quo by providing sophisticated reasons to leave it undisturbed.

These three commitments — to distribution as a moral question, to institutional rather than individual analysis, and to the reformist obligation — constitute the lens through which the AI moment will be examined in the chapters that follow. They are not the only defensible commitments. Segal's commitments — to the value of individual empowerment, to the generative potential of human-AI collaboration, to the conviction that amplification rewards quality — are also defensible, and this analysis does not dismiss them. What it does is supplement them with the questions they do not ask, the populations they do not address, and the institutional dynamics they do not examine.

The Orange Pill framework rests on a metaphor that deserves scrutiny through this lens. Intelligence, Segal argues, is a river — a force of nature flowing through increasingly complex channels for 13.8 billion years, from hydrogen atoms to biological evolution to human consciousness to artificial computation. Humans are beavers in this river, building dams that redirect the flow toward life. The metaphor is evocative and contains genuine insight: the recognition that intelligence is not a human possession but a phenomenon humans participate in, and that stewardship — the deliberate construction of structures that direct powerful forces toward human flourishing — is the appropriate response to AI's arrival.

Myrdal's framework accepts the metaphor and immediately asks the question the metaphor does not answer: Who builds the dams? In Segal's telling, the beaver is a universal figure — any person with care, judgment, and the willingness to engage. But Myrdal spent his career demonstrating that the capacity to build institutional structures is itself unevenly distributed, shaped by the same patterns of advantage and disadvantage that the structures are meant to address. The nations with the strongest regulatory frameworks, the most adaptable educational systems, the deepest reserves of institutional capacity — these are the nations that already possess the advantages that make dam-building possible. The nations that most need the dams, because their populations are most vulnerable to the unmediated force of technological disruption, are precisely the nations whose institutional capacity to build them is weakest.

This is not an argument against dam-building. It is an argument for recognizing that the capacity to build dams is itself a form of advantage, distributed according to patterns of cumulative causation that the dam metaphor, taken on its own terms, does not address.

Segal acknowledges, in Chapter 14 of The Orange Pill, that "access requires connectivity, and connectivity requires infrastructure that billions of people do not have." He notes the barriers of hardware cost, English-language fluency, and the concentration of AI tools in Western workflows. These acknowledgments are genuine and important. But they occupy a paragraph in a chapter whose emotional and argumentative weight falls on the celebration of expanded access. The structural analysis is present but subordinate — mentioned in the spirit of intellectual honesty and then set aside in favor of the more compelling narrative of individual empowerment.

Myrdal's framework reverses this emphasis. The structural barriers are not a caveat to the democratization story. They are the story. The question is not whether a talented developer in Lagos can build with AI tools — clearly, some can and do — but whether the institutional environment in Lagos, and in the hundreds of cities and regions that face similar structural constraints, allows the majority of talented people to translate tool-access into genuine capability, sustained over time, captured by the local economy rather than extracted by the global one.

That question leads directly to the theoretical foundation of the analysis: the principle of cumulative causation, which Myrdal first formulated in the 1950s and which, as the following chapters will argue, describes the dynamics of AI adoption with an accuracy that borders on the prophetic.

But the principle cannot be properly understood without one further methodological observation. Myrdal distinguished, in his 1974 Nobel Prize lecture and throughout his later work, between what he called "opportunistic" and "honest" social science. Opportunistic analysis begins with a conclusion and assembles evidence to support it. It is selective in its attention, generous to confirming evidence and dismissive of disconfirming evidence, and it presents the resulting picture as though it were the product of open inquiry when it is in fact the product of motivated reasoning. Honest analysis begins with a question and follows the evidence wherever it leads, even when — especially when — the evidence leads to conclusions that are uncomfortable for the analyst, the analyst's community, or the analyst's funders.

The AI discourse is dominated by opportunistic analysis on both sides. The triumphalists select for success stories, adoption curves, and productivity metrics that confirm the democratization narrative. The catastrophists select for displacement data, inequality measures, and cautionary precedents that confirm the concentration narrative. Neither side is fabricating evidence. Both sides are selecting evidence, which is a subtler and more dangerous form of distortion, because it preserves the appearance of empiricism while eliminating the intellectual honesty that empiricism requires.

The analysis that follows attempts honest inquiry in Myrdal's sense. It takes Segal's celebration of AI's democratizing potential seriously, because the evidence for expanded access and individual empowerment is real. It also takes the structural critique seriously, because the evidence for concentration, extraction, and cumulative advantage is equally real. The tension between these two bodies of evidence is not a problem to be resolved by choosing a side. It is the phenomenon itself — the simultaneous spread and backwash that every powerful technology produces, operating at digital speed and global scale, with consequences that will be determined not by the technology but by the institutional choices that surround it.

Myrdal would insist, were he present, that the institutional choices are themselves shaped by the distribution of political power, and that the distribution of political power is shaped by the distribution of economic advantage, which is shaped by the institutional structures that the political power sustains. The circularity is not a logical flaw. It is the empirical reality that cumulative causation describes: a system in which causes are also effects, advantages produce further advantages, and the levers that might interrupt the cycle are held by the people who benefit from its continuation.

The intellectual honesty that Myrdal demanded begins with acknowledging this circularity and proceeds by mapping its specific mechanisms in the AI context. That mapping is the work of the chapters that follow. But it begins here, with the declaration that the analyst's values are on the table, and that the analysis will proceed from the commitment that the distribution of AI's benefits — not merely their existence — is the question that determines whether this technological moment becomes a story of human expansion or a story of accelerated divergence dressed in the language of liberation.

---

Chapter 2: The Principle of Cumulative Causation

In 1957, Gunnar Myrdal published Economic Theory and Under-Developed Regions, a book whose central argument was simple enough to state in a sentence and radical enough to occupy his remaining three decades: in the normal case, a change in a social system does not produce counterbalancing forces that restore equilibrium; it produces reinforcing forces that drive the system further in the direction of the initial change. The rich do not naturally converge with the poor. The developed do not naturally converge with the undeveloped. Initial advantages create further advantages, and initial disadvantages create further disadvantages, in spirals that market forces alone cannot interrupt. Myrdal called this principle "circular cumulative causation," and it challenged the most foundational assumption of mainstream economics — the assumption that markets tend, through the self-correcting mechanisms of supply and demand, toward a natural balance.

The assumption of equilibrium was not merely an academic abstraction. It was a policy prescription disguised as a description of reality. If markets naturally self-correct, then intervention is unnecessary at best and counterproductive at worst. If imbalances between regions, nations, or populations are temporary distortions that the market will resolve on its own schedule, then the appropriate response to inequality is patience — and the inappropriate response is the kind of deliberate institutional construction that Myrdal spent his career advocating. The equilibrium assumption served a political function: it provided intellectual cover for inaction in the face of persistent and deepening inequality, by recasting inaction as wisdom and intervention as interference with natural forces.

Myrdal's empirical work demolished this assumption with evidence drawn from three continents and several decades of observation. In the American South, he documented how racial discrimination in employment reduced Black earnings, which reduced Black investment in education, which reduced Black productivity, which was then cited as justification for continued discrimination in employment — a perfect circle of cumulative causation in which each disadvantage produced the next, and the system as a whole moved not toward balance but toward deepening inequality. In South and Southeast Asia, he documented how the absence of industrial infrastructure in poor regions drove capital toward already-industrialized regions, where returns were higher; the capital flight reduced the poor regions' capacity to invest in infrastructure, which made capital flight more attractive, which further reduced the capacity to invest — another circle, operating at national and regional scale, with the same self-reinforcing logic.

The principle is not mysterious. It does not require sophisticated mathematical modeling to understand. It requires only the willingness to observe that advantage compounds, that disadvantage compounds, and that the compounding operates through multiple channels simultaneously — economic, educational, institutional, psychological, cultural — in ways that reinforce each other. A region with good schools produces educated workers; educated workers attract employers; employers generate tax revenue; tax revenue funds better schools. A region with poor schools produces undereducated workers; employers locate elsewhere; tax revenue declines; schools deteriorate further. The initial condition — good schools or poor schools — does not self-correct. It self-amplifies.

The AI transition is cumulative causation at computational speed.

Consider the adoption curve that Segal describes with justifiable wonder in the opening chapters of The Orange Pill. ChatGPT reached one hundred million users in two months. Claude Code's run-rate revenue crossed $2.5 billion by February 2026. The speed of adoption, Segal argues, measures not the quality of the tool but the depth of the need — the accumulated pressure of decades of creative ambition constrained by the friction of implementation. The reading is insightful. But it is incomplete without a Myrdalian corollary: the speed of adoption also measures the speed of divergence.

Every adoption curve has a shape, and the shape tells a distributional story. When a powerful tool is adopted rapidly, the early adopters gain productivity advantages, market position, and experiential knowledge that late adopters cannot replicate by simply acquiring the tool at a later date. The advantages of early adoption are not merely temporal — being first — but cumulative: the early adopter who gained a productivity advantage in January 2026 used that advantage to build products, capture market share, develop expertise, and attract talent throughout the following months. By the time the late adopter acquires the tool, the early adopter has not merely maintained the initial advantage but compounded it through every cycle of investment, learning, and return.

This is cumulative causation applied to technology adoption, and its dynamics are visible in the data. Within organizations, the Berkeley study that Segal discusses in Chapter 11 documented how AI-adopting workers expanded their scope, took on more tasks, crossed departmental boundaries, and colonized previously protected time — all patterns consistent with the accumulation of advantage by those who adopted first and most intensively. Between organizations, the Software Death Cross that Segal analyzes in Chapter 19 shows the same logic operating at industry scale: the companies that built the AI models captured enormous market value, while the SaaS companies that relied on the previous paradigm lost a trillion dollars in weeks. The redistribution of value was not random. It flowed from the companies that were late to the paradigm shift toward the companies that had created the paradigm shift — a transfer of advantage from the less-positioned to the already-positioned that Myrdal would have recognized instantly.

Between nations, the dynamics are starker and less frequently examined. The countries that lead in AI development — the United States, China, and a small number of European and East Asian nations — possess the computational infrastructure, the research talent, the venture capital, the regulatory frameworks, and the institutional capacity to capture the benefits of AI at scale. The countries that do not possess these assets do not merely lag behind; they fall further behind with each cycle of AI improvement, because each cycle increases the advantage of the leaders while the infrastructure gap that prevents the followers from catching up remains unaddressed or, worse, widens. As researchers at the Center for Global Development have documented, the same AI technologies that augment high-wage cognitive employment in wealthy nations are displacing the low-wage manufacturing and service tasks that provided pathways to development for poorer nations. The disruption is simultaneous and asymmetric: the gains concentrate in already-advantaged regions, and the costs distribute across already-disadvantaged ones.

The mechanism is not obscure. It is the same mechanism Myrdal identified in the 1950s, operating through the same channels — capital flows, talent migration, institutional adaptation, market dynamics — accelerated by the specific characteristics of digital technology. Digital technologies scale without proportional increases in marginal cost, which means that the returns to early advantage are not merely large but disproportionate. The company that builds the model bears the cost of research and training; the marginal cost of serving an additional user is negligible. This cost structure favors concentration: a small number of companies capture the vast majority of the market, because scale advantages make competition from smaller or later entrants increasingly difficult with each cycle of growth. The pattern is visible in every major digital market, from search to social media to cloud infrastructure, and Myrdal's framework predicts that AI will follow the same trajectory unless institutional interventions alter the cost structure, the competitive dynamics, or the distribution of returns.

Segal's framework recognizes the risk. In Chapter 17, he writes that the question is not whether AI will expand capability — it will — but "who captures the expansion, and who bears the cost of the transition." Myrdal's framework provides the analytical machinery to answer that question with the specificity it demands. Cumulative causation predicts that, in the absence of deliberate intervention, the expansion will be captured by those who already possess the infrastructure, talent, capital, and institutional capacity to use the tools — and the cost of the transition will be borne by those who do not. This is not a prediction born of pessimism. It is an empirical regularity documented across centuries of technological change, from the industrial revolution to electrification to the internet, and the burden of proof falls on those who claim that AI will be different, not on those who observe that the pattern is repeating.

The pattern repeats not because of any inherent property of the technology but because of the institutional environment in which the technology is deployed. Technologies do not distribute their benefits. Institutions do. And institutions, as Myrdal demonstrated throughout his career, are shaped by the very patterns of advantage and disadvantage they would need to overcome in order to distribute benefits equitably. The educational systems that could prepare populations for AI-augmented work are weakest in the regions where the preparation is most needed. The regulatory frameworks that could redirect AI's gains toward broader distribution are most captured by industry in the nations where the companies that build the tools are headquartered. The infrastructure investments that could extend connectivity and access to underserved populations are least likely to be made in the regions where the economic return on investment is lowest and the human return is highest.

This is circular causation: the institutions that could break the cycle are themselves products of the cycle, shaped by its logic, constrained by its dynamics, and resistant to the changes that would alter the distribution of advantage. Myrdal did not regard this circularity as grounds for despair. He regarded it as grounds for intervention — deliberate, sustained, politically difficult intervention that acknowledges the circularity and works within it, rather than pretending that the circle will break on its own.

The chapters that follow map the specific mechanisms through which cumulative causation operates in the AI economy: the simultaneous spread and backwash effects that Segal's framework partially addresses, the infrastructure requirements that tool-access alone cannot satisfy, the cost barriers that look trivial from positions of advantage, the political economy that determines whose interests the institutional response will serve. Each mechanism is a channel through which advantage flows toward the already-advantaged, and each represents a point where institutional intervention could redirect the flow — if the political will to intervene can be mobilized against the interests of those who benefit from the existing distribution.

That conditional — "if the political will can be mobilized" — is not a caveat. It is the central challenge. Myrdal understood, from his earliest work on the Swedish welfare state through his final writings on global inequality, that the analytical case for intervention is never sufficient on its own. The analytical case must be accompanied by political mobilization, by the construction of coalitions broad enough and durable enough to sustain the institutional changes that the analysis prescribes. The history of every successful dam-building effort — from labor protections in the industrial revolution to public education in the progressive era to the welfare states of mid-twentieth-century Europe — is a history of political struggle, not merely intellectual clarity.

The AI transition will be no different. The question is whether the political mobilization will come in time — before the patterns of cumulative causation have hardened into structures too rigid to reform. The speed of AI adoption, which Segal rightly identifies as unprecedented, compresses the window for that mobilization in ways that previous technological transitions did not. The industrial revolution's distributional consequences unfolded over decades; the AI transition's consequences are unfolding over months. The dams must be built faster, and the river is moving faster, and the beaver — to use Segal's metaphor one final time — must work with an urgency that the metaphor's pastoral associations do not quite convey.

---

Chapter 3: Backwash and Spread in the Digital Economy

Every center of economic growth produces two kinds of effects on the regions that surround it. Myrdal named them with the precision that characterized his best theoretical work: spread effects carry the benefits of growth outward — demand for suppliers' goods, transfer of knowledge, expansion of markets, the rising tide that the optimists celebrate. Backwash effects pull resources inward — talent migrates to where the opportunities are richest, capital flows toward higher returns, institutions adapt to serve the center's priorities rather than the periphery's needs. The question that determines whether a given technological revolution produces convergence or divergence between rich and poor, between center and periphery, between the advantaged and the disadvantaged, is not whether spread effects exist. They always do. The question is whether spread effects are strong enough to overcome backwash effects, or whether backwash dominates — draining the periphery of the very resources it would need to capture the benefits of the new technology on its own terms.

Myrdal's empirical finding, documented across multiple continents and decades of observation, was that backwash tends to dominate. Not always. Not inevitably. But in the absence of deliberate institutional intervention, the gravitational pull of the center is stronger than the diffusive force of the spread. The reasons are structural, not accidental: talent moves to where talent is rewarded; capital moves to where returns are highest; institutions evolve to serve the interests of the powerful, who are concentrated at the center. Each of these movements reinforces the others. The migration of talent to the center increases the center's productivity, which increases returns on capital, which attracts more capital, which creates more opportunities for talent, which attracts more talent. The periphery, meanwhile, loses the very people and resources that could have enabled it to develop its own capacity — a loss that is not compensated by the spread effects that the center's growth produces, because the spread effects, while real, are diffuse, indirect, and insufficient to offset the concentrated, direct, self-reinforcing dynamics of backwash.
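The feedback loop described here, in which talent raises the center's productivity, productivity attracts more talent, and only a weak spread effect diffuses gains outward, can be sketched as a toy dynamical system. Every parameter and starting value below is an illustrative assumption chosen only to exhibit the qualitative dynamic; none is an empirical estimate from Myrdal or anyone else:

```python
def simulate(steps=20, migration=0.2, spread=0.005, growth=0.1):
    """Toy model of cumulative causation between a center and a periphery.

    All parameters are hypothetical; the point is the qualitative
    dynamic, not any empirical magnitude.
    """
    center, periphery = 1.05, 1.00   # a small initial productivity edge
    talent_c, talent_p = 0.5, 0.5    # talent starts evenly divided
    ratios = []
    for _ in range(steps):
        # Backwash: talent migrates toward the more productive region.
        flow = migration * (center - periphery) * talent_p
        talent_c += flow
        talent_p -= flow
        # Productivity compounds with the local talent share...
        center *= 1 + growth * talent_c
        periphery *= 1 + growth * talent_p
        # ...while a weak spread effect diffuses some gains outward.
        periphery += spread * (center - periphery)
        ratios.append(center / periphery)
    return ratios

ratios = simulate()
# Divergence, not convergence: the small initial gap steadily widens
# unless `spread` is raised enough to overpower the backwash terms.
assert ratios[-1] > ratios[0]
```

Raising `spread` relative to `migration` is the model's crude analogue of institutional intervention: with the default values backwash dominates and the productivity ratio grows, which is the direction Myrdal's framework predicts in the absence of deliberate counterforce.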

The digital economy has not eliminated this dynamic. It has accelerated it while changing its form in ways that make it simultaneously more visible and more difficult to address.

The spread effects of the AI revolution are genuine, substantial, and deserving of the celebration Segal gives them. The tools are available globally. A developer in Nairobi can access the same model as a developer in Mountain View. The knowledge required to use the tools circulates freely through documentation, tutorials, open-source communities, and the tools themselves, which teach their users through interaction. The cost of building software has collapsed by orders of magnitude. The imagination-to-artifact ratio, Segal's useful measure of the distance between a human idea and its realization, has indeed approached something close to zero for a significant class of work. These are spread effects, and they are transforming individual lives in every region where the basic prerequisites of connectivity and education are met.

But the backwash effects are operating simultaneously, and at digital speed, and with a concentration of force that Myrdal's mid-twentieth-century framework anticipated in structure even if it could not anticipate in scale.

Consider the geography of AI talent. The researchers who build frontier models are concentrated in a handful of institutions: Anthropic, OpenAI, Google DeepMind, Meta AI, and a small number of university labs, virtually all of them located in the United States or, secondarily, the United Kingdom and China. These institutions offer salaries, computational resources, intellectual community, and career opportunities that no institution in the Global South can match. The result is predictable and documented: the most talented AI researchers from India, Nigeria, Brazil, and every other developing country migrate — physically or digitally — to the center. Their training was subsidized by their home countries' educational systems. Their talent was developed in their home countries' communities. Their economic contribution accrues to the center's employers, the center's shareholders, and the center's tax jurisdictions.

This is backwash in its purest form: the periphery bears the cost of developing talent, and the center captures the return.

The dynamic is not new. Brain drain has been a feature of the global economy for as long as the global economy has existed. But AI accelerates it in two ways that Myrdal's framework illuminates. First, the remote-work revolution that the pandemic catalyzed and AI tools have extended means that the talent extraction no longer requires physical migration. A brilliant machine learning engineer in Bangalore can work for a San Francisco company without leaving her apartment. Her labor generates value for shareholders in California. Her salary, while generous by local standards, represents a fraction of the value her work produces. The tax revenue from her employment accrues to India, but the capital gains, the intellectual property, and the strategic advantage accrue to the United States. The extraction is laundered through the language of opportunity — she has a great job, after all, and her career options are broader than they would have been without the global labor market — but the structural dynamic is extractive. The periphery develops the human capital; the center captures the economic return.

Second, AI tools themselves create a new form of backwash that operates through the medium of data and model performance rather than through the migration of individual workers. The models that power Claude, GPT, and their competitors are trained predominantly on English-language data, structured according to the workflows and assumptions of Western knowledge workers, and optimized for the use cases that the paying customers — overwhelmingly in the United States and Europe — demand. A developer in Lagos using Claude Code is using a tool designed for someone else's context. It works, often remarkably well. But the tool's performance is optimized for the problems, languages, and institutional environments of the center, not the periphery. The periphery's needs, languages, and institutional contexts are secondary considerations — not because the companies that build the tools are malicious, but because the market incentives that drive development reflect the purchasing power of existing customers, who are concentrated in the center.

This creates a subtler form of backwash: the tool is available everywhere, but it is optimized for the center. The spread effect is tool access. The backwash effect is tool design — the quiet, continuous recalibration of the tool's capabilities toward the needs and workflows of its highest-paying users, which further advantages those users while the periphery receives a tool that is good but not designed for its specific conditions.

Segal's account of the Trivandrum training — twenty engineers in southern India, each achieving twenty-fold productivity gains with Claude Code — is a case study in simultaneous spread and backwash that rewards close Myrdalian examination. The spread effect is clear: these engineers gained capability that transforms their individual productivity and professional range. They can now build across domains, ship faster, attempt projects that were previously beyond reach. The empowerment is real and should not be minimized.

The backwash effect is equally real and systematically underexamined. The twenty-fold productivity gain accrues, in the first instance, to the company that employs these engineers — a company headquartered in the United States, serving a global market, generating revenue that flows to shareholders and executives in the developed world. The engineers' increased capability serves the company's strategic priorities, which are set in meetings where the Trivandrum team's interests are one input among many, and rarely the decisive one. The engineers' salaries may increase — companies that gain twenty-fold productivity from their workforce have incentives to retain them — but the distribution of the gain between labor and capital, between the periphery where the work is performed and the center where the value is captured, is determined by the same institutional dynamics that Myrdal mapped in the 1950s: bargaining power, institutional frameworks, and the relative mobility of capital and labor.

Segal writes, with genuine care, that his company chose to keep and grow the team rather than convert the productivity gains into headcount reduction. The choice is admirable. It is also contingent — dependent on the specific values of a specific leader at a specific company, not on any institutional structure that would ensure the same choice across the economy. Myrdal's framework is not interested in the choices of virtuous individuals, not because those choices are unimportant, but because they are unreliable as a mechanism for distributing the gains of technological change at the scale the problem requires. The question is not whether Edo Segal's company treats its Trivandrum engineers well. The question is whether the institutional environment — the labor laws, the tax structures, the competitive dynamics, the cultural norms — creates conditions under which companies in general are incentivized or required to share productivity gains with the workers and communities that produce them.

The historical evidence on this question is not encouraging. Every previous round of globalization has produced versions of the same dynamic: spread effects that provide real but limited benefits to the periphery, and backwash effects that extract a larger share of the value to the center. The outsourcing revolution of the 1990s and 2000s brought employment to India, the Philippines, and other developing nations — a genuine spread effect. It also created a labor arbitrage in which companies captured the difference between developed-world productivity and developing-world wages — a backwash effect that enriched shareholders in the developed world while the developing-world workers received a fraction of the value they produced. The digital platform economy has replicated this pattern: Uber drivers in Nairobi, content moderators in Manila, data labelers in Addis Ababa, all performing essential work for companies whose value accrues overwhelmingly to the center.

AI does not break this pattern. It extends it into the domain of cognitive work, which was previously insulated from the same extractive dynamics by the high skill requirements and the need for physical proximity that characterized knowledge-work markets. AI tools remove that insulation. A developer in Trivandrum with Claude Code is, in the global labor market, a substitute for a developer in San Francisco — at a fraction of the labor cost. The spread effect is that the Trivandrum developer has access to frontier tools. The backwash effect is that the San Francisco developer faces wage pressure while the Trivandrum developer captures only a fraction of the value differential that her productivity now represents. The surplus — the difference between what the work is worth and what the worker is paid — flows to capital, which is concentrated at the center.

Myrdal would not deny the spread effects. He never denied them. His argument was never that growth in the center produces no benefits for the periphery. His argument was that the benefits are insufficient to overcome the extraction — that backwash dominates spread in the absence of institutional intervention, and that the intervention must be deliberate, sustained, and politically contested, because the interests that benefit from backwash have the resources and the motivation to resist it.

The AI economy is testing this proposition at unprecedented speed and scale. The spread effects are more visible than in any previous technological transition — the tools are literally available to anyone with connectivity and a subscription. The backwash effects are less visible but no less powerful — operating through talent extraction, tool optimization, value capture, and the institutional dynamics that determine who benefits from the productivity gains that the tools produce.

The analyst who sees only the spread tells a story of liberation. The analyst who sees only the backwash tells a story of extraction. The analyst who sees both — who holds the spread and the backwash in the same frame and refuses to resolve the tension by choosing a side — tells the story that cumulative causation actually describes: a system in which both forces operate simultaneously, in which their relative strength is determined not by the technology but by the institutional environment, and in which the institutional environment is itself a product of the distributional patterns it would need to overcome.

This circularity is the core of Myrdal's theoretical contribution, and it is the reality that the AI discourse must confront if it wishes to move beyond celebration and critique toward the institutional construction that the moment demands.

---

Chapter 4: The Infrastructure of Advantage

The imagination-to-artifact ratio is one of the most useful concepts in *The Orange Pill*. Segal defines it as the distance between a human idea and its realization — the accumulated friction of translation, implementation, and execution that separates what a person can imagine from what that person can build. The history of technology, in this framing, is the history of that ratio's compression: from the medieval cathedral that required decades of collective labor to the software product that can be prototyped in an afternoon conversation with Claude. Each technological transition — the compiler, the graphical interface, the cloud, the large language model — removes a layer of friction, bringing the ratio closer to zero, where the idea and the artifact are separated by nothing but the act of description.

The concept captures something real about what has changed. But it captures it from the perspective of the person who already stands at the narrowest point of the ratio — the person who possesses the education, the connectivity, the hardware, the linguistic fluency, and the institutional support that allow them to experience the ratio's compression as personal liberation. For that person, the ratio has indeed approached zero. The experience is genuine, the exhilaration is warranted, and the productivity gains are measurable.

Myrdal's framework asks a different question: What is the imagination-to-artifact ratio for the person who does not possess these prerequisites? For that person, the ratio has not approached zero. It has changed shape — the specific friction of coding has been replaced by other frictions, no less real, no less constraining — but it has not been compressed to the degree that the democratization narrative implies.

The prerequisites for using AI tools effectively constitute what might be called the infrastructure of advantage: the constellation of conditions that must be in place before tool-access translates into capability. Each prerequisite is individually necessary and collectively demanding, and their distribution maps onto existing patterns of global inequality with a precision that cumulative causation theory would predict.

The first prerequisite is connectivity. Not connectivity in the abstract — the theoretical availability of internet access — but connectivity of sufficient speed, reliability, and affordability to support real-time interaction with computationally intensive AI models. The distinction matters because the discourse about AI access often conflates availability with adequacy. A mobile phone with intermittent 3G access in rural Bihar is, technically, a connected device. It is not a device on which a developer can hold a sustained, iterative conversation with Claude Code about a complex system architecture. The gap between connectivity that exists and connectivity that enables is vast, and it is distributed along familiar lines: urban over rural, wealthy over poor, Global North over Global South. Data from the International Telecommunication Union reveals persistent disparities in broadband access that track income levels with mathematical regularity. The tool is available to anyone with internet access. The internet access sufficient to use the tool effectively is available to a substantially smaller population, concentrated in already-advantaged regions.

The second prerequisite is hardware. AI-assisted development requires devices with processing power, memory, and screen real estate sufficient to support development environments, model interactions, and the iterative workflows that produce real software. A smartphone is not sufficient. A basic laptop may be sufficient for simple interactions but inadequate for the sustained, multi-file, context-rich development sessions that produce the twenty-fold productivity gains Segal describes. The hardware required is not exotic — it does not require a gaming rig or a dedicated GPU — but it is also not trivial, and its cost must be understood in relation to local purchasing power. A laptop that costs eight hundred dollars represents a minor expense for a knowledge worker in San Francisco, who might earn that amount in a day. For a developer in Dhaka, where the average software engineer's monthly salary is a fraction of that figure, the same laptop represents a significant capital investment — one that must be justified against competing demands for food, housing, education, and the other necessities that the San Francisco developer can take for granted.

Myrdal was meticulous about exactly this kind of analysis: examining costs not in absolute terms but in relation to the economic circumstances of the people who bear them. One hundred dollars per month for Claude Code Max — the figure Segal presents as an unprecedentedly low barrier to entry — must be understood in the same relational terms. In San Francisco, it is the cost of a modest dinner. In much of the developing world, it is a significant fraction of monthly income. The barrier is lower than ever before. It is not zero. And the gap between "lower than ever" and "zero" is precisely the space in which cumulative causation operates — where costs that are trivial for the advantaged become consequential for the disadvantaged, and where the distribution of consequences tracks the distribution of advantage with the regularity of a natural law.

The third prerequisite is education — not merely literacy, but the specific kind of education that enables a person to formulate problems in natural language with sufficient clarity and specificity to guide an AI system toward useful output. Segal's celebration of the natural language interface — the revolution in which the machine learned to speak the human's language rather than requiring the human to learn the machine's — is warranted. But the celebration obscures a subtlety that Myrdal's framework illuminates: natural language is not a single thing. It is a family of capabilities that vary enormously across individuals and populations. The ability to describe a complex system architecture in English, with the precision and specificity that Claude Code requires to produce useful output, presupposes a level of technical vocabulary, conceptual clarity, and communicative competence that is itself the product of education, experience, and immersion in the knowledge environments where these skills are cultivated.

The shift from programming languages to natural language as the interface does not eliminate the skill requirement. It changes its character, from syntactic skill — the ability to write correct code — to descriptive skill, the ability to articulate what you want with enough precision that a probabilistic model can infer the correct implementation. Descriptive skill is not easier than syntactic skill. It is different. And its distribution, while broader than the distribution of programming expertise, is still shaped by the educational and cultural conditions that Myrdal spent his career analyzing. The person who can describe a complex system in clear, precise English has typically been educated in institutions that teach clarity of expression, exposed to technical concepts that provide the vocabulary for precise description, and immersed in professional environments where the iterative refinement of ideas through language is a daily practice. These conditions are not universally available. They are concentrated, as the infrastructure of advantage is always concentrated, in the populations and regions that already possess the educational resources to produce them.

The fourth prerequisite is what might be called institutional stamina — the constellation of supports that enables sustained creative effort over time. Building a real product, as opposed to generating a one-off prototype, requires not just a moment of inspiration and a subscription to an AI tool. It requires weeks or months of iteration, testing, refinement, and adaptation. It requires the economic stability to invest time without immediate return. It requires legal frameworks that protect the builder's intellectual property — frameworks that vary enormously in their existence, enforcement, and accessibility across jurisdictions. It requires financial systems that enable the builder to receive payment for their work — systems that, in much of the developing world, are inefficient, exclusionary, or nonexistent for small-scale digital entrepreneurs. It requires access to markets where the product can find users — markets that are, for digital products, overwhelmingly concentrated in the developed world and shaped by the preferences, languages, and institutional assumptions of developed-world consumers.

Each of these prerequisites — connectivity, hardware, education, institutional stamina — is a link in a chain, and the weakest link determines the strength of the chain. The AI tool itself is one link, and its quality has improved dramatically. But a chain with one strong link and four weak ones does not bear weight. The developer in Lagos who possesses the education, the hardware, the connectivity, and the institutional support to use Claude Code effectively is real, and her story is worth celebrating. The developer in Lagos who lacks one or more of these prerequisites — unreliable electricity, inadequate bandwidth, education that did not emphasize the descriptive precision that AI tools reward, absence of legal protections for digital intellectual property — is also real, and more numerous, and her story is the one that the democratization narrative systematically understates.
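The chain metaphor can be stated as a one-line model: effective capability is bounded by the weakest prerequisite, so improving the tool alone barely moves the outcome. The scores below are hypothetical 0–1 ratings invented for illustration, not measurements of any real developer's situation:

```python
def effective_capability(connectivity, hardware, education, stamina, tool):
    """Weakest-link model: capability is capped by the scarcest prerequisite.

    All inputs are hypothetical scores in [0, 1], not measured quantities.
    """
    return min(connectivity, hardware, education, stamina, tool)

# A frontier tool with weak surrounding infrastructure still yields
# weak capability: here only the tool score improves, from 0.6 to 0.95.
before = effective_capability(0.3, 0.4, 0.5, 0.3, 0.60)
after  = effective_capability(0.3, 0.4, 0.5, 0.3, 0.95)
assert before == after == 0.3  # the binding constraint never changed
```

The point of the sketch is the argument's structure: under a weakest-link model, investment in the strong link (the tool) has zero marginal effect until the weak links (connectivity, hardware, education, institutional stamina) are raised.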

Myrdal's analysis of the Green Revolution provides the closest historical parallel. In the 1960s and 1970s, high-yield crop varieties were developed that promised to transform agricultural productivity in the developing world. The technology was transferable. The seeds were available. The potential was enormous. But the benefits accrued overwhelmingly to the farmers who possessed the institutional infrastructure to adopt the new technology: access to credit for purchasing seeds and fertilizer, secure land tenure that justified long-term investment, irrigation systems that provided the water the new varieties required, and access to markets where the increased production could be sold at a sustainable price. Farmers who lacked this infrastructure — the poorest farmers, the landless laborers, the smallholders in rain-fed regions — were left not merely unchanged but worse off, as the increased production by their better-resourced neighbors drove down crop prices while the costs of inputs rose. The technology was democratic in availability and anti-democratic in impact, because the impact was mediated by institutional conditions that the technology itself could not create.

The parallel to AI is precise enough to be uncomfortable. The AI tool is available. The institutional infrastructure required to translate tool-access into sustained capability — education, connectivity, hardware, legal protections, financial systems, market access — is concentrated among the already-advantaged. The technology is transferable. The institutional prerequisites are not. And the gap between transferable technology and non-transferable institutions is where cumulative causation does its work, quietly, compoundingly, and in the direction that Myrdal's framework predicts: toward the deepening of the very inequalities that the technology was supposed to address.

This is not an argument against the technology. The Green Revolution, despite its distributional failures, increased global food production and averted famines that would have killed millions. Its benefits were real, even though they were unequally distributed. The analytical point is not that the technology was bad but that the distribution of its benefits was determined by institutional conditions that the technology could not alter. The policy implication, which Myrdal drew explicitly in *Asian Drama* and which applies with undiminished force to the AI transition, is that technology transfer without institutional development produces dependency, not empowerment — a superficial modernization that leaves the underlying structures of advantage and disadvantage not merely intact but reinforced, because the technology has been absorbed into the existing institutional environment rather than transforming it.

The imagination-to-artifact ratio has compressed. For some. In some places. Under some conditions. The conditions are not random. They are the infrastructure of advantage, distributed according to patterns of cumulative causation that the ratio's compression, by itself, cannot disrupt. Segal's concept captures what has changed. Myrdal's framework reveals what has not — and what must be deliberately, institutionally, politically constructed if the change is to reach the people who need it most.

---

Chapter 5: One Hundred Dollars Is Not Zero Dollars

The most revealing number in *The Orange Pill* is not the trillion dollars that evaporated from software company valuations, nor the twenty-fold productivity multiplier that Segal documented in Trivandrum, nor the two months it took ChatGPT to reach one hundred million users. The most revealing number is one hundred dollars. That is the monthly cost of Claude Code Max — the subscription tier that Segal describes as enabling the transformation he witnessed, the tier at which individual engineers became, in his telling, equivalent to entire teams.

Segal presents this figure as evidence of unprecedented accessibility. "One hundred dollars per person, per month," he writes, with the tone of a person who finds the number almost absurdly low relative to the capability it provides. And from the vantage point he occupies — a technology executive in the American economy, where one hundred dollars represents a negligible business expense, deductible against revenue, justified by the first hour of productivity gain it enables — the tone is warranted. The ratio of cost to capability is, by any historical standard, extraordinary. No previous tool in the history of computing has offered so much leverage for so little expenditure, measured in the currency of the person describing it.

Myrdal spent his career insisting on a methodological principle that this sentence illustrates: costs must be measured not in the currency of the analyst but in the currency of the person who bears them. One hundred dollars is not a fact. It is a relationship between a price and a context, and the relationship changes — changes fundamentally, changes in ways that determine whether the tool is accessible or exclusionary — depending on which context you measure it against.

In the United States, where the median household income exceeds seventy thousand dollars per year, one hundred dollars per month represents approximately 1.7 percent of monthly pre-tax income. For a professional software developer, whose median salary substantially exceeds the household median, the percentage is smaller still. The cost is, as Segal implies, trivial — absorbed into the operating expenses of a working life without requiring trade-offs against other necessities.

In Nigeria, where the average monthly income for a software developer is approximately four hundred to six hundred dollars in a major city like Lagos, one hundred dollars represents seventeen to twenty-five percent of gross monthly earnings. Not a rounding error. Not a minor expense. A quarter of income, diverted from food, housing, transportation, and the other demands that compete for every naira in a household operating without the surplus that makes optional expenditures feel optional. The same subscription that a San Francisco developer adds to the company credit card without a second thought requires, for the Lagos developer, a calculation: What am I giving up in order to gain this capability? Is the return sufficient to justify the sacrifice? Can I sustain the expense for the months or years required to build something that generates its own revenue?
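The relational arithmetic behind these percentages can be made explicit in a few lines. The income figures are the chapter's own illustrative numbers, repeated here as assumptions rather than independent data:

```python
def cost_share(monthly_price_usd, monthly_income_usd):
    """A price expressed as a fraction of the payer's monthly income."""
    return monthly_price_usd / monthly_income_usd

PRICE = 100  # Claude Code Max, USD per month

# Illustrative monthly incomes taken from the chapter's own examples:
contexts = {
    "US median household (~$70k/yr)": 70_000 / 12,
    "Lagos developer, high end":      600,
    "Lagos developer, low end":       400,
    "Rural South Asian household":    100,
}

for label, income in contexts.items():
    print(f"{label}: {cost_share(PRICE, income):.1%} of monthly income")
```

The same hundred-dollar price resolves to roughly 1.7 percent of income in the first context and 100 percent in the last, which is the chapter's entire argument compressed into one division.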

These calculations are not abstract. They are the daily arithmetic of life in economies where the margin between sufficiency and insufficiency is narrow, where a single unexpected expense — a medical bill, a family obligation, an infrastructure failure — can interrupt the fragile equilibrium that makes sustained creative work possible. The San Francisco developer does not make this calculation, not because the calculation is unnecessary but because the margin is wide enough to absorb the cost without friction. The absence of the calculation is itself a form of advantage — the cognitive space that is freed when a person does not need to weigh every expenditure against survival.

In rural South Asia — in the villages of Bihar, Uttar Pradesh, or Bangladesh, where the majority of the population lives and where monthly household incomes may fall below one hundred dollars in total — the subscription is not merely expensive. It is, for practical purposes, prohibitive. The tool might as well not exist, because the price of entry exceeds the total economic capacity of the household. This is not a temporary condition that market forces will resolve through the natural compression of technology costs. It is a structural condition, rooted in the same patterns of cumulative disadvantage that Myrdal documented in *Asian Drama*: low incomes limit investment in education and infrastructure, which limits productivity, which limits incomes, in a circle that does not self-correct.

The analytical error that Myrdal's framework exposes is not in Segal's presentation of the cost — the hundred-dollar figure is accurate — but in the implicit framing of that cost as a universal measure of accessibility. The framing assumes that a low price, measured against the capability it provides, constitutes low barrier to entry. But a barrier's height is not determined by the price alone. It is determined by the relationship between the price and the resources of the person who must pay it. A one-foot wall is no barrier at all to a person who is six feet tall. The same wall is impassable to a person who is six inches tall. The wall has not changed. The person's capacity to surmount it has, and that capacity is distributed according to the same patterns of cumulative advantage and disadvantage that determine everything else.

This observation extends beyond the subscription cost to the total cost of participation in the AI economy. The subscription buys access to the model. Using the model requires a device — not a smartphone, which is inadequate for sustained development work, but a laptop or desktop computer with sufficient processing power, memory, and screen real estate to support the workflows that produce real software. It requires internet connectivity — not the intermittent, bandwidth-constrained connectivity that characterizes much of the developing world, but the reliable, high-speed access that sustained interaction with a computationally intensive AI model demands. It requires electricity — not the electricity that arrives unpredictably and disappears without warning, as it does in regions where grid infrastructure has not kept pace with demand, but the steady, reliable power that allows a developer to work through a sustained session without losing progress to an outage.

Each of these costs is individually modest by the standards of the developed world and collectively significant by the standards of the populations that the democratization narrative claims to include. The laptop. The internet plan. The electricity to power both. The subscription. The time — perhaps the most expensive resource of all for a person who cannot afford to invest months of unpaid effort in a speculative project. Aggregated, these costs constitute a barrier to entry that is lower than any previous barrier in the history of computing but that remains firmly, consequentially, exclusionarily nonzero.

Myrdal would recognize the pattern immediately, because it replicates a dynamic he documented in every technology transfer he studied. The Green Revolution's high-yield seeds were, in absolute terms, inexpensive. But the seeds required fertilizer, irrigation, credit, and market access, each of which carried costs that were modest individually and prohibitive collectively for the poorest farmers. The internet's core protocols are free and open. But using the internet effectively requires hardware, connectivity, education, and the institutional supports that translate access into capability — supports that are distributed along the familiar gradient of advantage. In each case, the core technology was accessible in principle and exclusionary in practice, because the total cost of effective participation exceeded the resources of the populations that would have benefited most from inclusion.

The cost dynamics of AI tools also exhibit a property that Myrdal's framework identifies as particularly insidious: they are regressive in their incidence. A regressive cost is one that falls proportionally harder on those with fewer resources. A flat subscription fee is the clearest example: one hundred dollars is the same dollar amount for the San Francisco developer and the Lagos developer, but it represents a radically different share of their economic capacity. The San Francisco developer pays with surplus. The Lagos developer pays with sacrifice. The flat fee structure, which appears egalitarian on its surface — everyone pays the same price — is in practice a mechanism that concentrates the benefits of the tool among those who can most easily afford it and excludes those who cannot.

This is not an argument that AI companies should offer their tools for free, or that pricing structures are the primary mechanism of exclusion. Pricing is one link in the chain of infrastructure requirements, and addressing pricing alone would not resolve the broader institutional deficits that constrain the periphery's capacity to participate. But it is an argument that the framing of one hundred dollars as "low cost" reveals the perspective from which the analysis is conducted — a perspective of relative affluence in which the cost is indeed low, measured against the analyst's resources. Measured against the resources of the populations that the democratization argument invokes, the cost is not low. It is a barrier — one of many, each reinforcing the others, each contributing to the cumulative dynamic in which tool-access concentrates among the already-advantaged while the structurally disadvantaged find that each prerequisite they lack makes the next one harder to obtain.

Segal acknowledges, in a passage of genuine intellectual honesty, that "these barriers will fall fast as models reach a certain rubicon... and then get optimized to reduce them." The prediction is plausible. The cost of inference is declining. Open-source models are proliferating. The trajectory of technology costs, across every previous computing paradigm, has been toward zero marginal cost. But Myrdal's framework adds a temporal dimension that the trajectory-based argument does not account for: the period during which the barriers are falling is also the period during which cumulative causation is operating. The early adopters — those who can afford the tools today, at today's prices, with today's infrastructure — are accumulating advantages during the period of high cost that will persist after costs decline. The productivity gains, the market positions, the experiential knowledge, the institutional adaptations — all of these compound during the window when the tools are expensive, and the compound advantages do not evaporate when the tools become cheap.

The analogy is to early-stage internet adoption. The internet eventually became cheap and ubiquitous. But the companies, industries, and economic structures that formed during the period of expensive, limited access — the Googles, the Amazons, the institutional frameworks of the digital economy — did not dissolve when access broadened. They had already captured the strategic positions, the market share, the institutional knowledge, and the network effects that later entrants could not replicate simply by acquiring the same tools at lower cost. The tools were necessary but insufficient; the advantage lay in the cumulative effects of early access, compounded over the years when access was expensive and limited.

The AI transition is following the same trajectory. The tools will become cheaper. Access will broaden. But the advantages accumulated during the period of expensive access — the period we are in now — will persist, because cumulative causation does not reverse when the initial conditions change. It continues in the direction established by the initial distribution of advantage, modified only by the institutional interventions that deliberately redirect it.

The cost of one hundred dollars per month will, in all probability, decline. The question Myrdal's framework poses is not whether it will decline but what happens during the period before it does — whether the cumulative advantages accumulated by early adopters will harden into structures that later adopters cannot overcome, or whether institutional interventions will ensure that the benefits of declining costs are captured by the populations that were excluded during the period of high cost.

History's answer to this question, across every technological transition Myrdal studied, is that the benefits of declining costs accrue to those who already possess the institutional infrastructure to capture them — unless deliberate, sustained, politically contested intervention redirects the distribution. The cost will fall. The question is whether the institutional environment will be ready to translate lower costs into broader capability, or whether the lower costs will arrive in an institutional landscape that has already been shaped, through cumulative causation, to concentrate the benefits among the already-advantaged.

One hundred dollars is not zero dollars. The distance between the two figures is the space in which the distribution of AI's benefits will be determined — not by the technology, which is agnostic to distribution, but by the institutional choices that the technology's advocates and architects and regulators make during the window when the choices still matter.

---

Chapter 6: The Political Economy of Dam-Building

Segal's metaphor of the beaver building dams in the river of intelligence is, in its essential structure, an argument for institutional intervention. The beaver does not refuse the river. It does not worship the river. It studies the current, identifies points of leverage, and builds structures that redirect the flow toward the conditions that sustain life. The metaphor carries genuine wisdom: the recognition that powerful forces require not resistance or acceleration but stewardship — the deliberate construction and continuous maintenance of structures that channel power toward human flourishing.

Myrdal would have appreciated the metaphor. He might also have observed that it omits the single most important question about any dam: Who decides where it is built, whose land it floods, and whose fields it irrigates?

Every institutional intervention — every regulation, every educational reform, every labor protection, every redistribution of resources — is a political act. It is an exercise of power that benefits some interests and constrains others. The eight-hour workday, which Segal invokes in Chapter 17 as a precedent for the dams that must be built around AI, was not a gift from factory owners who recognized, through moral reflection, that their workers needed rest. It was extracted through decades of labor organizing, strikes, political mobilization, legislative battles, and the occasional deployment of state violence against the workers who demanded it. The factory owners resisted. They resisted because the eight-hour day constrained their capacity to extract labor, reduced their profits, and shifted bargaining power toward workers — all of which were, from the owners' perspective, costs imposed by external authority on a system that was functioning, by their own measure, quite well.

The history of every dam that Segal's framework invokes — labor protections, public education, child labor laws, the weekend — is a history of political contest between those who benefited from the unregulated flow and those who were damaged by it. The dams were built not because the arguments for building them were intellectually compelling, though they were, but because the political coalitions that demanded them were powerful enough to overcome the resistance of the interests they constrained.

Myrdal understood this dynamic with the clarity of a person who had spent decades studying it across multiple national contexts. In An American Dilemma, he documented the gap between America's professed ideals of equality and its practiced reality of racial oppression, and he traced that gap not to ignorance or moral failure but to the political economy of the Jim Crow South — a system in which the beneficiaries of racial hierarchy had the political power to sustain it, and in which the institutions that might have challenged it (courts, legislatures, federal agencies) were themselves shaped by the interests they would have needed to override. The American Creed was real. The political power to realize it was insufficient. The dilemma was not moral; it was structural.

The AI transition presents an analogous structure. The values that Segal articulates — stewardship, democratization, the conviction that the gains of AI should flow broadly — are genuine and widely shared. But the institutional will to realize them is constrained by the political economy of AI governance, which is dominated by the companies that build the tools and the interests they serve.

The concentration of economic power in the AI industry is historically unusual in its speed and scale. By early 2026, a handful of companies — Anthropic, OpenAI, Google, Meta, Microsoft — controlled the frontier of AI development, and their combined market influence exceeded that of most national governments. These companies employ the researchers, own the models, control the computational infrastructure, and set the terms on which the rest of the world accesses AI capability. Their lobbying expenditures, their cultural influence, and their capacity to shape public discourse are commensurate with their economic power. They are not monolithic — they disagree with each other on significant questions of policy and strategy — but they share a structural interest in regulatory frameworks that preserve their competitive positions and resist redistributive interventions that would constrain their returns.

This is not a conspiracy theory. It is a description of how political economy works in every industry, documented by Myrdal and by every institutional economist who has studied the relationship between concentrated economic power and regulatory outcomes. Companies with enormous economic stakes in regulatory decisions invest enormous resources in shaping those decisions. The investment is rational, the mechanisms are legal, and the outcomes are predictable: regulatory frameworks tend to reflect the interests of the regulated, particularly when the regulated entities possess information, expertise, and resources that regulators lack.

The EU AI Act, which Segal mentions as an example of institutional response, illustrates the dynamic. The Act is a genuine attempt to regulate AI in the public interest, and it represents the most comprehensive regulatory framework any jurisdiction has produced. But its development was shaped by intensive industry lobbying — hundreds of millions of euros spent by technology companies seeking to influence the Act's provisions, its definitions, its exemptions, and its enforcement mechanisms. The resulting framework addresses supply-side questions — what companies may build, what disclosures they must make, what risk categories their products fall into — with considerable specificity. The demand-side questions — what citizens need to thrive in an AI-saturated environment, what educational adaptations are required, what redistributive mechanisms would ensure broad-based benefit — receive comparatively less attention. The asymmetry is not accidental. It reflects the relative political power of the interests involved: the companies that build AI tools have concentrated, well-organized, well-funded lobbying operations; the citizens whose lives AI tools reshape have diffuse, unorganized, unfunded interests that the political process is structurally ill-equipped to represent.

Myrdal would recognize this pattern from his analysis of development policy in South and Southeast Asia, where he found that the institutions responsible for economic development were systematically captured by the elites whose interests they were supposed to regulate. Land reform policies were designed by landowners. Tax policies were designed by the wealthy. Educational policies were designed by the educated. In each case, the institutions that should have redistributed advantage were instead reproducing it, because the people who controlled the institutions were the people who benefited from the existing distribution. The institutions were not corrupt in the narrow sense — they operated within legal frameworks and followed established procedures — but they were captured in the structural sense: their outcomes reflected the interests of the powerful rather than the needs of the vulnerable, because the powerful controlled the mechanisms of institutional design.

The AI governance landscape exhibits the same capture dynamics. The companies that build AI tools participate actively in the regulatory discussions that shape how those tools are governed. They provide technical expertise that regulators need and cannot obtain elsewhere. They fund research institutes and think tanks that produce the analysis on which regulatory decisions are based. They employ former regulators and hire lobbyists with government connections. None of this is illegal. All of it is predictable. And the result is a regulatory environment that is responsive to the concerns of the companies — safety, liability, competitive dynamics — and substantially less responsive to the concerns of the populations whose lives the technology reshapes.

Segal identifies the policy vacuum — the gap between the speed of AI capability and the speed of institutional response — and calls for it to be filled. Myrdal's framework adds the observation that the vacuum is not merely a product of institutional inertia. It is a product of deliberate political economy. The vacuum serves certain interests. The companies that benefit from the absence of redistributive policy have incentives to maintain that absence, and they have the political resources to do so. Filling the vacuum requires not merely good ideas — Segal has those — but political power sufficient to overcome the resistance of the interests that benefit from the vacuum's persistence.

This is the lesson that every successful dam-building effort in history teaches, and it is the lesson that the AI discourse has largely failed to absorb. The dams of the industrial revolution — labor protections, public education, the eight-hour day — were built through political mobilization. Workers organized. They formed unions. They struck. They voted. They built political parties that represented their interests in legislatures. The process was lengthy, contested, and often violent. The factory owners did not welcome it. The dams were not built by persuading the factory owners that dam-building was in everyone's interest. The dams were built by accumulating sufficient political power to override the factory owners' objections.

The AI transition requires equivalent political mobilization, and the conditions for that mobilization are substantially less favorable than they were during the industrial revolution. Workers in the AI economy are geographically dispersed, often working remotely, frequently classified as independent contractors rather than employees, and connected to their employers through digital platforms that can be reconfigured overnight. The traditional mechanisms of labor organizing — the shop floor, the picket line, the collective bargaining agreement — are poorly adapted to a workforce that is distributed across continents and connected by subscription rather than employment. The political parties that historically represented labor interests have, in most developed democracies, shifted their attention to cultural issues, leaving economic questions — particularly the distributional questions that the AI transition raises — without effective political representation.

Myrdal would add a further complication: the populations most affected by the distributional dynamics of AI — the workers in the Global South who face displacement, the communities in developing countries that lack the institutional infrastructure to participate, the structurally disadvantaged within wealthy nations — are the populations with the least political power to demand redistributive intervention. This is cumulative causation applied to the political process itself: the people who most need institutional protection are the people least able to demand it, because the same patterns of disadvantage that make them vulnerable also deprive them of the political resources — organization, funding, access, voice — that effective demand requires.

The beaver builds dams. The question Myrdal poses is whether the beaver will be permitted to build them where the ecosystem needs them, or whether the dams will be built where the most powerful interests want them — protecting the properties of the advantaged while leaving the vulnerable to the unmediated current. The answer will be determined not by the quality of the analysis or the sincerity of the values but by the distribution of political power at the moment when the institutional choices are made.

That moment is now.

---

Chapter 7: The Industrial Precedent, Honestly Told

The industrial revolution is the precedent that everyone in the AI discourse invokes and almost no one examines with the specificity it deserves. Segal uses it in Chapter 17 of The Orange Pill as evidence that technological transitions, while painful in the short term, produce broad prosperity in the long term — that the five-stage pattern of threshold, exhilaration, resistance, adaptation, and expansion has repeated across centuries and will repeat again. The argument is not wrong. But it is radically incomplete, and the incompleteness is consequential, because the precedent supports two opposite conclusions depending on which details are included and which are omitted.

The version of the industrial revolution that the technology industry typically invokes goes roughly as follows: Machines replaced manual labor. Workers were displaced. There was suffering. But eventually, the economy adapted, new jobs emerged, productivity increased, living standards rose, and everyone was better off. The moral of the story is patience — the recognition that technological transitions produce short-term costs and long-term gains, and that the appropriate response is to endure the costs while building toward the gains.

Myrdal's version of the same history is substantially different, not because it disputes the facts but because it insists on including the facts that the simplified version omits.

The industrial revolution did produce broad prosperity. Eventually. In some countries. Under specific institutional conditions. The qualifications are not minor. They are the entire story.

The first qualification is temporal. The industrial revolution's productivity gains took generations — not years, not decades, but multiple generations — to translate into broadly distributed improvements in living standards. The period between the introduction of mechanized production in the late eighteenth century and the emergence of a genuine middle class with rising real wages, improved working conditions, and access to education and healthcare stretched across most of the nineteenth century and into the twentieth. During that interval, the gains were captured overwhelmingly by capital — by the factory owners, the mine operators, the industrialists who controlled the means of production. Workers' real wages in Britain stagnated or declined for the first several decades of industrialization, a period economic historians call the "Engels' pause," during which productivity rose dramatically while the workers who produced the gains saw none of the benefit. The Luddites, whom Segal discusses with genuine sympathy in Chapter 8, were not wrong about the distribution. The gains were real, and the gains went to someone else.

The second qualification is geographical. The industrial revolution produced broad prosperity in Britain, the United States, Western Europe, and eventually Japan and a handful of other nations. It did not produce broad prosperity in the colonized world — in India, Africa, Latin America, or Southeast Asia — where the industrial revolution's effects were experienced not as domestic transformation but as extraction. The colonies provided raw materials for the industrial economies at the center and consumed manufactured goods produced by the center's factories, in a trade structure that enriched the center while impoverishing the periphery. Myrdal documented this dynamic extensively in Economic Theory and Under-Developed Regions and Asian Drama: the industrial revolution was not a tide that lifted all boats. It was a current that lifted boats at the center and swamped boats at the periphery, producing a divergence between rich and poor nations that persisted for centuries and, in many cases, persists still.

The third qualification — and the one most directly relevant to the AI transition — is institutional. The countries where the industrial revolution eventually produced broad prosperity were the countries that built deliberate institutional structures to redirect the gains from capital to labor, from the center to the periphery, from the already-advantaged to the structurally disadvantaged. These structures included tariffs that protected developing industries from competition with already-industrialized nations — a policy that Britain itself used during its industrialization and then attempted to deny to later industrializers, in a dynamic that the economist Ha-Joon Chang memorably called "kicking away the ladder." They included public education systems that built the human capital required for an industrial workforce. They included labor regulations — minimum wages, maximum hours, safety standards, the right to organize — that shifted bargaining power from employers to workers. They included infrastructure investments — roads, railways, ports, electrical grids — that connected peripheral regions to centers of economic activity.

None of these structures emerged spontaneously. None were products of market forces. All were products of political mobilization — the deliberate, contested, often violent exercise of political power by coalitions that demanded institutional change against the resistance of the interests that benefited from the existing arrangement. The broad prosperity that the industrial revolution eventually produced was not a natural outcome of the technology. It was a political achievement, won through decades of struggle by the people who bore the costs of the transition and demanded institutions that would redirect the benefits toward them.

The countries that did not build these institutions — that relied on market forces to distribute the gains of industrialization — experienced a different trajectory. In much of Latin America, Africa, and South Asia, the industrial revolution's gains were captured by narrow elites, often in collaboration with colonial or neo-colonial powers, while the majority of the population remained in poverty. The technology was the same. The institutional environment was different. And the institutional environment determined the outcome.

Myrdal was among the first economists to draw this conclusion systematically, and the conclusion carries a direct implication for the AI transition: the technology does not determine the distribution of its benefits. The institutions determine the distribution. And the institutions are shaped by the same political dynamics — the same concentrations of power, the same resistance to redistribution, the same gap between stated ideals and practiced reality — that determined the distribution of every previous technology's benefits.

Segal's five-stage model — threshold, exhilaration, resistance, adaptation, expansion — is accurate as a description of the sequence. But Myrdal's framework insists that the model is incomplete without specifying the conditions under which Stage Four, adaptation, actually produces Stage Five, expansion. The transition from adaptation to expansion is not automatic. It is contingent on the construction of institutions that redirect the technology's gains toward broad-based benefit. Where those institutions are built, expansion follows. Where they are not built, what follows is not expansion but concentration — the accumulation of gains among the already-advantaged, the compounding of disadvantage among the structurally excluded, and the hardening of inequality into institutional structures that resist subsequent reform.

The AI discourse invokes the industrial precedent as evidence of inevitable expansion. Myrdal's framework reveals the precedent as evidence of contingent expansion — expansion that occurred under specific institutional conditions that were politically constructed, not naturally emergent. The precedent does not say, "Relax; it will work out." The precedent says, "It worked out in the places that built the institutions, and it did not work out in the places that did not, and the places that did not build the institutions are still living with the consequences."

The relevance to the present moment is immediate. The AI transition is unfolding faster than the industrial revolution, which means the window for institutional construction is narrower. The industrial revolution's distributional consequences emerged over decades, giving political movements time — not enough time, the Luddites would attest, but some time — to organize, mobilize, and demand institutional change. The AI transition's distributional consequences are emerging over months and years. The productivity gains are measured in quarters. The market value redistributions documented in the Software Death Cross occurred in weeks. The compression of the timeline compresses the window for political response, and the compression of the window for political response favors the interests that are already organized, already funded, and already positioned to shape the institutional environment — which is to say, the interests that benefit from the existing distribution.

The industrial precedent, honestly told, supports Segal's optimism only conditionally. The condition is institutional construction — the deliberate building of educational systems, regulatory frameworks, labor protections, and redistributive mechanisms that redirect the technology's gains toward broad-based benefit. Without that construction, the precedent supports the opposite conclusion: that the gains of technological transition concentrate among the already-advantaged, that the costs fall on the structurally excluded, and that the resulting inequality persists for generations.

The question for the AI transition is not whether the precedent will repeat. It is which version of the precedent will repeat — the version in which institutions redirected the gains toward expansion, or the version in which the absence of institutions allowed the gains to concentrate and the costs to compound. The answer depends entirely on whether the political will to build the institutions can be mobilized in time.

Myrdal would observe, with the dry empiricism that characterized his most uncomfortable conclusions, that the political conditions for mobilization are substantially less favorable now than they were during the industrial revolution. The labor movements that built the dams of the industrial era had the advantage of physical concentration — workers in factories, sharing conditions, building solidarity through proximity. The AI-era workforce is dispersed, digitized, and often classified in ways that preclude traditional organizing. The political parties that historically represented labor interests have, in most democracies, lost their institutional connection to the economic concerns of working people. And the speed of the transition means that the distributional consequences will harden into institutional structures before the political response has time to form.

This is not defeatism. Myrdal was never a defeatist. He was a reformist who insisted that the analytical case for intervention be accompanied by clear-eyed assessment of the political obstacles to intervention, because reforms that ignore political economy are reforms that fail. The industrial precedent teaches that institutions can redirect the gains of technological transition toward broad prosperity. It also teaches that the institutions do not build themselves, that their construction requires political power, and that the distribution of political power is itself shaped by the very patterns of advantage that the institutions are meant to address.

The precedent is not a promise. It is a possibility — contingent, contested, and available only to societies that choose to build.

---

Chapter 8: Brain Drain at Digital Speed

In 1968, Myrdal published Asian Drama: An Inquiry into the Poverty of Nations, a three-volume, two-thousand-page study that remains one of the most comprehensive analyses of development failure ever conducted. Among its many findings was a pattern that Myrdal considered both predictable and tragic: the most talented individuals in the poorest nations systematically migrated to the wealthiest nations, drawn by opportunities that their home countries could not match. The migration was individually rational — each person who left was making the best available decision for their own career and family — and collectively catastrophic for the communities they left behind. The educational systems of poor countries invested scarce resources in developing human capital. The developed world captured the return on that investment. The poor countries bore the cost of training; the rich countries reaped the benefit of the trained.

Myrdal called this dynamic a form of backwash — one of the mechanisms through which growth in the center drains resources from the periphery, widening the gap between developed and developing nations in a self-reinforcing spiral. The brain drain was not a side effect of inequality. It was a mechanism of inequality, one of the channels through which cumulative causation operated to concentrate advantage at the center and deepen disadvantage at the periphery.

The internet was supposed to reverse this dynamic. The promise of digital connectivity was that talent would no longer need to migrate to access opportunity — that a brilliant programmer in Bangalore could participate in the global economy without leaving home, could build products for a global market without relocating to San Francisco, could capture the returns of the digital economy from the location where the educational investment was made. The promise was not entirely empty. The outsourcing revolution of the 1990s and 2000s did bring employment to India, the Philippines, and other developing nations. Remote work did expand the geographic range of opportunity.

But the form of the brain drain changed without the underlying dynamic changing. The talent no longer needed to relocate physically. It relocated economically — working for companies headquartered in the developed world, generating value that accrued to shareholders and tax jurisdictions in the developed world, building products designed for developed-world markets, and earning salaries that, while generous by local standards, represented a fraction of the value the work produced. The extraction was laundered through the language of opportunity and global access, but the structural dynamic was the same: the periphery developed the talent, and the center captured the return.

AI tools accelerate this dynamic in ways that Myrdal's framework illuminates with uncomfortable clarity.

The acceleration operates through three mechanisms. The first is visibility. AI tools make talented individuals in the Global South immediately visible to recruiters, platforms, and companies in the developed world. A developer in Nairobi who builds an impressive project with Claude Code can post it publicly, and within hours, that work is visible to employers on three continents. The visibility is a spread effect — it expands the developer's opportunity set beyond what local labor markets could offer. But it is simultaneously a backwash effect, because the visibility functions as a recruitment channel that pulls talent toward the center's priorities. The developer is not recruited to solve problems that matter to her community. She is recruited to solve problems that matter to her employer's customers, who are overwhelmingly located in the developed world. Her talent is redirected — from local problems to global ones, from community priorities to corporate ones, from the periphery's needs to the center's demands.

The second mechanism is wage arbitrage. Companies in the developed world have discovered that AI tools reduce the skill differential between developers in different locations — a developer in Trivandrum with Claude Code can produce output comparable to a developer in San Francisco with Claude Code, at a fraction of the labor cost. The result is a global labor market in which the value of output is determined by the center's standards while the cost of labor is determined by the periphery's wages. The difference — the arbitrage — flows to capital, which is concentrated at the center. The Trivandrum developer is better off than she would be without the global labor market. The shareholders of the company that employs her are substantially better off. And the difference between "better off" and "substantially better off" is the arbitrage that backwash extracts.

Segal's account of the Trivandrum training illustrates this dynamic with inadvertent precision. The engineers gained twenty-fold productivity. The company gained twenty-fold output from each engineer. The distribution of the gain between the engineers and the company — between labor in the periphery and capital at the center — is determined by bargaining power, labor market conditions, and the institutional frameworks that govern employment in India and the United States. Myrdal's framework predicts that, in the absence of institutional intervention, the larger share of the gain will accrue to capital, because capital is mobile, labor is less so, and the bargaining power of dispersed workers in a competitive global labor market is structurally weaker than the bargaining power of companies that can source talent from any location on the planet. Segal chose to keep and grow the team. The systemic question is whether the institutional environment creates incentives for that choice to be replicated across the economy, or whether the structural dynamics favor the alternative — headcount reduction, wage arbitrage, and the concentration of gains at the center.

The third mechanism is what might be termed cognitive extraction — the systematic incorporation of peripheral knowledge and creativity into center-controlled systems without proportional return. When a developer in Lagos uses Claude Code to build a product, the interaction generates data about how developers in Lagos think, what problems they prioritize, what workflows they follow, and what capabilities the tool needs to serve them better. This data feeds back into the model's training, improving the tool's performance for all users — but the improvement is owned by the company that controls the model, not by the developer whose interactions contributed to it. The developer receives the benefit of a better tool. The company receives the benefit of a better product, improved by the cognitive contributions of millions of users worldwide, monetized through subscriptions and enterprise contracts that generate returns flowing to shareholders at the center.

This is a novel form of the extractive dynamic that Myrdal documented in colonial and post-colonial economies: the periphery provides raw material — in this case, cognitive raw material, the patterns of thought and problem-solving that improve the model — and the center processes it into a product that generates returns captured overwhelmingly at the center. The language of participation obscures the structure of extraction, just as the language of free trade obscured the structure of colonial exchange.

The brain drain has always operated through the same fundamental mechanism: the periphery invests in human capital, and the center captures the return. What has changed is the speed, the scale, and the invisibility of the extraction. Physical migration was visible — a person left, and the community that lost them could see the loss and name it. Digital migration is invisible — the person stays, but their cognitive output, their professional growth, their most creative hours are dedicated to the center's priorities, and the community that invested in their education captures a diminishing share of the return.

The most talented developers in the Global South — the ones who possess the education, the connectivity, the English-language fluency, and the institutional supports that enable effective AI use — are precisely the ones most susceptible to digital brain drain, because they are the ones whose talent is most visible and most valuable to center-based employers. The tools that empower them individually also connect them to a global labor market that values their output at center-determined prices while compensating them at periphery-determined wages. The individual benefit is real. The systemic effect is extractive. And the extraction operates through cumulative causation: each talented person whose cognitive output is captured by the center strengthens the center's AI ecosystem, which produces better tools, which attract more talented users from the periphery, which strengthens the center further, in a spiral that Myrdal's framework describes with depressing accuracy.

The potential countervailing force — the possibility that AI tools could enable developers in the Global South to build products for local markets, addressing local problems with local knowledge — is real but faces institutional headwinds that the tool itself cannot overcome. Building a product requires not just the capacity to write code but access to capital for scaling, legal frameworks that protect digital intellectual property, payment systems that enable revenue collection, and markets sufficiently developed to support the product's commercial viability. Each of these institutional prerequisites is weaker in the periphery than at the center, and the weakness of each reinforces the weakness of the others through the circular causation that Myrdal spent his career documenting.

The result is a digital economy that replicates, at computational speed, the extractive dynamics that Myrdal identified in the mid-twentieth-century global economy: the periphery provides talent and cognitive labor; the center provides tools, capital, and market access; the returns distribute asymmetrically toward the center; and the asymmetry reinforces itself through every mechanism — wage arbitrage, talent visibility, cognitive extraction, institutional capture — that cumulative causation can employ.

The developer in Lagos can build with Claude Code. The question that Myrdal's framework poses — the question that the democratization narrative does not answer — is whether she can capture the value of what she builds, sustain her enterprise over time, contribute to her community's institutional development, and participate in the AI economy on terms that serve her interests rather than merely the interests of the center that provides the tools. The answer to that question depends not on the tool but on the institutional environment — the same institutional environment that cumulative causation has shaped, over decades and centuries, to concentrate advantage at the center and extract from the periphery.

The brain drain has gone digital. It has also gone cognitive. And the dams that could redirect this flow — institutional structures that retain value in the periphery, that strengthen local ecosystems, that ensure that the human capital developed in developing nations generates returns for those nations rather than exclusively for the center — are not being built at the speed or scale that the extraction requires.

---

Chapter 9: Circular Causation in the Classroom

The educational system is not one institution among many that the AI transition will reshape. It is the decisive institution — the one whose response to artificial intelligence will determine, more than any regulatory framework or corporate governance structure, whether the benefits of the AI revolution concentrate among the already-advantaged or distribute broadly enough to justify the word "democratization." Myrdal understood this with a clarity that his contemporaries in development economics often lacked, because he had studied the relationship between education and inequality across enough national contexts to see the pattern that individual-country analyses obscure: educational systems do not merely reflect inequality; they reproduce it, through mechanisms of circular causation that operate with the same self-reinforcing logic in classrooms as in capital markets.

The mechanism is straightforward. Communities with higher incomes generate more tax revenue. More tax revenue funds better schools. Better schools produce more educated graduates. More educated graduates earn higher incomes. Higher incomes generate more tax revenue. The circle runs in one direction for advantaged communities and in the opposite direction for disadvantaged ones, producing divergence that compounds with each generation. Myrdal documented this dynamic in the American South in *An American Dilemma*, where he found that the systematic underfunding of Black schools was not merely a symptom of racial inequality but a mechanism for its perpetuation — a way of ensuring that the next generation would inherit the disadvantages of the previous one, laundered through the ostensibly neutral language of local funding and community control. He documented it again in *Asian Drama*, where educational disparities between urban elites and rural populations reproduced the class structures that development policy was supposed to address.

The AI transition is inserting a new variable into this circle, and the variable is accelerating the divergence rather than moderating it.

Consider the distribution of AI-readiness across educational institutions within a single country — the United States, since Segal writes from that context and since the disparities are well-documented. Schools in wealthy districts have begun integrating AI tools into their curricula with the institutional flexibility that resources provide: updated devices for students, professional development for teachers, curriculum consultants who understand both the technology and the pedagogy, and the financial margin to experiment with approaches that may not produce immediate measurable results. These schools are not merely adopting AI as a tool. They are developing the pedagogical frameworks — the "AI Practice" that the Berkeley researchers proposed — that teach students to use the tools wisely: to question outputs, to develop the judgment that distinguishes productive collaboration from cognitive outsourcing, to maintain the capacity for independent thought that Segal and Byung-Chul Han both identify as endangered.

Schools in poor districts have not begun this process, and the reasons are structural rather than attitudinal. The teachers in under-resourced schools are not less intelligent or less committed than their counterparts in wealthy districts. They are constrained by conditions that the technology cannot address: overcrowded classrooms, outdated devices, insufficient bandwidth, the absence of professional development budgets, and the daily urgency of meeting basic needs — feeding students, maintaining safety, addressing the psychological effects of poverty — that consume the institutional energy that curriculum innovation requires. The AI tools are theoretically available to these schools. The institutional capacity to integrate them thoughtfully is not.

The result is a new dimension of the educational divide. Students in well-resourced schools are learning to use AI as a thinking partner — developing the questioning, evaluative, and directorial capacities that the AI economy will reward. Students in under-resourced schools are either not using AI at all, because their schools lack the infrastructure, or using it without pedagogical guidance, in ways that may substitute for the cognitive development that education is supposed to provide rather than augment it. The well-resourced student learns to direct the machine. The under-resourced student, if she accesses the tool at all, learns to depend on it — not because she is less capable, but because the institutional environment that could have taught her to use it critically does not exist in her school.

This is circular causation operating in real time, in the institution that Myrdal identified as the most consequential for breaking or perpetuating cycles of inequality. The students who receive thoughtful AI education in well-resourced schools will enter the AI economy with the judgment, the questioning capacity, and the directorial instinct that the economy rewards. The students who do not will enter the economy with tool-access but without the human capabilities that determine whether tool-access translates into genuine advantage. The educational divide will reproduce itself as an economic divide, which will reproduce itself as a funding divide, which will reproduce itself as an educational divide — the circle running in one direction for the advantaged and in the opposite direction for the disadvantaged, accelerated by the speed of the technology that was supposed to disrupt it.

Segal calls the educational system "one of the most urgent institutions requiring reform" and warns that "institutions that were already challenged by their ivory tower status" face a crisis of relevance. The diagnosis is correct. Myrdal's framework adds the specificity that the diagnosis requires: the crisis is not uniform. It is distributed along the same lines of advantage and disadvantage that shape every other aspect of the AI transition. The schools that are best positioned to reform are the ones that least need to — the already-resourced institutions that can absorb the disruption, adapt their curricula, invest in their teachers, and maintain their relevance. The schools that most need to reform are the ones least equipped for it — the under-resourced institutions that cannot invest in AI integration because they are struggling to maintain basic educational functions, and whose failure to adapt will compound the disadvantages their students already carry.

The international dimension amplifies the pattern. Educational systems in the developed world, despite their own internal inequalities, possess institutional capacities that educational systems in much of the developing world do not: trained teacher corps, curriculum development infrastructure, assessment frameworks, technological literacy among educators, and the financial resources to invest in adaptation. Educational systems in the developing world face the AI transition from positions of institutional weakness — large class sizes, undertrained teachers, limited technological infrastructure, curricula designed for a previous economic era — that make thoughtful AI integration not merely difficult but, for the foreseeable future, practically impossible at the scale the problem requires.

The consequences of this educational divide will not be temporary. They will be generational. The students who pass through well-resourced, AI-adapted educational systems in the next decade will enter the economy with capabilities that compound over their careers — the questioning instinct, the evaluative judgment, the capacity to direct AI tools toward meaningful ends. The students who pass through under-resourced, un-adapted educational systems will enter the economy without these capabilities, and the gap between the two groups will widen with each year of professional development, as the capabilities that the well-educated acquired early provide the foundation for further learning, while the capabilities the under-educated lack become increasingly difficult to acquire outside the formal educational environment that should have provided them.

Myrdal would observe that this trajectory is not speculative. It is the same trajectory he documented in every educational system he studied — in the American South, in South Asia, in the welfare states of Scandinavia — where initial educational disparities, left unaddressed, reproduced themselves with compounding force across generations. The AI transition adds computational acceleration to a dynamic that was already, in Myrdal's analysis, the most consequential mechanism of inequality's perpetuation.

The question that Segal raises in Chapter 18 — what should education do? — receives, in Myrdal's framework, a two-part answer that is analytically simple and politically difficult. The first part is pedagogical: education should shift from teaching students to produce answers toward teaching them to ask questions, evaluate claims, exercise judgment, and maintain the capacity for independent thought in an environment saturated with machine-generated output. Segal's example of the teacher who stopped grading essays and started grading questions captures this shift with admirable concreteness. The pedagogical case is strong, and the direction is clear.

The second part is institutional, and it is where the difficulty lies. The pedagogical shift requires resources — teacher training, curriculum development, technological infrastructure, assessment reform — that are distributed according to the same patterns of advantage that the shift is supposed to address. Implementing the pedagogical vision in schools that already have resources is relatively straightforward. Implementing it in schools that lack resources requires the transfer of resources from where they are concentrated to where they are needed — a transfer that, as every chapter in this analysis has argued, does not occur spontaneously but must be deliberately, politically, institutionally constructed against the resistance of the interests that benefit from the existing distribution.

The educational response to AI is, in Myrdal's framework, the single most important policy question of the current decade — not because education is more important than labor regulation or technology governance, but because education is the institution through which the other inequalities either compound or moderate across generations. An educational system that equips all students, regardless of their economic background or geographic location, with the capacities that the AI economy rewards would be the most powerful dam ever built against the forces of cumulative causation. An educational system that equips only the already-advantaged with these capacities would be the most efficient mechanism of inequality's reproduction.

The current trajectory points toward the second outcome. The dams are being built in the schools that least need them. The schools that most need them are watching the river rise.

---

Chapter 10: The Interventionist Imperative

Myrdal's career arrived, repeatedly and across multiple national contexts, at a single conclusion that he stated with increasing directness as the decades passed: interventionism is not a policy preference. It is a logical necessity that follows from the empirical reality of cumulative causation. If initial advantages produce further advantages through self-reinforcing feedback loops, and if the market's natural dynamics amplify rather than correct these loops, then the only force that can alter the trajectory is deliberate institutional action — intervention that redirects the flow of advantage toward populations and regions that the market, left to its own logic, will systematically bypass.

The conclusion was never popular. It was resisted by the economists who preferred the elegance of equilibrium models, by the policymakers who preferred the convenience of market solutions, and by the economic interests that benefited from the distributions that intervention would alter. Myrdal accepted the resistance as part of the cost of honest analysis and continued making the argument, with evidence drawn from the American South, from South and Southeast Asia, from the welfare states of Scandinavia, and from the global economy as a whole. The evidence pointed in one direction, and he followed it.

The AI transition tests Myrdal's conclusion under conditions he could not have imagined: computational speed, global scale, and a technology whose benefits compound with the same self-reinforcing logic that his theory describes. If cumulative causation was a problem when the mechanisms of advantage operated at the speed of industrial production and the scale of national economies, it is a crisis when those mechanisms operate at the speed of digital computation and the scale of the global internet.

The interventions that the AI transition requires can be organized along four dimensions, each corresponding to a channel through which cumulative causation operates and each requiring institutional construction of a specific kind.

The first dimension is educational. The analysis in the previous chapter identified education as the decisive institution — the one whose response to AI will determine whether the benefits of the transition concentrate or distribute across generations. The educational intervention must operate at two levels. At the pedagogical level, curricula must shift from the transmission of knowledge, which AI tools can now provide with extraordinary efficiency, toward the development of the capacities that AI tools cannot replicate: questioning, judgment, evaluation, the ability to direct AI tools toward meaningful ends rather than accepting their outputs uncritically. At the institutional level, the resources required for this pedagogical shift must be distributed equitably — not according to local property tax revenues, which reproduce existing patterns of advantage, but according to need, which inverts them. This is not a novel prescription. Myrdal made it in *An American Dilemma* with respect to racial inequality in education, and the prescription has the same structure now: the populations that most need the educational investment are the populations least able to fund it from local resources, and the investment must therefore come from sources — state, national, international — that can redistribute across the lines of advantage.

The specific form the educational intervention should take is itself a matter of institutional design, not mere aspiration. Teacher preparation programs must be restructured to include AI literacy — not the superficial familiarity that a weekend workshop provides, but the deep understanding of what the tools can and cannot do, how they interact with cognitive development, and what pedagogical approaches maintain the capacity for independent thought in an AI-saturated learning environment. Curriculum standards must be revised to reflect the shift from knowledge acquisition to judgment development, and the assessment systems that drive pedagogical practice must be aligned with the new standards — because teachers teach what is tested, and if the tests measure recall while the curriculum aspires to cultivate judgment, the curriculum will lose. Educational technology procurement must be governed by pedagogical criteria rather than vendor marketing, and the devices, connectivity, and institutional supports that AI-integrated education requires must be provided as public goods to schools that cannot afford them as market purchases.

Each of these specifics is administratively achievable. None is politically simple. Each requires the reallocation of resources from where they are currently concentrated to where they are most needed, and each will be resisted by the interests that benefit from the current allocation. Myrdal would insist that this resistance is not a reason to retreat from the prescription. It is a reason to develop the political strategy that the prescription requires.

The second dimension is infrastructural. The analysis in Chapters 4 and 5 demonstrated that the prerequisites for effective AI use — connectivity, hardware, electricity, institutional supports — are distributed along the same lines of advantage that determine every other outcome in the global economy. Addressing this distribution requires investment in the physical and digital infrastructure of the developing world at a scale that neither market forces nor current development budgets provide. The cost of global broadband connectivity, of reliable electrical grids in underserved regions, of the hardware and institutional supports that translate tool-access into capability, is large in absolute terms but modest relative to the economic returns that genuine global participation in the AI economy would generate. The infrastructure investment is not charity. It is a precondition for the global expansion of productive capacity that the AI tools make possible — an expansion that benefits the global economy as a whole, not merely the populations that receive the investment.

Myrdal argued, in *Economic Theory and Under-Developed Regions*, that infrastructure investment in the periphery was one of the most effective mechanisms for counteracting backwash effects, because it provided the physical preconditions for economic activity that market forces alone would not supply. The argument applies to the digital infrastructure that AI requires with undiminished force. The market will not provide broadband to rural Bihar, because the immediate financial return does not justify the investment. The long-term economic return — measured not in quarterly earnings but in human productive capacity, in the innovations that would emerge from a global talent pool that is currently constrained by infrastructure deficits, in the tax revenues and economic growth that broad-based AI participation would generate — exceeds the investment by orders of magnitude. But the long-term return accrues to the public, while the cost of investment must be borne now, and the gap between public return and private cost is the gap that institutional intervention must bridge.

The third dimension is regulatory. The analysis in Chapter 6 documented the political economy of AI governance — the structural dynamics that produce regulatory frameworks responsive to the interests of the companies that build the tools rather than the populations whose lives the tools reshape. The regulatory intervention required is both supply-side and demand-side. Supply-side regulation — governing what companies may build, what disclosures they must make, what safety standards they must meet — is already underway, albeit shaped by the lobbying dynamics that make it responsive to industry interests. Demand-side regulation — governing the institutional conditions that citizens need to participate in the AI economy on equitable terms — is almost entirely absent and is more urgently needed.

Demand-side regulation includes labor protections adapted to the AI economy: portable benefits for gig workers and independent contractors, retraining programs funded by the productivity gains that AI generates, collective bargaining frameworks that give dispersed digital workers the organizing capacity that factory workers achieved through physical proximity. It includes data governance frameworks that ensure the cognitive contributions of AI users — the patterns of thought and interaction that improve the models — are compensated rather than extracted. It includes competition policy that prevents the concentration of AI capability in a handful of companies from hardening into monopolistic control over the tools that the entire economy depends on. Each of these regulatory interventions has precedent in the institutional responses to previous technological transitions — labor protections in the industrial era, antitrust enforcement in the Gilded Age, public utility regulation in the era of electrification — and each faces the same political obstacles that previous interventions faced: the resistance of concentrated economic interests that benefit from the absence of regulation.

The fourth dimension is redistributive. The productivity gains that AI generates are real, measurable, and growing. The question is who captures them. In the absence of redistributive mechanisms, the gains flow to capital — to the shareholders of the companies that build and deploy AI tools — while the costs of the transition, measured in displacement, deskilling, and the erosion of the economic foundations that sustained communities, fall on labor and on the populations that cumulative causation has already disadvantaged. Redistributive mechanisms — progressive taxation of AI-generated profits, sovereign wealth funds capitalized by technology revenues, direct transfers to displaced workers, public investment in the institutional infrastructure that translates AI capability into broad-based benefit — are the fiscal expression of the interventionist imperative. They do not impede the technology. They redirect its gains toward the populations that the technology, left to market dynamics, systematically bypasses.

Myrdal would be the first to acknowledge that this four-dimensional intervention is politically difficult, institutionally complex, and certain to be resisted by the interests it would constrain. He would also be the first to insist that the difficulty is not a reason for abandoning the prescription. The difficulty is a reason for developing the political strategies — the coalitions, the organizing frameworks, the communicative capacities — that the prescription requires. The analytical case for intervention is established by the evidence of cumulative causation. The political case must be constructed through mobilization, and the mobilization must occur during the narrow window in which the distributional patterns of the AI economy are still malleable — before they harden, as every previous round of cumulative causation has hardened, into structures that resist subsequent reform.

The moral test of any technology is not what it makes possible in principle but what it makes possible in practice, for the people who most need expanded capability, in the institutional environments they actually inhabit. By this test, the AI transition has not yet been measured. The tools exist. The capability is real. The democratization is genuine for those who possess the prerequisites. The question — Myrdal's question, the question that his entire career was devoted to answering — is whether the institutional conditions that translate tool-access into genuine human development will be constructed in time, at scale, for the populations that cumulative causation has positioned to benefit least from the technology and suffer most from its absence.

The answer is not determined by the technology. It is determined by the choices — the political, institutional, redistributive choices — that the societies deploying the technology make during the period when the choices still matter.

That period is now. The window is narrow. The forces of cumulative causation are already operating, already compounding, already hardening initial advantages into structural positions that will resist subsequent redistribution. The interventionist imperative is not an ideology. It is the empirical conclusion of a century of evidence about what happens when powerful technologies are deployed within institutional environments that concentrate rather than distribute their benefits.

The dam must be built. It must be built now. And it must be built not where the river is already calm but where the current runs fastest and the banks are most vulnerable — in the schools that lack resources, the communities that lack infrastructure, the nations that lack institutional capacity, the populations that cumulative causation has positioned at the greatest disadvantage and therefore at the greatest need.

Myrdal's framework does not offer optimism. It offers something more valuable: the analytical clarity to see the problem, the institutional specificity to address it, and the moral conviction that understanding a problem creates an obligation to act. The obligation is ours.

---

Epilogue

The number that rewired my thinking was not a trillion, or twenty-fold, or two months to fifty million users. It was one hundred dollars.

I had written the figure myself, in the opening chapters of *The Orange Pill*, presenting it exactly the way it felt from my vantage point: absurdly cheap for what it delivered. One hundred dollars a month for the most powerful creative amplifier in human history. I meant it as evidence of democratization. I believed it when I wrote it.

Then Myrdal's framework arrived, and I had to sit with a question I had not thought to ask: One hundred dollars relative to what?

Not relative to the capability it provides. Relative to the person who pays it. The San Francisco engineer and the developer in Lagos pay the same price and inhabit different economic universes, and the flat fee structure that looked egalitarian from my desk looks regressive from the perspective of the person for whom it represents a quarter of monthly income. I knew this, in the way you know things that sit at the periphery of your attention — present but uninspected. Myrdal's framework moved it to the center, where it belongs.

What unsettles me most about Myrdal is not the diagnosis. It is the circularity. The institutions that could break the cycle are themselves products of the cycle. The educational systems that could prepare everyone for the AI economy are weakest where the preparation is most needed. The regulatory frameworks that could redirect the gains are most captured by industry where the companies are most powerful. The infrastructure investments that could extend access are least likely where the return on investment is measured in human development rather than quarterly earnings. Every lever you reach for is connected to the same machine it is supposed to fix.

I built *The Orange Pill* on the conviction that AI is an amplifier, and that the question is whether you are worth amplifying. I still believe that. But Myrdal forced me to see the question I was not asking: amplified where? Amplified into what institutional environment? Amplified on behalf of whose priorities?

The engineers in Trivandrum were amplified. Twenty-fold. And I chose to keep and grow the team. But Myrdal's point is that my choice is contingent — dependent on my specific values at my specific company — and the system does not run on contingent choices. It runs on institutional structures, and the structures are not being built at the speed the moment requires.

Here is what I take from this: the beaver metaphor holds, but the beaver needs a political education. Building dams is not enough. You have to understand who controls the water rights, who profits from the unregulated flow, and whose fields flood when the dam is placed where the powerful want it instead of where the ecosystem needs it. Stewardship without political awareness is gardening on someone else's land.

I am not abandoning the tower I built in *The Orange Pill*. I am adding a foundation I missed. The view from the roof is real, but the building needs footings that reach into the ground — into the institutional, structural, distributional ground that determines whether the sunrise I described is visible from every position or only from the positions that were already elevated.

The interventionist imperative is not comfortable for a builder. Builders want to build. We want the tool in our hands, the conversation with the machine, the exhilaration of collapsing the imagination-to-artifact ratio toward zero. Myrdal says: yes, build — but ask who the building is for, and build the institutions that ensure the answer is not only the people who look like you, live where you live, and can afford what you can afford.

That is the unfinished work. Not the tools. The dams.

Edo Segal

AI expands capability. Cumulative causation determines who keeps it.

The most important question about the AI revolution is the one almost nobody is asking.

The AI revolution promises democratization: tools available to anyone, barriers collapsing, the floor rising for everyone. Gunnar Myrdal's framework reveals what that promise conceals. His principle of cumulative causation — the empirical finding that advantages compound and disadvantages deepen through self-reinforcing cycles — describes the dynamics of AI adoption with uncomfortable precision. The same tools that empower a developer in San Francisco extract cognitive labor from the Global South. The same productivity gains that expand capability for the well-resourced widen the gap for those without infrastructure, education, or institutional support. Through Myrdal's lens, this book examines who actually captures AI's benefits, who bears the cost of transition, and why the institutional dams that could redirect the flow toward equity are not being built at the speed the moment demands. The technology is not the problem. The distribution is.

"The notion of a value-free social science is naïve empiricism... valuations are always with us."
— Gunnar Myrdal
WIKI COMPANION

Gunnar Myrdal — On AI

A reading-companion catalog of the 20 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Gunnar Myrdal — On AI uses as stepping stones for thinking through the AI revolution.
