Ha-Joon Chang — On AI
Contents
Cover
Foreword
About
Chapter 1: The Ladder They Climbed
Chapter 2: The Amnesia of the Advantaged
Chapter 3: Protectionism Built the Modern World
Chapter 4: The Free Market Fairy Tale
Chapter 5: AI as Industrial Policy by Other Means
Chapter 6: Who Sets the Rules and Why
Chapter 7: The Subsidy Hidden in Plain Sight
Chapter 8: Infant Industry Protection in the Age of Intelligence
Chapter 9: The Impossible Prescription
Chapter 10: The Ladder Still Standing
Epilogue
Back Cover

Ha-Joon Chang

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Ha-Joon Chang. It is an attempt by Opus 4.6 to simulate Ha-Joon Chang's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The number that rewired my thinking was not a productivity multiplier or an adoption curve. It was a tariff rate.

Forty to fifty percent. That was the average US duty on manufactured imports during the century America became the world's dominant industrial power. I had spent my career inside the mythology that markets built Silicon Valley, that entrepreneurial genius conjured the internet, that the garage was the origin story. Chang's tariff schedules — plain, dull, undeniable — told a different story. Every piece of infrastructure I build on was constructed behind protective walls, with public money, over decades. The garage came after. The state came first.

I needed Chang because The Orange Pill has a blind spot, and I would rather name it than pretend it does not exist.

When I describe the developer in Lagos gaining access to the same coding leverage as an engineer at Google, I am telling the truth about the tool. I am leaving out the truth about the ground. Unreliable electricity is not a detail. It is a policy failure. Expensive bandwidth is not a market condition. It is an institutional absence. The gap between what she can build with Claude and what she can sustain as a business is not a gap the tool can close. Only institutions can close it — the same kinds of institutions that wealthy nations built for themselves and now discourage others from building.

Chang spent his career documenting a pattern so consistent it should be a law of development economics: every wealthy nation on Earth built its prosperity using protectionist industrial policies, then turned around and told everyone else to rely on free markets. The countries that climbed the ladder kicked it away. The companies built on public investment preach private initiative. The amnesia is so complete it functions as common sense.

This matters for AI because the same pattern is forming in real time. The frontier models rest on publicly funded research, publicly built internet infrastructure, publicly subsidized semiconductor supply chains. The gains are captured privately. The prescription for the rest of the world is: adopt our tools, compete on your comparative advantage, trust the market. It is the Washington Consensus repackaged for the age of intelligence, and Chang's work is the clearest lens I have found for seeing through it.

The dam metaphor I use throughout The Orange Pill describes what to build. Chang's framework specifies who gets to build it, and who is being told to stand in the river without sticks.

The tools are generous. The conditions for using them are not. And the conditions are constructed — which means they can be reconstructed, if we learn from the record of what actually worked.

Edo Segal · Opus 4.6

About Ha-Joon Chang

1963–present

Ha-Joon Chang (1963–present) is a South Korean economist and one of the most influential critics of free-market orthodoxy in contemporary development economics. Born in Seoul, he studied economics at Seoul National University before completing his PhD at the University of Cambridge, where he taught for over two decades at the Faculty of Economics. In 2022, he moved to the School of Oriental and African Studies (SOAS), University of London. Chang's most celebrated works include *Kicking Away the Ladder: Development Strategy in Historical Perspective* (2002), which documented how today's wealthy nations used protectionist policies to industrialize and then prohibited those same policies for developing countries, and *23 Things They Don't Tell You About Capitalism* (2010), which challenged mainstream economic assumptions with historical evidence and accessible argument. His other major works include *Bad Samaritans: The Myth of Free Trade and the Secret History of Capitalism* (2007) and *Economics: The User's Guide* (2014). Chang's key contributions center on infant industry protection, the role of the state in economic development, the selective amnesia of wealthy nations regarding their own protectionist histories, and the argument that free-market prescriptions imposed on developing countries have consistently produced worse outcomes than the interventionist strategies those countries were pressured to abandon. He has served as a consultant to the World Bank, the Asian Development Bank, and various UN agencies, and was named by *Prospect* magazine as one of the world's top twenty public intellectuals. His work remains a foundational reference for anyone examining how the rules of the global economy are written, by whom, and in whose interest.

Chapter 1: The Ladder They Climbed

In 1721, Robert Walpole, Britain's first Prime Minister, introduced a comprehensive industrial policy that would have made a modern-day Silicon Valley lobbyist choke on his artisanal coffee. Walpole raised tariffs on imported manufactured goods, abolished export duties on most British manufactures, and provided subsidies for raw material imports that British factories needed. He was not being subtle. He was telling the world: Britain will protect its industries until they are strong enough to compete, and not a day before.

This is how the richest country of the nineteenth century actually became the richest country of the nineteenth century. Not through free trade. Not through laissez-faire. Through a deliberate, sustained, interventionist program of industrial protection that lasted more than a century.

Ha-Joon Chang's Kicking Away the Ladder documented this history with the patience of a prosecutor and the satisfaction of someone who has caught the defendant lying under oath. His central finding was devastating in its simplicity: virtually every wealthy nation on Earth today built its prosperity using protectionist industrial policies — tariffs, subsidies, state-owned enterprises, directed credit, technology transfer requirements — and then, once at the top, advocated the opposite for everyone else. The advice the rich gave the poor bore no resemblance to the practice the rich had followed. It was, in the image Chang borrowed from the nineteenth-century German economist Friedrich List, like climbing a ladder to the rooftop and then kicking it away so that nobody else could follow.

The history is specific and well-documented. Britain maintained tariffs on manufactured imports that ranged from forty to fifty-five percent during the period of its most explosive industrial growth. The United States was, for most of the nineteenth century, the most protectionist economy in the world. Alexander Hamilton's Report on the Subject of Manufactures, delivered to Congress in 1791, laid out a comprehensive program of infant industry protection — tariffs, subsidies, quality standards, infrastructure investment — that became the template for American industrial policy for the next hundred years. Average US tariff rates on manufactured imports exceeded forty percent from the Civil War until the First World War, the precise period during which the United States overtook Britain as the world's largest industrial economy.

Germany used state subsidies and directed credit to build its chemical and steel industries in the late nineteenth century. The Prussian state invested directly in railroads, education, and technical training. Japan's Ministry of International Trade and Industry orchestrated one of the most successful programs of industrial development in human history, using import quotas, directed lending through state-controlled banks, technology licensing requirements, and export promotion to transform a devastated postwar economy into a global technological powerhouse in three decades. South Korea followed an almost identical playbook: the Park Chung-hee government in the 1960s and 1970s directed credit to favored industries, imposed import restrictions, negotiated technology transfer agreements with foreign firms, and invested heavily in education and infrastructure. Taiwan, Singapore, and China each developed their own variations on the same theme.

The pattern admits almost no exceptions — Chang grants only the Netherlands and Switzerland as partial outliers, small economies already operating at the technological frontier. There is no case of a nation that climbed from behind to a sophisticated industrial economy through the free-market policies that the wealthy nations now prescribe for everyone else.

This history matters for the argument that Edo Segal advances in The Orange Pill, and it matters in ways that Segal himself acknowledges but does not fully develop. In his chapter on the pattern of technological transitions, Segal writes that "the productivity gains of the industrial revolution took generations to translate into broadly distributed improvements in living standards, and the translation was not automatic: it required labor movements, legislation, decades of political struggle, the explicit construction of institutions that did not exist at the time of the first power loom." This observation is correct and important. Chang's contribution is to specify what those institutions were, who built them, and — most critically — who is now preventing their construction in the context of artificial intelligence.

The translation was not merely "not automatic." It was actively contested at every stage. The factory owners who benefited from the power loom did not voluntarily share their gains with the workers who operated it. The railroad barons who connected the American continent did not spontaneously create safe working conditions or fair wages. The Japanese zaibatsu that built the modern Japanese economy did not do so out of altruism. In every case, the translation of technological capability into broad prosperity required the deliberate construction of institutions — labor laws, public education systems, progressive taxation, social insurance, antitrust regulation — that the beneficiaries of the existing arrangement resisted with every tool at their disposal.

AI is the latest technology requiring this kind of institutional translation. And the translation is being resisted by the same kinds of actors, for the same kinds of reasons, using the same kinds of arguments deployed against every previous round of institution-building.

Consider the contemporary landscape. The companies that built the leading AI systems — Anthropic, OpenAI, Google DeepMind, Meta AI — are American-owned, and American AI dominance rests on a foundation of public investment so massive it is easy to forget it exists. The internet was developed by DARPA, the research arm of the US Department of Defense. The algorithms underlying modern deep learning were developed in universities funded by the National Science Foundation, the Department of Energy, and the National Institutes of Health. The semiconductor supply chains that manufacture the chips running AI systems were shaped by decades of industrial policy in Taiwan, South Korea, and Japan, and are now being reshaped by the CHIPS and Science Act, which committed over fifty billion dollars to subsidizing semiconductor manufacturing and research on American soil.

This is not a free market. This is industrial policy on a scale that would make Robert Walpole weep with envy.

And yet. The discourse that surrounds AI development — the public rhetoric, the conference keynotes, the op-eds in the Financial Times and the Wall Street Journal — presents the AI revolution as a triumph of private enterprise, entrepreneurial vision, and market competition. The garage mythology is powerful. The story of the brilliant founder, the venture-backed startup, the scrappy team that outcompeted the incumbents through pure ingenuity — this story is so deeply embedded in the culture of Silicon Valley that it functions as an origin myth, and like all origin myths, it serves the interests of those who tell it.

The myth is not harmless. It performs a specific ideological function: it attributes the gains of AI to private initiative, which legitimizes the concentration of those gains in private hands. If AI is a product of the free market, then the market's verdict on who should profit is legitimate. If AI is a product of decades of public investment that private companies subsequently commercialized, then the public has a legitimate claim on the returns — and the companies' resistance to taxation, regulation, and redistribution becomes harder to justify.

Chang's framework cuts through the mythology with a historian's scalpel. The question is not whether the entrepreneurs who built Anthropic or OpenAI are brilliant. They are. The question is whether their brilliance would have produced anything at all without the publicly funded internet, the publicly funded research that produced the algorithms their models depend on, the publicly funded university system that educated their workforce, and the publicly subsidized semiconductor supply chain that manufactures their hardware. The answer is obviously no. The brilliance is real. The claim that it is self-sufficient is fantasy.

This matters for the developing world in ways that the AI discourse has barely begun to acknowledge. When Segal describes his developer in Lagos — the woman who now has access to "the same coding leverage as an engineer at Google" — he is describing a real and genuinely significant expansion of individual capability. Chang's framework does not dispute this. What it asks is a different question: What happens between the individual's access to the tool and the society's capacity to benefit from it?

The answer, historically, is: institutions happen. Or they don't. And when they don't, access to powerful tools produces dependency rather than development. The developer in Lagos can build a prototype. Whether she can build a company, employ others, generate tax revenue, and contribute to a domestic technology ecosystem that creates broad prosperity depends on conditions the tool cannot provide: reliable electricity, affordable internet connectivity, intellectual property protections, access to payment systems, functioning courts, a domestic market large enough to sustain a software business, and access to international markets shaped by trade agreements in which Lagos had no voice.

Every one of these conditions is a product of institutional construction. Every one of them was built, in the countries that have them, through the kind of deliberate policy intervention that Chang has spent his career documenting and that contemporary economic orthodoxy dismisses as distortionary. The irony is bitter. The countries that built their institutions through intervention now advise the countries that lack them to rely on the market. The countries that climbed the ladder now kick it away.

Segal writes that "the dams determine whether the trajectory becomes expansion or collapse." Chang's work provides the engineering specifications for those dams. They are not made of metaphor. They are made of tariff schedules and public education budgets and labor codes and telecommunications regulations and antitrust enforcement and progressive taxation and social insurance systems. Each of these was contested. Each of these was resisted by the interests that benefited from the status quo. Each of these was eventually built — in the countries that managed to develop — through a combination of political mobilization, institutional innovation, and sheer persistence.

The AI transition will be no different. The technology is new. The political economy is ancient.

The ladder is still standing. The question that echoes across three centuries of development economics, from Walpole's tariffs to Hamilton's manufactures report to MITI's industrial masterplan, is whether the nations now ascending it will be allowed to climb — or whether the nations at the top will, once again, kick it away.

---

Chapter 2: The Amnesia of the Advantaged

There is a very specific kind of forgetting that happens when a nation becomes rich. It is not the passive forgetting of time passing, the way one forgets the name of a childhood friend or the details of a meal eaten years ago. It is an active, structural, functional forgetting — the deliberate erasure of the means by which the wealth was produced, replaced by a story in which the wealth was always already there, a natural consequence of superior institutions, superior values, superior people.

Ha-Joon Chang has a name for this. He calls it "kicking away the ladder." But the metaphor, vivid as it is, understates the audacity of the maneuver. It is not merely that the wealthy nations climbed a ladder and then removed it. It is that they climbed the ladder, removed it, and then gave lectures to the people on the ground about the moral superiority of not using ladders.

The United States, which maintained tariffs averaging above forty percent on manufactured goods for most of the nineteenth century, spent the second half of the twentieth century pressuring developing nations to lower their tariffs through the World Trade Organization, the International Monetary Fund, and bilateral trade agreements. Britain, which used the Navigation Acts to restrict colonial trade for nearly two centuries, became the world's most vocal champion of free trade the moment its industries could outcompete any rival. The pattern is so consistent across so many countries and so many centuries that calling it a coincidence would require a level of credulity that no economist, however orthodox, should claim.

The amnesia serves a purpose. If the wealthy nations acknowledged that their wealth was produced by policies they now prohibit for others, the entire intellectual foundation of the development establishment — the World Bank, the IMF, the WTO, the OECD — would require reconstruction from the ground up. The policy advice these institutions dispense — liberalize trade, deregulate markets, privatize state enterprises, reduce government spending — would be revealed as not merely wrong but perverse: the opposite of what actually worked.

This is uncomfortable knowledge. The institutions resist it the way any organism resists information that threatens its survival. The amnesia is not a failure of memory. It is a success of ideology.

And now the amnesia has found its newest and most potent expression: artificial intelligence.

The companies that dominate the AI landscape — Anthropic, OpenAI, Google DeepMind, Meta — are American-owned, products of a country that has practiced industrial policy more aggressively and more consistently than almost any other nation on Earth while simultaneously preaching the gospel of free markets to the rest of the world. The American government invested billions of dollars in the basic research that produced the internet, the algorithms, and the computational infrastructure on which AI depends. It funded the university system that trained the researchers. It maintained a patent regime that allowed companies to commercialize publicly funded discoveries. It provided tax incentives, regulatory forbearance, and — through the CHIPS Act — direct subsidies to the semiconductor supply chain.

Having climbed this elaborately constructed ladder, the American AI industry now advocates for "open innovation," "competitive markets," and "light-touch regulation." These phrases sound neutral. They are not. They are the contemporary equivalent of Britain's post-1846 conversion to free trade: the advocacy of openness by the party that no longer needs protection because it has already won.

Consider what "open innovation" means in practice. It means that the companies that have already accumulated the data, the talent, the computational infrastructure, and the brand recognition compete on equal terms with everyone else. Equal terms, in a race where one runner started with a hundred-meter head start, produce a predictable outcome. The advocacy of equality among the unequal is, as Anatole France observed in a different context, a majestic form of injustice.

The technology industry's amnesia is particularly brazen because the public investment that built it is so recent and so well-documented. This is not like tracing the origins of British mercantilism in the sixteenth century, where the evidence must be painstakingly reconstructed from archival sources. The publicly funded origins of the American technology industry are a matter of living memory. Vint Cerf and Bob Kahn developed TCP/IP under DARPA funding. Tim Berners-Lee developed the World Wide Web at CERN, the publicly funded European physics laboratory. The Global Positioning System was a military project. The touchscreen technology in every smartphone was developed with public funding. Siri's underlying technology was funded by DARPA's PAL (Perceptive Assistant that Learns) program. The backpropagation algorithm that enabled modern deep learning was developed by researchers working with government grants.

These are not obscure historical footnotes. They are the load-bearing walls of the building in which the AI industry lives. And yet, the industry's self-presentation consistently minimizes or erases them. The origin story features the visionary founder, the angel investor, the garage. It does not feature the decades of patient, publicly funded basic research without which the garage would have contained nothing more interesting than a car.

Chang's insight is that this erasure is not incidental to the industry's political agenda. It is essential to it. If AI is understood as a product of private ingenuity, then the private sector's claim to control its development and capture its returns is strong. If AI is understood as a product of public investment that private companies commercialized, then the public's claim to shape its development and share its returns is equally strong — and the industry's resistance to regulation, taxation, and redistribution looks less like principled opposition to government overreach and more like an attempt to privatize gains while socializing costs.

The amnesia extends to the international arena. The United States now uses export controls — specifically, restrictions on the sale of advanced AI chips to China and other countries — as an instrument of industrial policy. This is a perfectly rational policy from the perspective of American strategic interests. It is also a breathtaking contradiction of the free-market principles that the United States has spent decades imposing on others. When developing nations used export controls, import quotas, or technology transfer requirements to build their domestic industries, the United States, through the WTO and bilateral trade agreements, pressured them to stop. Now that the technology at stake is AI rather than textiles or steel, the same instruments are acceptable — because they serve American interests rather than threatening them.

This is not hypocrisy in the ordinary sense. Ordinary hypocrisy involves saying one thing and doing another while knowing the difference. The amnesia of the advantaged is more sophisticated. It involves genuinely believing that the rules that serve your interests are universal principles, and that the rules that serve others' interests are distortions. The American policymaker who imposes chip export controls while lecturing China about the dangers of state intervention in markets does not, in most cases, experience cognitive dissonance. The amnesia is too deep for that.

Segal recognizes, in The Orange Pill, that "the question of who captures those gains, whether they flow broadly or narrow, is not yet answered." Chang's analysis suggests that the answer is already being written — not by market forces, not by technological capability, but by the institutional arrangements that the powerful are constructing while the amnesia prevents the rest from seeing what is happening.

The AI Act adopted by the European Union in 2024 represents one attempt to write different rules. The EU's approach — risk-based classification, transparency requirements, fundamental rights protections — reflects a different set of priorities than the American approach. Whether the EU's rules will shape the global AI ecosystem or merely create a compliance burden that advantages the American companies large enough to absorb it remains an open question. What is not open is the question of whether rules matter. Rules always matter. The question is always who writes them.

The amnesia of the advantaged is not a disease that can be cured by exposure to facts. It is a structural feature of the global economy, reinforced by institutional incentives, ideological commitments, and the simple human tendency to attribute one's success to one's virtues rather than one's circumstances. Overcoming it requires not merely better arguments but different power configurations — a redistribution of the capacity to write rules, set standards, and shape the institutional environment in which AI develops.

Chang has spent his career documenting what the amnesia conceals. The question for the AI age is whether the documentation will arrive in time to change the outcome, or whether, as in so many previous episodes, the ladder will be kicked away before most of the world has had a chance to climb it.

---

Chapter 3: Protectionism Built the Modern World

The word "protectionism" has become an insult. To call an economic policy protectionist in polite company is roughly equivalent to calling a person superstitious — it implies a failure of intellectual sophistication, a clinging to primitive beliefs in the face of superior modern knowledge. The superior modern knowledge, in this case, is the theory of comparative advantage, first articulated by David Ricardo in 1817, which demonstrates that all countries benefit from free trade even when one country is more efficient at producing everything.

The theory is elegant. It is mathematically coherent. And it has been used, for two hundred years, to justify policies that have repeatedly failed to produce the outcomes the theory predicts.
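The arithmetic behind Ricardo's demonstration is compact enough to verify directly. A minimal sketch, using the wine-and-cloth labor figures from Ricardo's 1817 example (the `opportunity_cost` helper is illustrative naming of my own, not drawn from any source):

```python
# Ricardo's two-country, two-good example: labor-hours needed per unit.
# Portugal is absolutely more efficient at both goods, yet both countries
# still gain by specializing along comparative advantage.
labor = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country, good, other_good):
    """Units of other_good forgone to free up the labor for one unit of good."""
    return labor[country][good] / labor[country][other_good]

# Portugal forgoes 80/90 ≈ 0.89 cloth per unit of wine; England forgoes 1.2.
# So Portugal holds the comparative advantage in wine and England in cloth,
# despite England's absolute disadvantage in both goods.
for country in labor:
    print(f"{country}: {opportunity_cost(country, 'wine', 'cloth'):.2f} cloth per wine")
```

Trade at any wine-for-cloth price between those two opportunity costs leaves both countries better off than self-sufficiency — that is the whole theorem. The assumptions required to make it hold in practice are another matter.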

Ha-Joon Chang's work does not dispute Ricardo's mathematics. What it disputes is the assumption that mathematical elegance translates into policy relevance. The theory of comparative advantage assumes, among other things, that factors of production do not move between countries, that technology is fixed, that there are no economies of scale, and that full employment obtains. In the real world, none of these assumptions holds. Capital moves. Technology changes. Scale matters enormously. And unemployment — the chronic, structural unemployment that results when a country's existing industries are destroyed by foreign competition before new ones have time to develop — is not a theoretical curiosity. It is a catastrophe, measured in lives.

The gap between the theoretical elegance of free trade and its practical consequences has been filled, for most of the past two centuries, by the thing that the theory says should not be necessary: protection. Tariffs, subsidies, import quotas, technology transfer requirements, directed credit, state investment in infrastructure and education — the entire toolkit of what contemporary economics dismisses as "distortionary" intervention — built every successful modern economy.

Consider the sequence. Britain protected its textile industry with tariffs and export bans on raw wool until it was globally competitive, then converted to free trade and urged everyone else to do the same. The United States imposed tariffs that averaged above forty percent on manufactured imports during the entire century of its industrialization. Germany's Friedrich List explicitly argued against Ricardo's free trade doctrine, proposing instead an "infant industry" strategy: protect your industries while they are young, invest in them, give them time to learn, and liberalize only when they are strong enough to survive foreign competition. Japan followed List's advice more faithfully than List's own countrymen. South Korea followed Japan.

Each of these countries eventually liberalized. The crucial word is "eventually." They liberalized after their industries had become competitive — not before. The sequence matters more than the destination. A country that liberalizes before its industries are competitive exposes those industries to competition they cannot survive. The industries die. The skills they were building die with them. The country is left with whatever it could already do — typically the export of raw materials and low-value services — and the promise that comparative advantage will, in time, produce prosperity.

That promise has been tested. The results are in.

The developing countries that followed the Washington Consensus prescription — liberalize trade, deregulate markets, privatize state enterprises, cut government spending — during the 1980s and 1990s experienced, on average, slower growth than they had achieved during the 1960s and 1970s, when most of them were pursuing protectionist industrial policies. Latin America, the poster region for the Washington Consensus, lost an entire decade of development. Sub-Saharan Africa lost two. The countries that defied the prescription — China, most obviously, but also Vietnam, India (which liberalized selectively rather than comprehensively), and Ethiopia — grew faster.

This is not a contested finding. The data is clear. The countries that protected their industries grew faster than the countries that liberalized. The theory said the opposite should happen. The theory was wrong — not because its logic was flawed, but because its assumptions did not hold in the real world.

The relevance to AI is direct and immediate.

The AI transition is producing a global division of labor: a small number of countries develop frontier AI models, and everyone else adopts them. The developing nations adopt rather than develop because they lack the computational infrastructure, the trained talent pools, the research ecosystems, and the financial resources to build frontier models. The conventional prescription — adopt the leading tools, integrate them into your economy, compete on the basis of your existing comparative advantage — is structurally identical to the Washington Consensus advice that failed so spectacularly in the 1980s and 1990s.

Segal captures the optimistic version of this adoption in The Orange Pill when he describes the democratization of capability. A developer in Lagos, a student in Dhaka, an engineer in Trivandrum — each can now access tools that were previously available only to well-resourced teams in wealthy countries. The imagination-to-artifact ratio, in Segal's compelling phrase, has collapsed to the width of a conversation. This is real. The expansion of individual capability is genuine and significant.

But individual capability and national development are not the same thing. A country full of individuals who can build prototypes is not a country that has built a technology industry. The difference is institutional. A technology industry requires educational institutions that produce a continuous supply of trained talent. It requires financial institutions that can evaluate and fund technology ventures. It requires legal institutions that protect intellectual property and enforce contracts. It requires physical infrastructure — electricity, internet connectivity, logistics networks — that functions reliably. It requires a domestic market large enough to sustain technology firms during the period when they are learning and not yet globally competitive.

Every one of these institutional requirements was met, in the countries that built successful technology industries, through deliberate public investment and strategic policy intervention. Israel's technology sector was built on the back of military R&D spending and government venture capital funds. Taiwan's semiconductor industry was created by the state-funded Industrial Technology Research Institute, which licensed technology from RCA and then spun off TSMC. South Korea's technology sector was built through directed credit from state-controlled banks, government-negotiated technology transfer agreements, and massive public investment in education.

The countries that are now being advised to "adopt AI tools and compete on the basis of their comparative advantage" are being given advice that the countries dispensing the advice never followed themselves. The United States did not build its technology sector by adopting British tools and competing on the basis of its comparative advantage in cotton. It built its technology sector by protecting its infant industries, investing in public education, funding basic research, and constructing the institutional infrastructure that made technology development possible.

Chang's 2025 warning about India — that it would be "one of the biggest casualties of AI" — crystallizes the danger with uncomfortable precision. India built its growth model on services: business process outsourcing, call centers, low-value-add IT services. These are precisely the activities that AI automates most effectively. A country that skipped manufacturing and built its middle class on activities that a large language model can perform is a country standing on ground that is about to give way.

The conventional response is that India should "move up the value chain" — shift from low-value services to high-value services, from call centers to AI development. This advice has the same structure as the advice that developing countries should pursue their comparative advantage: it tells you where to end up without telling you how to get there. How does a country move up the value chain when the upper rungs are occupied by incumbents with decades of accumulated advantage? How does it build an AI development ecosystem when the frontier models require billions of dollars of computational investment, when the talent is concentrated in a handful of wealthy-country universities and companies, and when the rules of the game — intellectual property regimes, chip export controls, data governance frameworks — are written by the incumbents?

The answer, historically, is always the same. Strategic intervention. Protect the infant industries. Invest in education and research. Build the institutional infrastructure. Give the domestic ecosystem time to learn, to fail, to iterate, and to develop capabilities that cannot be acquired overnight. And resist — politically, diplomatically, institutionally — the pressure from the countries at the top of the ladder to liberalize before you are ready.

Protectionism is not a permanent condition. It is a phase. It is what a country does while it is building the capability to compete. Every successful country went through this phase. Every successful country eventually graduated from it. The sin of contemporary economic orthodoxy is not that it advocates eventual liberalization — eventual liberalization is indeed the goal. The sin is that it insists on immediate liberalization, which forecloses the possibility of ever building the capability that would make liberalization beneficial.

The modern world was built by protectionism. The modern AI economy is being built by protectionism — American protectionism, Chinese protectionism, European protectionism, each conducted under different names and different justifications, but recognizably the same in structure. The question is not whether protectionism works. The historical evidence on that question is settled. The question is who gets to practice it.

---

Chapter 4: The Free Market Fairy Tale

Once upon a time, in a garage in Palo Alto, two young men with big ideas and no money built a company that changed the world. They did it through genius, hard work, and the magic of the free market. The government stayed out of the way. The market rewarded their innovation. And everyone lived prosperously ever after.

This is the creation myth of the technology industry. Like all creation myths, it serves the civilization that tells it. And like all creation myths, it is not true.

Ha-Joon Chang's work has always been concerned with the stories economies tell about themselves — the narratives that make existing arrangements seem natural, inevitable, and just. In 23 Things They Don't Tell You About Capitalism, he systematically dismantled the most cherished stories of mainstream economics: that free markets produce the best outcomes, that shareholders should run companies, that government intervention distorts markets, that poor countries are poor because they lack the right values. Each chapter took a piece of conventional wisdom, examined it against the historical record, and found it wanting.

The technology industry has its own set of conventional wisdoms, and they are, if anything, more resistant to examination than those of mainstream economics, because they are wrapped in the peculiar glamor of innovation. The innovator, in the Silicon Valley mythos, is a secular saint: brilliant, driven, unencumbered by institutional loyalties or bureaucratic caution, creating value from nothing through pure force of vision. The market is the arena in which this saint proves his worth. Government is the obstacle that must be overcome — or, in the more diplomatic version, the well-meaning but slow-footed institution that should get out of the way and let the innovators innovate.

This story is not merely incomplete. It is the opposite of what happened.

Start with the internet. The internet was not developed by private enterprise. It was developed by DARPA, the Defense Advanced Research Projects Agency, with public money, for military purposes. ARPANET, the precursor to the internet, was commissioned in 1969 and funded entirely by the US Department of Defense. The TCP/IP protocol that makes the internet work was developed by Vint Cerf and Bob Kahn under DARPA contract. The Domain Name System was developed under government auspices. The transition from ARPANET to the commercial internet was managed by the National Science Foundation. At no point in the creation of the internet did the private sector take the lead. The private sector entered the picture after the infrastructure was built, the protocols were established, and the network was functioning — and then claimed credit for the revolution.

The World Wide Web was not developed in a garage. It was developed at CERN, the European Organization for Nuclear Research, a publicly funded international research institution. Tim Berners-Lee was a CERN employee when he wrote the proposal for what would become the Web. CERN made the technology available royalty-free in 1993. The decision not to patent the Web — a decision made by a public institution acting in the public interest — is arguably the single most economically consequential act of generosity in the history of technology. Had CERN patented the Web and licensed it commercially, the entire subsequent history of the digital economy would have been different.

Move to the algorithms that power modern AI. The backpropagation algorithm, which enabled the training of deep neural networks, was developed through academic research funded by government grants. The convolutional neural networks that revolutionized image recognition were developed by Yann LeCun at Bell Labs, which was at the time a subsidiary of AT&T, a regulated monopoly whose research capacity was effectively subsidized by guaranteed profits from its telephone business. The transformer architecture that underlies every modern large language model was developed by researchers at Google — but Google's capacity to fund that research rested on a business model built atop the publicly funded internet, and the researchers themselves were trained in publicly funded universities.

The hardware tells the same story. The semiconductor industry that manufactures the chips running AI systems is the product of decades of industrial policy. TSMC, the Taiwanese semiconductor company that produces the most advanced AI chips in the world, was created by the Taiwanese government's Industrial Technology Research Institute, using technology licensed from RCA. The South Korean semiconductor industry was built through government-directed credit and technology transfer agreements. The American semiconductor industry was sustained through military procurement during its early decades — the US Department of Defense was, for years, the largest purchaser of integrated circuits, providing the demand that made commercial production economically viable.

Mariana Mazzucato, the economist whose The Entrepreneurial State documented the public origins of private technology in meticulous detail, traced the components of the iPhone — the technology that the fairy tale attributes entirely to Steve Jobs's genius — and found that every core technology in the device was developed with public funding. The internet, GPS, the touchscreen, Siri's underlying technology, the lithium-ion battery — all products of government-funded research. Jobs's genius was in assembling these publicly funded technologies into a beautiful consumer product. That is a genuine and valuable form of innovation. It is not the same as creating the technologies from scratch, and it does not justify the claim that the market, left alone, would have produced the same outcome.

The fairy tale performs a specific ideological function, and it is worth being explicit about what that function is. If innovation is a product of private initiative operating in free markets, then four conclusions follow. First, the profits from innovation belong to the innovators — taxation is a confiscation of privately created value. Second, regulation is a drag on innovation — the less government interference, the more innovation. Third, the market's distribution of rewards is just — the people who capture the most value are the people who created the most value. Fourth, the correct policy for countries that want to innovate is to create the conditions for private enterprise — low taxes, light regulation, flexible labor markets — and then step aside.

Each of these conclusions serves the interests of the companies and individuals who have already captured the gains from publicly funded innovation. Each is undermined by the historical record that the fairy tale conceals.

If innovation is substantially a product of public investment, then the public has a legitimate claim on the returns — through taxation, through regulation, through the requirement that the technologies developed with public money be deployed in ways that serve the public interest. If the market's distribution of rewards reflects not the creation of value but the appropriation of publicly created value, then the distribution is not just — it is a subsidy flowing from the public to the private, the opposite of what the fairy tale describes.

This matters for AI governance in the most immediate and practical way. The debate over how to regulate AI is, at bottom, a debate over who has the legitimate authority to shape how AI develops. The fairy tale says the market should decide — that the companies building AI are best positioned to determine how it is used, and that government intervention will only slow them down. The historical record says the opposite — that the technologies the companies are building rest on a foundation of public investment, and that the public, through its elected representatives, has both the right and the responsibility to shape how those technologies develop.

Segal writes that "the market rewards efficiency more reliably than it rewards vision" and acknowledges that "the market does not reward patience." These observations are accurate as far as they go. Chang's framework pushes further: the market does not merely fail to reward patience. It actively punishes the investments — in basic research, in education, in infrastructure, in institutional capacity — that produce the conditions for innovation in the first place. These investments have long time horizons, uncertain payoffs, and benefits that are difficult to appropriate privately. The market, left to its own devices, underproduces them. It always has. That is why every successful innovation ecosystem in history was built on a foundation of public investment that the market alone would not have provided.

The fairy tale is not just wrong. It is expensive. It costs developing nations the policy space to invest in their own AI capabilities. It costs workers the institutional protections that would ensure they share in the gains from AI-driven productivity growth. It costs the public the authority to shape how publicly funded technologies are deployed. And it costs the discourse the intellectual honesty that genuine policy-making requires.

There is a simple test for whether a country actually believes in the free market: look at what it does, not what it says. The United States says it believes in free markets. It has just committed over fifty billion dollars in public subsidies to semiconductor manufacturing through the CHIPS Act. It maintains export controls on advanced AI chips that would make a mercantilist blush. It funds basic AI research through government agencies at a level that dwarfs most countries' entire national research budgets.

These are not the actions of a country that believes the free market will produce the optimal allocation of resources. These are the actions of a country that is practicing industrial policy with full conviction while telling everyone else to rely on the market. The fairy tale is for export. Domestically, the toolkit is the same one that Walpole used, that Hamilton used, that List recommended, that Japan and Korea and Taiwan and China employed to build the industrial economies that now lead the world.

The fairy tale says the garage is where innovation happens. The historical record says the garage is where publicly funded innovation gets repackaged as private enterprise. The distinction matters — not because the entrepreneurs in the garage are not doing real work, but because the story about where value comes from determines the story about who deserves to capture it. And that story, in the age of AI, will determine whether the most powerful technology in human history produces broad prosperity or concentrated wealth.

The fairy tale has been told often enough and confidently enough that it has acquired the force of common sense. Chang's life work has been to remind us that common sense is often the sediment of yesterday's propaganda, hardened into assumption. The AI age needs better assumptions. Building them requires, first, dismantling the fairy tale — not with anger, but with evidence. The evidence is abundant. The question is whether anyone in a position to act on it is willing to listen.

---

Chapter 5: AI as Industrial Policy by Other Means

In the spring of 2024, Anthropic announced that Claude would support conversations in over a hundred languages. The announcement was celebrated, reasonably, as a step toward inclusion. A tool that speaks Swahili and Bengali and Tagalog is, on its face, a more democratic tool than one that speaks only English. But the celebration obscured a question that nobody on the stage was asking: Who decided which languages to prioritize, in what order, with what level of capability, and according to what criteria?

The answer is: Anthropic did. A private company, headquartered in San Francisco, accountable to its investors and its board, made a decision that shaped which populations on Earth would have high-quality access to the most powerful cognitive tool ever built, and which populations would have to wait.

This is industrial policy. It is not called industrial policy, because the phrase is reserved for actions taken by governments. But the effect is identical. When a government decides to invest in semiconductor manufacturing rather than textile production, it is making an industrial policy decision — choosing which sectors of the economy to develop, which populations to employ, which capabilities to build. When Anthropic decides to optimize Claude for English-language software development before optimizing it for Yoruba-language agricultural extension, it is making the same kind of decision, with the same kind of distributional consequences. The difference is that the government's decision is, at least in theory, subject to democratic accountability. Anthropic's is not.

Ha-Joon Chang's framework illuminates this with uncomfortable clarity. Chang has spent decades arguing that the most important economic decisions are not made by markets. They are made by the institutions that shape markets — the rules, the standards, the regulations, the investment priorities that determine which activities are profitable and which are not, which industries thrive and which wither, which populations gain access to new capabilities and which are left behind. In the conventional economic story, these decisions are made by governments, and the debate is over whether governments make them well or badly. Chang's contribution is to show that the debate is incomplete: the decisions are also made by private actors, and the pretense that private decisions are "market outcomes" rather than policy choices obscures the power that private actors exercise over public outcomes.

The leading AI companies are making industrial policy decisions on a global scale. Consider the choices that are embedded in the design of a frontier AI model — choices that are made by engineers and executives, not by elected representatives, and that have consequences reaching far beyond the company's customer base.

The choice of training data determines which knowledge the model can access and which it cannot. A model trained predominantly on English-language text from the internet has deep knowledge of American case law, European philosophy, and Silicon Valley programming conventions. It has shallow knowledge of oral legal traditions in West Africa, indigenous agricultural practices in Southeast Asia, and informal economic institutions in South Asia. This is not a limitation that can be fixed by adding a few more languages to the interface. It is a structural feature of the model's knowledge base, shaped by the data that was available, affordable, and prioritized during training. The resulting model is not a neutral tool. It is a tool that knows some things and not others, and the pattern of its knowledge reflects the priorities of its creators.

The choice of optimization targets determines what the model is good at. A model optimized for coding productivity — because that is what paying customers in San Francisco need — will be extraordinarily useful for software developers and less useful for smallholder farmers, community health workers, or informal-sector entrepreneurs. This is a reasonable business decision. It is also an industrial policy decision: the choice to develop capabilities that serve wealthy-country knowledge workers before developing capabilities that serve the populations most in need of cognitive assistance.

The choice of pricing determines who can access the model at all. Frontier model access costs real money: subscription fees, API costs, and computational expenses that are affordable in San Francisco and prohibitive in Dhaka. Segal acknowledges this in The Orange Pill when he notes that "the cost of inference of these frontier models is very high" and that token costs "could be cost-prohibitive even to the affluent developer in San Francisco." The pricing of AI is a distributional decision with global consequences, and it is made by corporate finance departments, not by democratic deliberation.
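The arithmetic behind this distributional point can be sketched directly. The per-token prices, usage volumes, and income figures below are invented for illustration; they are not any provider's actual rates:

```python
# Hypothetical illustration: frontier-model API cost as a share of income.
# Every number here is an assumption chosen for the sake of arithmetic.

PRICE_PER_1M_INPUT_TOKENS = 3.00    # USD, assumed
PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # USD, assumed

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a month of usage at the assumed rates."""
    return (input_tokens / 1_000_000) * PRICE_PER_1M_INPUT_TOKENS \
         + (output_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT_TOKENS

# A developer making heavy daily use: ~20M input / 5M output tokens a month.
cost = monthly_api_cost(20_000_000, 5_000_000)  # -> 135.0 USD

# The same dollar figure against two illustrative monthly incomes.
for place, income in [("San Francisco", 12_000), ("Dhaka", 250)]:
    print(f"{place}: ${cost:.2f}/month = {100 * cost / income:.1f}% of income")
```

At these assumed figures the identical bill is about one percent of one income and more than half of the other; that asymmetry, not the sticker price, is the distributional decision the paragraph describes.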

Each of these choices — training data, optimization targets, pricing — shapes the global distribution of AI's benefits as surely as any tariff, subsidy, or trade agreement. Taken together, they constitute an industrial policy of enormous scope, conducted by a handful of private companies, on behalf of their shareholders, with no mechanism for democratic input or accountability.

Chang's analysis of the World Trade Organization provides a useful parallel. The WTO's rules — ostensibly neutral, formally applying equally to all member states — were written by the wealthy nations that dominated the organization's founding negotiations. The rules reflected the interests of the rule-writers: strong intellectual property protections that benefited technology-exporting nations, restrictions on industrial subsidies that constrained developing nations' ability to build domestic capability, and dispute resolution mechanisms that favored well-resourced litigants. The rules were presented as universal principles of fair trade. They were, in practice, instruments of advantage.

The AI ecosystem's emerging rules follow the same pattern. Intellectual property regimes that protect the AI companies' models while treating the training data contributed by millions of creators as freely appropriable. Safety standards developed by the leading companies and then proposed as global norms — standards that, not coincidentally, align with the technical capabilities and business models of the companies that developed them. Evaluation benchmarks that measure the capabilities the leading models excel at and ignore the capabilities that underserved populations most need.

Segal asks, in The Orange Pill, "who captures the expansion, and who bears the cost of the transition?" This is exactly the right question. Chang's framework provides the analytical tools to answer it. The expansion is captured by whoever conducts the industrial policy. When the industrial policy is conducted by private companies accountable to shareholders, the expansion is captured by shareholders and the populations those companies choose to serve. When the industrial policy is conducted by democratically accountable governments representing diverse constituencies, the expansion may — not will, but may — be captured more broadly.

The word "may" is important. Chang is not naive about government. He has documented cases where government industrial policy failed spectacularly — where subsidies flowed to politically connected firms rather than productive ones, where protection bred complacency rather than competitiveness, where state-owned enterprises became vehicles for patronage rather than development. Government is not inherently wise. But government is, at least in principle, accountable to the public in a way that Anthropic is not. The failures of government industrial policy can be corrected through democratic processes — elections, legislative oversight, judicial review. The failures of corporate industrial policy can be corrected only by the market, and the market's corrections tend to be slow, painful, and indifferent to the populations that bear the cost.

The distinction between government industrial policy and corporate industrial policy is not merely academic. It has immediate practical consequences for how the AI transition unfolds.

If AI governance is understood as a matter of regulating private companies — the approach taken by the EU AI Act and most proposed American legislation — then the framework assumes that private companies are the natural locus of AI development and that government's role is to constrain their excesses. This framing accepts the corporate industrial policy as a given and asks only how to mitigate its worst effects.

If AI governance is understood as a matter of public industrial policy — the approach implied by the CHIPS Act, by China's national AI strategy, and by Chang's framework — then the question is different. It is not "How do we regulate the companies?" but "How do we shape the ecosystem?" Not "How do we prevent harm?" but "How do we direct capability toward the populations and purposes that need it most?" These are different questions, and they produce different answers. The first question produces compliance frameworks. The second produces development strategies.

The developing world needs development strategies, not compliance frameworks. A compliance framework tells Nigeria how to regulate AI. A development strategy tells Nigeria how to build an AI ecosystem — how to train researchers, how to develop domain-specific models, how to build the institutional infrastructure that translates AI capability into broad domestic benefit. The difference between the two is the difference between consuming a technology and producing one. And the difference between consuming and producing has been, throughout the history of economic development, the difference between dependency and sovereignty.

Chang's work suggests that the most consequential AI governance question is not what rules to impose on AI companies. It is who gets to conduct AI industrial policy, and in whose interest. The companies will continue to make the decisions that shape the global distribution of AI's benefits, regardless of what regulations are imposed, because the decisions are embedded in the technology itself — in the training data, the optimization targets, the pricing, the language support, the capabilities that are developed and the capabilities that are deferred. The only way to change the outcome is to change who makes the decisions. And that requires not regulation of the existing actors but the creation of new actors — public institutions, in developing countries, with the resources and the mandate to conduct AI industrial policy on behalf of their populations.

This is, to be sure, an ambitious prescription. Building public AI institutions in countries that lack reliable electricity is not a simple matter. But the historical record is clear: the countries that built public institutions for industrial development — however imperfectly, however slowly, however messily — are the countries that developed. The countries that waited for the market to develop them are still waiting.

The AI companies are conducting industrial policy. The question is not whether industrial policy is happening. It is whether it will happen only for shareholders, or also for citizens.

---

Chapter 6: Who Sets the Rules and Why

In 1994, the Uruguay Round of trade negotiations concluded with the Marrakesh Agreement establishing the World Trade Organization, which came into being the following year. The agreement was celebrated as a triumph of multilateral cooperation — a rules-based international trading system that would ensure fairness, predictability, and mutual benefit for all nations. The celebrations were premature.

Ha-Joon Chang documented what actually happened. The rules were written primarily by the United States, the European Union, and Japan — the three dominant trading powers of the era. The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which imposed American-style patent and copyright protections on every member state, was drafted largely by a coalition of American pharmaceutical, software, and entertainment companies and adopted over the objections of developing nations that correctly predicted it would raise the cost of medicines, restrict technology transfer, and constrain their policy space for decades.

The Agreement on Subsidies and Countervailing Measures restricted the ability of developing nations to subsidize their infant industries — the same subsidies that the United States, Europe, and Japan had used extensively during their own industrialization. The Agreement on Agriculture, ostensibly designed to liberalize agricultural trade, permitted the wealthy nations to maintain their enormous agricultural subsidies while requiring developing nations to open their markets to the resulting subsidized imports.

The rules were formally equal. Every nation was bound by the same text. But rules that treat the unequal equally are not fair. They are a mechanism for preserving inequality while wearing the mask of fairness. Chang's phrase for this — "kicking away the ladder" — captures the dynamic precisely: the nations that had already developed used the multilateral system to prohibit the policies they had used to develop, locking in their advantage behind a wall of apparently neutral regulations.

Now consider the rules being written for artificial intelligence.

The EU AI Act, adopted in 2024, is the most comprehensive regulatory framework for AI in the world. It classifies AI systems by risk level, imposes transparency requirements, mandates human oversight for high-risk applications, and prohibits certain uses outright. It is a serious, thoughtful piece of legislation, and it reflects genuine concern for fundamental rights and democratic values.

It was also written by and for wealthy European nations. The compliance costs of the AI Act — the documentation requirements, the conformity assessments, the risk management systems, the monitoring obligations — are substantial. For a large technology company with dedicated legal and compliance teams, they are manageable. For a startup in Lagos or a public institution in Dhaka, they are prohibitive. The effect, whatever the intention, is to create a regulatory environment that favors large incumbents over small challengers and wealthy nations over developing ones.

This is not a criticism unique to the EU. Every regulatory framework has distributional consequences. The American approach — lighter regulation, greater reliance on voluntary industry standards and post-hoc enforcement — favors different actors but is no more neutral. It favors the companies that write the voluntary standards, which are, not coincidentally, the companies with the most resources and the most market power.

Chang's analytical insight is that the question of who writes the rules is more important than the content of the rules themselves. Rules written by incumbents will protect incumbents. Rules written by developing nations will protect developing nations' policy space. Rules written through genuinely inclusive processes will — at minimum — reflect a broader range of interests. The content follows from the authorship.

The AI standards currently under development illustrate the point with uncomfortable precision. The National Institute of Standards and Technology (NIST) in the United States has developed an AI Risk Management Framework that is becoming, through the gravitational pull of American market power, a de facto global standard. The framework is technically sound. It is also an American product, reflecting American assumptions about what risks matter, what capabilities are important, and what institutional arrangements are appropriate for managing AI. Countries that adopt it — as many will, because adopting the American standard is the path of least resistance for companies that want to sell into the American market — are adopting not just a technical framework but a set of institutional assumptions embedded in the framework.

The same dynamic plays out in evaluation benchmarks. The benchmarks that determine which AI models are considered "state of the art" — MMLU, HumanEval, GSM8K, and their successors — measure capabilities that matter to the populations that designed the benchmarks. Mathematical reasoning. Code generation. English-language comprehension. These are valuable capabilities. They are not the only capabilities that matter. A benchmark that measured an AI model's ability to provide agricultural extension advice in Hausa, or to navigate informal legal systems in Tamil Nadu, or to support community health workers in rural Mozambique would measure different capabilities and produce different rankings. The benchmarks that exist reflect the priorities of the institutions that created them, and those institutions are concentrated in a small number of wealthy countries.
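The dependence of rankings on benchmark choice can be made concrete with a toy sketch. The models, tasks, and scores below are entirely invented for illustration:

```python
# A toy illustration of the chapter's point: which model is "state of the
# art" depends on what the benchmark measures. All scores are invented.

scores = {
    # model: {task: accuracy}
    "Model A": {"code_generation": 0.92, "math": 0.88, "hausa_agri_qa": 0.41},
    "Model B": {"code_generation": 0.78, "math": 0.74, "hausa_agri_qa": 0.69},
}

def rank(task: str) -> list:
    """Order models by accuracy on a single task, best first."""
    return sorted(scores, key=lambda m: scores[m][task], reverse=True)

print(rank("code_generation"))  # the leader on an MMLU-style capability
print(rank("hausa_agri_qa"))    # the order flips on an underserved one
```

Nothing about either model changed between the two calls; only the choice of what to measure did. That choice is the form of power the paragraph describes.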

Technical standards are a form of power that operates below the threshold of public visibility, and for that reason they are among the most durable and least contested forms of power in the global economy. Tariffs are visible. Subsidies are visible. Export controls are visible. Standards are invisible — they look like technical necessities rather than political choices, and their distributional consequences are felt only by the people who are disadvantaged by them, who are usually the people with the least capacity to contest them.

Consider the Unicode standard for text encoding. Unicode was developed primarily by American technology companies in the late 1980s and early 1990s. It was a genuine achievement — a universal standard for representing text in every writing system on Earth. But the process of developing it was dominated by companies and researchers whose priorities were shaped by the Latin alphabet and the commercial needs of the American software industry. Scripts with large character sets — Chinese, Japanese, Korean — were accommodated because the market for those languages was commercially significant. Less commercially significant scripts were handled later, less completely, and with less attention to the nuances that their users cared about.

This is not malice. It is the predictable outcome of a standard-setting process dominated by actors with specific interests. The Unicode Consortium did not set out to marginalize minority writing systems. It set out to solve a technical problem, and the solution it produced reflected the priorities of the people in the room.
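The layering Chang describes is recorded in the standard's own history: each script carries the Unicode release that first included it, and scripts outside the early commercial priorities arrived years or decades later. A minimal Python sketch, using a few illustrative characters of my own choosing rather than examples drawn from the text:

```python
# Compare characters from scripts that entered Unicode at different times.
# The code point and UTF-8 byte length show where each script was placed;
# the comments note roughly when the standard added it.
samples = [
    ("Latin capital A", "A"),           # U+0041, in Unicode from 1.0 (1991)
    ("CJK ideograph",   "\u4e2d"),      # U+4E2D, added in the early 1990s
    ("N'Ko letter",     "\u07ca"),      # U+07CA, added in Unicode 5.0 (2006)
    ("Adlam letter",    "\U0001e900"),  # U+1E900, added in Unicode 9.0 (2016)
]
for name, ch in samples:
    print(f"{name}: U+{ord(ch):04X}, {len(ch.encode('utf-8'))} bytes in UTF-8")
```

N'Ko and Adlam, scripts used across West Africa, entered the standard roughly fifteen and twenty-five years after Latin did, which is the lag the preceding paragraphs describe.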

The AI standards being developed now will have consequences far more significant than text encoding. They will determine which AI capabilities are considered safe, which are considered risky, and which are prohibited. They will determine what documentation AI companies must provide, what tests AI systems must pass, and what oversight mechanisms must be in place. They will determine, in practical terms, which countries and companies can participate in the global AI ecosystem and which are priced out by compliance costs.

Segal calls for governance — for "rules that regulate how AI tools are developed, deployed, and distributed." Chang's framework adds the question that Segal's call leaves open: governance by whom? Rules written by the leading AI companies and the nations that host them will reflect the interests of those companies and nations. Rules written through inclusive processes that give voice to the populations that will live under the rules — the workers whose jobs will be transformed, the students whose education will be reshaped, the developing nations whose economies will be restructured — will reflect a different set of interests.

The history of international rule-making provides little ground for optimism. The rules of the global trading system were written by the powerful. The rules of the global financial system were written by the powerful. The rules of the global intellectual property system were written by the powerful. In each case, the rules were presented as neutral, universal, and mutually beneficial. In each case, they served the interests of their authors while constraining the options of everyone else.

But history also provides examples of successful contestation. The TRIPS agreement, which Chang identified as one of the most damaging instruments of the post-Uruguay Round order, was partially amended through sustained pressure from developing nations, public health advocates, and civil society organizations. The Doha Declaration on TRIPS and Public Health, adopted in 2001, affirmed that the agreement should be interpreted in a manner that supports public health, and it established mechanisms for developing nations to issue compulsory licenses for essential medicines. This was not a victory produced by the market. It was a victory produced by political mobilization — by the organized, sustained, strategically sophisticated efforts of people who understood the rules well enough to change them.

The AI rules are being written now. The authorship is not yet fixed. The question is whether the populations that will be most affected by these rules — the workers, the students, the developing nations — will organize effectively enough to claim a seat at the table where the rules are written. Chang's entire body of work is an argument that the seat must be claimed, not requested, and that claiming it requires understanding the rules well enough to know which ones to fight.

The rules matter more than the technology. The technology is a given — it will advance regardless of what any government does. The rules determine who benefits from the advance. And the rules are written by whoever shows up to write them.

---

Chapter 7: The Subsidy Hidden in Plain Sight

Somewhere in the training data of every major large language model is the work of a programmer who spent four years learning to code, three years contributing to open-source projects on GitHub, and approximately zero seconds being asked whether a trillion-dollar company could use her labor to train a commercial product.

She was not paid. She was not consulted. She was not credited. Her code — along with the code of millions of other programmers, the text of millions of writers, the images of millions of photographers, the music of millions of composers — was treated as raw material, freely available for extraction, the way a mining company treats ore in the ground.

Ha-Joon Chang would recognize this pattern instantly. It is enclosure. Not the enclosure of common land that preceded the English industrial revolution, where fields that had sustained communities for centuries were fenced off and converted to private property. This is the enclosure of common knowledge — the appropriation of collectively produced intellectual resources by private companies for commercial exploitation.

The parallel is not metaphorical. It is structural. In both cases, a resource that was produced by many and available to all was appropriated by a few and converted into private wealth. In both cases, the appropriation was presented not as theft but as improvement — the land would be more productive under private management; the data would be more useful as a trained model. In both cases, the people who had produced and maintained the resource received nothing, while the people who appropriated it received enormous returns.

The scale of the transfer is difficult to comprehend. The training dataset for a frontier large language model contains trillions of tokens — words, code fragments, image descriptions, musical notations — each of which represents a tiny fragment of someone's labor, creativity, and expertise. No single contribution is large enough to be individually significant. But the aggregate is the foundation of an industry valued in the hundreds of billions of dollars. The value was created collectively and captured privately. This is a subsidy, flowing from millions of individual creators to a handful of corporations, and it is the largest involuntary transfer of intellectual value in economic history.

The legal framework that enables this transfer is instructive. In the United States, the AI companies have argued that training a model on copyrighted material constitutes "fair use" — that the transformation of individual works into statistical weights in a neural network is sufficiently transformative to fall outside the scope of copyright protection. Whether this argument will survive judicial scrutiny remains uncertain; several major cases are working their way through the courts. But the legal question, important as it is, obscures the economic question that Chang's framework insists upon: regardless of whether the appropriation is legal, is it just? And regardless of whether it is just, what are its distributional consequences?

The distributional consequences are clear. The value of the training data flows upward — from the individual creators who produced it to the companies that aggregated and commercialized it. The creators receive nothing directly. They receive, it is argued, the indirect benefit of access to the tools that their labor made possible. But this argument has the same structure as the argument that workers in the early industrial revolution benefited from the factories that their dispossession made possible. The benefit is real, in the long run, in the aggregate. The cost is immediate, specific, and borne by identifiable people.

Segal addresses this in The Orange Pill through the figure of the elegist — the senior software architect who felt "like a master calligrapher watching the printing press arrive." The elegist's grief, Segal notes, is the sound of expertise being dissolved into a tool. Chang's framework adds the economic dimension that the elegist himself may not articulate: his expertise was not merely dissolved. It was extracted. His years of patient practice, his accumulated craft knowledge, his hard-won understanding of how systems fit together — all of this was scraped, tokenized, and incorporated into a model that now competes with him. The printing press analogy understates the situation. The printing press did not learn from the calligrapher's manuscripts without asking. It merely made his skill less commercially necessary. The large language model did both: it appropriated his skill and then made it unnecessary.

The conventional defense of this arrangement rests on two arguments. The first is that the individual contributions to training data are so small as to be negligible — no single programmer's code, no single writer's text, constitutes a meaningful fraction of the training dataset. This is true as a mathematical statement and irrelevant as an economic one. The value of a coal mine does not reside in any single lump of coal. It resides in the aggregate, and the question of who owns the aggregate is a question of law and politics, not geology. The fact that no individual contribution is significant does not mean that the contributions, taken together, are not the foundation of the entire enterprise. They are. And the fact that the enterprise could not exist without them creates a legitimate claim — not necessarily a legal claim, under current law, but an economic and moral claim — on the value they helped produce.

The second defense is that the creators benefited from the culture of open sharing that made their work available in the first place. The programmer who posted her code on GitHub benefited from the code that others had posted. The writer who published on the open web benefited from the writing of others. The commons was reciprocal, the argument goes, and the AI companies simply participated in the same culture of open exchange.

This argument fails on its own terms. The commons was reciprocal among its participants. The programmer who posted code on GitHub expected other programmers to use it, learn from it, build on it. She did not expect a corporation to aggregate her code with millions of other contributions, train a commercial product on the aggregate, and sell that product for billions of dollars while paying her nothing. The reciprocity of the commons was premised on a shared understanding — an implicit social contract — that the contributions would be used in certain ways and not others. The AI companies violated that understanding, and the violation is not redeemed by the fact that the understanding was implicit rather than contractual.

Chang's analysis of the global trading system provides a direct parallel. The TRIPS agreement, which imposed American-style intellectual property protections on developing nations, was justified by the argument that strong IP protections benefit everyone by encouraging innovation. The developing nations pointed out that the innovation being protected was overwhelmingly produced in wealthy countries, and that the TRIPS agreement effectively required them to pay for access to knowledge they had previously obtained freely or cheaply. The wealthy nations were, in effect, enclosing the global knowledge commons and then charging rent for access to it.

The AI training data situation inverts this dynamic in an instructive way. Here, it is not the wealthy nations imposing IP protections to capture value from the developing world. It is wealthy corporations resisting IP claims to extract value from individual creators worldwide. In both cases, the mechanism is the same: the powerful party defines the rules of the knowledge commons in whatever way maximizes its own returns. When strong IP serves the powerful, IP is sacred. When weak IP serves the powerful, IP is an obstacle to innovation.

The resolution of this question — who owns the value embedded in AI training data — will shape the economics of the AI age as profoundly as the enclosure movement shaped the economics of the industrial age. If the current arrangement persists — if training data remains freely appropriable and the value it generates flows exclusively to the companies that aggregate it — then the AI economy will rest on a foundation of uncompensated extraction that reproduces, at a vastly larger scale, the distributional dynamics of every previous enclosure.

Alternative arrangements are conceivable. Collective licensing schemes, modeled on the performing rights organizations that license music. Data trusts that hold training data on behalf of the communities that produced it and negotiate terms of access with AI companies. Taxation of AI revenues with proceeds directed toward the creators and communities whose work made the models possible. Public ownership of training datasets, compiled through transparent and consensual processes.

Each of these alternatives has practical difficulties. None is obviously superior. What they share is a common premise: that the value embedded in training data belongs, at least in part, to the people who created it, and that the current arrangement — in which it belongs exclusively to the companies that appropriated it — is not a natural outcome of market forces but a policy choice, made by specific actors, serving specific interests, and susceptible to being changed through political action.

Chang's framework insists that we see the subsidy for what it is. Not a market outcome. Not an inevitable feature of the technology. A policy choice, with identifiable beneficiaries, made in an institutional context that the beneficiaries had a disproportionate role in creating. The subsidy is hidden in plain sight. Recognizing it is the first step toward deciding whether to maintain it.

---

Chapter 8: Infant Industry Protection in the Age of Intelligence

In 1961, South Korea's per capita income was lower than that of many sub-Saharan African countries. The country had virtually no industrial base. Its main exports were fish, raw silk, and tungsten ore. Its infrastructure had been devastated by war. By any standard the Washington Consensus would later establish, South Korea was a hopeless case — a resource-poor, war-torn nation with no apparent comparative advantage in any manufactured good.

Sixty years later, South Korea is the world's twelfth-largest economy. Samsung and SK Hynix are among the most advanced semiconductor manufacturers on Earth. Hyundai and Kia are global automotive brands. Korean popular culture — K-pop, Korean cinema, Korean television — is a major export. The transformation is, by any measure, one of the most remarkable episodes of economic development in human history.

It did not happen through free markets. It happened through the most systematic, sustained, and comprehensive program of infant industry protection the modern world has seen.

Ha-Joon Chang, who grew up in South Korea during the period of this transformation, has documented the program in detail. The Park Chung-hee government identified target industries — steel, shipbuilding, automobiles, electronics, chemicals — and used every available policy tool to develop them. State-controlled banks directed credit to favored firms at below-market interest rates. Import quotas and tariffs protected domestic producers from foreign competition. The government negotiated technology transfer agreements with foreign companies, requiring them to share technical knowledge as a condition of market access. Export targets were set and enforced, with firms that met them rewarded with additional subsidies and those that failed disciplined through reduced credit access.

The program was messy, politically contentious, and frequently corrupt. Some of the favored firms failed. Some of the industrial targets proved unwise. The government made mistakes, sometimes catastrophic ones. But the overall trajectory was unmistakable: from fish and tungsten to semiconductors and automobiles in a single generation. And the mechanism was equally unmistakable: strategic state intervention that protected infant industries until they were strong enough to compete globally, and not a day before.

The concept of infant industry protection is simple. A new industry, in a developing country, cannot immediately compete with established producers in wealthy countries. The established producers have advantages that the newcomer cannot match: accumulated experience, economies of scale, established supply chains, brand recognition, access to cheaper capital, and the tacit knowledge that comes from decades of practice. Left to the market, the newcomer will be destroyed by competition before it has time to learn. The infant dies in the crib.

Protection gives the infant time to grow. Tariffs keep foreign competitors at bay while the domestic industry builds capacity. Subsidies offset the higher costs that the domestic industry incurs during its learning period. Directed credit provides the capital that private financial markets, focused on short-term returns, would not supply. Technology transfer requirements give the domestic industry access to the knowledge it needs to close the gap with established producers.

The protection is not meant to be permanent. It is meant to be temporary — long enough for the infant to grow into an adult that can survive without support. The key distinction, which Chang emphasizes repeatedly, is between protection that breeds competence and protection that breeds complacency. Korea's industrial policy succeeded in part because it included performance requirements: firms that received subsidies were expected to meet export targets, invest in research and development, and demonstrate continuous improvement. Firms that failed to perform lost their subsidies. The protection was conditional, not unconditional.

This concept applies to the AI transition with a directness that the contemporary policy discourse has been slow to recognize.

Consider the situation facing a country like Nigeria. Nigeria has a young, growing population, a vibrant entrepreneurial culture, and a nascent technology sector. Lagos is one of the most dynamic startup ecosystems in Africa. Nigerian developers are talented, ambitious, and increasingly well-connected to global technology networks.

But Nigeria has no frontier AI capability. It has no capacity to train large language models. It lacks the computational infrastructure — the GPU clusters, the high-bandwidth interconnects, the massive electricity supply — that frontier AI development requires. It lacks the research ecosystem — the universities with AI departments, the postdoctoral programs, the industry-academic partnerships — that produce the researchers who advance the field. It lacks the financial infrastructure — the venture capital firms, the sovereign wealth funds, the patient capital that long-term technology development requires.

The conventional prescription says: adopt the leading tools. Use Claude, GPT, Gemini. Integrate them into your economy. Compete on the basis of your existing comparative advantage — which, in Nigeria's case, means services, agriculture, and natural resource extraction.

This prescription has the same structure as the prescription that told developing nations in the 1980s to open their markets and pursue their comparative advantage. It sounds generous. It sounds efficient. And it produces dependency.

A country that adopts foreign AI tools without building domestic AI capability becomes a consumer of AI, not a producer. The value chain runs through Silicon Valley: the models are trained there, the infrastructure is operated there, the profits accumulate there. The Nigerian developer uses the tool, builds applications on top of it, and generates value — but the foundational value, the value embedded in the model itself, flows back to the company that built the model and the country that hosts the company.

This is not a new pattern. It is the same pattern that characterized the colonial resource-extraction economies of the nineteenth century, updated for the digital age. The colony provided the raw materials. The metropole processed them into manufactured goods. The manufactured goods were sold back to the colony at a markup. The value-added — the processing, the manufacturing, the innovation — remained in the metropole.

In the AI version of this pattern, the developing country provides the users, the data, and the market. The wealthy country provides the model, the infrastructure, and the ecosystem. The value-added — the training, the refinement, the continuous improvement of the model — remains in the wealthy country. The developing country's participation in the AI economy is real but shallow: it captures the surplus of application development while the surplus of model development flows elsewhere.

Chang's 2025 warning about India must be understood in this context. India's service-sector growth model — the call centers, the business process outsourcing, the low-value-add IT services — was always a form of dependent development. It created employment and generated foreign exchange, but it did not build the kind of deep industrial capability that would make India an autonomous participant in the global technology economy. The value-added was low. The barriers to entry were low. And now the barriers to replacement are low: the services that India provides are precisely the services that AI automates most effectively.

A country that had invested in building domestic AI capability — in training researchers, developing models, constructing the institutional infrastructure that supports technology development — would be better positioned to navigate this transition. Not immune to disruption — no country is immune to disruption — but capable of participating in the AI economy as a producer rather than merely a consumer.

Building that capability requires infant industry protection. Not the crude protectionism of blanket tariffs, but the sophisticated, performance-conditioned, strategically targeted protection that worked in Korea, in Taiwan, in China. Public investment in AI research and education. Government-funded computational infrastructure that domestic researchers can access without paying commercial rates. Technology transfer requirements imposed on foreign AI companies that want to operate in the domestic market. Domestic data governance frameworks that give the country sovereignty over the data resources generated by its population. Procurement policies that favor domestic AI applications over foreign ones when the domestic applications meet minimum performance standards.

Each of these policies will be resisted. The foreign AI companies will argue that requirements for local data storage, technology transfer, or domestic procurement are "distortionary" — that they interfere with the efficient operation of the market. The international institutions will echo this argument, as they have echoed it for decades. The trade agreements will contain provisions that limit the policy space available for infant industry protection, just as the WTO agreements limited the policy space for manufacturing protection.

The pressure to conform — to adopt the leading tools, to integrate into the existing global AI ecosystem, to accept the rules written by others — is the contemporary equivalent of the free-trade pressure that prevented so many developing nations from building domestic manufacturing capability in the 1980s and 1990s. The urgency is real: falling behind in AI capability is costly, and the gap between leaders and followers widens with every passing year. But the response to urgency should not be uncritical adoption. It should be strategic adoption — using foreign tools where necessary while building domestic capabilities where possible, and negotiating the terms of engagement from a position of informed self-interest rather than passive acceptance.

The infant industries of the AI age are not steel mills or automobile factories. They are AI research labs, data infrastructure, computational capacity, and the educational institutions that produce the researchers and engineers who will staff them. Protecting these infant industries does not mean banning ChatGPT or blocking Claude. It means creating the conditions under which domestic alternatives can emerge, compete, and — eventually, as in every successful case of infant industry development — grow strong enough to survive without protection.

The Korean model worked not because it was perfect but because it was strategic: targeted, conditional, performance-oriented, and explicitly designed to build capability rather than merely protect markets. The countries that will navigate the AI transition most successfully will be the ones that apply the same strategic logic — adapted to the specific requirements of the AI age, but grounded in the same principles that have produced every successful case of industrial development in the modern era.

The infant is in the crib. The question is whether it will be given time to grow, or whether the doctrine of free markets — applied selectively by the nations that no longer need protection — will ensure that it never leaves the nursery.

---

Chapter 9: The Impossible Prescription

There is a particular kind of advice that wealthy people give to poor people. It sounds generous. It sounds practical. It is often delivered with genuine sincerity. And it is almost always useless, because it assumes the person receiving the advice inhabits the same world as the person giving it.

"Invest in index funds." This is excellent advice for someone who has money to invest. It is meaningless advice for someone who spends everything they earn on rent and food.

"Network with people in your industry." This is excellent advice for someone who attended an elite university and lives in a city with a functioning professional ecosystem. It is absurd advice for someone in a rural town with no industry to speak of.

"Build the dam." This is excellent advice for someone with sticks, mud, teeth, and time.

Ha-Joon Chang's career has been devoted to exposing the structural impossibility of prescriptions that assume universal starting conditions. The Washington Consensus told developing nations to liberalize trade, deregulate markets, privatize state enterprises, and cut government spending. These prescriptions were drawn from the experience of wealthy nations — or, more precisely, from a highly selective reading of that experience that omitted the century of protectionist industrial policy that preceded the liberalization. The prescriptions assumed that developing nations could achieve through openness what wealthy nations had achieved through intervention. They could not. The results were catastrophic for much of the developing world.

Segal, in The Orange Pill, offers prescriptions for navigating the AI transition. They are thoughtful prescriptions, born of genuine experience and genuine concern. Build AI Practice frameworks — structured pauses where AI tools are set aside and people engage directly with each other. Sequence workflows rather than parallelize them, protecting deep thought against the temptation to do everything at once. Create protected mentoring time where junior people develop intuition through slow, friction-rich interaction with experienced colleagues. Maintain the dam. Tend the ecosystem.

These are excellent prescriptions. For a company in San Francisco with access to capital, institutional stability, legal protections, a functioning educational system, and a domestic market large enough to sustain the investment in employee development that the prescriptions require, they are actionable and wise.

For a company in Lagos, they describe a world that does not exist.

This is not a criticism of Segal's intentions, which are clearly sincere. It is a criticism of the assumption of universality — the assumption that a prescription developed in one institutional context can be transplanted to another without modification. Chang's entire body of work is an argument against this assumption, and the AI transition is the context in which the assumption is most dangerous and most widespread.

Consider what the Beaver's ethic requires in practice. Segal describes his decision to keep and grow his team at Napster rather than converting the twenty-fold productivity gain into headcount reduction. This was a genuine choice, made against genuine economic pressure. The Believer's path — reduce staff, capture the margin — was available and tempting. Segal chose the Beaver's path: invest in people, expand what they build, develop their capacity to direct AI wisely.

This choice was possible because Segal operates within an institutional environment that makes it possible. He has access to capital markets that value long-term growth, not just quarterly returns. He has labor laws that, whatever their imperfections, establish a baseline of worker protection. He has an educational system that produces a continuous supply of trained talent. He has infrastructure — electricity, internet connectivity, legal systems — that functions reliably. He has a domestic market of three hundred and thirty million people, the wealthiest consumer market in human history.

Remove any one of these conditions, and the choice becomes harder. Remove several, and it becomes impossible.

A technology company in Lagos operates on thinner margins. It has less access to patient capital — African venture capital markets are growing but remain a fraction of the American market. It faces higher infrastructure costs — generators for power outages, premium prices for reliable internet, the invisible tax of unreliability that pervades every aspect of doing business in an environment where public infrastructure is inadequate. Its domestic market is large in population but constrained in purchasing power. Its legal environment is less predictable. Its educational system produces talented individuals but not enough of them, and the most talented are continually recruited away by companies in wealthier countries.

In this environment, the twenty-fold productivity gain creates a different calculus. The pressure to convert productivity into headcount reduction is not merely economic. It is existential. The company that maintains its full staff while expanding capability may not survive the quarter. The company that cuts staff and captures the margin may survive to build another day. The Beaver's path, in this context, is not a choice between ethics and efficiency. It is a choice between a principle and survival.

Chang would recognize this immediately as the infant industry dilemma in a different guise. The developing-country firm, like the developing-country industry, cannot afford the investments that long-term development requires because the short-term competitive pressure is too intense. Without protection — without the institutional support that gives the firm breathing room to invest in its own development — the competitive pressure will always win. The firm will always choose survival over development, margin over growth, the Believer's path over the Beaver's.

This is not a failure of character. It is a failure of institutions.

The eight-hour day was not produced by the ethical choices of individual factory owners. It was produced by legislation that applied the constraint equally to all factories, removing the competitive disadvantage of ethical behavior. An individual factory owner who unilaterally reduced hours while his competitors did not would go bankrupt. The legislation made the ethical choice economically viable by making it universal.

The same logic applies to AI governance. An individual company that unilaterally invests in employee development, maintains work-life boundaries, and resists the pressure to extract maximum productivity while its competitors cut staff and capture margin will be at a competitive disadvantage. The investment in people is the right thing to do. It is also the expensive thing to do. And in a competitive market, the expensive thing loses to the cheap thing unless the rules of the game make the cheap thing unavailable.

This is why Chang insists that individual ethics, however admirable, are not a substitute for institutional design. The question is not whether Segal's prescriptions are good — they are. The question is whether the institutional environment makes them viable for everyone, not just for companies that already enjoy the advantages of wealth, stability, and access.

Segal acknowledges this partially when he writes about the developer in Lagos and the constraints she faces — unreliable power grids, limited bandwidth, economic precarity. But the acknowledgment does not extend to the prescription. The prescription remains universal: build the dam, maintain the dam, tend the ecosystem. The constraints are noted as context. They should be noted as barriers — barriers that require institutional demolition before the prescription becomes actionable.

What would it take to make the Beaver's ethic viable in Lagos? The answer reads like a development policy agenda: reliable electricity, affordable high-speed internet, a functioning legal system for intellectual property and contract enforcement, access to patient capital, educational institutions that produce a continuous supply of AI-literate graduates, labor protections that prevent the productivity-to-layoff pipeline, procurement policies that give domestic AI applications a fighting chance against foreign incumbents, and trade agreements that preserve the policy space to implement all of the above.

Every item on this list is a public good. Every item requires public investment. Every item will be resisted by the interests that benefit from the current arrangement — the foreign AI companies that prefer unencumbered access to developing-country markets, the international institutions that prefer liberalization to intervention, the domestic elites who capture rents from the existing distribution of advantage.

The Beaver's ethic is not wrong. It is incomplete. It describes what to do but not how to create the conditions under which doing it is possible. And the conditions are not natural features of the environment. They are constructed — built through the same kind of deliberate, strategic, politically contested institutional engineering that produced every previous round of broad-based development.

Segal's five-stage pattern — threshold, exhilaration, resistance, adaptation, expansion — identifies adaptation as the stage that "decides everything." Chang's framework specifies what adaptation must actually consist of: not just cultural norms and individual practices, but industrial policies, labor protections, educational investments, redistributive mechanisms, and regulatory frameworks that make the ethical choice the economically viable choice for everyone, not just for the privileged.

The prescription is impossible — not because it is wrong, but because the conditions for following it do not yet exist in most of the world. Making it possible is the work of institutional construction. And institutional construction is, as it has always been, a political project — requiring not just vision and goodwill but organization, mobilization, and the sustained willingness to contest the interests that prefer the world as it is.

The advice that the wealthy give the poor is not wrong in content. It is wrong in context. And the gap between content and context is not bridged by sincerity. It is bridged by institutions.

---

Chapter 10: The Ladder Still Standing

The ladder is still standing. This much is true, and it is worth stating clearly before examining the forces that threaten to pull it down.

The tools are available. Claude Code, GPT, Gemini, and their successors are accessible to anyone with an internet connection and a modest subscription fee. The developer in Lagos can, today, build applications that she could not have built five years ago. The student in Dhaka can access explanations of complex concepts in his own language, at his own pace, with a patience no human tutor can match. The small business owner in São Paulo can prototype a customer management system over a weekend. The expansion of individual capability that Segal describes in The Orange Pill is real, documented, and significant.

The AI transition has produced, in its first phase, a genuine reduction in the barriers to individual creation. The imagination-to-artifact ratio has collapsed. The translation cost that separated intention from execution has been radically diminished. A person with an idea and the ability to describe it in natural language can produce a working prototype in hours. This is not hype. It is observable reality, measurable in the products shipped, the businesses launched, and the problems solved by people who could not have solved them before.

Ha-Joon Chang's framework does not deny any of this. It asks a different question — the question that every previous round of technological expansion eventually forced societies to confront: What happens between the expansion of individual capability and the emergence of broad-based prosperity? What institutions, policies, and political arrangements are required to translate the former into the latter? And who is building those institutions — or, more to the point, who is preventing them from being built?

The historical record on this question is unambiguous. Technological capability has never, in any society, at any point in recorded history, automatically translated into broad prosperity. The translation has always required deliberate institutional construction: policies that direct investment toward productive capacity, protections that give nascent industries time to develop, redistributive mechanisms that ensure the gains from new technology flow broadly rather than concentrating at the top, educational systems that prepare populations to participate in the new economy, and labor protections that prevent the productivity gains from being captured entirely as corporate profit.

Every item on this list was contested. Every item was built through political struggle against the interests that preferred the status quo. And every item was, in retrospect, essential to the translation of technological capability into the kind of broad prosperity that the wealthy nations now enjoy and that their citizens now take for granted.

The AI transition is at the beginning of this translation process. The capability is expanding rapidly. The institutions that would translate that capability into broad prosperity are, with a few notable exceptions, not being built.

Consider the current landscape of AI governance. The EU AI Act addresses the supply side: what AI companies may build, what disclosures they must make, what risk assessments they must conduct. It does not address the demand side: what citizens, workers, students, and developing nations need to actually benefit from the technology. The American approach is lighter still — voluntary commitments, industry-led standards, and a regulatory philosophy that prioritizes innovation over distribution. China has its own approach, tightly controlled and oriented toward national strategic objectives. None of these frameworks is designed to answer the question that Chang's work insists upon: How do you translate technological capability into broad-based prosperity for the populations that need it most?

The question is not abstract. It has concrete, specific, answerable components.

First, education. The educational systems of most countries are not preparing their populations for the AI economy. This is true in wealthy countries, where curricula still emphasize the production of answers over the formulation of questions, and it is acutely true in developing countries, where educational infrastructure is strained, underfunded, and often oriented toward the needs of an economy that is being restructured beneath the educators' feet. Chang's framework insists that public investment in education is not a luxury. It is the most fundamental form of industrial policy — the investment that determines whether a country's population can participate in the next economy or is relegated to the margins of it.

Second, infrastructure. The digital infrastructure of most developing countries — internet connectivity, electricity reliability, computational capacity — is inadequate for broad AI adoption. The developer in Lagos can access Claude Code, but she cannot access it reliably, affordably, or at the speed that productive use requires. The infrastructure gap is not closing through market mechanisms. It is widening, because the market invests where returns are highest, and returns are highest where infrastructure already exists. Public investment in digital infrastructure is, like public investment in education, a precondition for broad participation in the AI economy. It will not happen without deliberate policy.

Third, labor protections. The Berkeley study that Segal cites in The Orange Pill found that AI does not reduce work — it intensifies it. Workers using AI tools worked faster, took on more tasks, and experienced the "task seepage" that colonized their breaks, their evenings, and their weekends. In countries with strong labor protections, this intensification can be managed — through mandated rest periods, limits on working hours, and the right to disconnect. In countries without these protections, the intensification becomes exploitation, and the productivity gains flow entirely to the employer while the costs are borne entirely by the worker.

Fourth, redistributive mechanisms. The gains from AI-driven productivity growth will, if history is any guide, concentrate at the top of the income distribution unless specific policies redirect them. Progressive taxation, social insurance, public investment in the common goods that markets underprovide — these are the mechanisms that translated the gains of previous technological revolutions into broad prosperity. They were not automatic. They were fought for, legislated, implemented, and defended against continuous efforts to dismantle them. The AI transition will require the same mechanisms, updated for the specific characteristics of the AI economy.

Fifth, and most ambitiously, international governance. The rules of the AI economy are being written now, by the companies and countries that dominate the field. These rules will determine, for decades, who participates in the AI economy as a producer and who participates only as a consumer. Who sets the standards, who controls the data, who owns the infrastructure, who captures the value — these are questions that cannot be answered by individual nations acting alone. They require international cooperation, and the history of international cooperation on economic matters is, to put it gently, not encouraging.

But the history is not uniformly discouraging. The Doha Declaration on public health, the amendments to TRIPS that preserved developing nations' access to essential medicines, the coalition of developing nations that blocked the most damaging provisions of the Cancún trade negotiations — these were victories won through organized, informed, strategically sophisticated political action by nations that understood the rules well enough to change them. The rules of the AI economy can be contested in the same way, if the nations that stand to lose from the current trajectory organize effectively to contest them.

Chang has always been clear that the trajectory is not determined by the technology. It is determined by the institutions that shape how the technology is deployed. The same AI tools that could concentrate wealth and deepen dependency could, under different institutional arrangements, distribute capability broadly and support the development of domestic AI ecosystems in the countries that most need them. The technology is agnostic. The institutions are not.

The ladder is still standing. The tools are available. The capability is expanding. The question — the question that has been asked at every turning point in the history of economic development, and that has never been answered by the market alone — is whether the institutions will be built that translate capability into prosperity.

Chang's work provides the blueprint. Not a detailed engineering plan — no blueprint can specify in advance the precise institutions that every country needs, because institutional design must be adapted to local conditions, local capabilities, and local political realities. But a set of principles, drawn from the accumulated evidence of three centuries of development experience:

Protect your infant industries. Not indefinitely, not unconditionally, but strategically — giving domestic capability time to develop before exposing it to competition it cannot survive.

Invest in your people. Education is not a cost. It is the most productive investment a society can make, and the returns are measured not in quarters but in generations.

Build your institutions. Labor protections, redistributive mechanisms, regulatory frameworks, educational systems, digital infrastructure — these are the dams that direct the river of technological capability toward broad prosperity rather than concentrated wealth.

Contest the rules. The rules of the global AI economy are being written now. Absent deliberate contestation, they will be written by the incumbents, for the incumbents. The only antidote to this is organized, informed, strategically sophisticated participation in the rule-writing process by the nations and populations that stand to lose from the incumbents' preferred arrangement.

Refuse the fairy tale. The story that AI-driven prosperity will emerge naturally from free markets and entrepreneurial genius is, like the story that industrial prosperity emerged naturally from free trade, a retrospective fabrication that serves the interests of the people who have already captured the gains. The actual mechanism of broad prosperity — in every case, in every country, without exception — has been deliberate institutional construction, contested at every step by the interests that preferred the status quo.

The ladder is still standing. Whether it remains standing depends on choices being made now — by governments, by companies, by international institutions, and by the citizens and workers who will live with the consequences of those choices. The history says the choices will be contested. The history also says they can be won.

Ha-Joon Chang has spent his career documenting what happened the last time the ladder was kicked away. The purpose of the documentation is not nostalgia. It is instruction. The pattern is known. The mistakes are identified. The alternative is specified. The question is whether, this time, the ladder will be defended — or whether the amnesia of the advantaged will, once again, prevail.

---

Epilogue

The tariff schedule is what did it.

Not the arguments about the river, not the philosophical tensions between Han's garden and the engineer's screen, not even the vertigo of watching my own team transform in Trivandrum. Those were the experiences that cracked open the question. But it was Chang's tariff schedules — those plain, dull columns of import duties from nineteenth-century America and Britain — that rearranged how I understood the answer.

Forty to fifty percent. That was the average US tariff on manufactured goods during the century America built itself into the world's largest industrial economy. I had known the number existed. I had never sat with what it meant. It meant that every story I had absorbed about markets and meritocracy and the self-made nation was, at minimum, incomplete. It meant that the infrastructure I build on — the internet, the algorithms, the chips, the research universities that trained the people who trained the models — was not conjured from entrepreneurial will. It was constructed, deliberately, with public money, behind protective walls, over decades.

Chang's framework forced me to look at the developer in Lagos differently. When I wrote about her in The Orange Pill, I described what she could now do. Claude Code gave her the same leverage as an engineer at Google. The imagination-to-artifact ratio collapsed. The floor rose. All of that remains true. But Chang made me see what I had left out of the picture: not the tool in her hands but the ground beneath her feet. Unreliable electricity is not a detail. It is a policy failure. Expensive bandwidth is not a market condition. It is an institutional absence. The gap between what she can build and what she can sustain is not a gap the tool can close. It is a gap that only institutions can close — the same kinds of institutions that wealthy nations built for themselves and now discourage others from building.

This is the part that stays with me. Not because it invalidates the optimism I feel about AI — it does not. I remain convinced that the expansion of human capability I described is real and significant and worth celebrating. But because it reveals the incompleteness of the celebration. The tool is generous. The conditions for using it are not. And the conditions are not natural. They are constructed, by specific actors, through specific choices, and they can be reconstructed through different choices — if the people making the choices are willing to learn from the record of what actually worked.

I keep returning to Chang's core provocation: the countries that succeeded did so by doing the opposite of what they now prescribe for others. The countries that climbed the ladder kick it away. The companies that were built on public investment preach private initiative. The amnesia is so complete that it has become invisible — not a lie exactly, but a forgetting so total it functions as common sense.

My children will inherit whatever institutions we build or fail to build. The tools will be there regardless — better, faster, more capable with each passing year. The question is whether the institutions will be there too. Whether the dams will be built not just in San Francisco but in Lagos, in Dhaka, in São Paulo. Whether the prescriptions I offer in The Orange Pill will be actionable for everyone or only for the privileged.

Chang taught me that the answer to that question has never been delivered by the market. It has always been delivered by political struggle, institutional innovation, and the refusal to accept that the current distribution of advantage is natural or permanent. The ladder is still standing. But it will not stand on its own.

Edo Segal

The richest countries on Earth built their wealth behind walls of tariffs, subsidies, and state intervention.
Then they told everyone else to trust the market.
AI is next.

The frontier AI models were built on publicly funded research, publicly constructed internet infrastructure, and publicly subsidized semiconductor supply chains. The gains are captured privately. The prescription for the developing world -- adopt the tools, compete on your comparative advantage -- is the Washington Consensus repackaged for the age of intelligence. Ha-Joon Chang has spent three decades documenting what happens when wealthy nations kick away the ladder they climbed. His framework, applied to AI, reveals the institutional machinery that determines whether the most powerful technology in human history produces broad prosperity or concentrated wealth. This is not a book about whether AI works. It is about who it works for -- and why the answer to that question has never been decided by markets alone.

Ha-Joon Chang
"Once you realize that trickle-down economics does not work, you will see the excessive tax cuts for the rich as what they are -- a simple upward redistribution of income, rather than a way to make all of us richer, as we were told."
— Ha-Joon Chang
WIKI COMPANION

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Ha-Joon Chang — On AI uses as stepping stones for thinking through the AI revolution.
