David Landes — On AI
Contents
Cover
Foreword
About
Chapter 1: Culture as the Amplifier of the Amplifier
Chapter 2: The European Miracle and Its AI Parallel
Chapter 3: Why Clock-Making Mattered: The Culture of Precision and the Culture of Judgment
Chapter 4: Tolerance, Curiosity, and the Innovation Ecosystem
Chapter 5: The Role of Education in National AI Capability
Chapter 6: What the Industrial Revolution Teaches About AI Transitions
Chapter 7: The Invention of Invention and the Invention of Judgment
Chapter 8: Climate, Geography, and the Digital Divide
Chapter 9: The Culture of Maintenance Versus the Culture of Innovation
Chapter 10: The Long View and the Patient Society
Epilogue
Back Cover
Cover

David Landes

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by David Landes. It is an attempt by Opus 4.6 to simulate David Landes's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question that kept nagging me was not about the technology. It was about the room.

I stood in Trivandrum watching twenty engineers transform their output with Claude Code. Twenty-fold productivity. The number was real. The energy was real. But something kept pulling at me in the weeks after, a question I could not articulate until I found David Landes.

The question was: Why this room?

Not why this tool — I understood the tool. Not why this moment — I understood the moment. Why these particular engineers, with these particular habits of mind, producing these particular results? Because the tool was identical everywhere. The same Claude Code available in Trivandrum was available in a hundred other cities. The model did not care about geography. It did not adjust its capability based on the educational system that had shaped the person prompting it.

But the results were not identical everywhere. They could not be. The engineers in that room brought something the tool did not provide: decades of cultivated judgment, the habit of questioning output rather than accepting it, the architectural instinct built through years of friction-rich education. That cognitive architecture was not theirs alone. It was the product of institutions, schools, universities, professional cultures, entire systems of national investment that had been compounding for generations before any of them sat down at a terminal.

Landes spent his career studying exactly this. Not technology — culture. The deep, invisible, accumulated habits of mind that determine whether a society converts a powerful tool into broad prosperity or into concentrated extraction. He looked at the Industrial Revolution and asked the question everyone else was too polite to ask: Why did some nations thrive while others, with access to identical technology, stagnated? His answer was blunt enough to make his colleagues uncomfortable and clear enough to outlast their objections.

Culture makes all the difference.

I needed that framework. *The Orange Pill* argues that AI is an amplifier, and the quality of what you feed it determines the quality of what comes out. Landes adds the layer beneath: the quality of what you feed the amplifier is shaped by the culture that raised you, educated you, rewarded certain habits and punished others. Culture is the amplifier of the amplifier.

This book applies Landes's five centuries of evidence to the question that keeps me awake: Which societies will build the dams that direct AI toward life, and which will be flooded by capability they lack the institutional infrastructure to direct? The answer is not about compute. It never was.

Edo Segal · Opus 4.6

About David Landes

1924–2013

David Landes (1924–2013) was an American economic historian whose work traced the deep cultural and institutional roots of global inequality. Born in New York City, he spent most of his career at Harvard University, where he held a joint appointment in economics and history. His major works include *The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present* (1969), a landmark study of European industrialization; *Revolution in Time: Clocks and the Making of the Modern World* (1983), which argued that the culture of precision fostered by clock-making laid essential groundwork for industrial civilization; and *The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor* (1998), a sweeping global history that placed culture — values, attitudes toward work and inquiry, tolerance, and institutional habits — at the center of economic development. Landes was celebrated for his narrative power and criticized for what some considered Eurocentrism, but his central argument — that identical technologies produce radically different outcomes depending on the cultures that receive them — has gained renewed relevance in the age of artificial intelligence.

Chapter 1: Culture as the Amplifier of the Amplifier

In 1500, China possessed every material advantage a civilization could want. Its population dwarfed Europe's. Its bureaucracy was the most sophisticated administrative apparatus on earth. Its engineers had invented the compass, gunpowder, paper, and movable type — four technologies that would, in European hands, reshape the planet. Chinese metallurgists were producing cast iron a thousand years before their European counterparts. Chinese ships were larger, more seaworthy, and more numerous than anything floating in the Mediterranean. By any objective measure of technological capability, institutional sophistication, or accumulated wealth, China in 1500 should have industrialized first.

It did not.

David Landes spent much of his career asking why. His answer, elaborated across *The Wealth and Poverty of Nations* and refined over decades of scholarly combat, was blunt enough to scandalize his colleagues and clear enough to outlast most of their objections: culture. Not geography. Not resources. Not the accidents of dynastic succession or the vagaries of climate. Culture — the deep, accumulated, often invisible habits of mind that determine whether a society encourages curiosity or punishes it, rewards initiative or suppresses it, distributes opportunity broadly or hoards it among a narrow elite.

The argument earned Landes charges of Eurocentrism, cultural determinism, and worse. He did not retreat. "If we learn anything from the history of economic development," he wrote in *The Wealth and Poverty of Nations*, "it is that culture makes all the difference." The sentence is characteristically unhedged. Landes did not say culture matters. He said culture makes all the difference. The emphasis was deliberate, and the provocation was the point, because Landes understood that polite qualifications would obscure the central insight: that two societies with identical access to the same technology will produce radically different outcomes depending on the values, attitudes, and institutional habits their citizens bring to that technology.

Five centuries after China's missed industrialization, the same pattern is unfolding again. Artificial intelligence is the most powerful general-purpose technology since the steam engine — arguably since writing itself. Its capabilities are, for the first time in the history of transformative technology, globally available almost simultaneously. The large language models that power Claude, ChatGPT, and their successors are not locked behind national borders or protected by geographical accident. A developer in Lagos can access the same model as an engineer in San Francisco. A student in Dhaka can prompt the same system as a researcher at MIT. The technology is, to a first approximation, uniform.

The cultures that receive it are not.

This asymmetry is the subject of this book. It is the argument that the history of economic development, read through Landes's framework, makes unavoidable: the nations that thrive in the age of artificial intelligence will not be those with the best models, the most compute, or the largest datasets. They will be those whose cultures produce citizens capable of directing AI wisely. The technology is the river. Culture determines whether the river irrigates or floods.

Segal's *Orange Pill* makes the individual version of this argument with considerable force. "AI is an amplifier," he writes in the Foreword, "and the most powerful one ever built. And an amplifier works with what it is given; it doesn't care what signal you feed it." Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real craft, and it carries that further than any tool in human history. The question, Segal argues, is whether you are worth amplifying.

Landes's framework adds the layer beneath. The individual signal — the quality of the question you bring to the machine, the judgment you exercise over its output, the care with which you direct its capability — is not formed in a vacuum. It is formed in a culture. The culture that raised you, educated you, rewarded certain habits of mind and punished others. Long before you sit down with Claude Code and describe the problem you want solved, your culture has already shaped the quality of your description, the sophistication of your judgment, and your willingness to reject the machine's first plausible answer in favor of a better one.

Culture is the amplifier of the amplifier.

Consider what this means in practice. A society that has spent generations investing in broad-based education — not elite education for a narrow class, but genuine, widespread cultivation of the capacity for critical thinking across the population — produces citizens who bring fundamentally different cognitive habits to AI than a society that has invested in rote memorization and obedience to authority. The first society's citizens will interrogate AI output. They will recognize when the machine has produced something plausible but wrong — what Segal, in Chapter 7 of *The Orange Pill*, calls "confident wrongness dressed in good prose." They will reject the smooth surface and dig for the structural flaw beneath it. They will, in short, exercise judgment.

The second society's citizens will accept. They will treat the machine's output as authoritative precisely because they have been trained to treat authoritative-sounding outputs as correct. The cultural habit of deference to expertise — which in many contexts is a reasonable and even admirable trait — becomes, in the age of AI, a catastrophic vulnerability. Because the machine sounds authoritative. It always sounds authoritative. It produces confident, well-structured, grammatically impeccable prose regardless of whether the underlying claims are true, partially true, or fabricated from statistical patterns that happen to resemble truth.

The culture of judgment — the habit of questioning, verifying, pushing back — is not a technical skill. It is a cultural competency. And like all cultural competencies, it is built over generations, not quarters.

Landes documented this dynamic across every major technological transition in modern history. The nations that led the Industrial Revolution were not those with the best raw materials or the most favorable geography. Britain had coal, but so did Belgium and parts of Germany. What Britain had that its competitors lacked, Landes argued, was a specific cultural configuration: a tradition of empirical inquiry that valued practical knowledge alongside theoretical learning; an institutional structure that protected intellectual property and rewarded innovation; a measure of social mobility, however imperfect, that allowed talented individuals from the middling classes to rise; and a tolerance for heterodox thinking that meant good ideas could survive even when they challenged established interests.

These cultural factors were not independent of each other. They reinforced one another in a virtuous cycle that Landes traced with meticulous historical specificity. The tradition of empirical inquiry produced inventors. The institutional protections gave inventors a reason to invest years in developing their ideas. The social mobility meant that a clockmaker's son could become an engineer, and an engineer's son could become a factory owner. The tolerance for dissent meant that when the established guilds tried to suppress labor-saving machinery — as they did, repeatedly, across Europe — the suppression failed in Britain more often than it succeeded, because the cultural and institutional ecosystem had enough redundancy, enough alternative harbors for innovation, that a good idea shut down in one place could find a home in another.

The nations that failed to industrialize, or that industrialized late and painfully, were not lacking in intelligence or resources. They were lacking in the cultural infrastructure that converts raw technological capability into broad-based prosperity. Spain had the wealth of the Americas and squandered it on consumption and warfare rather than productive investment. The Ottoman Empire had access to the printing press for centuries and chose not to adopt it, because the cultural and political establishment perceived it as a threat to existing structures of authority. China, as noted, had every technological advantage and converted none of them into sustained industrial development, because the imperial bureaucracy valued stability over innovation and punished entrepreneurial risk-taking as a threat to social order.

In each case, the technology was available. The culture determined whether it was used.

The AI transition will reproduce this pattern with two critical differences. The first is speed. The Industrial Revolution unfolded over a century. The AI transition is unfolding over years. The cultural advantages and disadvantages that took generations to manifest during industrialization will manifest in a decade during the AI transition, because the technology's capability is advancing at a pace that compresses every historical timeline. Societies that lack the cultural infrastructure to direct AI wisely will not have a century to build it. They will have, perhaps, a generation. Perhaps less.

The second difference is visibility. During the Industrial Revolution, the cultural factors that determined national success were invisible to most contemporaries. Adam Smith could see the division of labor; he could not see the cultural attitudes toward innovation that made the division of labor possible in some societies and impossible in others. Landes, writing two centuries later, had the advantage of hindsight. In the AI transition, the cultural factors are becoming visible in real time, because AI makes the consequences of cultural habits immediate and measurable. A society that produces citizens who accept AI output uncritically will see the consequences not in a century but in a product cycle: lower-quality decisions, shallower analysis, a gradual erosion of the institutional knowledge that took decades to build.

Segal observes in *The Orange Pill* that "the quality of your choices is the only thing that separates building from flooding." Landes's historical analysis reveals the infrastructure beneath that observation. The quality of individual choices is shaped by cultural values — what a society teaches its children to value, what it rewards in its professional institutions, what it demands of its leaders. A culture that values speed over accuracy will produce citizens who use AI to generate fast, plausible, and often wrong outputs. A culture that values accuracy over speed will produce citizens who use AI to generate slower, verified, and more reliable outputs. Both cultures have access to the same technology. The technology does not determine the outcome. The culture does.

This book applies Landes's framework to the question that *The Orange Pill* raises but does not fully answer: the question of nations. Segal's analysis operates primarily at two scales — the individual builder navigating the AI transition, and the species grappling with the meaning of machine intelligence. The scale between them, the nation, is where the most consequential decisions about AI will actually be made. Not by individuals choosing whether to adopt Claude Code, and not by the species evolving over millennia, but by nations choosing how to educate their citizens, regulate their industries, distribute the gains from technological transition, and build or neglect the institutional infrastructure that determines whether AI becomes a force for broad-based flourishing or an instrument of elite extraction.

The history of economic development offers a clear, empirically grounded, and frequently uncomfortable answer to which nations will make these choices well. The answer is not primarily about resources, geography, or technological capability. It is about culture — the accumulated habits of mind that determine whether a society directs its tools toward life or allows them to become instruments of its own stagnation.

Landes was accused of cultural determinism. The charge misses the point. Culture is not destiny. Cultures change, sometimes rapidly, under the pressure of crisis or the influence of visionary leadership. Japan's Meiji Restoration remade an entire civilization's relationship to technology in a single generation. South Korea's transformation from one of the world's poorest nations to one of its most technologically advanced occurred within living memory. These transformations were not accidents. They were acts of cultural will — deliberate, sustained, often painful decisions to cultivate the habits of mind that technological civilization requires.

The question for the AI age is which nations will undertake equivalent transformations, and whether they will do so in time. The technology does not wait. The amplifier is already amplifying. And what it amplifies is not the nation's technical infrastructure or its natural resources or its military capacity. It amplifies the culture — the deep, invisible, accumulated habits that determine whether a society's citizens can think critically, question authority, tolerate uncertainty, and direct extraordinary capability toward purposes that serve the common good.

Culture makes all the difference. Landes said it about the Industrial Revolution. It is truer now than it has ever been.

---

Chapter 2: The European Miracle and Its AI Parallel

The European miracle was not European superiority. It was European fragility.

This is the counterintuitive core of Landes's argument in *The Unbound Prometheus*, and it is the insight that matters most for understanding the AI transition. Europe did not rise to global economic dominance because it was stronger than its competitors. It rose because it was weaker — weaker in the specific sense that mattered: no single authority was strong enough to suppress innovation across the entire continent.

China had the emperor. The Ottoman Empire had the sultan. Both possessed the centralized authority to make decisions that applied uniformly across vast territories. When the Chinese court ended the great maritime expeditions in the 1430s — letting the treasure fleet rot at anchor and, in later edicts, forbidding the construction of oceangoing vessels with more than two masts — the decision stuck. There was no competing harbor, no rival court, no alternative jurisdiction where a shipbuilder could relocate and continue his work. The decision was final because the authority was total.

Europe had no such authority. It had, instead, a patchwork of competing kingdoms, duchies, city-states, and ecclesiastical territories, each jealous of its prerogatives, each competing with its neighbors for trade, talent, and military advantage. When one jurisdiction suppressed an innovation — as the Church suppressed certain lines of scientific inquiry, as various guilds suppressed labor-saving machinery, as individual monarchs suppressed religious minorities who happened to be commercially productive — the innovators moved. They crossed a border. They found a rival court that valued their skills precisely because a neighboring court had been foolish enough to expel them.

The Huguenots, driven from France by the Revocation of the Edict of Nantes in 1685, carried their craft expertise to England, the Netherlands, Prussia, and Switzerland, enriching every receiving society and impoverishing the one that expelled them. The Jews, expelled from Spain in 1492, carried their commercial networks and financial sophistication to the Ottoman Empire, to the Netherlands, to wherever tolerance could be found. In each case, the loss was France's or Spain's. The gain was distributed across the competitors who had the cultural capacity to welcome what their rivals had been foolish enough to reject.

Call this pattern the European advantage of fragmentation — a phrase that sounds paradoxical until you understand its mechanism. The advantage was not weakness in the usual sense. It was the absence of the specific kind of strength that could kill innovation: centralized authority with no external check. Europe was too fragmented, too competitive, too politically chaotic to achieve the lethal efficiency of a single decision that shut down an entire domain of inquiry. And that very fragility — that inability to coordinate suppression — was the condition that allowed innovation to survive, relocate, and ultimately flourish.

The parallel to the AI transition is structural. The question is not which nation will build the most powerful AI model. That question, while commercially important, is historically secondary. The question is which nations will create the conditions — cultural, institutional, political — in which AI capability can be directed toward broad-based prosperity rather than narrow extraction.

The answer, if Landes's framework holds, is: the nations that cannot suppress experimentation even when their elites want to. The nations whose political, economic, and cultural fragmentation ensures that good ideas can always find a harbor. The nations where no single authority — no government ministry, no dominant corporation, no cultural orthodoxy — is powerful enough to determine how an entire population uses a transformative technology.

This is not a prescription for chaos. Landes was not an anarchist. He understood that institutions matter enormously, that rule of law is indispensable, that property rights and contract enforcement are preconditions for sustained investment. But he also understood, with the historian's hard-won clarity, that the institutions that promote innovation are not the same as the institutions that promote order, and that the societies that thrive in the long run are those that find ways to maintain both without allowing either to dominate.

*The Orange Pill* describes a five-stage pattern that Segal identifies in every major technological transition: threshold, exhilaration, resistance, adaptation, expansion. Landes's European miracle illuminates why the pattern does not always complete. Threshold and exhilaration are nearly universal — every society that encounters a transformative technology experiences the initial shock and the initial excitement. Resistance is nearly universal too, because every transformative technology threatens established interests. The critical stages are adaptation and expansion, and these are the stages where culture determines outcomes.

Adaptation requires institutional innovation: the creation of new rules, new norms, new ways of organizing economic activity that accommodate the new technology without allowing it to destroy the social fabric. The patent system, the limited liability corporation, the public education system, the labor regulations that eventually tamed the worst abuses of industrialization — these were all institutional innovations that emerged during the adaptation phase of the Industrial Revolution. They did not emerge spontaneously. They emerged because the political fragmentation of Europe meant that at least some jurisdictions were willing to experiment, and the competitive pressure between jurisdictions meant that successful experiments were copied.

Expansion — the stage where the gains from a technological transition are distributed broadly enough to produce sustained, society-wide improvement — requires something even harder: the cultural willingness to share. To invest in education for the many rather than the few. To build infrastructure that connects the periphery to the center. To create institutions that distribute opportunity rather than concentrating it. Landes documented, with relentless specificity, the difference between nations that achieved this and nations that did not. Britain's Industrial Revolution produced a century of wrenching disruption before the gains were broadly distributed — but the distribution eventually happened, because the political culture produced labor movements, reform legislation, and public investment in education and infrastructure that redirected the gains from the factory owners to the broader population. Spain's access to New World wealth produced no equivalent distribution, because the political culture concentrated gains among the landed aristocracy and the Church, and the institutional structure provided no mechanism for broader investment.

The AI transition faces the same fork. The technology is arriving into a global landscape of radically different cultural and institutional configurations. Some nations have robust traditions of broad-based education, institutional trust, political competition, and tolerance for dissent. Others have narrow educational systems that serve elites, institutional structures that concentrate gains, political systems that suppress experimentation, and cultural norms that punish heterodox thinking.

Landes would observe, with characteristic bluntness, that the nations in the first category are better positioned for the AI transition than those in the second, and that no amount of compute capacity or model sophistication will compensate for the cultural deficit. China in 2026, like China in 1500, possesses extraordinary technological capability. Its AI models are competitive with the best in the world. Its investment in AI infrastructure is enormous. Its engineering talent is world-class. And its political system is centralized enough to suppress lines of inquiry, applications, and uses of AI that the central authority deems threatening — which is precisely the configuration that, in Landes's analysis, has historically prevented sustained, broad-based innovation.

The uncomfortable implication is not that authoritarian systems cannot innovate. They can, and China's AI achievements demonstrate this conclusively. The implication is that authoritarian systems innovate within the boundaries set by authority, and those boundaries are determined not by what would produce the broadest prosperity but by what serves the interests of the authority that sets them. Innovation within boundaries is not the same as innovation without them. The difference, over time, is the difference between the European miracle and the Chinese stagnation that Landes documented — not a difference in raw capability, but a difference in the cultural and political conditions that determine how capability is directed.

The European miracle was produced by fragility, not strength. By the inability of any single authority to kill a good idea across an entire continent. By the competitive pressure between jurisdictions that rewarded tolerance and punished intolerance. By the cultural habit of questioning that survived because there was always somewhere for the questioner to go.

The AI miracle — if it is to be a miracle for humanity rather than for a narrow class of shareholders and autocrats — will require equivalent conditions. Not identical conditions; history does not repeat so neatly. But structurally equivalent ones: political and economic fragmentation that prevents any single authority from determining how AI is used; competitive pressure that rewards broad-based investment in human capability; cultural habits of questioning, dissent, and critical evaluation that ensure citizens engage with AI as directors rather than as subjects.

The nations that possess these conditions are not guaranteed success. The European miracle was not inevitable, and neither is the AI miracle. But the nations that lack them — the nations where centralized authority can suppress experimentation, where narrow elites capture the gains from technological transition, where the culture punishes questioning and rewards obedience — face a historical headwind that no amount of technical investment can overcome.

The miracle is not a gift. It is a cultural achievement. And the cultures that have earned it will know, because they will have built the institutions that make it possible for a thousand small experiments to run simultaneously, for good ideas to find harbors when they are rejected by the powerful, and for the gains from extraordinary capability to flow broadly rather than narrowly.

That is the European lesson. Not that Europe was superior. That fragility, tolerance, and distributed experimentation produce outcomes that centralized strength cannot match. The AI age will test whether that lesson still holds. Landes's wager — and the wager of this book — is that it does.

---

Chapter 3: Why Clock-Making Mattered: The Culture of Precision and the Culture of Judgment

In 1370, the city of Paris installed a mechanical clock on the tower of the Palais de la Cité. For the first time, every resident of the city could hear the same hour struck at the same moment. The implications were not merely practical. They were civilizational.

Before mechanical clocks, time was local, approximate, and negotiable. A workday began at dawn and ended at dusk, which meant that a workday in June was fundamentally different from a workday in December. Appointments were kept loosely. Commerce operated on the elastic schedule of human agreement rather than the rigid schedule of mechanical precision. The clock changed this. It imposed a standard. It made time measurable, comparable, and — critically — contractual. You could now promise to deliver goods at a specific hour and be held accountable if you did not. You could synchronize the activities of dozens of workers in a single workshop. You could coordinate the arrival of ships, the departure of coaches, the opening of markets.

But Landes, in *Revolution in Time*, argued that the clock's deepest impact was not on commerce or logistics. It was on the culture of the people who made clocks.

Clock-making was, for centuries, the most demanding precision craft in Europe. A clock that gained or lost more than a few minutes per day was useless — worse than useless, because it created false confidence in measurements that were wrong. The clockmaker therefore had to develop, and pass down through apprenticeship and guild tradition, a set of cognitive habits that went far beyond the mechanical skill of cutting gears and winding springs. The clockmaker had to think in tolerances. He had to understand that the difference between a functional instrument and a decorative failure lay in margins so small they were invisible to the naked eye. He had to cultivate the discipline of measurement, the habit of verification, and the intellectual humility to accept that his hands, however skilled, would produce errors that only careful testing could reveal.

This culture of precision — the habit of measuring, the discipline of accuracy, the institutional infrastructure of standards and calibration — spilled over into every other domain of manufacturing and engineering. The toolmakers who supplied the clockmakers developed techniques that were later applied to firearms, scientific instruments, textile machinery, and eventually the machine tools that made mass production possible. The cognitive habits of the clockmaker — precision, verification, tolerance for tedium in pursuit of accuracy — became the cognitive habits of an entire industrial civilization.

Landes's insight was not that clocks caused the Industrial Revolution. It was that the culture of precision that clock-making required and cultivated was a necessary precondition for industrialization, and that societies which did not develop an analogous culture of precision — either because they lacked the craft tradition or because their institutional structure did not reward it — could not industrialize effectively even when they had access to the same technology.

The AI transition requires an analogous cultural competence. Not a culture of mechanical precision — the machines handle that now with a reliability no human can match. What the AI age requires is a culture of judgment: the habit of questioning output, the discipline of verifying claims, the institutional infrastructure of critical evaluation that determines whether a society's citizens use AI as a tool for genuine understanding or as a machine for producing plausible-sounding nonsense at unprecedented scale.

The parallel is exact. A clock that is slightly wrong is worse than no clock at all, because it creates false confidence in false information. An AI system that produces confident, well-structured, grammatically impeccable output that happens to be factually incorrect or analytically shallow is worse than no AI at all, for precisely the same reason. The person who checks the clock and acts on its reading without verifying it against other sources is the person who accepts AI output without questioning it. Both are betrayed not by the tool's malice but by the tool's smooth, authoritative surface — and by their own insufficient culture of verification.

Segal captures this dynamic precisely in The Orange Pill when he describes catching Claude in a fabrication: a passage that attributed to Gilles Deleuze a concept that bore almost no relationship to what Deleuze actually wrote. "The passage worked rhetorically," Segal notes. "It sounded right. It felt like insight. But the philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze." The smooth surface concealed the structural flaw. Only the discipline of verification — only the culture of judgment — could catch it.

This is not a technical problem. Better models will reduce hallucinations, and retrieval-augmented generation will ground outputs more firmly in verifiable sources. But the fundamental challenge is not technical. It is cultural. Even a model that hallucinates rarely still hallucinates sometimes, and the cost of a single undetected hallucination in a legal brief, a medical diagnosis, a financial analysis, or a policy recommendation can be enormous. The question is not whether the model will eventually become perfectly reliable. The question is whether the culture surrounding the model produces users who verify, question, and exercise independent judgment — or users who accept the smooth surface and move on.

Landes's clockmakers knew, from decades of craft experience, that precision is a discipline, not a feature. A clock does not become precise because its maker wants it to be precise. It becomes precise because its maker has internalized the habits of measurement, calibration, and verification that precision requires — habits that are tedious, time-consuming, and fundamentally at odds with the desire for speed and ease. The clockmaker who rushed produced clocks that looked beautiful and ran wrong. The clockmaker who submitted to the discipline of verification produced clocks that looked ordinary and ran true.

The contemporary equivalent is the knowledge worker who uses AI to produce a report in an hour that would previously have taken a week. The report looks polished. The structure is clean. The citations appear to be in order. The analysis reads well. But has the worker verified the citations? Has she tested the analytical logic against her own understanding of the domain? Has she caught the places where the model's statistical confidence has outrun its factual grounding?

If she has, the report is probably better than what she would have produced alone — more comprehensive, better structured, drawing on a wider range of sources. The tool has amplified her competence.

If she has not, the report is a beautiful clock that runs wrong. And she will not know it until someone downstream acts on the wrong information and the consequences propagate.

The culture of judgment, like the culture of precision, must be cultivated deliberately. It does not emerge spontaneously from contact with the tool. If anything, the tool's smooth surface militates against it, because the effort of verification is precisely the kind of friction that AI is designed to eliminate. Verifying a citation takes time. Testing an analytical claim against your own understanding requires the prior investment of having developed understanding in the first place. Questioning the machine's output when the machine's output is articulate, confident, and structurally sound requires a specific kind of intellectual courage — the courage to trust your own judgment over the machine's authority.

This courage is cultural. Some societies cultivate it systematically: through educational systems that reward questioning over rote performance, through professional norms that expect practitioners to verify before they rely, through institutional structures that protect the person who says "wait, this doesn't look right" from the social cost of slowing things down. Other societies suppress it: through educational systems that reward obedience, through professional norms that equate speed with competence, through institutional structures that punish the person who questions the output and thereby delays the project.

Landes's historical research suggests that the societies in the first category will outperform those in the second over any meaningful time horizon, because the cost of false precision — of beautiful clocks that run wrong, of polished reports that contain fabrications, of confident analyses built on statistical patterns rather than verified facts — compounds over time. A single unverified output may cause no visible harm. A thousand unverified outputs, accumulated across an organization over months, produce a decision-making environment in which no one is quite sure what is true, because the institutional habit of verification has atrophied and the organizational memory has been contaminated by plausible-sounding errors that no one caught.

The culture of judgment requires investment in the specific cognitive habits that AI makes easy to skip: reading primary sources rather than accepting summaries, testing claims against personal knowledge rather than assuming the machine knows better, maintaining the slow, tedious, unglamorous practice of verification that the clock-making tradition elevated to an art form.

Societies that make this investment — in education, in professional training, in organizational norms that protect and reward the practice of judgment — will produce citizens who use AI as the clockmakers used their tools: with respect for precision, with the discipline of verification, and with the understanding that a tool is only as good as the culture that surrounds it.

Societies that fail to make this investment will produce citizens who use AI the way a tourist uses a phrase book — confidently enough to order dinner, fluently enough to sound competent, and without the underlying comprehension to know when the translation has gone terribly wrong.

The clockmakers of Europe did not know they were building the cultural foundation of industrial civilization. They were simply trying to make clocks that kept accurate time. But the habits they cultivated — precision, verification, intellectual humility, submission to the discipline of measurement — proved to be the habits that industrial civilization required. The AI age will have its own clockmakers: the educators, the institutional designers, the organizational leaders who cultivate the culture of judgment not because they foresee its civilizational implications but because they understand, at the craft level, that the difference between a tool that serves you and a tool that misleads you is nothing more than the discipline with which you use it.

Landes understood that craft traditions are not merely economic phenomena. They are cultural ones. The craft of judgment — the habit of verification in an age of plausible machines — is the clockmaking of the twenty-first century. The societies that master it will set the standard. The rest will buy their clocks from abroad and wonder why they keep losing time.

---

Chapter 4: Tolerance, Curiosity, and the Innovation Ecosystem

In 1685, Louis XIV of France made one of the most consequential economic decisions in European history. He revoked the Edict of Nantes, which for eighty-seven years had guaranteed French Protestants — the Huguenots — the right to practice their faith and pursue their trades without persecution. The revocation was a triumph of religious uniformity. It was also, as Landes documented with unsparing detail, an act of economic self-mutilation.

The Huguenots were disproportionately represented in France's most skilled and commercially productive trades: silk weaving, watchmaking, glassblowing, silversmithing, printing, and finance. They were not merely skilled workers. They were nodes in networks of knowledge, trade, and trust that extended across Europe and beyond. When the revocation came, between two hundred thousand and one million Huguenots — the estimates vary widely, but even the lowest figure represents a staggering loss — left France. They went to England, the Netherlands, Brandenburg-Prussia, Switzerland, the Cape Colony, and the American colonies.

Every receiving society was enriched. France was diminished. The damage was not merely the loss of skilled labor, which could in theory be replaced over time. The damage was the loss of the networks — the relationships of trust, the channels of information flow, the cross-border connections that had made the Huguenots commercially productive in the first place. Networks do not regenerate when you re-admit the people you expelled. The trust has been broken. The connections have been rewired. The knowledge has found new homes and new allegiances.

Landes told this story not as an isolated anecdote but as an instance of a structural pattern: the connection between tolerance and innovation is causal, not coincidental. Societies that tolerate religious, ethnic, and intellectual diversity produce more innovation than societies that enforce conformity. The mechanism is not sentimental. It is not that tolerance is morally good (though Landes, characteristically, did not shy from the moral argument). It is that innovation requires the collision of different perspectives, and collision requires proximity, and proximity requires tolerance.

An economy composed entirely of people who think the same way, who were educated in the same tradition, who share the same assumptions about what is possible and what is proper, will produce incremental improvements within the existing paradigm. It will not produce the paradigm-breaking insight that comes from the collision of incommensurable worldviews — the moment when a silk weaver's understanding of material behavior meets a watchmaker's understanding of precision mechanics and produces something neither could have imagined alone.

This is precisely the mechanism that Segal describes in The Orange Pill when he recounts the afternoon on the Princeton campus with a neuroscientist and a filmmaker. The neuroscientist sees consciousness as a computational problem. The filmmaker sees intelligence as the cut between images — the meaning that lives in the space between perspectives. The builder sees both and tries to construct something from the collision. "Three fishbowls cracked against each other on a stone path and let the water mingle," Segal writes. The mingling is where the insight lives. But the mingling requires that the fishbowls be in the same room — that the neuroscientist and the filmmaker and the builder exist in a culture that brings them together rather than sorting them into separate, non-communicating domains.

Tolerance is the precondition for that proximity. Not tolerance in the weak sense of reluctant coexistence, but tolerance in the strong sense that Landes meant: the active valuing of cognitive diversity, the institutional willingness to welcome people whose perspectives challenge the majority, and the cultural confidence to allow disagreement without perceiving it as a threat to social order.

The AI age intensifies the connection between tolerance and innovation to a degree that even Landes might not have anticipated. The reason lies in the architecture of the technology itself.

Large language models are trained on enormous corpora of text drawn from across human knowledge. They are, in a computational sense, the most comprehensive repositories of diverse perspective ever assembled. When Segal describes Claude making a connection between evolutionary biology and technology adoption curves — the insight about punctuated equilibrium that he credits with opening the central argument of The Orange Pill — the connection was possible because the model had internalized perspectives from both domains and could find the structural parallel between them. That cross-domain connection is the computational equivalent of the Huguenot silk weaver meeting the Swiss watchmaker. It is the collision of different traditions of knowledge producing something neither tradition contained alone.

But the model's capacity for cross-domain connection is only as valuable as the human's capacity to recognize, evaluate, and direct it. And that capacity — the ability to see that a connection between evolutionary biology and technology adoption is genuinely illuminating rather than merely superficially plausible — is itself a product of cognitive diversity. A person educated in a single tradition, trained to think within a single framework, will use AI to reinforce what they already believe. They will prompt the model within their existing frame of reference and receive outputs that confirm their existing assumptions, because the model is sophisticated enough to produce confirmation for virtually any perspective when directed to do so.

A person educated across traditions — someone who has been exposed to multiple, conflicting frameworks of understanding — will use AI differently. They will prompt the model with genuine questions rather than disguised confirmations. They will recognize when the model has produced a genuinely novel connection and when it has produced a superficially clever restatement of what they already thought. They will, in other words, exercise the kind of judgment that makes the tool genuinely useful rather than merely efficient.

This is the innovation ecosystem that tolerance produces. Not a single insight but a self-reinforcing cycle: diverse perspectives produce novel questions; novel questions produce novel AI outputs; novel outputs stimulate further diverse thinking; and the cycle accelerates. The society that cultivates this ecosystem — that invests in the cognitive diversity of its population through broad-based education in multiple traditions of inquiry, through institutional structures that bring different disciplines into contact, through cultural norms that value the outsider's perspective — will use AI with a creativity and depth that homogeneous societies cannot match.

The historical evidence is overwhelming and consistent across centuries. The Dutch Republic in the seventeenth century — the most tolerant society in Europe — was also the most commercially innovative. It welcomed Sephardic Jews expelled from Spain and Portugal, Huguenots fleeing France, dissenting Protestants from across northern Europe, and Catholic merchants who found the Republic's relatively relaxed religious climate more conducive to business than the stricter orthodoxies of their home countries. The result was not merely a diverse population but a diverse knowledge network: a commercial infrastructure in which information flowed across religious and ethnic boundaries with a speed and reliability that more homogeneous societies could not match.

England's rise to industrial supremacy in the eighteenth and nineteenth centuries was, in significant part, a tolerance story. The Toleration Act of 1689 did not create full religious equality — Dissenters and Catholics remained excluded from many institutions — but it created enough space for nonconformist communities to flourish commercially and intellectually. The Quakers, the Unitarians, the Baptists, and other Dissenting communities produced a disproportionate share of England's industrial innovators, not because dissent magically conferred entrepreneurial talent but because the culture of dissent cultivated certain habits of mind — independent inquiry, resistance to received authority, willingness to experiment — that were precisely the habits industrialization required.

Landes was careful to note that tolerance alone was insufficient. Tolerance without institutional structure produces cosmopolitan chaos — a diverse population with no mechanism for converting diversity into productive collaboration. The Dutch Republic succeeded not only because it was tolerant but because it built institutions — the Dutch East India Company, the Amsterdam Exchange Bank, the system of commercial law that protected property rights and enforced contracts — that channeled diversity into economic productivity. Tolerance was the necessary condition. Institutional structure was what made it sufficient.

The implication for the AI age is direct. National AI strategies that focus exclusively on technical capability — building larger models, accumulating more compute, training more engineers — are addressing the necessary condition while neglecting the sufficient one. The nations that will extract the greatest value from AI will be those that combine technical capability with cognitive diversity: educational systems that expose students to multiple traditions of inquiry, professional cultures that bring different disciplines into genuine contact, and organizational structures that reward the person who makes the unexpected connection rather than the person who confirms the expected one.

Societies that enforce intellectual conformity — whether through political censorship, cultural pressure, educational standardization, or the subtler mechanisms of algorithmic filtering that show every citizen the same curated version of reality — will use AI for confirmation. The model will produce what the user expects, because the user will prompt within the boundaries of what they have been trained to expect, and the model is sophisticated enough to deliver confirmation with the appearance of independent analysis. The result will be an echo chamber of unprecedented efficiency: a society in which every citizen has access to a machine that can produce articulate, well-structured, persuasive arguments for whatever that citizen already believes.

Societies that cultivate intellectual diversity will use AI for exploration. The model will produce what the user does not expect, because the user will prompt with genuine questions — questions whose answers they do not already know — and will have the cognitive training to recognize when the model's response represents a genuinely novel connection rather than a superficially plausible fabrication.

The difference, over time, is the difference between the Dutch Republic and the Spain of Philip II. Both were wealthy. Both had access to the same technologies. Both were staffed by intelligent, capable people. One built an innovation ecosystem that produced sustained, broad-based prosperity for centuries. The other consumed its wealth, expelled its most productive citizens, and stagnated.

Louis XIV thought he was purifying France. He was bleeding it. The nations that purify their intellectual culture in the AI age — that enforce conformity of thought, that suppress dissent, that filter information to ensure their citizens encounter only approved perspectives — will bleed themselves with the same quiet efficiency. And the nations that welcome the cognitive refugee, the heterodox thinker, the person whose perspective challenges the comfortable consensus, will be enriched by exactly the same mechanism that enriched Amsterdam, London, and Berlin in the centuries after 1685.

The mechanism has not changed. The stakes have grown. The amplifier makes every cultural habit more consequential, every institutional choice more far-reaching, every act of tolerance or intolerance more immediately measurable in its economic and intellectual effects. Tolerance is not a luxury of prosperous societies. Landes's history demonstrates the reverse: tolerance is the precondition for the prosperity that allows a society to afford its other ambitions. In the age of AI, a society's capacity for intellectual diversity is not a nice-to-have supplement to its technical infrastructure. It is the cultural foundation on which the value of that technical infrastructure depends.

---

Chapter 5: The Role of Education in National AI Capability

Defeated and humiliated by Napoleon at Jena in 1806, Prussia made a decision that would prove more consequential than any battle. It built schools.

Not military academies for the officer class. Not finishing schools for the aristocracy. Schools for everyone. The Prussian educational reforms of 1807–1813, driven by Wilhelm von Humboldt and implemented with the systematic rigor for which Prussian bureaucracy was both famous and feared, created the first truly universal public education system in European history. Every child, regardless of birth, would learn to read, write, calculate, and — critically — think within a structured framework of inquiry that valued both discipline and independent reasoning.

The results took a generation to manifest. By the 1840s, Prussian industry was catching up with Britain's. By the 1870s, Germany had surpassed Britain in chemicals, electrical engineering, and the applied sciences. By 1900, the German research university had become the global model for higher education, and German was the language of science. The nation that Napoleon had dismantled in a single campaign had rebuilt itself into the most technologically dynamic economy in Europe — not through military conquest but through the patient, unglamorous, compound investment in the cognitive capacity of its population.

Landes was unequivocal about the lesson. National investment in education, he argued across multiple works, was the single most important determinant of long-term economic performance. Not resources. Not geography. Not even institutional quality in the narrow sense of legal and political structures. Education — the deliberate, sustained, broad-based cultivation of the capacity for inquiry, analysis, and judgment across the population — was the variable that explained more of the variance in national prosperity than any other.

The claim was controversial in Landes's time. It is becoming empirically undeniable in the age of AI.

The logic runs as follows. When AI reduces the cost of execution toward zero — when the act of writing code, drafting documents, generating analyses, and producing artifacts that previously required years of specialized training can be accomplished through natural language conversation with a machine — the economic value of execution declines. What rises in value is everything that execution cannot replace: the capacity to formulate the right question, to evaluate whether the machine's output serves the purpose, to judge whether the thing that has been built deserves to exist. These capacities are not technical skills. They are educational outcomes. They are the product of years of structured exposure to multiple domains of inquiry, the habit of questioning rather than accepting, the discipline of verification, and the intellectual confidence to reject a plausible answer in favor of a true one.

Segal arrives at this conclusion from the builder's perspective in The Orange Pill. "Do not teach your child to code," he writes in Chapter 18. "AI will do that. Teach them to ask questions. Teach them to be curious about their curiosity. Teach them to sit with uncertainty long enough for genuine learning to take root." The prescription is sound. But Landes's framework reveals the structural challenge that individual prescriptions cannot address: the capacity to ask good questions is not distributed randomly across populations. It is distributed along the lines of educational investment, which is to say along the lines of national policy, cultural priority, and institutional design.

The global distribution of educational quality in 2026 is, by any measure, radically uneven. The Programme for International Student Assessment, which measures the reading, mathematics, and science capabilities of fifteen-year-olds across participating nations, reveals not just differences in average performance but differences in the distribution of performance within nations. Some countries — Singapore, Finland, South Korea, Japan, Canada — produce relatively narrow distributions: most students perform at a high level, and the gap between the best and worst performers is comparatively small. Other countries — including, notably, the United States — produce wide distributions: extraordinary performance at the top and devastating underperformance at the bottom, often correlated with race, class, and geography.

In the pre-AI economy, this distribution mattered for the obvious reasons: it determined who could access skilled employment, who could participate in the knowledge economy, who could contribute to national productivity. In the AI economy, the distribution matters for a different and more consequential reason: it determines who can direct AI wisely and who will be directed by it.

A student who has spent twelve years in an educational system that rewards questioning, that develops the capacity for critical evaluation, that exposes her to multiple frameworks of inquiry and teaches her to navigate the tensions between them — that student brings to AI a cognitive architecture that allows her to use the tool as an amplifier of genuine capability. She can formulate questions that elicit useful responses. She can evaluate the machine's output against her own understanding. She can recognize when the smooth surface of AI-generated text conceals a structural flaw. She can, in Landes's terms, exercise judgment.

A student who has spent twelve years in an educational system that rewards memorization, that teaches to standardized tests, that treats knowledge as a fixed body of facts to be absorbed rather than a living practice of inquiry to be developed — that student brings to AI a cognitive architecture that makes him vulnerable to the tool's most dangerous failure mode. He will accept the first plausible output. He will mistake confidence for accuracy. He will not verify, because verification requires a baseline of independent knowledge that his education did not provide. He will use AI the way a person who cannot swim uses a boat: dependent on the vessel, unable to evaluate whether it is seaworthy, and catastrophically exposed when it springs a leak.

The gap between these two students is not a gap in intelligence. It is a gap in education, which is to say a gap in national investment, which is to say a gap in cultural priority. And the AI amplifier will widen this gap with a speed and efficiency that no previous technology could match. The well-educated student, amplified by AI, will produce work of extraordinary range and depth. The poorly educated student, amplified by the same AI, will produce work that looks impressive and collapses under scrutiny — confident wrongness at scale, the polished clock that runs wrong, extended across an entire career.

Landes would recognize this as a familiar pattern. The nations that invested in broad-based education before the Industrial Revolution captured the gains from industrialization. The nations that invested in narrow, elite education — educating the governing class while leaving the population illiterate — found that their elites could adopt industrial technology but their populations could not operate, maintain, or improve it. The result was dependency: dependent on imported expertise, dependent on foreign engineers, dependent on the continued goodwill of the nations that had invested in the broad-based human capital that industrialization required.

The AI parallel is almost too exact. The nations that have invested in broad-based educational quality — in systems that teach questioning, critical evaluation, and independent judgment to the widest possible population — will capture the gains from AI. Their citizens will direct the tool. Their organizations will use AI to solve problems that require genuine understanding, not just pattern matching. Their economies will produce the high-judgment, high-creativity output that AI makes possible but cannot generate without skilled human direction.

The nations that have invested in narrow, elite education will find that their elites adopt AI effectively while their populations are displaced by it. The result will be a new form of dependency — not on imported machinery, as in the nineteenth century, but on imported judgment. A nation whose educational system produces a thin layer of sophisticated AI users sitting atop a broad base of citizens who lack the cognitive tools to engage with AI critically is a nation that has automated its own stratification. The technology does not cause the stratification. The educational system does. The technology merely makes it faster, more visible, and harder to reverse.

The most urgent implication of Landes's educational argument is temporal. Educational reform is slow. The cognitive habits that the AI age requires — questioning, verification, cross-domain thinking, the intellectual confidence to reject a plausible machine output — cannot be developed in a semester or a training program. They are the product of years of cumulative investment: years of exposure to teachers who model inquiry, years of practice in evaluating claims, years of developing the independent knowledge base against which AI output can be tested.

The nations that began this investment decades ago — the Finlands, the Singapores, the South Koreas that built educational systems designed to cultivate broad-based critical thinking — are reaping the compound returns now. The nations that did not begin are facing a compounding deficit. Each year that passes without educational reform is a year in which the AI-amplified gap between well-educated and poorly educated populations widens, and a year in which the institutional inertia of existing educational systems grows harder to overcome.

Segal warns in The Orange Pill about "educational establishments staffed with calcified pedagogy." The warning echoes Landes's diagnosis of institutional sclerosis — the tendency of established institutions to resist reform even when the need for reform is existentially obvious. The university system that was designed to produce specialists for a pre-AI economy cannot be reformed by adding an AI module to the existing curriculum. The entire pedagogy — what is taught, how it is assessed, what cognitive habits are rewarded — requires reconstruction around the competencies that the AI age demands.

This reconstruction is not happening fast enough. It is not happening fast enough in any country, including the ones with the strongest educational traditions. But the distance between the current pace of reform and the required pace is not the same everywhere. The nations with flexible educational systems, with traditions of pedagogical experimentation, with cultural willingness to question established educational practices — these nations can close the gap. The nations with rigid systems, with entrenched bureaucracies, with cultural reverence for established educational forms regardless of their fitness for purpose — these nations will fall further behind with each passing year.

Landes was fond of reminding his readers that the effects of educational investment are measured in generations, not years. The generation of Prussians educated under Humboldt's reforms did not produce Germany's industrial dominance. Their children did. The investment was patient, sustained, and — crucially — broad-based. It did not produce a thin elite of highly educated specialists. It produced a population with the general cognitive capacity to participate in, contribute to, and benefit from technological transformation.

The AI age requires the same patience and the same breadth. And it requires them at a moment when every incentive — the speed of AI development, the quarterly pressure on corporate earnings, the political cycle that rewards visible action over invisible investment — militates against patience. The nations that resist the pressure for immediate, visible AI deployment in favor of the slower, less visible investment in the educational foundation that makes deployment productive will outperform those that do not.

This is not a comfortable prescription. It asks nations to invest in outcomes they will not see for fifteen years, in a political and economic environment that discounts anything beyond the next quarter. It asks educational institutions to dismantle and rebuild themselves around competencies they are not yet sure they understand. It asks parents to value their children's capacity for questioning over their children's capacity for performance on standardized measures of knowledge.

But the history of economic development, examined across five centuries with the rigor that Landes brought to the task, offers no alternative. The nations that invest in education thrive. The nations that neglect it stagnate. The technology has changed. The mechanism has not. The amplifier amplifies the signal, and the quality of the signal is determined, before anything else, by the education that shaped the mind that produces it.

Prussia was defeated at Jena. It responded by building schools. The nations that are being defeated by the AI transition — defeated not militarily but economically, culturally, cognitively — face the same choice. They can build schools. Or they can watch their children become consumers of a tool they lack the education to direct.

The choice, as Landes would say, makes all the difference.

---

Chapter 6: What the Industrial Revolution Teaches About AI Transitions

The power loom was invented in 1785. The Factory Act that prohibited employing children under nine in textile mills was passed in 1833. The gap between those two dates — forty-eight years — is the measure of institutional failure, the time a society took to build the first meaningful dam against a river that was already flooding.

Forty-eight years. Two full generations of children entered the mills before the political system produced a law that said they should not. And the 1833 Act was barely enforceable. Effective inspection did not arrive until 1844. Universal elementary education, which actually gave children somewhere to go instead of the factory, did not begin until 1870, and attendance was not made compulsory until 1880. The full institutional infrastructure that transformed the Industrial Revolution from a mechanism of exploitation into a mechanism of broad-based prosperity — labor regulation, universal education, public health, the right to organize — took the better part of a century to construct.

Landes documented this lag with the precision of a diagnostician recording symptoms. The lag was not accidental. It was structural. The people who benefited from the absence of regulation — the factory owners, the landed interests, the political class that overlapped substantially with both — had every incentive to delay institutional reform. The people who bore the cost of the absence — the workers, the children, the communities whose traditional economies were being dismantled — had almost no political voice. The institutions that eventually redirected the river's flow toward broad-based prosperity were built not by the willing generosity of the powerful but by decades of political struggle, social agitation, and the slow accretion of moral pressure that eventually made the status quo untenable.

This is the pattern that The Orange Pill describes from the individual perspective: threshold, exhilaration, resistance, adaptation, expansion. Landes's contribution is to reveal the political economy beneath each stage — who benefits, who bears costs, and what determines whether adaptation serves broad interests or narrow ones.

The technology was global in its potential. Any nation with access to coal, iron, and water power could, in principle, industrialize. But the institutional infrastructure that determined whether industrialization produced broadly shared prosperity or concentrated extraction was national — specific to the political culture, the distribution of power, the educational capacity, and the institutional traditions of each society.

Britain industrialized first, and the first generation of industrialization was brutal. The Luddites, whose story Segal tells with genuine sympathy in The Orange Pill, were correct about the immediate consequences: skilled wages collapsed, traditional communities disintegrated, and the gains flowed overwhelmingly to capital. But Britain also built the institutional infrastructure — imperfectly, slowly, after unconscionable delay — that eventually redirected those gains. The Factory Acts. The Public Health Acts. The expansion of the franchise. The legalization of trade unions. The establishment of compulsory education. Each of these institutions was a dam in the river, built not to stop industrialization but to ensure that the water irrigated rather than flooded.

Other nations that industrialized later had the advantage of observing Britain's mistakes — and the disadvantage of different political cultures that made different mistakes. Germany industrialized rapidly under state direction, building an educational infrastructure and a social insurance system that Britain lacked, but concentrating political power in ways that would eventually prove catastrophic. The United States industrialized with extraordinary energy and almost no institutional restraint, producing the Gilded Age — a period of spectacular innovation, spectacular wealth creation, and spectacular exploitation that was only partially tamed by the Progressive Era reforms of the early twentieth century. Japan industrialized under the deliberate cultural revolution of the Meiji Restoration, which systematically dismantled feudal institutions and replaced them with modern ones — a feat of institutional engineering that Landes regarded as one of the most remarkable in modern history.

Each of these national stories confirms the central insight: the technology does not determine the outcome. The institutions that surround the technology determine the outcome. And the institutions that produce broad-based prosperity are different from the institutions that produce concentrated wealth. The first require investment in education, regulation that distributes gains, political structures that give voice to those who bear costs, and cultural norms that value broad-based capability over narrow elite performance. The second require only the absence of these things — the default condition in which the powerful capture the gains and the powerless absorb the costs.

The AI transition is reproducing this pattern at compressed timescales. The technology arrived not over decades but over months. The exhilaration phase — Segal's "orange pill moment" — was measured in weeks. The resistance phase is well underway: regulatory proposals in the European Union, executive orders in the United States, cautionary reports from labor economists, and the quieter resistance of millions of workers who sense that the ground beneath their careers is shifting but lack the institutional vocabulary to articulate what is happening.

The critical phase — adaptation — is where the analogy becomes most instructive and most urgent.

The Industrial Revolution's adaptation phase lasted roughly a century. From the 1780s to the 1880s, British society constructed, through painful political struggle, the institutional infrastructure that converted raw industrial capability into broadly shared prosperity. A century is a long time. It encompassed generations of workers who bore the full cost of the transition without any of its institutional protections — who lived and died in the gap between the technology's arrival and the institutions that tamed it.

The AI transition does not have a century. The speed of capability improvement means that the gap between technology and institutions will widen, not narrow, with each passing year unless adaptation is accelerated. Landes's historical analysis suggests that this acceleration requires three things.

First, political voice for those who bear costs. The Industrial Revolution's institutional response was driven not by the wisdom of elites but by the political pressure of organized labor. The workers who demanded regulation, the reformers who documented abuses, the political movements that expanded the franchise — these were the forces that built the dams. The AI transition requires equivalent political voice for the workers, students, and communities that bear the costs of displacement. If these populations are excluded from the conversation — if the adaptation phase is managed entirely by the technology companies that profit from AI and the governments that depend on those companies for economic competitiveness — the adaptation will serve narrow interests. The dams will be built, but they will be built to protect the powerful rather than the vulnerable.

Second, institutional experimentation across jurisdictions. The European miracle, as the previous chapter argued, was produced by fragmentation: the inability of any single authority to suppress innovation or impose a single institutional model. The AI transition requires equivalent experimentation: different nations, states, and municipalities trying different regulatory approaches, educational models, and labor market interventions, with competitive pressure between jurisdictions ensuring that successful experiments are copied and unsuccessful ones abandoned. The nations that impose a single, top-down institutional response to AI will almost certainly get it wrong, because no single authority possesses the information needed to design the right institutions for a technology whose capabilities are changing monthly. The nations that allow a thousand small experiments will find the right institutional configurations faster, because the selection mechanism — the observable success or failure of different approaches — operates on experiments, not on theories.

Third, investment in the demand side. Segal observes in The Orange Pill that existing AI governance frameworks address the supply side — what AI companies may build and how they must disclose — while neglecting the demand side: the capacity of citizens to use AI wisely. Landes's historical analysis confirms that supply-side regulation is necessary but radically insufficient. The Factory Acts regulated what factory owners could do. But it was universal education — a demand-side investment in the cognitive capacity of the population — that ultimately converted industrialization from a mechanism of exploitation into a mechanism of empowerment. The citizens who could read, calculate, and think critically could negotiate better wages, evaluate better opportunities, and participate in the political process that shaped regulation. The citizens who could not were at the mercy of whatever institutional framework others designed.

The AI equivalent is clear. Supply-side regulation — rules about what AI companies can build, how models must be tested, what disclosures must be made — addresses the immediate risks. Demand-side investment — in education, in critical thinking, in the broad-based capacity for judgment that determines whether citizens direct AI or are directed by it — addresses the long-term trajectory.

The Industrial Revolution teaches one final, uncomfortable lesson. The nations that built the institutional infrastructure were not the nations whose elites were most enlightened. They were the nations whose political systems were most responsive to pressure from below. Britain's Factory Acts were not gifts from benevolent industrialists. They were concessions extracted through decades of labor organizing, public agitation, and political struggle. The institutions that tamed the Industrial Revolution were built not by consensus but by conflict — the productive conflict of a political system in which those who bore costs had enough voice to demand that the system account for their interests.

The AI transition will require equivalent conflict. Not the destructive conflict of machine-breaking, which the Luddites discovered was both emotionally satisfying and strategically catastrophic. The productive conflict of political voice: the demand, from the populations that bear the costs of AI displacement, that the institutional response account for their interests. The demand that educational systems be reformed. The demand that labor market transitions be managed. The demand that the extraordinary productivity gains from AI be distributed broadly enough to sustain the social contract that makes democratic governance possible.

Landes was not an optimist. He was a historian, and historians know that the gap between the technology's arrival and the institutions that tame it is where human suffering concentrates. The gap is measured in generations, not quarters. The question is not whether the gap can be eliminated — it cannot — but whether it can be narrowed. Whether the institutions can be built faster this time, informed by the historical record of what happened when they were built too slowly.

The power loom arrived in 1785. The first meaningful labor protection arrived in 1833. Forty-eight years. The AI threshold was crossed in 2025. The clock has started. The question is not whether the institutions will eventually catch up. The history of every major technological transition suggests they will. The question is how many people will live and work in the gap, and whether the societies that call themselves civilized will find that gap acceptable.

---

Chapter 7: The Invention of Invention and the Invention of Judgment

Something happened in Western Europe between the sixteenth and eighteenth centuries that had never happened before, anywhere, in the history of human civilization. Innovation became normal.

This sounds unremarkable only because we live downstream of the transformation it describes. For the vast majority of human history, innovation was accidental, sporadic, and frequently suppressed. A clever solution to a practical problem might be devised by an individual craftsman, adopted locally, and lost within a generation when the craftsman died without transmitting the knowledge. Entire civilizations — Rome, Song China, the Abbasid Caliphate — produced extraordinary bursts of technological creativity that flared and faded without producing the self-sustaining chain reaction that characterizes modern economic growth.

Landes called what happened in Europe "the invention of invention." Not any single discovery but the creation of a system for producing discoveries. The research university. The scientific journal. The patent system. The culture of priority — the social reward for being first to discover, which incentivized disclosure rather than secrecy. The apprenticeship networks that transmitted craft knowledge across generations while allowing each generation to build on the last. The correspondence networks that connected natural philosophers across national borders, creating what amounted to a distributed intelligence system centuries before the term existed.

Each component was individually unremarkable. Universities had existed elsewhere. Patronage of learning was common across civilizations. What was new was the system — the way the components reinforced each other to produce a self-sustaining cycle of inquiry, discovery, application, and further inquiry. The research university trained people to ask questions. The scientific journal made answers widely available. The patent system rewarded practical application. The culture of priority incentivized speed and disclosure. And the entire system was embedded in a competitive landscape of nation-states, each eager to capture the economic benefits of innovation, which meant that investment in the system was rewarded by measurable national advantage.

The result was compounding. Each generation inherited the discoveries of the previous generation, added its own, and passed the accumulated total forward. The rate of innovation accelerated because innovation was no longer dependent on individual genius operating in isolation. It was dependent on a system that produced innovators reliably, channeled their efforts productively, and preserved their results permanently.

Landes considered this the single most important development in economic history. Not the steam engine. Not the power loom. Not any particular invention but the invention of invention itself — the creation of the institutional and cultural infrastructure that made sustained innovation possible.

The AI age requires an equivalent institutional innovation. Not the invention of invention — that system exists and continues to function, however imperfectly. What the AI age requires is something that might be called the invention of judgment: the creation of institutional and cultural infrastructure that makes sustained, high-quality direction of AI capability possible.

The distinction parallels Landes's original insight. Before the invention of invention, isolated individuals produced brilliant discoveries that flared and faded. After the invention of invention, a system produced discoveries reliably and cumulatively. Before the invention of judgment — which is where we now stand — isolated individuals exercise excellent judgment in their use of AI: Segal's practice of rejecting Claude's smooth output, the senior engineer in Trivandrum whose decades of experience provided the evaluation layer that made AI-assisted coding genuinely productive. But these are individual practices, dependent on individual discipline, and they do not scale. They do not compound. They do not create the self-reinforcing cycle that converts individual wisdom into civilizational capacity.

The invention of judgment would be the creation of institutions, norms, and practices that produce good AI judgment reliably across populations — the way the research university produces researchers reliably, the way the patent system produces disclosed innovations reliably, the way the scientific journal produces verified knowledge reliably.

What would such institutions look like? Landes's historical method suggests looking at the components of the invention of invention and asking what their AI-age equivalents might be.

The research university trained people to ask questions. Its AI-age equivalent is an educational system redesigned around the competency that Segal identifies as the primary human skill in the AI age: the capacity to formulate questions that direct AI capability toward genuine understanding rather than plausible-sounding confirmation. This is not a minor curricular adjustment. It is a fundamental reorientation of pedagogy — from teaching students to produce correct answers to teaching students to produce good questions, where "good" means questions that open genuine inquiry rather than close it.

The scientific journal made answers available and subjected them to critical evaluation through peer review. Its AI-age equivalent is an institutional infrastructure for evaluating AI output — not just the output of AI models (which is what current AI governance frameworks address) but the output of AI-human collaboration. When a legal brief is drafted with AI assistance, what institutional mechanisms ensure that the brief's claims have been verified? When a medical diagnosis is informed by AI analysis, what institutional mechanisms ensure that the diagnosis reflects genuine clinical judgment rather than uncritical acceptance of machine output? When a policy recommendation is generated with AI support, what institutional mechanisms ensure that the recommendation has been stress-tested against alternatives rather than accepted because it sounded authoritative?

These questions do not yet have institutional answers. Individual practitioners exercise judgment. Individual organizations develop internal protocols. But there is no systemic, cross-institutional, self-reinforcing infrastructure for producing and maintaining AI judgment at scale. The invention of such infrastructure — the AI equivalent of peer review, professional standards, and institutional verification — is the institutional innovation that the AI age most urgently requires.

The patent system rewarded practical application by granting temporary monopolies in exchange for public disclosure. Its AI-age equivalent is harder to identify, because the economics of AI are fundamentally different from the economics of physical invention. An algorithm, unlike a machine, can be reproduced at essentially zero marginal cost. The patent system's logic — temporary monopoly as incentive for disclosure — breaks down when the "invention" is a prompt strategy, a fine-tuning approach, or a workflow design that can be communicated in a paragraph. The institutional innovation required here is not a new form of intellectual property protection but a new form of knowledge-sharing that incentivizes the disclosure of effective AI-direction practices. How do you create institutional incentives for people to share not just their AI outputs but their judgment processes — the specific practices of questioning, verification, and evaluation that made their outputs reliable?

The culture of priority — the social reward for being first — drove the speed of scientific discovery. The AI-age equivalent is a culture that rewards not speed of output but quality of judgment. This is a profound cultural shift, because the technology itself rewards speed. AI can produce output at a pace that makes verification feel like an impediment. The institutional innovation required is a set of norms — professional, organizational, educational — that slow the cycle just enough for judgment to operate. Not the elimination of speed, which would sacrifice AI's most obvious benefit. The creation of structured pauses, verification checkpoints, and institutional expectations that convert the raw speed of AI-assisted production into the reliable quality of AI-directed production.

Segal describes this need in The Orange Pill when he proposes "AI Practice" — structured pauses in the workflow where AI tools are set aside and people engage directly with each other. The proposal is sound at the organizational level. But the invention of judgment, like the invention of invention, is a civilizational project. It requires institutions that operate across organizations, across sectors, across nations. It requires the AI equivalent of the scientific method: a shared, widely adopted, institutionally supported set of practices for ensuring that AI-human collaboration produces reliable results.

Landes would approach this challenge with characteristic directness. The invention of invention was not designed by committee. It emerged from the competitive interaction of multiple institutions, each pursuing its own interests, in an environment where successful practices were visible and could be copied. The research university spread because it produced results. The scientific journal spread because it solved a genuine coordination problem. The patent system spread because it aligned private incentive with public benefit.

The invention of judgment will likely follow the same pattern — not a single designed institution but an ecosystem of practices that emerge, compete, and are selected for effectiveness. The organizations that develop reliable AI judgment practices will outperform those that do not. The educational systems that produce graduates capable of exercising AI judgment will produce graduates that employers prefer. The nations that invest in the institutional infrastructure of AI judgment will attract the talent, the investment, and the innovative energy that follows wherever conditions for productive work are best.

The question is speed. The invention of invention took roughly three centuries to mature — from the first scientific societies of the seventeenth century to the fully developed research infrastructure of the twentieth. The AI transition does not have three centuries. The institutional infrastructure for AI judgment must be built in decades, perhaps less, because the technology's capability is advancing at a pace that makes each year of institutional lag more costly than the last.

Landes noted that late-industrializing nations had the advantage of borrowing institutional models from early industrializers — what economic historians call "the advantage of backwardness." Japan's Meiji Restoration borrowed institutional forms from Germany, Britain, and the United States, adapting them to Japanese cultural conditions with remarkable speed and effectiveness. The AI transition offers the same advantage to nations and organizations that are willing to observe, borrow, and adapt the institutional practices that are currently emerging in the most sophisticated AI-using environments.

But borrowing requires the capacity to evaluate what is worth borrowing. It requires, in other words, judgment. The institution that produces judgment is itself the thing that must be borrowed. The circularity is not vicious — it is the same circularity that characterizes all institutional development. You need institutions to build institutions. The way out of the circle is through exemplars: visible, successful, imitable practices that demonstrate what good AI judgment looks like and provide a template for replication.

The invention of invention was the most important institutional innovation in economic history. The invention of judgment may prove to be its necessary successor — the institutional infrastructure that determines whether humanity's most powerful tool produces the sustained, broad-based prosperity that the invention of invention made possible, or whether it produces the more efficient version of existing pathologies that the absence of institutional infrastructure has always produced.

Landes spent his career studying how societies build the institutions that convert raw capability into civilizational progress. That study has never been more relevant. The capability is here, immense and growing. The institutions are not. The gap between them is where the future is being decided.

---

Chapter 8: Climate, Geography, and the Digital Divide

There is a fact about the global distribution of internet connectivity that illuminates the AI transition more starkly than any policy paper or economic model. In 2025, roughly 2.6 billion people — one-third of the world's population — had no internet access at all. Not slow access. Not intermittent access. No access. The map of the unconnected overlaps, with disturbing precision, with the map of the world's poorest populations, which overlaps in turn with the map of the world's least educated, which overlaps with the map of the world's most geographically disadvantaged.

Landes was criticized for acknowledging the role of geography in economic development. The criticism was partly justified — geographic determinism, taken to its extreme, becomes a counsel of despair that denies human agency and excuses institutional failure. But Landes's actual argument was more nuanced than his critics allowed. He did not claim that geography determines economic outcomes. He claimed that geography conditions them — that it creates starting advantages and disadvantages that can be overcome by institutional effort but that do not disappear simply because one wishes they would.

Tropical climates, Landes noted, imposed health burdens — malaria, parasitic diseases, heat-related productivity losses — that temperate climates did not. Landlocked nations faced higher transportation costs than coastal ones. Mountainous terrain fragmented markets and slowed the diffusion of ideas. None of these geographical factors made development impossible. But each made it harder, and the cumulative effect of multiple geographic disadvantages was a starting position so far behind the temperate, coastal, well-connected nations that catching up required not just equal effort but dramatically greater effort.

The digital divide is the AI-age expression of this geographic conditioning. AI capability, as Segal argues in The Orange Pill, is theoretically available to anyone with an internet connection. The developer in Lagos can access the same model as the engineer in San Francisco. This is true. It is also, as a description of the actual conditions under which most of the world's population encounters AI, profoundly misleading.

The developer in Lagos needs electricity that does not cut out three times per day. She needs internet bandwidth sufficient to support the data-intensive interactions that AI tools require. She needs hardware — at minimum a modern laptop, ideally with enough processing power to run local models when the connection fails. She needs the educational background that the previous chapter described: the capacity for questioning, verification, and judgment that makes AI use productive rather than merely consumptive. And she needs all of these things in an environment where the cost of each, relative to local wages, is multiples of what it would be in San Francisco.

Segal acknowledges these barriers. "Access requires connectivity," he writes, "and connectivity requires infrastructure that billions of people do not have." But the acknowledgment is brief, almost parenthetical, in a chapter whose energy is devoted to the celebration of democratization. Landes's historical framework demands a fuller reckoning, because the history of technological transitions shows that access disparities at the outset of a transition do not naturally narrow over time. They compound.

The mechanism is straightforward. A technology that amplifies capability disproportionately benefits those who have the most capability to amplify. The knowledge worker in San Francisco who uses Claude to draft a complex analysis in an hour was already more productive than the knowledge worker in Lagos who lacks reliable electricity. After AI, the gap is no longer additive but multiplicative: the San Francisco worker operates at a level that the Lagos worker cannot match even with AI access, because the infrastructure to support intensive AI use does not exist.

The productivity gap widens. The wealth gap widens with it. And the wealth gap determines the capacity to invest in the infrastructure that would close the access gap, creating a cycle that is self-reinforcing in the wrong direction. The nations that are already connected invest in faster connectivity. The nations that are unconnected lack the resources to invest. The floor may be rising, as Segal argues. But the ceiling is rising faster.

Landes documented precisely this dynamic in the history of industrialization. The nations that industrialized first captured an economic lead that compounded over decades. The lead was not just in industrial output but in the institutional and educational infrastructure that sustained industrial development. By the time late-industrializing nations attempted to catch up, the gap included not just factories and machines but the entire ecosystem of knowledge, institutions, and cultural competencies that made factory production effective. The machines could be imported. The ecosystem could not.

The parallel to AI is direct. The models can be accessed globally. The ecosystem that makes model access productive — the educational infrastructure, the reliable connectivity, the organizational capacity to direct AI toward genuine problems, the cultural competencies that make human-AI collaboration productive — cannot be downloaded. It must be built, locally, with local resources, in local conditions that may include every geographical disadvantage Landes identified and several he did not anticipate.

Jared Diamond, in Guns, Germs, and Steel, made the geographical argument with greater emphasis than Landes, attributing the broad patterns of global inequality to the accidents of continental orientation, the availability of domesticable plants and animals, and the disease environments that shaped population densities. Landes acknowledged these factors while insisting that they were insufficient as explanations. Geography sets initial conditions. Culture and institutions determine what societies make of those conditions. Japan, geographically disadvantaged in multiple respects — lacking natural resources, isolated from the major trade routes, with limited arable land — nonetheless became one of the world's most technologically advanced economies through deliberate institutional and cultural transformation.

The AI-age digital divide demands the same dual recognition. Geography matters. The cost of building connectivity infrastructure in sub-Saharan Africa or rural South Asia is genuinely higher than in temperate, densely populated regions. The health burdens that reduce productivity in tropical climates do not disappear because AI tools are available. The transportation costs that fragment markets in landlocked, mountainous nations do not diminish because the product being transported is now digital rather than physical.

But geography is not destiny, and the nations that treat the digital divide as an insuperable barrier rather than an institutional challenge are making the same error as the nations that treated geographic disadvantage as an excuse for stagnation in the industrial age. The divide can be narrowed — not by the technology itself, which is indifferent to who uses it, but by deliberate investment in the infrastructure that makes productive use possible.

This investment must address three layers simultaneously. The first is physical infrastructure: connectivity, electricity, and hardware. This is the most obvious layer and the one that international development organizations have focused on most heavily. It is necessary but insufficient, for the same reason that building factories in a country without an educated workforce does not produce industrialization. The infrastructure is the precondition, not the product.

The second layer is educational infrastructure: the broad-based cultivation of the cognitive competencies that make AI use productive. This is the layer that the previous chapter examined. Without educational investment, physical connectivity produces consumption of AI-generated content rather than productive direction of AI capability. The unconnected become connected consumers — passive recipients of content generated elsewhere rather than active participants in the production of value.

The third layer is institutional infrastructure: the legal frameworks, professional norms, and organizational practices that channel AI capability toward locally relevant problems. An AI tool that can produce a competent legal brief is useful in a jurisdiction with a functioning legal system. It is marginally useful in a jurisdiction where legal disputes are resolved through informal power rather than formal adjudication. An AI tool that can optimize agricultural decisions is useful where farmers have access to markets that reward optimization. It is marginally useful where farmers lack roads to reach those markets.

Each layer depends on the others. Physical connectivity without educational capacity produces passive consumption. Educational capacity without institutional infrastructure produces skilled individuals who emigrate to places where their skills are valued. Institutional infrastructure without physical connectivity produces well-designed systems that no one can access. The investment must be simultaneous and coordinated, which is precisely the kind of investment that the international development system is worst at delivering.

Landes was skeptical of aid as a mechanism for closing development gaps, not because he was indifferent to poverty but because he had observed, across decades of historical research, that externally imposed development programs rarely produced the cultural and institutional changes that sustained development requires. The changes had to be internally driven — motivated by domestic constituencies, adapted to local conditions, sustained by local resources and local commitment.

The implication for the AI-age digital divide is that the most effective investments will not be technology transfers but capacity-building efforts that strengthen local educational, institutional, and infrastructural ecosystems. The goal is not to give the developer in Lagos access to Claude — she may already have it, on a good connectivity day. The goal is to build the local ecosystem that makes her AI use as productive as the San Francisco engineer's: reliable infrastructure, relevant education, functional institutions, and cultural norms that value the judgment and initiative that productive AI use requires.

This is slower work than deploying connectivity. It is less photogenic than distributing laptops. It requires the patient, unglamorous investment in human and institutional capacity that Landes identified as the determining factor in every technological transition since the eighteenth century. And it requires the honest acknowledgment that the AI transition, like every previous technological transition, will disproportionately benefit those who are already positioned to benefit — unless deliberate, sustained, institutionally grounded efforts are made to broaden the distribution.

The amplifier is the most powerful tool in human history. But an amplifier in a room with no signal produces nothing. The signal — the human capability that AI amplifies — is distributed unevenly, and the distribution follows the contours of geography, education, and institutional quality that Landes traced across five centuries of economic history.

The digital divide is not a technology problem. It is a civilization problem. And civilization problems are solved not by deploying better tools but by building the cultural, educational, and institutional conditions under which tools become instruments of broad-based human flourishing rather than mechanisms for the further concentration of advantage among those who already possess it.

Chapter 9: The Culture of Maintenance Versus the Culture of Innovation

Every dam rots.

This is not poetry. It is hydrology. A beaver dam exposed to flowing water loses structural integrity at a predictable rate. Sticks loosen. Mud erodes. The pressure of the current tests every joint, every seam, every point where one material meets another. A dam that is not maintained daily is a dam that is failing slowly. The failure is invisible until it is catastrophic — the pool behind the dam drops an inch, then another, then the breach comes all at once, and the ecosystem that depended on the pool collapses not gradually but in a single season.

Landes understood maintenance as a civilizational competency, not merely a technical practice. The societies that sustained prosperity across centuries were not the ones that produced the most innovations. They were the ones that maintained their innovations — that invested the unglamorous, continuous, largely invisible labor required to keep complex systems functioning after the excitement of their creation had faded.

This is the argument that technology culture is least equipped to hear, because technology culture has elevated innovation to the status of a secular religion while treating maintenance as a cost center to be minimized. The startup celebrates the launch. The venture capitalist rewards the pivot. The technology press covers the breakthrough. No one covers the patch, the update, the slow work of keeping yesterday's system running while today's system is being built.

The asymmetry is not accidental. It reflects a deep cultural bias that Landes traced across the history of economic development. Innovation is visible, legible, narratively satisfying. A new machine is installed. A new product is launched. A new market is opened. The story has a protagonist (the inventor), a conflict (the existing order), and a resolution (the breakthrough). It maps onto the heroic narrative that Western culture has been telling since Prometheus stole fire.

Maintenance is invisible, illegible, narratively unsatisfying. A bridge does not collapse. A water system continues to deliver clean water. A legal framework continues to adjudicate disputes fairly. The story has no protagonist, no conflict, no resolution. It has only the absence of catastrophe, which is the definition of a non-story.

But the absence of catastrophe is the product of continuous effort. Someone inspected the bridge. Someone tested the water. Someone updated the legal code to address conditions that the original drafters could not have anticipated. The effort is real, the skill required is substantial, and the consequences of neglecting it are severe. The invisibility of maintenance does not reduce its importance. It increases the danger of its neglect.

Landes documented what happens when societies celebrate innovation while neglecting maintenance. Spain in the sixteenth and seventeenth centuries provides the starkest example. The conquest of the Americas produced an extraordinary inflow of wealth — gold, silver, and the resources of an entire hemisphere. Spain innovated brilliantly in navigation, military organization, and colonial administration. But it failed to maintain the institutional infrastructure that would have converted wealth inflow into sustained productive capacity. It consumed rather than invested. It celebrated the conquest without building the systems — educational, financial, agricultural, manufacturing — that would sustain prosperity after the conquest's returns diminished. By the seventeenth century, Spain was poorer than the nations it had once dominated, despite having access to resources that those nations lacked.

The Ottoman Empire followed a similar trajectory. Its institutional innovations — the millet system of religious governance, the devshirme recruitment system, the sophisticated bureaucracy of the Sublime Porte — were remarkable in their initial design. But the empire's culture did not sustain the practice of institutional maintenance. Regulations calcified. Institutions that had been adaptive became rigid. The printing press, which could have democratized knowledge across the empire's vast and diverse population, was resisted for centuries because it threatened existing structures of authority that no one was willing to reform. The maintenance failure was not technical. It was cultural — a collective unwillingness to update institutional structures that had once served well but no longer fit the conditions they were meant to address.

The AI transition is producing precisely the kind of innovation-maintenance asymmetry that Landes identified as a precursor to civilizational decline. The technology press covers each new model release with breathless enthusiasm. Investment flows toward the next capability breakthrough. The public conversation about AI is almost entirely focused on what AI can newly do — what barriers it has broken, what benchmarks it has surpassed, what previously impossible tasks it has made routine.

Almost no one is asking who will maintain the systems that AI is building.

The question is not abstract. The Berkeley study that Segal describes in The Orange Pill found that AI-assisted workers took on more tasks, expanded into adjacent domains, and produced more output. The study did not ask what happened to the systems those workers were already responsible for maintaining. When a developer uses AI to build a new feature in two days rather than two weeks, the new feature enters the codebase. It must now be maintained — updated, debugged, integrated with other systems, adapted as requirements change. The maintenance burden increases with every new feature, and the maintenance burden is borne not by the AI that helped build the feature but by the human beings responsible for the system's long-term health.

The most dangerous version of this dynamic is already visible. Organizations that have used AI to accelerate development are discovering that they have built faster than they can maintain. The codebase has grown. The feature set has expanded. The system's complexity has increased. But the organizational capacity for maintenance — the human understanding of how the system works, why certain design decisions were made, what will break if certain components are changed — has not kept pace.

Landes would recognize this immediately as the pattern he documented in every society that prioritized expansion over sustainability. The initial expansion is exhilarating. The productivity gains are real. The organization feels like it is operating at a new level of capability. Then the maintenance deficit begins to manifest. A bug appears in a component that no one fully understands, because it was built by AI and reviewed by a human who was already building the next feature. The fix introduces a new bug, because the fixer's understanding of the system is shallower than the system's complexity requires. The cascade continues. The system degrades. And the organization discovers, too late, that the speed of construction was purchased at the cost of the institutional knowledge that maintenance requires.

The culture of maintenance is, at its core, a culture of humility. It requires the acknowledgment that building is the easy part and sustaining is the hard part. It requires the willingness to invest in understanding systems you did not build, in testing assumptions you did not make, in updating structures that were adequate when they were created but no longer fit the conditions they must address. It requires the specific form of attention that Byung-Chul Han celebrates and that the AI-accelerated work culture militates against: slow, sustained, unglamorous attention to things that are already working, with the understanding that "already working" is a temporary condition that requires continuous effort to maintain.

The AI age will test whether societies possess this culture. The technology makes building cheap and fast. It does not make maintaining cheap or fast. Maintenance still requires human understanding — the deep, embodied knowledge of how a system works that comes from years of patient engagement with its quirks, its failure modes, its undocumented dependencies. This is precisely the kind of knowledge that the aesthetics of the smooth, which Han diagnoses in The Orange Pill, tends to erode.

The society that celebrates only the builder — the person who ships the new feature, launches the new product, deploys the new system — while neglecting the maintainer — the person who ensures the existing feature works, the existing product serves its users, the existing system does not degrade — is a society that is consuming its institutional capital. It is Spain, spending the gold of the Americas without building the productive capacity that would sustain prosperity after the gold ran out.

The prescription is cultural, not technical. Organizations must value maintenance alongside innovation — in compensation, in promotion decisions, in the stories they tell about what constitutes important work. Educational systems must prepare students not just to build but to maintain — to understand systems they did not create, to update structures they did not design, to exercise the patient, sustained attention that maintenance requires. National AI strategies must include investment not just in capability development but in the institutional infrastructure that sustains capability over time: standards bodies, professional certification, the institutional memory that ensures lessons learned from AI failures are preserved and transmitted rather than lost in the rush toward the next deployment.

Landes's history of economic development is, at its most fundamental level, a history of maintenance. The societies that sustained prosperity were not the most innovative. They were the most diligent — the ones that built institutions and then maintained them, that created systems and then tended them, that achieved breakthroughs and then did the unglamorous work of converting breakthroughs into durable structures.

The AI age offers unprecedented capability for building. It offers nothing for maintaining. That gap — between the ease of creation and the difficulty of sustenance — is the gap that will determine whether the AI transition produces durable prosperity or spectacular collapse. Landes's life work suggests, with the weight of five centuries of evidence, that the answer depends entirely on whether societies can cultivate the cultural humility to value the maintainer as highly as the innovator.

Every dam rots. The question is whether anyone will be there to repair it.

---

Chapter 10: The Long View and the Patient Society

In the winter of 1871, a Japanese statesman named Ito Hirobumi boarded a ship for the West as a vice-ambassador of the Iwakura Mission. He had traveled abroad once before — a clandestine journey to England in 1863, when, barely into his twenties, he slipped past the Tokugawa shogunate's prohibition on foreign travel as one of the group of young Chōshū samurai later remembered as the Chōshū Five. That first trip had shown him what two centuries of isolation had concealed: a world transformed by technologies, institutions, and cultural practices that Japan neither possessed nor understood.

The second journey was different. This time Ito traveled not as a fugitive but as a representative of the new Meiji government, which had overthrown the shogunate in 1868 and committed itself to a project of civilizational transformation without precedent in modern history. The Meiji leaders had looked at the gap between Japan and the industrialized West and made a decision that Landes regarded as one of the most remarkable acts of collective will in modern economic history: they would close that gap. Not gradually. Not tentatively. Completely, deliberately, and within a generation.

What followed was not merely an economic modernization program. It was a cultural revolution — a systematic dismantling of the institutional and cultural structures that had served Japan for centuries and their replacement with structures borrowed, adapted, and often improved upon from every industrial nation the Meiji leaders could study. The German model for the army. The British model for the navy. The French model for the legal system. The American model for the educational system. Not copied slavishly but studied, evaluated, adapted to Japanese conditions, and implemented with a discipline that reflected centuries of cultural emphasis on collective effort and institutional loyalty.

The result, within a single generation, was a nation transformed. In 1905, Japan defeated Russia in a war that shocked the world — the first time in modern history that a non-Western nation had defeated a European power in a major military conflict. By the 1920s, Japan was a fully industrialized economy. By the late 1960s, it was the world's second-largest economy. The gap that had seemed insuperable in 1868 had been closed — not by a single breakthrough but by decades of patient, systematic, institutionally grounded investment in the cultural and educational capacity of the Japanese population.

Landes told the Meiji story not as an inspirational tale but as an analytical case study. What made the Meiji transformation possible was not the willingness to adopt Western technology — any nation can purchase machines. It was the willingness to adopt, adapt, and build the institutional and cultural infrastructure that made technology productive. The educational reforms. The legal reforms. The financial institutions. The industrial policy. The cultural shift from a feudal hierarchy based on birth to a meritocratic hierarchy based on capability. Each of these was a dam built in the river, redirecting the flow of industrialization toward broad-based development rather than narrow extraction.

And each required patience. Not the patience of waiting for things to happen, but the active patience of sustained institutional investment — the willingness to commit resources to outcomes that would not materialize for fifteen or twenty years, in a political and cultural environment where the pressure for immediate, visible results was intense. The Meiji leaders invested in universal education knowing that the returns would not be visible until the first generation of educated citizens entered the workforce. They invested in legal reform knowing that the commercial benefits would not materialize until the legal system had gained the trust of foreign trading partners, which would take years. They invested in institutional capacity knowing that the full benefits of their investment would be reaped not by themselves but by their successors.

This is what Landes meant by the patient society — a society capable of investing in institutional infrastructure whose returns are measured in decades rather than quarters, and whose benefits accrue not to the individuals who make the investment but to the community that inherits its results. Patient societies compound their advantages the way compound interest compounds capital: slowly, invisibly, and with a power that becomes apparent only in retrospect. Impatient societies consume their advantages the way inflation consumes currency: gradually, until the purchasing power of their institutional capital has been hollowed out entirely.

The AI transition is the most severe test of societal patience since the Industrial Revolution itself.

The technology's capability is advancing at a pace measured in months. The institutional infrastructure that the previous nine chapters have described — the culture of judgment, the educational systems that produce good questions, the regulatory frameworks that distribute gains, the maintenance practices that sustain complex systems — requires years or decades to build. The mismatch between the speed of capability and the speed of institutional development is the central challenge of the AI age, and it is a challenge that only patient societies will navigate successfully.

The impatient response is visible everywhere. Companies deploy AI at the maximum speed the technology permits, without investing in the organizational capacity to maintain what they build. Governments announce national AI strategies focused on capability development — building models, accumulating compute, training engineers — without equivalent investment in the educational and institutional infrastructure that determines whether capability is directed wisely. Investors reward growth and punish the slower, less visible investments in institutional quality that growth depends on. The pressure to move fast, to capture the gains before competitors, to demonstrate visible progress on a quarterly timeline is structural. It is built into the incentive systems of capital markets, political cycles, and organizational culture.

Landes would observe, with the blunt candor that characterized his scholarship, that this impatience is the precise configuration that has historically produced concentrated gains, widespread displacement, and institutional crises that take decades to resolve. The British Industrial Revolution's first fifty years are the template: spectacular capability growth, spectacular wealth creation at the top, spectacular suffering at the bottom, and institutional responses that arrived generations too late to prevent the damage they were meant to address.

The patient response is less visible but more consequential. It is Finland investing in an educational system that reliably produces citizens capable of critical thinking, not because Finland faces an immediate AI workforce crisis but because Finland's leaders understand that educational quality compounds over generations. It is Singapore designing an AI governance framework that addresses not just what companies may build but how citizens are prepared to evaluate what is built. It is the organizations — less celebrated, less frequently profiled — that invest in institutional knowledge management, in mentoring programs that transmit judgment across generations, in the slow work of building the human capacity that AI capability depends on.

The patient society is not a passive society. This distinction matters, because patience is often misunderstood as waiting — as the decision to do nothing and hope that favorable outcomes arrive on their own. The Meiji leaders were not patient in this sense. They were ferociously active. They sent delegations across the world. They recruited foreign advisors. They dismantled centuries-old institutions and built new ones from imported and adapted templates. Their patience was not in the doing but in the time horizon of the doing — the willingness to commit to outcomes they would not live to see, and to sustain that commitment through the inevitable setbacks, failures, and political pressures that long-term institutional investment produces.

The AI-age patient society will have the same characteristics. It will adopt AI aggressively — there is no premium on technological slowness, and the societies that refuse to adopt will be left behind as surely as the societies that refused to industrialize were left behind in the nineteenth century. But it will adopt with institutional depth. It will invest in education alongside capability. It will build regulatory frameworks alongside deployment plans. It will maintain the systems it builds alongside building new ones. It will measure its progress not just in model benchmarks and deployment metrics but in the harder, less quantifiable measures of institutional quality: the breadth of its citizens' capacity for judgment, the resilience of its regulatory framework, the durability of its organizational practices.

Landes ends The Wealth and Poverty of Nations with a passage that reads, in the context of the AI transition, as both diagnosis and prescription. The societies that will prosper, he wrote, are those that are open to the world, that invest in their people, that allow and encourage initiative, that reward enterprise and risk-taking. The societies that will stagnate are those that close themselves off, that hoard opportunity among a narrow elite, that punish initiative and reward obedience, that consume their institutional capital rather than investing in its renewal.

The passage was written about the twentieth century. It applies with greater force to the twenty-first, because the amplifier magnifies every cultural strength and every cultural weakness. A culture of curiosity, amplified by AI, produces an exponential expansion of what its citizens can understand and create. A culture of obedience, amplified by AI, produces an exponential expansion of what its elites can extract and control. A culture of patience, amplified by AI, produces compound institutional returns that impatient cultures cannot match. A culture of impatience, amplified by AI, produces rapid gains followed by institutional collapse when the systems built at speed prove too fragile to sustain.

Segal ends The Orange Pill with a sunrise. The image is hopeful, and the hope is earned — earned through twenty chapters of honest grappling with both the exhilaration and the terror of the AI transition. Landes's contribution to that hope is to anchor it in evidence: five centuries of evidence showing that transformative technologies do, eventually, produce broad-based prosperity. Not automatically. Not equitably. Not without institutional struggle. But consistently, across every major transition, the long arc bends toward expansion — if the patient societies build the dams.

The sunrise is not guaranteed. Landes was too rigorous a historian to promise outcomes. But he was also too honest a scholar to deny the pattern. The pattern shows that patient societies — the ones that invest in education, build inclusive institutions, maintain their systems, and tolerate the short-term costs of long-term institutional development — are the societies that eventually stand in the light.

The Meiji leaders boarded ships for the West knowing they would not live to see the Japan they were building. They built it anyway. The AI age demands equivalent commitment: not to any particular technology, which will be superseded, but to the institutional and cultural infrastructure that determines whether technology serves humanity broadly or narrows the circle of those it serves.

That infrastructure is built by patient societies. Societies that measure their progress in generations rather than quarters. Societies that invest in the education of every citizen, not just the training of a technical elite. Societies that maintain their institutions with the same care they bring to building new ones. Societies that understand, as Landes demonstrated across a lifetime of scholarship, that the wealth of nations is not found in their resources or their technologies but in the accumulated cultural and institutional capital that determines what they make of both.

The impatient societies will capture the first gains. They always do. The patient societies will capture the lasting ones. They always have.

---

Epilogue

The map I cannot stop redrawing is the one that does not yet exist.

Not a technology map — where the compute sits, which companies lead in model development, whose chips are fastest. That map updates itself quarterly and is obsolete by the time it prints. The map I keep returning to is the one Landes spent his career attempting to draw: the map of cultural readiness. The invisible topography of which societies possess the accumulated habits — of questioning, of verification, of patience, of maintenance, of broad-based investment in human capability — that determine whether a transformative technology irrigates or floods.

I wrote about standing in a room in Trivandrum watching twenty engineers discover what they could do with Claude Code. Twenty-fold productivity. The number is real, and the exhilaration in that room was real, and I would not trade that week for anything. But Landes forced me to ask a question I had been avoiding: What is the map around that room?

The engineers had degrees from excellent institutions. They had years of experience. They had the cultural habit of questioning that their education had cultivated. They were positioned, by everything in their biographies, to use the tool well. But the map extends beyond that room. It extends to the schools those engineers came from, and to the schools their children will attend, and to the millions of knowledge workers in that same country who lack the educational infrastructure that gave my team their cognitive architecture. The amplifier was working beautifully inside that room. The map of who else it could work for — and who it would leave behind — is the map Landes taught me I had to draw.

What haunts me is the compounding. Landes showed, across century after century, that cultural and educational advantages do not simply add up. They multiply. A society that invested in broad-based education fifty years ago is not fifty years ahead. It is exponentially ahead, because each generation of educated citizens built the institutional capacity that made the next generation's education more effective. The AI amplifier turns that exponential curve into something steeper still. The gap between societies that possess the culture of judgment and societies that do not is not widening linearly. It is widening at the speed of the technology itself.

And yet Landes also showed that the gap can be closed. Japan in 1868 was centuries behind the industrial West. A single generation of institutional will — patient, deliberate, ferociously committed — closed it. The lesson is not that catching up is easy. The lesson is that catching up is possible, but only for societies willing to invest in the slow, unglamorous, compound work of building the cultural and educational infrastructure that no technology can substitute for.

I keep thinking about maintenance. About the fact that every system I have built in thirty years eventually needed tending, and that the tending was always the part I was worst at, and that the AI age makes this weakness more dangerous, not less. Landes did not live to see Claude Code. But he spent a lifetime documenting what happens to civilizations that build brilliantly and maintain poorly. The ruins are impressive. They are still ruins.

The sunrise at the end of The Orange Pill is something I believe in. I believe the AI transition can produce broad-based human flourishing on a scale no previous technology has achieved. But Landes taught me that the sunrise does not arrive because the technology is powerful. It arrives because patient societies build the cultural and institutional dams that direct powerful technology toward life. The sunrise is earned. And it is earned not by the builders, who get the credit, but by the maintainers, the educators, the institution-builders, the people who do the slow work that compounds across generations and never makes the front page.

That map — the one showing which societies are doing that slow work, and which are consuming their institutional capital in the rush to deploy — is the one I keep redrawing. It is the map that will determine whether my children, and yours, inherit a world that was irrigated by the most powerful amplifier in human history, or one that was flooded by it.

Culture makes all the difference. Landes was right. He is more right now than he has ever been.

Edo Segal

AI is globally available. The outcomes will not be globally equal. The difference is not compute -- it is civilization.
PITCH:

Every nation on earth can access the same AI models. The same Claude Code that produced twenty-fold productivity gains in Trivandrum is available in Lagos, São Paulo, and Berlin. The technology is uniform. The cultures that receive it are not. David Landes spent fifty years documenting why identical technologies produce radically different national outcomes -- why some societies convert powerful tools into broad prosperity while others stagnate or fracture. His answer was controversial, rigorously evidenced, and more relevant now than when he wrote it: culture determines everything. This book applies Landes's framework to the AI revolution, arguing that the nations that thrive will not be those with the best models but those whose cultures produce citizens capable of directing AI wisely. The amplifier is here. The question is what it is amplifying.

“If we learn anything from the history of economic development, it is that culture makes all the difference.”
— David Landes
WIKI COMPANION

David Landes — On AI

A reading-companion catalog of the 26 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that David Landes — On AI uses as stepping stones for thinking through the AI revolution.
