By Edo Segal
The cathedral kept bothering me.
Not a cathedral I visited. One I read about — the Norse cathedral at Gardar, in Greenland's Eastern Settlement, built from red sandstone blocks by settlers who had crossed the North Atlantic to plant European civilization at the edge of the habitable world. They raised cattle. They built churches. They tithed to the Pope. And then they died, every last one of them, surrounded by a sea full of fish they apparently refused to eat.
That image lodged in me during a week when I was deep inside the chapters of *The Orange Pill* — writing about ascending friction and the beaver's dam and the democratization of capability — and I could not shake it. Because the Norse were not stupid. They were formidable. Building a pastoral society in the Arctic required extraordinary organizational and technical competence. They understood their environment. They could see the Inuit thriving next to them, using kayaks and harpoons and seal-hunting techniques perfectly adapted to the conditions that were killing Norse cattle farming.
They saw it. They understood it. They chose not to adopt it.
That is the detail that breaks your heart and rearranges your thinking. The failure was not ignorance. It was identity. The cattle and the churches were not just economic strategies. They were who the Norse *were*. To abandon them would have meant becoming something other than Norse. And across five centuries of increasingly desperate conditions, they chose identity over survival.
Jared Diamond spent decades studying this pattern — not just in Greenland but across civilizations on every continent. Why do some societies facing severe environmental challenge adapt and thrive while others, facing comparable or lesser challenges, collapse? His answer was structural, comparative, and deeply uncomfortable: the determining variable is almost never the severity of the threat. It is the quality of the response. Societies choose to fail or succeed. The choice is made not in a single dramatic moment but in the accumulation of daily decisions — each one small, each one defensible, each one contributing to a trajectory that becomes visible only when it is too late to change.
I needed Diamond's framework because the AI transition is an environmental regime shift, and I did not have the vocabulary for what that means. The technology discourse gives us tools for talking about capability, productivity, disruption. It does not give us tools for talking about what happens when a civilization's practices become maladapted to a new environment and the people with the most power to change those practices are the ones whose identity depends on keeping them.
Diamond does. That is why this book exists.
— Edo Segal ^ Opus 4.6
Jared Diamond (b. 1937) is an American biogeographer, evolutionary biologist, and author whose comparative studies of civilizational success and failure have shaped how a generation of thinkers understands the relationship between human societies and their environments. Born in Boston and trained in physiology at Cambridge University, Diamond spent decades conducting fieldwork in New Guinea before turning to the large-scale questions of why some civilizations flourish while others collapse. His book *Guns, Germs, and Steel: The Fates of Human Societies* (1997) won the Pulitzer Prize for its argument that geography and environment — not racial or cultural superiority — determined which societies came to dominate the modern world. *Collapse: How Societies Choose to Fail or Succeed* (2005) examined why civilizations from the Norse Greenland colony to the Maya and Easter Islanders failed to adapt to changing environmental conditions, identifying a five-factor framework in which a society's own response to crisis is the decisive variable. His later works include *The World Until Yesterday* (2012) and *Upheaval* (2019). A professor of geography at UCLA until his retirement in 2024, Diamond is recognized as one of the foremost living scholars of the deep patterns that govern civilizational resilience and collapse.
In 1984, Jared Diamond stood in Greenland's Eastern Settlement — which, despite its name, lay on the island's southwestern coast — and looked at the ruins of a cathedral. The building had been constructed from blocks of red sandstone, carefully cut and fitted by Norse settlers who had crossed the North Atlantic in the late tenth century. The cathedral at Gardar had been, in its time, the northernmost outpost of European Christendom — a statement of cultural ambition planted at the edge of the habitable world. By the time Diamond visited, the congregation had been dead for five hundred years.
The Norse Greenland colony lasted from roughly 985 to approximately 1450. For nearly half a millennium, between four and five thousand people maintained a European pastoral society in an environment that was, by any rational assessment, hostile to the practices that European pastoralism required. They raised cattle. They built churches. They imported iron, stained glass, and church bells from Europe. They exported walrus ivory, furs, and the narwhal tusks that medieval Europeans believed were unicorn horns. They tithed to the Pope.
And then, across a period that archaeologists estimate at roughly a generation, they disappeared. Not gradually, in the slow attrition of emigration. Completely. The last Norse Greenlanders died in Greenland, in houses that still contained the bones of their last cattle, surrounded by a sea teeming with fish and seals they had apparently refused, or been unable, to eat.
The question Diamond spent years investigating — first in fieldwork, then in the book that would become *Collapse: How Societies Choose to Fail or Succeed* — was deceptively simple. Why did some societies facing severe environmental challenge adapt and survive, while others, facing comparable or even less severe challenges, failed to adapt and collapsed?
The answer Diamond arrived at was not simple at all. It was structural. It involved five interacting factors that, taken together, determined whether a society would navigate crisis successfully or fail. These factors were: environmental damage that a society inflicts on itself; changes in the external environment (climate shifts, for instance, that alter the conditions a society depends on); hostile neighbors who exploit a society's weakness; the loss of friendly trading partners whose support the society needs; and — most critically — the society's own response to the first four factors.
The fifth factor is the one that carries the moral weight of Diamond's entire project. Environmental damage happens to many societies. Climate shifts are not selective in their targets. Hostile neighbors and trade disruptions are the background radiation of geopolitical life. What separates the survivors from the collapsed is not the severity of the challenge but the quality of the response.
The Norse Greenlanders faced worsening climate — the Medieval Warm Period that had made their initial settlement viable was ending, and the Little Ice Age was beginning. They faced environmental damage they had inflicted on themselves — overgrazing had stripped the fragile topsoil from their pasturelands, and deforestation had eliminated the few trees the landscape supported. They faced hostile neighbors — the Inuit, who were expanding south into territories the Norse considered their own. And they faced the slow erosion of their trading relationship with Norway, as the Black Death devastated Europe and the walrus ivory trade lost value to cheaper elephant ivory from Africa.
Four factors, each severe. But other societies had faced comparable combinations and survived. The Inuit, occupying the same landscape at the same time, flourished. What killed the Norse was the fifth factor — their own response.
The response was, in Diamond's clinical assessment, catastrophically maladaptive. The Norse knew the Inuit existed. They encountered Inuit hunters regularly. They could see, with their own eyes, that the Inuit had developed technologies — kayaks, toggling harpoons, techniques for hunting ringed seals through winter ice — that were superbly adapted to the conditions that were destroying Norse pastoral farming. The Inuit were thriving in the same environment that was killing the Norse.
The Norse did not adopt Inuit techniques. They did not learn to hunt ringed seals. They did not build kayaks. Archaeological evidence suggests they did not eat fish at all during their final decades, despite living on an island surrounded by some of the richest fishing grounds in the North Atlantic. They maintained cattle farming as their grasslands eroded. They built churches as their children starved.
This is the fact that makes the Norse Greenland case so haunting, and so relevant to any society facing an environmental transformation it did not choose. The Norse failure was not a failure of knowledge. They knew what the Inuit were doing. It was not a failure of intelligence. Building a pastoral society at the edge of the Arctic required formidable organizational and technical competence. It was a failure of identity. The cattle, the churches, the European cultural practices — these were not merely economic strategies. They were the markers of who the Norse were. To abandon them would have been to stop being Norse. And the Norse chose, across generations of increasingly desperate conditions, to remain Norse rather than to survive.
Diamond's comparative method revealed that the Norse pattern was not unique. The Maya civilization of the Classic Period, centered in the Yucatán Peninsula and the lowlands of present-day Guatemala, Honduras, and Belize, collapsed between roughly 800 and 1000 CE. The proximate cause was agricultural failure: the Maya had cleared forests to expand farming, which disrupted rainfall patterns and degraded soil fertility, which reduced crop yields, which produced famine, which triggered warfare, which accelerated the cycle. But the deeper cause, the one Diamond's analysis returns to repeatedly, was the Maya elite's commitment to practices that had sustained their status but were destroying their agricultural base. The kings continued building monumental architecture — temples, palaces, ball courts — that consumed the labor and resources the society needed for agricultural adaptation. Their legitimacy was tied to monument-building. They could not stop building monuments without undermining the political theology that justified their power.
The Easter Islanders, the Anasazi of the American Southwest, the Pitcairn and Henderson Islanders of the Pacific — Diamond examined case after case, and the structural pattern held. A society develops practices adapted to current conditions. The conditions change. The practices that were adaptive become maladaptive. The society fails to change the practices because changing them would require abandoning the identity markers, the status hierarchies, and the institutional structures that the old practices produced.
The failure is never inevitable. This is Diamond's most important claim, and it is the reason his book is titled *Collapse: How Societies Choose to Fail or Succeed* rather than *Collapse: How Societies Are Destroyed by Forces Beyond Their Control*. The emphasis on choice is deliberate and diagnostic. Other societies, facing equivalent challenges, chose differently. The Tokugawa Japanese, facing complete deforestation by the early 1700s, implemented a comprehensive reforestation program that took two centuries to mature but that saved their civilization's resource base. The Icelanders, facing the same soil erosion and overgrazing that destroyed Norse Greenland, developed commons management systems that regulated land use and preserved fertility. The Tikopia islanders of the southwestern Pacific, facing the ecological consequences of pig farming on a tiny island, made the collective decision — extraordinary in its cultural cost — to slaughter every pig on the island, eliminating a prestige food source to preserve the island's carrying capacity.
In each case, survival required three things. First, the recognition that environmental conditions had changed and that existing practices were no longer viable. Second, the willingness to abandon practices that defined the society's identity — to stop being what they had been in order to become what they needed to be. Third, investment in long-term adaptation at the cost of short-term comfort, the kind of investment that serves the grandchildren rather than the current generation.
Recognition, willingness, investment. Three requirements. Each difficult. Each contested. Each opposed by those whose power, status, and identity depend on the continuation of existing practices.
The pattern Diamond identified operates at a specific level of analysis that is neither purely environmental nor purely cultural but somewhere in between — at the interface where human decisions meet environmental constraints. A society does not collapse because its environment deteriorates. It collapses because it fails to change its behavior in response to environmental deterioration. The environment sets the challenge. The society's response determines the outcome.
This distinction is critical because it locates agency precisely where agency exists. A society cannot control its climate. It cannot control whether hostile neighbors appear on its borders. It cannot, in most cases, reverse environmental damage already done. But it can control its response. It can choose to recognize the change, study its implications, and modify its practices accordingly. Or it can choose — and the choice is always available, always contested, and always made under conditions of uncertainty — to maintain the practices that defined its success in the old environment and hope that the old environment returns.
Hope is not a strategy. Diamond's archive of collapsed civilizations is, among other things, a catalogue of societies that chose hope over adaptation.
The relevance to our present moment should be becoming visible, though Diamond himself — now eighty-eight, retired from UCLA since 2024 — has not drawn the connection explicitly. In a 2025 interview with the Asahi Glass Foundation, Diamond warned that "technology is disrupting capitalism, creating a 'winner-takes-all economy'" and argued that "nations have failed to distribute economic benefits more broadly among their people." He noted that "capitalism operates on an exploitative structure" and that "young people are protesting that capitalism does not serve their interests — and they are right." But these remarks, while suggestive, did not name artificial intelligence specifically.
The absence is itself significant. The foremost living scholar of civilizational collapse, the thinker whose entire career has been dedicated to understanding why societies fail to perceive and respond to existential threats, has not yet turned his analytical framework on what many observers consider the most consequential environmental transformation since the Industrial Revolution. Whether this reflects the natural conservatism of a scholar trained in the physical and biological sciences, or the difficulty of applying a framework built on centuries-long case studies to a transition that is unfolding in months, the analytical vacuum remains.
This book attempts to fill that vacuum — not by speaking for Diamond, but by applying his methods to the transformation he has not yet addressed.
The question that Diamond's framework poses to the present moment is not whether artificial intelligence is powerful. Power is a given. The question is not whether AI will change the conditions under which knowledge work, creative work, institutional decision-making, and educational formation occur. That change is already underway, measurable and accelerating. The question is the one Diamond has spent his career asking:
Will the societies that built their practices, their institutions, their professional identities, and their educational systems in the pre-AI environment recognize that the environment has changed? Will they study the new conditions with the rigor and honesty that survival requires? Will they abandon the practices that defined their success — the organizational structures, the professional hierarchies, the educational curricula, the economic models that were adaptive before and may be maladaptive now — in time to develop new practices suited to the new conditions?
Or will they do what the Norse did? Will they maintain the cattle and the churches? Will they refuse to learn the new techniques because learning them would require abandoning the identity markers that define who they are? Will they starve in the presence of abundance because the abundance is in a form they consider culturally unacceptable?
The pattern of civilizational collapse is not a prediction. It is a diagnostic framework. It identifies the structural conditions under which collapse becomes likely and the structural conditions under which adaptation becomes possible. Applied to the AI transition, it does not predict that contemporary societies will collapse. It identifies the specific decisions, made daily by millions of individuals, thousands of organizations, and hundreds of governments, that will determine whether the outcome is collapse or renewal.
Those decisions are being made now. Many of them are being made badly. And the window for correction is narrower than most people realize, because the environmental transformation that AI represents is faster than any transformation Diamond studied — faster than climate change in Greenland, faster than deforestation on Easter Island, faster than soil degradation in the Maya lowlands. The Norse had generations to adapt and failed. Contemporary societies may have years. The challenge is commensurately more severe, and the quality of the response commensurately more consequential.
Diamond closed *Collapse* with a chapter titled "The World as a Polder." *Polder* is the Dutch word for land reclaimed from the sea, maintained only by the continuous operation of dikes and pumps. If the maintenance stops, the land floods. The metaphor was Diamond's way of saying that civilization itself is reclaimed land — territory held against the pressure of entropy, maintained only by continuous, deliberate human effort.
The AI transition has not breached the dikes. But the water level is rising. And the question of who is maintaining the pumps — and whether the pumps are even pointed in the right direction — is the question this book sets out to answer.
The proximate-ultimate distinction is the most useful analytical tool Jared Diamond ever developed. In *Guns, Germs, and Steel*, he used it to separate the immediate causes of historical events — the specific diseases that killed indigenous populations, the specific weapons that gave conquistadors military advantage — from the ultimate causes that explained why those proximate advantages existed in the first place. Smallpox killed millions of Native Americans. That is a proximate cause. The ultimate cause was the Eurasian continent's east-west orientation, which allowed the spread of domesticated animals across similar climate zones, which produced centuries of close human-animal contact, which generated the zoonotic diseases to which Eurasian populations had developed immunity and American populations had not.
The proximate cause is what happened. The ultimate cause is why it was possible.
Applied to the AI transition, the proximate-ultimate distinction immediately reframes the conversation. The proximate event — the one that dominated the discourse in late 2025 and early 2026 — was the arrival of AI tools capable of producing competent software, coherent analysis, serviceable prose, and functional creative work through natural language conversation. Claude Code crossed a capability threshold in December 2025. Within weeks, a Google principal engineer described publicly how the tool had produced, in an hour, a working prototype of a system her team had spent a year building. The adoption curves were steep. The productivity multipliers were real. By February 2026, run-rate revenue for Claude Code alone had crossed $2.5 billion.
These are proximate facts. Significant, measurable, consequential. But they explain what happened without explaining why it matters at civilizational scale. To understand the ultimate significance, Diamond's environmental framework is required.
In every case Diamond studied, civilizational collapse was triggered not by a single event but by a change in the environmental conditions to which a society's practices were adapted. The Norse Greenland colony did not collapse because of one bad winter. It collapsed because the climate regime shifted — gradually, over decades — from one that could marginally support European pastoral farming to one that could not. The practices that had been adaptive in the old climate became maladaptive in the new one. The colony's survival depended on recognizing the regime shift and modifying its practices accordingly. It failed to do so.
The AI transition is an environmental regime shift in the cognitive economy. The conditions under which knowledge work is performed — the cost of producing software, analysis, legal documents, educational materials, creative content, strategic plans — have changed not incrementally but categorically. The environment in which a senior software engineer's twenty years of accumulated expertise made her the most valuable person on her team has given way to an environment in which a junior developer with Claude Code and good judgment can produce comparable output in a fraction of the time.
This is not a tools upgrade. A tools upgrade changes the speed at which existing practices produce results. An environmental regime shift changes which practices produce results at all.
The distinction matters because it determines the appropriate response. If AI is a tools upgrade — a faster compiler, a better IDE, a more powerful search engine — then the appropriate response is adoption: learn the tool, integrate it into existing workflows, carry on. This is the response most organizations have attempted. It is also, if Diamond's framework is correct, precisely the wrong response to an environmental regime shift.
When the environment changes categorically, integrating new tools into old workflows is the equivalent of the Norse putting better shoes on their cattle as the grasslands eroded. The cattle were not the solution. They were the problem. The practice itself — pastoral farming in an Arctic climate — was what needed to change. Better tools for the wrong practice produce faster failure.
Diamond's analysis of technology adoption, developed most fully in Chapter 13 of *Guns, Germs, and Steel* ("Necessity's Mother"), provides the framework for understanding why societies adopt some innovations and reject others. Diamond identified four factors that influence whether a society embraces a new technology: its relative economic advantage over existing methods; its social value and prestige; its compatibility with existing vested interests; and the ease with which its advantages can be observed. Technologies that score high on all four factors spread rapidly. Technologies that threaten vested interests, regardless of their economic advantages, face resistance.
AI scores extraordinarily high on the first factor. The economic advantage is measurable and dramatic. A twenty-fold productivity multiplier, documented in real engineering environments, represents the kind of economic advantage that Diamond's framework predicts will drive rapid adoption regardless of other factors. The social value and prestige factor also favors adoption: in the technology sector, early AI adoption has become a status marker, and prestige accelerates diffusion. (The contrast with Diamond's QWERTY keyboard example is instructive — QWERTY persists not because it is optimal but because it is entrenched, a reminder that prestige and entrenchment are distinct forces, and that AI currently has both working in its favor.)
The third factor — compatibility with vested interests — is where Diamond's framework becomes most diagnostic. AI is compatible with the vested interests of some groups and catastrophically incompatible with the interests of others. For organizational leaders who measure success by output per dollar, AI is perfectly compatible. For individual practitioners whose status, income, and professional identity depend on the scarcity of their technical skills, AI is an existential threat. The technology simultaneously serves and undermines different constituencies within the same organization.
This is the structural condition that Diamond identified, across every collapsed civilization he studied, as the most dangerous: when the interests of elites diverge from the interests of the broader population, and when the elite response to environmental change serves elite interests at the expense of collective adaptation. The Norse chiefs who maintained cattle herds had their status tied to herd size. The practice served the chiefs' interests while destroying the colony's agricultural base. The Maya kings who continued building monuments had their legitimacy tied to monumental architecture. The practice served the kings' interests while consuming the labor and resources the society needed for agricultural reform.
The contemporary parallel is precise. The technology executives whose quarterly metrics improve when AI replaces headcount are in the structural position of the Norse chiefs. Their decision is individually rational and institutionally destructive. The immediate gains are visible — reduced costs, faster delivery, higher margins. The long-term costs — the depletion of the expertise pipeline, the erosion of mentorship capacity, the concentration of institutional knowledge in systems that can be replicated but not understood — are invisible, deferred, and distributed across a population that has no voice in the decision.
Diamond was explicit about this dynamic. In *Collapse*, he wrote that "a recurrent problem in collapsing societies is a structure that creates a conflict between the short-term interests of those in power, and the long-term interests of the society as a whole." The conflict is not a failure of individual morality. It is a structural feature of systems in which decision-making authority is concentrated in people whose incentives are misaligned with collective welfare.
The positive feedback loops that Diamond identified in *Guns, Germs, and Steel* — the mechanisms by which initial advantages compound into insurmountable leads — are already operating in the AI economy. Early movers in AI development accumulate data, which improves models, which attracts users, which generates more data, which improves models further. The dynamic mirrors, with uncanny precision, the feedback loops that Diamond showed had determined the distribution of power across continents for thirteen thousand years: early access to domesticable plants and animals produced agricultural surpluses, which supported larger populations, which generated more complex political organizations, which developed more sophisticated military technologies, which conquered societies that lacked those cascading advantages.
The geographic distribution of AI capability follows the same contours as every previous technological advantage that Diamond mapped. AI development is concentrated in the United States and China — societies that possess the computing infrastructure, the research institutions, the capital markets, and the engineering talent that AI development requires. These proximate advantages are themselves the product of ultimate causes: the historical accumulation of wealth, institutional capacity, and educational infrastructure that geography set in motion millennia ago.
Diamond would recognize this pattern immediately. The claim that AI will democratize capability — that a developer in Lagos will have the same leverage as an engineer at Google — must be evaluated against the reality that technologies have democratic potential but their actual distribution follows existing patterns of advantage unless deliberate institutional efforts redirect them. The printing press had democratic potential. For two centuries, that potential was captured primarily by existing power structures — the Church, the state, the merchant class — before institutional reforms (public education, public libraries, press freedom laws) began to distribute its benefits more broadly. The gap between a technology's democratic potential and its democratic reality is bridged only by institutional construction, and that construction takes time, resources, and political will that the AI transition may not allow.
In Diamond's 2025 interview, his warning about technology creating a "winner-takes-all economy" captures the dynamic without naming its mechanism. The mechanism is the positive feedback loop operating without institutional counterweight. When the advantages of AI adoption compound — when each increment of capability generates the conditions for further capability — and when no institutional structure exists to distribute those compounding advantages broadly, the result is concentration. Concentration of capability. Concentration of wealth. Concentration of the power to determine how the technology is deployed and who bears the cost of its deployment.
The environmental transformation metaphor is not decorative. It carries specific analytical weight. When Diamond studied civilizational collapse, he found that the most dangerous environmental changes were the ones that operated on a different timescale than the society's decision-making processes. Climate shifts that took decades to manifest were invisible to leaders whose planning horizons were measured in years. Soil degradation that accumulated over centuries was imperceptible to farmers who measured fertility season by season.
The AI transition inverts this problem. The environmental change is faster than the institutional response, not slower. The technology advances in months. Regulatory frameworks take years. Educational reform takes decades. The mismatch is the opposite of Diamond's historical cases but produces the same structural vulnerability: a gap between the speed of environmental change and the speed of institutional adaptation, within which enormous damage can accumulate before the society recognizes what has happened.
This temporal mismatch is the defining structural feature of the AI transition. The Norse had generations to adapt and failed. The Maya had centuries. Contemporary societies facing the AI transformation may have years — and the institutions responsible for managing the transition are operating on timescales calibrated to a world that no longer exists.
Diamond's framework does not predict that the AI transition will produce civilizational collapse. It predicts, with the confidence that comes from dozens of historical case studies, that the outcome depends on the fifth factor: the society's response. The environmental transformation is given. The competitive pressures are given. The feedback loops are operating. What is not given — what remains, in Diamond's language, a choice — is whether the societies experiencing this transformation will recognize it as a regime shift rather than a tools upgrade, will study its implications with the honesty and rigor that survival demands, and will build the institutional structures that redirect its enormous power toward broadly distributed flourishing rather than concentrated extraction.
The Norse had kayak technology available to them. They could see it working. They chose not to adopt it because adoption would have required becoming something other than what they were.
The question for contemporary societies is whether they will make the same choice.
The Maya city of Copán, in what is now western Honduras, contains one of the most revealing archaeological records of civilizational collapse ever excavated. The record is written not in text but in stone — in the sequence of monuments that Maya kings built over the course of four centuries, each monument larger and more elaborate than the last, each consuming more labor and more resources, each erected while the agricultural base that sustained the city was visibly deteriorating.
The sequence tells a story of escalating commitment. The early monuments at Copán were modest: carved stelae commemorating military victories, dynastic successions, astronomical observations. As the city grew, the monuments grew with it. By the seventh and eighth centuries, the kings of Copán were constructing enormous temple complexes, ball courts, and sculptural programs that required the mobilization of thousands of workers for years at a time. The labor came from the agricultural population. Every worker building a temple was a worker not tending a field.
The irony, visible to us through the archaeological record but apparently invisible to the kings who made the decisions, was that the monuments were consuming the resources their construction was supposed to celebrate. The kings built to demonstrate abundance. The building depleted the abundance. And the kings, whose legitimacy depended on their capacity to build, could not stop building without undermining the political theology that justified their rule.
Diamond identified this pattern — elite commitment to practices that serve elite interests while undermining collective survival — as the single most consistent predictor of civilizational collapse across every case he studied. The mechanism is not corruption, greed, or stupidity, though all three may be present. The mechanism is structural: elites derive their status from specific practices, and when environmental conditions change in ways that make those practices maladaptive, the elites face a choice between abandoning the practices that define their status and maintaining those practices at collective cost. The choice is not, for the elite, between good and evil. It is between identity and survival. And identity, as the Norse demonstrated across five centuries, often wins.
The elite commitment problem is structural rather than personal. A Norse chief whose status derived from cattle herd size was not being irrational when he maintained his herd as the grasslands eroded. He was being responsive to the incentive structure in which he was embedded. His political authority, his social prestige, his marriage prospects, his capacity to command labor and loyalty — all of these depended on the size of his herd. The fact that maintaining the herd was destroying the common resource base was, from his individual perspective, someone else's problem. The structure produced the behavior. Changing the behavior would have required changing the structure — and the people with the power to change the structure were the same people whose interests it served.
This analysis applies to the AI transition with a precision that Diamond himself might find uncomfortable, given his reluctance to address the technology directly. The contemporary equivalents of the Norse chiefs and the Maya kings are the executives, administrators, and institutional leaders whose authority, compensation, and professional identity are tied to organizational structures that AI is rendering obsolete.
Consider the technology executive whose career has been built on the management of large engineering teams. Her authority derives from the number of people who report to her. Her compensation is benchmarked to organizational span. Her professional identity — the thing she puts on her LinkedIn profile, the story she tells at dinner parties, the source of her self-regard — is "I lead a team of two hundred engineers." When AI tools enable five engineers to produce the output that previously required fifty, the mathematics of her position collapses. If she embraces the change fully — if she reduces her team to the size that the new environment supports — she simultaneously reduces her authority, her compensation, and her professional identity. She becomes a leader of twenty rather than two hundred. The org chart that justified her vice presidency no longer justifies it.
Her incentive, therefore, is to resist the change, or to adopt it superficially while preserving the organizational structures that sustain her position. She will implement AI tools within existing team structures rather than restructuring the teams. She will capture the productivity gains as increased output rather than reduced headcount, which produces the work intensification that researchers documented at Berkeley but preserves her organizational footprint. She will frame resistance as prudence — "We need to be careful about moving too fast" — and caution as wisdom.
Diamond would recognize this behavior immediately. It is the behavior of the Norse chief maintaining his cattle herd. The executive is not wrong that caution has value. She is not wrong that moving too fast carries risks. She is wrong about the nature of the change. She is treating an environmental regime shift as a tools upgrade, and calibrating her response accordingly, because the alternative — recognizing the regime shift for what it is — would require her to dismantle the institutional structures that define her professional life.
The elite commitment problem operates at every level of institutional hierarchy. University administrators whose budgets depend on enrollment face an AI environment that may fundamentally alter the value proposition of a four-year degree. If AI tools enable a motivated twenty-year-old to acquire functional competence in months rather than years — competence sufficient for productive work, if not for deep expertise — then the enrollment model that sustains the contemporary university is threatened. The administrator's rational response, within the incentive structure she inhabits, is to defend the existing model: to argue that the four-year degree provides something AI cannot replicate (socialization, critical thinking, the nebulously defined "college experience"), and to resist reforms that would reduce the time and cost of credentialing.
Some of these arguments are genuine. The university does provide things AI cannot replicate. But the defense is structurally indistinguishable from the Norse insistence that cattle farming provided things seal-hunting could not replicate. Both claims are true. Both are also beside the point if the environment no longer supports the practice.
Government officials whose regulatory frameworks were designed for a pre-AI economy face the same structural bind. The regulations they administer, the agencies they staff, the enforcement mechanisms they have built — all of these are adapted to an economy in which the production of software, analysis, legal documents, and creative work requires human labor at scale. When AI reduces the human labor required for these activities, the regulatory infrastructure that was designed to govern that labor becomes misaligned. The official's incentive is to maintain the existing framework — to regulate AI within the categories that already exist rather than developing new categories that might render the existing apparatus irrelevant.
Diamond noted that in every collapsed civilization he studied, the elites were not the first to suffer. They were the last. The Norse chiefs ate while their tenants starved. The Maya kings maintained their households while the agricultural population declined. The Easter Island chiefs continued erecting moai while the commoners who transported them could no longer feed themselves. The elite's insulation from the consequences of maladaptive practices is what allows those practices to persist long enough to produce collapse. By the time the elite begins to feel the consequences, the damage is too advanced for the corrective measures that might have worked earlier.
The contemporary version of this insulation is economic. The technology executives who resist organizational restructuring are typically well-compensated enough to absorb years of suboptimal decisions without personal hardship. The university administrators whose institutions fail to adapt will retire with their pensions intact. The government officials whose regulatory frameworks prove inadequate will move to the private sector. The cost of maladaptive elite commitment is borne not by the elites but by the populations they lead — the engineers whose skills atrophy, the students whose educations prove inadequate, the workers whose jobs disappear while the institutions that were supposed to prepare them for the transition are busy defending the old model.
Diamond's analysis also identified a subtler form of elite commitment that is particularly relevant to the AI transition: the commitment to a specific theory of value. The Norse valued what Europeans valued — cattle, churches, metalwork, trade connections with the continent. These were not arbitrary preferences. They were the markers of civilizational membership, the things that distinguished the Norse from the Inuit, the signs that you belonged to Christendom rather than to the pagan Arctic. To adopt Inuit practices would have been to abandon not just a way of farming but a way of being — to become, in the eyes of the society you belonged to, something less than what you were.
The contemporary technology industry has its own theory of value, and it is being disrupted by AI as surely as Norse pastoralism was disrupted by the Little Ice Age. The theory holds that deep technical specialization — years of mastery in a specific programming language, framework, or system — is the primary source of professional value. The entire apparatus of technology hiring reflects this theory: the coding interview, the whiteboard exercise, the assessment of algorithmic fluency, the premium placed on specific technical credentials. A senior engineer's status derives not from judgment, taste, or the capacity to envision what should be built, but from demonstrated mastery of how things are built.
When AI tools can write competent code through natural language conversation, the theory of value that sustained the technology profession's status hierarchy becomes the equivalent of the Norse theory that cattle-herding was the mark of civilized life. Not wrong, exactly. The skills are real. The mastery is genuine. But the environment in which that mastery commanded a premium has changed, and the elite's commitment to the old theory of value prevents the adaptation that the new environment requires.
The hardest part of Diamond's diagnosis is that the elite commitment problem cannot be solved by replacing the elites. New elites, inheriting the same structural incentives, would make the same decisions. The Norse chiefs who maintained cattle herds were not uniquely stubborn. Any chief, embedded in the same status structure, would have done the same. The problem is not the person. It is the structure.
Solving the elite commitment problem requires changing the incentive structures that produce elite behavior — the metrics by which leaders are evaluated, the rewards they receive for different kinds of decisions, the institutional mechanisms that connect elite interests to collective welfare. Diamond found that the societies which survived environmental challenge were the ones that had developed such mechanisms: governance structures that forced elites to bear the costs of their decisions, cultural norms that valued adaptation over tradition, institutional frameworks that rewarded long-term investment over short-term extraction.
Building those mechanisms in the context of the AI transition is the work of institutional construction — the most difficult and most consequential work available to the current generation. It is work that elites, by definition, are unlikely to initiate, because it requires restructuring the systems that serve their interests. And it is work that cannot wait for the elites to come around, because the environmental change is proceeding on its own timeline regardless of whether the institutional response keeps pace.
The Maya kings built monuments until the food ran out. The Norse chiefs maintained herds until the grass was gone. In both cases, the practices that served the elite were the practices that destroyed the base. The question for the AI transition is whether contemporary elites — in technology, in education, in government — will recognize this pattern in time to change the structures that produce it, or whether they will maintain the organizational equivalents of cattle and churches until the cognitive base they depend on is depleted beyond recovery.
In the 1990s, residents of Montana's Bitterroot Valley — an area Diamond studied intensively for *Collapse* — were asked about changes to their local environment. Older residents remembered a time when the valley's rivers ran clear, its hillsides were covered in old-growth forest, and its summers were free of the smoke that now blanketed the landscape for weeks each year. But when Diamond interviewed them about when the degradation had occurred, they struggled to identify a specific year or even a specific decade. The change had been so gradual that no single year's conditions seemed dramatically worse than the previous year's. Each year's new normal was only marginally different from the last.
Diamond called this phenomenon "creeping normalcy" — the process by which slow, cumulative environmental degradation escapes notice because each increment of change is too small to trigger alarm. The frog in the pot of gradually heating water does not jump out, not because it cannot detect the change, but because the change at any given moment is below the threshold of recognition. The cumulative effect is lethal. The incremental experience is unremarkable.
Creeping normalcy is one of Diamond's most powerful explanatory concepts, and it illuminates a dimension of the AI transition that the prevailing discourse has almost entirely missed. But the application requires a counterintuitive inversion: in the AI transition, the normalcy is not creeping. It is sprinting. And the sprint, paradoxically, produces the same psychological outcome as the creep.
The conventional reading of Diamond's concept assumes that the danger lies in slowness — that if environmental change were faster, societies would notice it and respond. The AI transition tests this assumption and finds it wanting. The change in the cognitive economy between October 2025 and March 2026 was visible, dramatic, and widely discussed. A Google engineer's public confession that her team's year of work had been replicated in an hour. A trillion dollars of market value vanishing from software companies in weeks. Productivity multipliers that experienced engineers described as "not funny" and "terrifying." The change was not subtle. It was not below the threshold of recognition. It was, by any reasonable measure, the opposite of creeping.
And yet normalization occurred at a speed that rivaled the change itself.
The mechanism is different from the one Diamond documented in Montana, but the outcome is identical: the failure to sustain the level of alarm that institutional adaptation requires. In Diamond's cases, the alarm never materialized because the change was too slow to perceive. In the AI case, the alarm materialized, peaked, and dissipated within weeks because the cultural apparatus for processing novel information — the discourse, the media cycle, the attention economy — metabolized the shock and moved on before institutions could respond.
This is a form of creeping normalcy adapted to the speed of contemporary information processing. The increments of change are not small. They are enormous. But the intervals between public attention to those changes are so brief that each enormous change is absorbed, categorized, and filed before its implications can be processed. The discourse that erupted in late 2025 followed the familiar arc of technological disruption narratives: shock, polarization into optimists and pessimists, a brief period of genuine exploration, and then calcification into positions that stopped evolving regardless of subsequent evidence. By March 2026, the positions were set. The tools were either going to save us or destroy us. The middle ground — the recognition that both outcomes were possible and that the determining variable was the quality of institutional response — had been squeezed out by the dynamics of a media environment that rewards certainty over nuance.
Diamond documented an analogous phenomenon in his study of the Maya collapse. The Maya had sophisticated astronomical observation, elaborate record-keeping, and a priestly class whose explicit function was to track environmental changes and advise the king accordingly. They were not lacking in information. They were lacking in the institutional capacity to translate information into action. The astronomical observations continued. The records were kept. The environmental deterioration was documented. And the kings continued building monuments, because the information, however accurate, could not overcome the structural incentives that favored continuation over change.
The contemporary parallel is the research. The Berkeley study documenting work intensification was published in February 2026 — rigorous, well-designed, and clear in its findings: AI tools were increasing work hours, colonizing rest periods, fracturing attention, and producing measurable burnout. The study was read, discussed, and absorbed into the discourse. It changed nothing. Organizations continued deploying AI tools with the same intensity. Workers continued filling freed-up time with additional work. The information was available. The institutional response was absent.
Diamond's framework explains why. Information alone does not produce adaptation. Adaptation requires institutional mechanisms that translate information into changed behavior — governance structures, regulatory frameworks, cultural norms, incentive systems. Without these mechanisms, even accurate information about environmental deterioration is metabolized as data rather than as a signal for action. The Maya priests knew the harvests were declining. The Montana ranchers knew the rivers were running brown. The technology workers know that AI is restructuring their cognitive environment. Knowing does not produce doing. Institutional structure produces doing.
The speed of forgetting in the AI transition is compounded by a feature of contemporary information ecology that Diamond's historical cases did not possess: the algorithmic organization of attention. The attention economy, as currently structured, optimizes for engagement rather than for the sustained focus that institutional adaptation requires. Information about the AI transition is plentiful. Sustained attention to its implications is scarce. The algorithmic feed serves novelty — the next development, the next threshold, the next shock — rather than the patient, iterative processing of a single development's consequences across months and years.
Diamond observed that the societies which adapted successfully to environmental challenge were those in which attention to the challenge was sustained across decision-making timescales. The Tokugawa reforestation program required the Japanese government to maintain attention to forest management for two centuries. The Icelandic commons management systems required communities to sustain attention to grazing practices across generations. Sustained attention was not a personality trait. It was an institutional product — the result of governance structures that kept environmental management on the agenda regardless of what else was happening.
The AI transition has no analogous governance structure. No institutional mechanism exists to keep the implications of AI transformation on the agenda of the organizations, governments, and educational systems that need to adapt. Attention is allocated by market forces — which reward novelty — and by political cycles — which reward crisis response over long-term planning. The result is a pattern of punctuated attention: brief spikes of intense focus triggered by threshold events (the December 2025 capability leap, the February 2026 market crash), followed by rapid normalization and the redirection of attention to the next event.
This pattern is precisely the opposite of what successful adaptation requires. Successful adaptation, in Diamond's evidence, requires the sustained, boring, institutional attention that keeps the problem on the table after the crisis has passed. It requires the governance equivalent of the Tokugawa bureaucracy that monitored forest cover year after year, decade after decade, not because forests were in crisis but because the monitoring itself was the mechanism that prevented crisis.
There is a deeper dimension to creeping normalcy that Diamond explored in *Collapse*: the phenomenon of "landscape amnesia," in which each generation perceives its own environment as the baseline and loses awareness of how different conditions were before. A Montana resident born in the 1990s perceives smoky summers as normal because smoky summers are all she has known. The pre-smoke condition is not part of her experiential baseline. She cannot miss what she never experienced.
Applied to the AI transition, landscape amnesia operates on a compressed timescale. A software engineer who began her career in 2024, working with AI tools from the start, has no experiential baseline for what software development felt like without those tools. The specific cognitive experience of debugging — the hours of patient deduction, the accumulation of system understanding through repeated failure, the embodied knowledge that builds through friction — is not part of her professional formation. She does not know what she has not experienced. And she cannot miss the depth she has not developed, because depth, by its nature, is invisible to those who lack it. You do not know what you do not know.
This is the mechanism through which cognitive resource depletion operates. It is not dramatic. It does not announce itself. It proceeds through the accumulation of individual experiences that are, each one, marginally thinner than what came before. Each generation of practitioners trained in an AI-augmented environment has slightly less deep understanding than the generation before, not because AI made them stupider but because AI removed the friction through which deep understanding forms. And each generation, perceiving its own level of understanding as normal, has no basis for recognizing the depletion.
The compound effect, across generations, follows the pattern Diamond documented in every case of resource depletion that produced collapse. The Norse Greenlanders did not wake up one morning to find their grasslands gone. They woke up each morning to grasslands that were marginally thinner than the year before, and each year's grasslands were the baseline from which the next year's loss was measured. By the time the depletion was severe enough to penetrate the defense of creeping normalcy, the recovery threshold had been passed.
Diamond's concept of creeping normalcy was developed to explain how societies fail to perceive slow changes. The AI transition forces a revision: societies can also fail to sustain perception of fast changes. The mechanism is different — normalization through information saturation rather than through incremental imperceptibility — but the outcome is the same. The society knows, in the abstract, that conditions have changed. It cannot maintain the sustained institutional attention that would translate that knowledge into adaptive behavior.
The implications are immediate and practical. If the AI transition is an environmental regime shift — and the evidence presented in the previous chapters argues that it is — then the institutional response must be designed to resist normalization. This means governance structures that keep AI adaptation on the institutional agenda regardless of the media cycle. It means measurement systems that track cognitive resource depletion across professional populations, the way environmental monitoring tracks soil fertility or forest cover. It means educational frameworks that give students experiential baselines for pre-AI cognitive work, so that landscape amnesia does not erase the understanding of what depth feels like.
Diamond found that the societies which resisted creeping normalcy were the ones that had developed what might be called institutional memory — mechanisms for preserving awareness of past conditions across the timescales relevant to environmental change. The Tokugawa forest management system included detailed records of forest cover, maintained across centuries, that made depletion visible even when it was too gradual for any individual to perceive. The records were the institutional eye that saw what individual eyes could not.
The AI transition requires an equivalent institutional eye — a mechanism for tracking what is being gained and what is being lost across the timescales relevant to cognitive development and professional formation. Without such a mechanism, the depletion will proceed beneath the threshold of collective perception, and each generation will inherit a slightly thinner version of the cognitive resources that civilization depends on, and none of them will know it.
The frog does not jump. Not because the water is heating slowly, and not because the frog is stupid. The frog does not jump because each moment's temperature is the new normal, and the distance from normal to lethal is traveled one imperceptible increment at a time. The speed of the heating changes the sensation but not the outcome. Whether the pot heats over an hour or over a century, the frog that lacks the institutional capacity to recognize cumulative change ends up the same way.
Diamond's contribution is the insistence that this outcome is chosen, not fated. Societies with institutional mechanisms for resisting normalcy — for maintaining attention across the timescales that matter — survive environmental transformations that destroy their neighbors. The construction of those mechanisms is the most important work available to the current generation facing the AI transition. It is also, given the structural incentives that favor normalization, the work least likely to be done without deliberate, sustained, and politically costly effort.
The next chapter examines the resource that may be most vulnerable to the combined effects of elite commitment, rapid environmental change, and the failure of sustained institutional attention: human expertise itself.
Easter Island's palm forests once covered nearly the entire island. The trees were enormous — a species related to the Chilean wine palm, capable of reaching heights of over eighty feet, with trunks wide enough to hollow into seagoing canoes. The forests provided timber for houses, bark for rope and cloth, nuts for food, and — most critically — the logs used as rollers to transport the massive stone statues, the moai, from the quarries where they were carved to the coastal platforms where they stood. The forest was the island's primary capital asset, the resource that made everything else possible.
The Easter Islanders cut the forest down. Not in a single act of recklessness but tree by tree, generation by generation, each tree felled for a purpose that was individually rational and collectively catastrophic. A tree for a canoe. A tree for a house. Trees for the rollers that moved the moai that sustained the chiefs' political authority. Each tree removed reduced the forest's capacity to regenerate. Each reduction was imperceptible against the baseline of the forest that remained. And at some point — a point that no individual could have identified at the time — the rate of cutting exceeded the rate of regrowth, and the forest entered a terminal decline from which it could not recover.
By the time the last tree was felled, the islanders had lost not just the timber but everything the timber had made possible. No more canoes, which meant no more deep-sea fishing, which meant the loss of the primary protein source. No more rollers for the moai, which meant the collapse of the political system that the moai sustained. No more roots holding the topsoil, which meant erosion, which meant reduced crop yields, which meant famine. The cascade was total. The initial resource — trees — had been the keystone of an ecological and social system that collapsed entirely when the keystone was removed.
Diamond used Easter Island as his most vivid illustration of a principle that operated across every collapsed civilization he studied: societies can deplete the resources they depend on, gradually and invisibly, until the depletion crosses a threshold beyond which recovery is impossible. The depletion follows a characteristic pattern. It is driven by individually rational decisions. It is invisible at any given moment because each increment of loss is small relative to the remaining stock. It is self-reinforcing because each increment of depletion reduces the resource's capacity to regenerate. And the threshold at which depletion becomes irreversible is impossible to identify in advance, because the threshold is a property of the system's dynamics rather than a fixed quantity.
The application to the AI transition requires identifying the resource that is being depleted. The answer is not jobs, though jobs are being displaced. It is not income, though income distribution is being restructured. The resource being depleted is human expertise itself — the deep, embodied, friction-built understanding that enables practitioners to exercise the judgment that AI cannot replicate and that every serious analysis of the AI transition identifies as the quality that will distinguish human contribution from machine output.
The claim requires careful construction, because it is easy to dismiss as nostalgia — the lament of an older generation for the difficulties that shaped it. This is not that claim. The claim is structural, not sentimental. It follows Diamond's analytical method: identify the resource, trace the mechanism of depletion, assess the rate of loss relative to the rate of replenishment, and evaluate whether institutional structures exist to prevent the depletion from crossing the threshold of irreversibility.
The resource is expertise, but expertise of a specific kind. Not the knowledge of facts, which AI possesses in greater breadth and depth than any human. Not the knowledge of procedures, which AI can execute with greater reliability than most practitioners. The expertise being depleted is what cognitive scientists call tacit knowledge — the understanding that cannot be articulated, that lives in the body and the reflexes and the pattern-recognition apparatus that decades of practice deposit in a practitioner's nervous system.
A senior software architect who has spent twenty years building and debugging complex systems possesses tacit knowledge that is invisible even to herself. She cannot enumerate the heuristics she uses to evaluate a system design. She cannot explain, in propositional terms, why one architecture "feels wrong" and another "feels right." The knowledge is embodied — encoded in neural pathways that were laid down through thousands of hours of the specific cognitive friction that debugging, troubleshooting, and recovering from failure produce.
This tacit knowledge is the product of friction. It cannot be acquired through instruction, through reading documentation, or through reviewing AI-generated output. It is deposited, layer by thin layer, through the experience of confronting problems that resist easy solution, of forming hypotheses that prove wrong, of building mental models that are corrected and refined through repeated contact with reality. The process is slow, uncomfortable, and inefficient. It is also, as far as cognitive science can determine, irreplaceable. No shortcut to tacit knowledge has ever been discovered, because the knowledge is not a content that can be transmitted but a capacity that must be grown.
AI tools, by removing the friction through which tacit knowledge forms, interrupt the deposition process. A junior developer who uses Claude Code to generate working software from natural language descriptions is producing output without undergoing the experience that would build the judgment to evaluate that output. The software works. The developer has not understood why it works, or how it could fail, or what assumptions are embedded in its architecture. The output is present. The expertise that would have formed through the struggle to produce it is absent.
Each individual instance of this substitution is trivial. One developer, on one project, using AI to skip the debugging that would have deposited one thin layer of tacit knowledge — this is not a crisis. It is not even visible. It is the equivalent of one Easter Islander felling one tree. The tree is gone. The forest looks the same.
But the instances are not isolated. They are systemic, simultaneous, and compounding. Across the technology industry, across every knowledge profession that AI tools are penetrating, the substitution of AI output for human struggle is occurring at a scale and speed that has no historical precedent. Millions of practitioners, simultaneously, are receiving outputs they did not earn, skipping the friction they did not enjoy, and accumulating credentials without accumulating the tacit knowledge those credentials are supposed to represent.
The rate of depletion is determined by the rate of AI adoption. The rate of replenishment is determined by the rate at which new practitioners undergo the friction-rich experiences that build tacit knowledge. If adoption proceeds faster than replenishment — if more practitioners are skipping the struggle than are undergoing it — the stock of tacit knowledge in the profession declines. And because tacit knowledge is the foundation on which judgment, mentorship, and institutional wisdom rest, the decline cascades through every function that depends on those capacities.
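The rate argument above is, at bottom, a stock-and-flow model, and it can be sketched in a few lines. Every number here is an illustrative assumption (an eight percent annual loss, a fixed training inflow), not an empirical estimate:

```python
# Toy stock-and-flow model of a profession's tacit knowledge.
# All parameter values are illustrative assumptions, not measurements.

def simulate_stock(stock=100.0, depletion_rate=0.08, replenishment=5.0, years=20):
    """Each year a fraction of the existing stock is lost (practitioners
    retire, unused skills atrophy) while friction-rich training adds a
    fixed inflow of new tacit knowledge. Returns the year-by-year stock."""
    history = [stock]
    for _ in range(years):
        stock = stock * (1.0 - depletion_rate) + replenishment
        history.append(stock)
    return history

# When annual losses (8% of 100 = 8 units) exceed the inflow (5 units),
# the stock declines smoothly toward a lower equilibrium (5 / 0.08 = 62.5).
trajectory = simulate_stock()
```

The point of the sketch is Diamond's, not the arithmetic's: the year-to-year decline is small and unremarkable, which is precisely what makes it easy to ignore until the cumulative loss is large.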
Diamond's analysis of resource depletion identified a feature that makes the dynamic particularly dangerous: the depletion is masked by the continuing availability of the resource's most visible products. The Easter Islanders continued to see moai standing on their platforms even as the forest that had made their transportation possible was disappearing. The products of the resource outlasted the resource itself, creating the illusion of abundance in the presence of decline.
The contemporary equivalent is the continuing availability of competent output. AI-augmented teams produce working software, coherent analyses, functional products. The output is visible, measurable, and often impressive. The tacit knowledge that would have formed through the struggle to produce that output is invisible, unmeasurable, and absent. Organizations see the output and conclude that their capability is intact. They do not see the expertise that is not forming, the judgment that is not developing, the mentorship capacity that is not being built — because these are absences, and absences are, by definition, invisible.
The mentorship pipeline is particularly vulnerable. Tacit knowledge is transmitted primarily through apprenticeship — the close, sustained interaction between an experienced practitioner and a developing one, in which the experienced practitioner's judgment is made visible through the collaborative navigation of difficult problems. The senior architect who sits with a junior developer for an hour, working through a design decision, is not merely transferring information. She is demonstrating, in real time, the application of tacit knowledge to a specific problem — the weighing of considerations that cannot be enumerated, the intuitive rejection of solutions that "feel wrong," the pattern-recognition that only decades of practice produce.
When AI tools reduce the need for this interaction — when the junior developer can get a working solution from Claude Code faster and more efficiently than from a senior colleague — the mentorship interaction becomes harder to justify. The junior developer's time is "better spent" producing output. The senior architect's time is "better spent" on strategic work. The interaction that would have transmitted tacit knowledge from one generation to the next is optimized away, and the loss is invisible because the output metrics improve.
Each generation of practitioners trained in an AI-augmented environment has more powerful tools and less deep understanding than the generation before. The tools compensate for the missing understanding — producing outputs that meet functional requirements regardless of the practitioner's comprehension. But the compensation is fragile. It depends on the tools continuing to work, continuing to be available, and continuing to be adequate to the problems at hand. When the problems exceed the tools' capability — when the situation is novel, when the context is ambiguous, when the standard solution does not apply — the practitioner's tacit knowledge is the only resource available. And if that resource has been depleted through generations of friction-avoidance, it will not be there when it is needed.
Diamond documented a specific phase of resource depletion that he called the "threshold effect" — the point at which cumulative depletion produces a qualitative shift in the system's behavior. Before the threshold, the system appears to function normally despite the ongoing depletion. After the threshold, the system's behavior changes rapidly and often irreversibly. The shift from functioning to failure is not gradual. It is sudden, because the system has been operating on a diminishing margin that collapses when the margin reaches zero.
The threshold effect in cognitive resource depletion would manifest as a sudden decline in the profession's capacity to handle novel problems — problems that require the tacit knowledge, the embodied judgment, the deep architectural intuition that only friction-rich experience can build. The decline would be invisible in routine operations, which AI handles competently. It would become visible only when the routine breaks — when the unprecedented situation arises, when the system fails in a way the AI was not trained to anticipate, when judgment is required and no one in the room has the depth of understanding to exercise it.
Diamond would recognize this pattern. It is the pattern of every resource depletion that produced civilizational collapse. The resource appears abundant until the moment it is gone. The depletion is invisible until the threshold is crossed. And the threshold, by its nature, cannot be identified until it has been passed.
The question is not whether cognitive resource depletion is occurring. The mechanism is clear, the evidence is accumulating, and the structural incentives favor continuation of the depletion. The question is whether institutional structures can be built that maintain the rate of replenishment — that ensure enough practitioners undergo enough friction-rich experience to sustain the stock of tacit knowledge — even as AI tools make friction-avoidance easier, faster, and more economically rational.
The Easter Islanders had no institutional structure for forest management. The Tokugawa Japanese did. The difference in outcome was total. The presence or absence of institutional structure to manage the depletion of cognitive resources in the AI era will produce a comparably consequential difference. And the structure, as Diamond insisted across every page of *Collapse*, does not build itself. It is built by societies that recognize what they are losing, that measure the loss before it becomes irreversible, and that invest in replenishment at the cost of short-term efficiency.
The cost is real. Maintaining friction-rich training in an environment where frictionless alternatives are available requires deliberate institutional effort. It means accepting slower output from developing practitioners. It means protecting mentorship time against the pressure to redirect it toward production. It means measuring something harder to measure than output — the development of judgment, the formation of tacit knowledge, the growth of the capacity that will matter most when the routine breaks and the tools fail.
Diamond's evidence suggests that the societies which pay this cost survive, and the societies which do not pay it join the Easter Islanders in the archaeological record — a cautionary exhibit for the civilizations that follow.
The last tree on Easter Island was not cut down by a person who wanted to destroy the forest. It was cut down by a person who needed the wood. Perhaps for a canoe to fish from, because the near-shore fisheries were depleted and deep-water fishing required a vessel. Perhaps for the rollers that moved a moai, because the chief demanded it and refusal meant social exile or worse. Perhaps for fuel, because the nights were cold and the alternatives — burning dried grass, burning bones — were insufficient.
Whatever the specific purpose, the person who felled the last tree was acting rationally. The wood was needed. The tree was there. The individual benefit of cutting it was immediate and tangible. The collective cost — the permanent loss of the island's capacity to regenerate its forest, and with it the cascading collapse of the fishing economy, the agricultural system, and the political order — was distributed across every islander, present and future, and was therefore no single person's responsibility.
Diamond recognized this as a structural feature of collapse rather than a moral failing. The Easter Islanders were not uniquely shortsighted. They were embedded in a system that lacked the institutional mechanisms to align individual incentives with collective welfare. In the language of game theory, they were trapped in a prisoner's dilemma — a situation in which the rational strategy for each individual player produces a collectively irrational outcome.
The prisoner's dilemma is the formal structure of tragedy. Two prisoners, questioned separately, each face the same choice: cooperate with the other prisoner by staying silent, or defect by testifying against them. If both cooperate, both receive light sentences. If both defect, both receive heavy sentences. If one cooperates and the other defects, the defector goes free and the cooperator receives the heaviest sentence of all. The rational strategy, for each prisoner independently, is to defect — regardless of what the other does. The result is that both prisoners defect, and both receive worse outcomes than they would have achieved through mutual cooperation.
The structure scales. When Diamond studied societies that collapsed through resource depletion, he found the prisoner's dilemma operating at every level — between individuals competing for resources, between clans competing for status, between the present generation and future generations who had no voice in decisions that would determine their welfare. The structural logic was identical in every case. Individual rationality produced collective catastrophe. And the escape from the dilemma required institutional intervention — rules, norms, governance structures that changed the incentive calculus so that cooperation became the individually rational strategy.
The AI transition is generating prisoner's dilemmas at a scale and speed that Diamond's historical cases, severe as they were, did not approach. The dilemmas operate at every level of social organization, and in each case, the structure is the same: the individually rational decision produces a collectively harmful outcome, and no individual actor has the incentive or the capacity to change the dynamic unilaterally.
At the level of the firm, the dilemma is stark. A company that replaces skilled workers with AI tools reduces costs, increases output, and improves its competitive position. A company that retains its workforce, invests in retraining, and accepts lower short-term margins acts in the collective interest but bears a competitive disadvantage relative to the firm that cuts. In a competitive market, the firm that defects — that takes the cost savings and accepts the externalities — outperforms the firm that cooperates. And because each firm faces the same incentive structure, the rational strategy for each is to defect, producing a labor market outcome that harms workers, depletes expertise, and concentrates gains among capital owners rather than distributing them across the workforce.
The SaaS market correction that erased a trillion dollars of value in early 2026 was, in Diamond's framework, the visible expression of this dilemma playing out in capital markets. Investors, each acting on rational assessments of individual company valuations, withdrew capital from software companies whose code-based business models were vulnerable to AI commodification. Each investor's decision was rational. The collective result was a market event that destabilized an entire sector, displaced thousands of workers, and accelerated the very dynamic — the replacement of human capability with AI capability — that had triggered the reassessment in the first place.
At the level of the individual practitioner, the dilemma takes a different form but follows the same logic. A developer who uses AI tools to skip the friction of debugging produces output faster and meets deadlines more reliably than a developer who insists on understanding every line of code. The first developer is rewarded. The second is penalized — or at least, perceives herself as falling behind. The rational strategy, for each individual developer, is to use the tools to maximize output. The collective result is the expertise depletion described in the previous chapter: a profession-wide reduction in tacit knowledge that harms everyone, including the developers who made the individually rational choice to skip the struggle.
At the level of the educational institution, the dilemma is equally clear. A university that integrates AI aggressively into its curriculum — allowing students to use AI tools for research, writing, and problem-solving — produces graduates who are comfortable with the tools the market demands. A university that restricts AI use to preserve the friction-rich learning experiences through which deep understanding forms produces graduates who are better thinkers but less immediately employable. Parents and students, choosing between institutions, select the one that maximizes market readiness. The university that cooperates — that preserves the conditions for deep learning — loses enrollment to the university that defects.
At the level of nations, the dilemma assumes its most consequential form. A nation that regulates AI deployment to protect workers, preserve expertise, and ensure equitable distribution of gains bears costs that nations with lighter regulation do not. The regulatory burden slows innovation, increases compliance costs, and diverts resources from development to governance. Nations that defect — that minimize regulation to maximize competitive advantage — attract investment, talent, and the compounding benefits of the positive feedback loops that Diamond identified as the mechanism of divergence between societies. The nation that cooperates falls behind. The nation that defects accelerates. And the collective result — a global race to the bottom in regulatory protection — harms the populations that regulation was designed to serve.
Diamond found that the prisoner's dilemma in resource management was escaped only through institutional mechanisms that changed the payoff structure. Elinor Ostrom, whose work on commons governance complemented Diamond's, documented the specific institutional features that enabled communities to manage shared resources without depleting them: clearly defined boundaries, rules adapted to local conditions, collective decision-making processes, monitoring systems, graduated sanctions for violations, and mechanisms for conflict resolution. These features were not spontaneous. They were constructed, deliberately and often at significant cost, by communities that recognized the dilemma and chose to invest in the institutional infrastructure required to escape it.
The critical insight from both Diamond's and Ostrom's work is that the prisoner's dilemma is not escaped through moral exhortation. Telling companies to "do the right thing" does not change the incentive structure that rewards defection. Telling practitioners to "maintain their skills" does not change the market that penalizes slower output. Telling nations to "regulate responsibly" does not change the competitive dynamic that rewards lighter regulation. The escape requires structural intervention — changes to the rules of the game that make cooperation the individually rational strategy.
Diamond identified several categories of structural intervention that had proven effective across his historical cases. Governance structures that forced decision-makers to bear the costs of their decisions — so that the chief who overgrazed the commons suffered the consequences personally, rather than externalizing them to the community. Monitoring systems that made resource depletion visible before it crossed irreversible thresholds — the Tokugawa forest inventories that tracked tree cover across decades and centuries. Cultural norms that stigmatized defection and rewarded cooperation — the social pressure in Icelandic communities that prevented individual farmers from exceeding their grazing allotments.
The AI transition requires analogous structures, and it requires them urgently, because the speed of the transition means that the damage from unstructured defection accumulates faster than in any of Diamond's historical cases. The specific structures needed include industry-wide standards for expertise maintenance, regulatory frameworks that account for the externalities of workforce displacement, educational requirements that preserve friction-rich learning alongside AI-augmented efficiency, and international coordination mechanisms that prevent the regulatory race to the bottom.
None of these structures exist at adequate scale. Some exist in embryonic form — the EU AI Act, various national executive orders, scattered corporate governance frameworks. But the gap between the institutional infrastructure that exists and the institutional infrastructure that the prisoner's dilemma demands is vast, and it is widening as the technology advances faster than the institutions adapt.
Diamond would note that the Easter Islanders had no institutional mechanism for forest management. The trees were a common resource, accessible to anyone, and no governance structure existed to limit cutting or ensure regeneration. The result was predictable and total. The Tokugawa Japanese, facing an identical resource management challenge, built institutional mechanisms that aligned individual incentives with collective welfare. The result was also predictable and total — but in the opposite direction. Two centuries of deliberate institutional management produced a forest recovery that sustained the civilization's resource base for generations.
The difference between the two outcomes was not the severity of the challenge. Both societies faced deforestation severe enough to threaten their viability. The difference was institutional: the presence or absence of structures that transformed the prisoner's dilemma from a trap into a solvable coordination problem.
The AI transition is the deforestation. The prisoner's dilemma is the structural logic driving the depletion. The question — Diamond's question, applied to a new domain — is whether contemporary societies will build the institutional mechanisms that the Tokugawa built, or whether they will repeat the Easter Island outcome at civilizational scale, each actor rational, each decision defensible, the collective result catastrophic.
In the early eighteenth century, the Japanese government conducted a forest inventory. The results were alarming. Centuries of timber extraction for construction, fuel, and shipbuilding had stripped the hillsides across the major islands. Erosion was accelerating. Downstream flooding was worsening. The lumber that sustained the construction industry — including the ornate wooden temples and castles that defined Japanese architectural identity — was running out.
The Tokugawa shogunate's response was one of the most remarkable acts of institutional adaptation in Diamond's entire archive. Beginning around 1700 and intensifying across the following decades, the government implemented a comprehensive system of forest management that included detailed inventories of every major forest, strict regulations on cutting, the designation of protected watersheds, the development of plantation forestry to supplement natural regeneration, and the creation of a bureaucratic apparatus dedicated to monitoring and enforcing compliance.
The program worked. Over the course of two centuries, Japan's forests recovered. By the late nineteenth century, when the Meiji Restoration opened Japan to the world, the country possessed a forest resource base that sustained its modernization — a resource base that would not have existed without the Tokugawa intervention two centuries earlier.
Diamond used the Tokugawa case as his primary exhibit of successful societal adaptation to environmental crisis, and the case repays careful analysis because it illustrates, with unusual clarity, the specific features that distinguish societies that survive from societies that collapse. The features are not mysterious. They are identifiable, replicable, and — this is the uncomfortable part — expensive.
The first feature is recognition: the society accurately perceives that environmental conditions have changed and that existing practices are no longer viable. This sounds obvious. It is not. Diamond's collapsed civilizations were not staffed by fools. The Norse Greenlanders could see the Inuit thriving. The Maya had sophisticated astronomical and agricultural knowledge. The Easter Islanders watched their forests shrink year by year. In every case, the information was available. What was lacking was the institutional capacity to translate information into the recognition that fundamental change was required — as opposed to incremental adjustment within the existing framework.
The Tokugawa government recognized that Japan was not facing a temporary timber shortage but a structural resource crisis that would worsen without intervention. This recognition required the government to override the short-term interests of the timber industry, the construction sector, and the regional lords whose estates depended on unrestricted logging. The recognition was costly. It was contested. It required political will that most governments, most of the time, do not possess.
The second feature is willingness to abandon identity-defining practices. This is the hardest requirement, and it is the one at which most societies fail. The Norse could not abandon cattle because cattle were who they were. The Maya could not abandon monument-building because monuments were the foundation of political legitimacy. The Easter Islanders could not stop transporting moai because the moai sustained the chiefs' authority.
The Tokugawa government could — and did — restructure an economy that had been organized around unrestricted timber extraction. The restructuring required abandoning the assumption that forests were an inexhaustible commons and replacing it with the assumption that forests were a managed resource requiring continuous investment. This was not merely a policy change. It was a conceptual transformation — a shift in how the society understood its relationship to the resource it depended on.
The Tikopia islanders provide an even more dramatic example. Tikopia is a tiny island in the southwestern Pacific — roughly five square kilometers — that has been continuously inhabited for some three thousand years. The archaeological record shows that the Tikopians, shortly after initial settlement, introduced pigs to the island. Pigs were a prestige food source throughout Polynesia, central to feasting, gift-exchange, and social hierarchy. But on Tikopia, the pigs competed with humans for the island's limited food resources and damaged the root crops that sustained the population.
Sometime around 1600, the Tikopians made a collective decision that Diamond called "one of the most extreme decisions any Polynesian society ever made." They killed every pig on the island. They eliminated a food source that was central to their cultural identity, their social practices, and their political economy — because maintaining it would have compromised the island's carrying capacity and threatened the population's survival.
The decision required the Tikopians to value long-term collective survival over short-term cultural continuity. It required them to recognize that a practice that defined who they were — pig-keeping Polynesians — was incompatible with the environmental constraints they faced. And it required the institutional coordination to implement the decision collectively, because individual families that continued keeping pigs would have gained a short-term advantage at collective cost.
The third feature is investment in long-term adaptation at the cost of short-term comfort. The Tokugawa reforestation program took two centuries to mature. The trees planted in 1720 did not produce usable timber until the mid-nineteenth century. The government that initiated the program did not benefit from it. The benefit accrued to generations that had not yet been born. The investment required the current generation to bear costs — reduced timber availability, increased regulation, the economic disruption of transitioning from extraction to management — for the benefit of future generations that could not reciprocate.
This is the temporal structure of all successful adaptation to environmental crisis: the costs are borne now, the benefits are realized later, and the connection between the two is maintained only by institutional structures that outlast individual decision-makers. Without such structures, the temptation to defect — to take the short-term gains and let the future bear the costs — is overwhelming. Diamond's collapsed civilizations succumbed to this temptation. His surviving civilizations resisted it, through institutional mechanisms that kept long-term welfare on the decision-making agenda.
What would these three features — recognition, willingness, and long-term investment — look like in the context of the AI transition?
Recognition, in this context, means accurately perceiving that AI represents an environmental regime shift rather than a tools upgrade. The distinction, developed earlier in this book, is between a change that alters the speed of existing practices and a change that alters which practices are viable. The evidence overwhelmingly supports the latter interpretation. The practices that defined professional success in the pre-AI cognitive economy — deep technical specialization, execution speed, the capacity to perform skilled labor that required years of training — are being commoditized. The practices that define success in the post-AI economy — judgment, taste, the capacity to direct capability rather than perform it, multi-disciplinary integration — are different in kind, not merely in degree.
Most organizations have not achieved this recognition. They are integrating AI tools into existing workflows — the equivalent of the Norse putting better shoes on their cattle. The integration produces measurable gains within the existing framework, which reinforces the belief that the framework itself is sound. The gains are real. The reinforcement is misleading. Diamond documented this dynamic across every collapsing civilization: the initial phases of environmental change often improve short-term performance within existing practices, because the society is drawing down its resource base faster than before. The Norse cattle herds may have grown in the early phases of overgrazing, as the concentrated grazing produced lush short-term growth. The improvement masked the depletion. The improvement was the depletion.
Willingness, in this context, means abandoning the professional identities, organizational structures, and educational models that were adaptive in the pre-AI environment. For technology organizations, this means restructuring away from large teams of specialized executors toward smaller groups of integrative thinkers supported by AI tools — a restructuring that directly threatens the authority and compensation of the managers whose careers were built on leading large teams. For educational institutions, this means redesigning curricula around the development of judgment, questioning, and integrative thinking rather than the transmission of procedural knowledge that AI can provide on demand — a redesign that threatens the expertise and employment of faculty whose value proposition is the possession of procedural knowledge. For individuals, it means accepting that the skills that defined their professional identity may no longer be the skills the environment rewards, and investing in the development of new capabilities under conditions of profound uncertainty.
Each of these acts of willingness is expensive. Each is contested. Each requires abandoning something that was genuinely valuable — that was, in its time, the right thing to have built — in service of adaptation to conditions that were not chosen. This is what makes the Tikopian decision so extraordinary and so relevant. The Tikopians did not stop keeping pigs because pigs were bad. Pigs were good. Pigs were culturally central, nutritionally valuable, and socially important. The Tikopians stopped keeping pigs because the environment could no longer support the practice, and they valued the island's future more than the practice's continuation.
Long-term investment, in this context, means building the institutional structures that protect cognitive resources — mentorship programs, friction-rich training experiences, measurement systems for tacit knowledge development — even though these structures reduce short-term efficiency and produce no measurable return within a quarterly reporting cycle. It means investing in the regulatory and governance frameworks that align individual incentives with collective welfare, even though these frameworks impose costs on the organizations subject to them. It means funding the educational reforms that develop judgment and questioning capacity, even though these reforms produce graduates whose value is harder to measure than the value of graduates trained in specific technical skills.
The Tokugawa government invested in forest management for two centuries before the investment matured. The contemporary equivalent is not a two-century program — the AI transition is moving too fast for that timescale to be relevant. But the structural logic is identical: invest now in the conditions that will sustain capability later, bearing costs that the current generation would prefer to avoid, because the alternative is the depletion of the resource base on which everything else depends.
Diamond identified one additional feature of surviving societies that is particularly relevant: they learned from their neighbors. The Tokugawa government studied Chinese forestry practices. The Icelanders adapted techniques from Norwegian commons management. The societies that survived were the ones that treated other societies' experiences — including other societies' failures — as information rather than as judgment.
The AI transition offers an unprecedented opportunity for this kind of cross-societal learning, because the transition is occurring simultaneously across every society on earth. The failures and successes of early adapters — the organizations that restructured successfully, the educational institutions that redesigned their curricula, the nations that built effective governance frameworks — are visible in real time. The information is available. The question, as always, is whether the institutions responsible for adaptation possess the capacity to translate information into action.
Diamond found that they sometimes do and sometimes do not. The record is not encouraging, but neither is it hopeless. The Tokugawa existed. The Tikopians existed. The Icelanders existed. Successful adaptation to severe environmental challenge is not a theoretical possibility. It is a documented historical reality. The features that produced it are identifiable and replicable.
The question is not whether adaptation is possible. It is whether the societies currently facing the AI transition will choose to invest in it — recognition, willingness, long-term commitment — before the window for adaptation closes.
Diamond titled his most important book *Collapse: How Societies Choose to Fail or Succeed*. The word *choose* carried more analytical weight than the entire rest of the title. It was not a metaphor. It was not an encouragement. It was a finding — the conclusion of decades of comparative research across continents and millennia, supported by archaeological evidence, environmental data, and the historical record of societies that faced identical challenges and produced opposite outcomes.
Societies choose. The choice is not made once, in a dramatic moment of collective decision, by a leader standing before a crowd and declaring a new direction. It is made continuously, in the accumulation of daily practices, institutional investments, resource allocations, and cultural commitments that either adapt to changing conditions or maintain traditions that the environment no longer supports. The choice is distributed across millions of individual decisions, each one small enough to feel inconsequential, each one contributing to a collective trajectory that becomes visible only in retrospect.
The Norse Greenlanders did not hold a council and vote to starve rather than learn seal-hunting from the Inuit. They made a thousand small decisions, each individually unremarkable, that accumulated into a trajectory of collapse. The decision to repair the church rather than build a kayak. The decision to maintain the cattle herd through one more winter rather than experiment with marine resources. The decision to trade with Norway for iron rather than develop local alternatives. Each decision was defensible in isolation. Each decision reflected the values, the identity, and the institutional incentives of the society that made it. And the accumulation of defensible decisions produced an indefensible outcome.
The concept of distributed choice — the idea that civilizational outcomes are determined not by singular decisions but by the accumulation of daily practices — is Diamond's most important contribution to the analysis of the AI transition, because it relocates agency from the dramatic to the mundane. The question is not whether some government will pass a transformative AI regulation, or whether some corporation will announce a revolutionary restructuring, or whether some educational institution will unveil a new curriculum that solves the problem. These things may happen. They may help. But they are not where the outcome is determined.
The outcome is determined in the accumulation of Tuesday afternoons.
A junior developer uses Claude Code to generate a function she does not understand. She ships it. It works. She moves on to the next task. One Tuesday afternoon. One thin layer of tacit knowledge that did not form. Individually: nothing. Collectively, across millions of developers making the same choice on millions of Tuesday afternoons: a measurable depletion of the profession's knowledge base.
A manager reviews a team's output and notes that productivity has increased since AI adoption. She reports the gain to her leadership. No one asks whether the gain came at the cost of the team's developmental trajectory — whether the junior members are learning or merely producing. The question is not asked because the metrics do not measure it. One Tuesday afternoon. One institutional decision to optimize for output over development. Individually: nothing. Collectively: a systematic erosion of the mentorship function that produces the next generation of senior practitioners.
A parent watches her twelve-year-old use AI to complete a homework assignment in ten minutes that would have taken an hour of struggle. The child is happy. The parent is relieved. The assignment is complete. No one examines whether the hour of struggle that was skipped — the frustration, the false starts, the eventual satisfaction of having worked through a problem independently — was the actual point of the assignment. One Tuesday afternoon. One foregone cognitive experience. Individually: nothing. Collectively: a generation of children trained to extract answers rather than develop the capacity to find them.
A university professor discovers that half her students have used AI to generate their research papers. She is faced with a choice: redesign her assessment methods to test for understanding rather than output, which requires months of work and institutional support she may not receive, or adjust her standards to the new reality, accepting AI-generated work that meets formal requirements regardless of whether the student understands the material. One Tuesday afternoon. One institutional adaptation that preserves the form of education while hollowing out its substance.
A CEO reviews the quarterly numbers and notes that AI-driven efficiency gains have reduced the need for her middle management layer. She can restructure — reducing headcount, increasing margins, demonstrating to investors that the company is "AI-forward." Or she can redeploy — retaining the people, investing in retraining, accepting lower short-term margins in exchange for a workforce that is developing the judgment to direct AI tools wisely. One Tuesday afternoon. One decision that shapes the trajectory of hundreds of careers.
In each case, the individual decision is rational, defensible, and small. In each case, the collective accumulation of such decisions determines whether the society navigates the AI transition successfully or fails. And in each case, the institutional structures that could redirect the individual calculus — that could make the developmentally sound choice also the individually rational choice — are weak, absent, or unbuilt.
Diamond studied twelve factors that predicted whether a nation would successfully navigate a crisis, drawn from his analysis in *Upheaval* of nations that had faced existential challenges in the modern era. Several of these factors are directly relevant to the AI transition.
The first is honest self-appraisal — the willingness to assess the society's situation accurately, without the distortions of national pride, ideological commitment, or institutional self-interest. Diamond found that nations that navigated crises successfully were the ones that could look at their circumstances and say, clearly and publicly: this is what is happening, this is what it means, and this is what we must change. Nations that failed substituted ideology for assessment, insisting that their existing practices were fundamentally sound and that the crisis was temporary, external, or manageable within the current framework.
The AI transition demands honest self-appraisal of a particularly uncomfortable kind. It demands that societies acknowledge that many of the practices, institutions, and professional structures they are most invested in — the ones that define their educational systems, their labor markets, their theories of human value — may be maladapted to the environment that is emerging. This acknowledgment is difficult because it threatens the identity and status of the people who built those structures and whose authority depends on their continuation. It is the acknowledgment that the cattle must be abandoned, that the pigs must be killed, that the forests must be managed rather than extracted. It is the recognition that what worked before may not work now — not because it was wrong, but because the world changed.
The second factor is selective adoption of models from other societies — the willingness to learn from others' experiences without surrendering identity entirely. Diamond contrasted Meiji Japan, which selectively adopted Western technologies and institutions while maintaining core cultural continuities, with societies that either refused to learn from others (the Norse) or adopted foreign models wholesale and lost their social coherence. The distinction between selective adoption and wholesale imitation is critical. Successful adaptation does not require abandoning everything. It requires identifying which practices are environmentally contingent — adapted to conditions that no longer exist — and which are foundational, reflecting values and capabilities that remain viable regardless of the environment.
In the AI transition, selective adoption means distinguishing between the practices that must change (organizational structures built around scarce execution capability, educational curricula designed to transmit procedural knowledge, hiring practices that reward narrow specialization) and the practices that must be preserved (the cultivation of judgment, the protection of mentorship, the maintenance of friction-rich developmental experiences, the institutional structures that align individual incentives with collective welfare). The difficulty is that the practices in the first category are often easier to identify than the practices in the second, because what must be preserved is often invisible — tacit, embedded in institutional culture, transmitted through relationships rather than documents.
The third factor is a commitment to core values that constrain the range of adaptive responses. Diamond found that societies that navigated crises successfully did so within a framework of values that limited what they were willing to sacrifice. The Tokugawa did not sacrifice their commitment to aesthetic refinement in order to manage their forests more efficiently. The Icelanders did not sacrifice their democratic governance traditions in order to manage their commons more effectively. The values constrained the range of permissible adaptations, and the constraint, paradoxically, strengthened the adaptation by preventing the society from lurching into responses that were effective in the short term but destructive of the social fabric that made collective action possible.
For the AI transition, the constraining values are the ones that this book's analysis has identified as most threatened: the value of human judgment developed through struggle, the value of expertise built through friction, the value of mentorship as the mechanism for intergenerational transmission of tacit knowledge, the value of institutional structures that protect the weak against the rational self-interest of the strong. These values constrain the range of permissible responses to the AI transition. They say: efficiency gains are welcome, but not at the cost of the developmental processes that produce the judgment on which everything else depends. Productivity improvements are valuable, but not if they deplete the cognitive resource base that sustains the profession's capacity for the unprecedented problem.
The choice to fail or succeed in the AI transition is being made now, in the accumulation of daily decisions at every level of social organization. The trajectory of those decisions, as of this writing, is mixed. Some organizations are recognizing the regime shift and restructuring accordingly. Some educational institutions are redesigning their curricula. Some governments are building regulatory frameworks. But most are not. Most are integrating AI tools into existing structures, capturing short-term gains, and deferring the structural adaptations that the environmental change requires.
Diamond's archive of collapsed civilizations contains no case in which a society recognized its trajectory, possessed the information needed to change course, and failed to act because it could not find a Tuesday afternoon to start. The failures were always failures of will, failures of institutional capacity, and failures of the governance structures that translate recognition into action. The information was available. The capacity for change existed. What was lacking was the institutional mechanism that made change the rational choice for the individuals and elites whose daily decisions determined the collective outcome.
The construction of those mechanisms is the defining challenge of the AI transition. It is not a technological challenge. The technology exists and is advancing on its own timeline. It is not an informational challenge. The analysis presented in this book, and in the dozens of studies and reports that inform it, is available to anyone who seeks it. It is an institutional challenge — the challenge of building structures that align individual incentives with collective welfare, that maintain long-term investment horizons against the pressure of quarterly reporting, that protect the developmental processes through which human capability forms, and that distribute the gains of the AI transition broadly enough to sustain the social cohesion on which institutional function depends.
Diamond found that this challenge was met, in every case where it was met, by societies that made the hard choice: to invest now, to sacrifice something they valued, to accept short-term costs for long-term viability. The Tokugawa invested in forests their grandchildren's grandchildren would harvest. The Tikopians sacrificed pigs their culture revered. The Icelanders constrained freedoms their tradition celebrated.
The AI transition demands equivalent choices. The specific choices — what to preserve, what to sacrifice, what to invest in, what to constrain — will vary across societies, organizations, and individuals. But the structure of the choice is the same one Diamond documented across every case he studied. Adapt or collapse. Invest or deplete. Choose the seventh generation or choose the current quarter.
The decision is being made, as all civilizational decisions are made, one Tuesday afternoon at a time.
The Haudenosaunee — the confederation that Europeans called the Iroquois, five nations at its founding and six after the Tuscarora joined — governed themselves according to a principle that Diamond cited as one of the most sophisticated examples of long-term resource management in the pre-industrial world. The principle, attributed to the Great Law of Peace that established the confederation, held that decisions should be made with consideration for their effects on the seventh generation yet unborn. Not the next generation. Not the grandchildren. The seventh generation — people who would be born roughly a hundred and seventy-five years after the decision was made, people whose names could not be known, whose circumstances could not be predicted, whose needs could only be imagined through an act of radical temporal empathy.
The principle was not sentimental. It was structural. It was embedded in the governance mechanisms of the confederation, operationalized through decision-making processes that required leaders to articulate how their proposed actions would affect people who did not yet exist. The articulation was not a ritual formality. It was a constraint on the range of permissible decisions — a mechanism that forced short-term interests to justify themselves against long-term consequences before they could be enacted.
Diamond recognized this principle as an institutional solution to the temporal mismatch that destroyed every collapsed civilization he studied. The Norse chiefs who maintained cattle herds were optimizing for the current generation. The Maya kings who built monuments were optimizing for their own reigns. The Easter Islanders who felled trees were optimizing for the current season. In every case, the optimization horizon was too short for the environmental dynamics in play. The grasslands needed decades to recover. The forests needed centuries. The soils needed millennia. And the decisions that determined these resources' trajectories were made by people whose planning horizons were measured in years.
The seventh-generation principle extended the planning horizon beyond the reach of any individual's self-interest. No one optimizes for people they will never meet, whose gratitude they will never receive, whose judgment of their decisions they will never face. The principle worked because it was institutionalized — built into the governance structure, enforced through social norms, maintained by a culture that valued intergenerational responsibility as a defining feature of legitimate leadership.
The AI transition demands seventh-generation thinking, and it demands it with an urgency that the Haudenosaunee, operating on the timescales of agricultural and ecological management, never faced. The decisions being made now about AI deployment, workforce restructuring, educational reform, and cognitive resource management will shape not just the current generation but the generations that follow. The cognitive resources that are being depleted — tacit knowledge, mentorship capacity, the developmental friction through which deep understanding forms — are intergenerational resources. They are transmitted from one generation to the next through the mechanisms of apprenticeship, education, and institutional culture. If the transmission is interrupted — if one generation fails to develop the depth that the next generation needs to learn from — the loss compounds across every subsequent generation.
The compounding is the critical feature. Diamond documented it in every case of resource depletion he studied. The Easter Island forests did not regenerate after the last tree was cut because the conditions for regeneration — seed stock, soil stability, protection from wind erosion — had been destroyed along with the trees. The loss was not linear. It was systemic. Each increment of depletion removed not just the resource itself but some portion of the resource's capacity to recover.
Cognitive resource depletion follows the same logic. Each generation that develops less tacit knowledge produces fewer mentors capable of transmitting tacit knowledge to the next generation. Each generation with fewer mentors develops even less tacit knowledge. The depletion accelerates because the mechanism of replenishment — mentorship — is itself a product of the resource being depleted. The spiral is self-reinforcing, and reversing it requires not just stopping the depletion but rebuilding the transmission mechanisms that the depletion has degraded.
This is why the seventh-generation frame is not aspirational but necessary. The decisions being made on Tuesday afternoons in 2026 — the developer who skips the debugging, the manager who optimizes for output over development, the university that adjusts its standards to accommodate AI-generated work — are not affecting only the current generation. They are affecting the transmission pipeline that produces the seventh generation's capacity for judgment. If the pipeline is compromised now, the recovery timeline is measured not in years but in the multiple generations required to rebuild what was lost.
Diamond's comparative evidence provides specific guidance about what seventh-generation investment looks like in practice. The Tokugawa reforestation program was not a single policy. It was an institutional ecosystem: monitoring systems that tracked the resource's condition, regulatory frameworks that constrained extraction, investment programs that funded regeneration, and governance structures that maintained all of these across changes in leadership, economic conditions, and political priorities. The program survived because it was institutionalized — embedded in the structure of governance rather than dependent on the commitment of any individual leader.
The equivalent for the AI transition would be an institutional ecosystem for cognitive resource management. The components are identifiable even if their specific implementation varies across contexts. Monitoring systems that track the development of tacit knowledge across professional populations — not by testing factual recall, which AI renders trivial, but by assessing the capacity for judgment under conditions of novelty and ambiguity. Regulatory frameworks that require organizations deploying AI tools to maintain developmental pathways for their workforce — the equivalent of reforestation requirements that accompany logging permits. Investment programs that fund friction-rich educational experiences — apprenticeships, mentored practice, deliberately difficult training — alongside and in tension with the efficiency-oriented AI integration that the market rewards. And governance structures that maintain these components across the quarterly reporting cycles, political transitions, and market pressures that would otherwise erode them.
None of these components exist at adequate scale. Some exist in embryonic form. The Berkeley researchers' concept of "AI Practice" — structured pauses and deliberate friction built into AI-augmented workflows — is one such component. Various corporate training programs function as investment programs. Scattered regulatory proposals address the workforce displacement dimension. But the components are disconnected, underfunded, and operating without the institutional architecture that would sustain them across the timescales the problem requires.
The Haudenosaunee did not develop the seventh-generation principle in a single generation. It emerged through centuries of governance experimentation, through the experience of seeing short-term decisions produce long-term consequences, through the slow accumulation of institutional wisdom about the relationship between current choices and future conditions. Contemporary societies do not have centuries. The speed of the AI transition compresses the timeline within which the institutional architecture must be built.
But the compression does not change the structural requirement. What it changes is the cost of delay. Each year without adequate institutional architecture for cognitive resource management is a year in which the depletion continues unmonitored, the transmission mechanisms degrade further, and the recovery timeline extends. The Tokugawa began their reforestation program when the forests were severely degraded but not completely destroyed. The Easter Islanders had no program at all, and by the time the degradation was severe enough to motivate action, the recovery threshold had been passed.
The question for the AI transition is whether societies will begin building the institutional architecture for cognitive resource management while the resource base is still sufficient to support recovery — while there are still enough practitioners with deep tacit knowledge to mentor the next generation, while educational institutions still possess the faculty and the institutional memory to design friction-rich learning experiences, while the cultural norms that value depth over speed still have enough adherents to sustain themselves.
Diamond's evidence suggests that the window for institutional construction is finite and that the threshold beyond which recovery becomes impossible is invisible from the inside. The Easter Islanders did not know they had passed the threshold until the forest was gone. The Norse did not know they had passed the threshold until the grass was gone. The threshold reveals itself only in retrospect, which is why seventh-generation thinking — investing in the future under conditions of uncertainty about whether the investment will prove necessary — is not a luxury but a survival strategy.
The Haudenosaunee asked their leaders to speak for people who did not yet exist. The AI transition requires the same: leaders in technology, education, government, and communities who are willing to make decisions that serve not the current quarter but the generations that will inherit whatever cognitive landscape the current generation's choices produce.
The decisions are being made now. The seventh generation is listening.
---
Diamond closed *Collapse* with a metaphor. *Polder* is the Dutch word for land reclaimed from the sea — territory that exists only because human engineering holds back the water. The Netherlands has been building polders since the Middle Ages, draining marshes, constructing dikes, operating pumps, maintaining an infrastructure of water management so comprehensive that roughly a quarter of the country's land area lies below sea level. The land is productive. The cities built on it are prosperous. The engineering that sustains them is largely invisible to the people who live there, in the same way that the engineering that sustains any infrastructure is invisible until it fails.
Diamond used the polder as a metaphor for civilization itself. Human societies are reclaimed land. They exist in the space between nature's tendency toward entropy and humanity's capacity for organization. The space is maintained not by a single act of engineering but by continuous, deliberate effort — the equivalent of the pumps that run day and night, the dikes that are inspected and repaired season after season, the water boards that have governed Dutch water management for eight centuries with a continuity that transcends monarchies, republics, invasions, and occupations.
The polder does not maintain itself. If the pumps stop, the water returns. If the dikes are neglected, they breach. If the governance structures that coordinate maintenance dissolve, the individual components — each pump, each dike section, each drainage channel — continue to function for a while, but without coordination, the system degrades, and the sea reclaims what the sea has always wanted back.
The AI transition has not breached the dikes. But the water level has risen — rapidly, dramatically, in a matter of months rather than the decades or centuries that Diamond's historical cases documented. The institutional infrastructure that maintains the cognitive polder — the educational systems that develop human capability, the mentorship pipelines that transmit tacit knowledge, the cultural norms that value depth, the governance structures that align individual incentives with collective welfare — was designed for a different water level. The engineering was adequate for the conditions that existed when it was built. The conditions have changed, and the engineering has not kept pace.
This book has traced the specific mechanisms through which the cognitive polder is threatened. The elite commitment problem (Chapter 3), in which the people with the most power to adapt the infrastructure are the ones whose interests are most threatened by adaptation. Creeping normalcy and the speed of forgetting (Chapter 4), in which the society's capacity to sustain the alarm that institutional adaptation requires is overwhelmed by the speed of change. Resource depletion in the cognitive economy (Chapter 5), in which the foundational resource — human expertise — is being consumed faster than it is being replenished. The prisoner's dilemma at scale (Chapter 6), in which individually rational decisions aggregate into collectively catastrophic outcomes. And the temporal mismatch between the speed of technological change and the speed of institutional response, which compresses the window for adaptation to a fraction of what Diamond's historical cases allowed.
Each mechanism is familiar from Diamond's archive. Each has destroyed civilizations that were, in their time, as sophisticated and as confident as contemporary technological society. And each is operating now, simultaneously, in a transition that is faster, more pervasive, and more consequential than any environmental transformation Diamond documented.
But this book has also traced the evidence of successful adaptation — the specific features that distinguished the societies that survived from the societies that collapsed. Recognition that the environment has changed and that existing practices are no longer viable. Willingness to abandon identity-defining practices in service of adaptation. Long-term investment in resource management at the cost of short-term comfort. Institutional mechanisms that align individual incentives with collective welfare. The seventh-generation planning horizon that forces current decisions to justify themselves against future consequences.
These features are not mysterious. They are documented, analyzed, and available to any society that chooses to implement them. The Tokugawa implemented them. The Tikopians implemented them. The Icelanders implemented them. The implementation was costly, contested, and imperfect. It also preserved the civilizations that undertook it.
Diamond's framework does not predict the outcome of the AI transition. It predicts the structure of the choice. The choice is not between AI and no-AI, between adoption and refusal, between acceleration and resistance. The choice is between managed transition and unmanaged transition — between building the institutional infrastructure that directs the enormous power of AI toward broadly distributed flourishing, and allowing that power to flow without direction, producing the concentration, depletion, and collapse that unmanaged environmental transformations have produced throughout human history.
The polder metaphor carries one final implication that Diamond left unstated but that the AI transition makes explicit. The pumps must run continuously. The dikes must be maintained perpetually. The governance structures must persist across generations. There is no point at which the work is done. The sea does not stop pressing against the dikes because the dikes have held for a century. The environmental pressures do not relent because the institutional responses have been adequate so far.
The AI transition will not end. The technology will continue to advance. The environmental conditions of the cognitive economy will continue to change. The institutional infrastructure that is adequate today will be inadequate tomorrow. The adaptation that succeeds this year will need to be revised next year. The pumps will need to run, and the dikes will need to be maintained, for as long as the polder exists.
This is not a burden. It is the condition of civilization. Human societies have always existed on reclaimed land, held against entropy by continuous effort. The effort is the thing. The maintenance is the thing. The governance that persists across changes in leadership, economic conditions, and political fashion is the thing.
Diamond spent his career documenting what happens when societies stop maintaining their polders. The forests return to desert. The fields return to dust. The cities return to ruin. The process is not dramatic. It is the quiet accumulation of neglected maintenance, of deferred investment, of governance structures that atrophy because no one remembers why they were built.
The AI transition is testing whether contemporary societies possess the institutional capacity for the continuous, boring, unglamorous maintenance work that sustains civilization. Not the dramatic gesture of regulation or reform. The Tuesday afternoon of checking the pumps. The quarterly review of whether the dikes are holding. The annual assessment of whether the monitoring systems are tracking the right variables. The generational investment in the educational infrastructure that produces the practitioners who will maintain the polder after the current generation is gone.
Diamond's evidence is clear. The societies that maintained their polders survived. The societies that did not maintain their polders did not survive. The evidence does not guarantee the outcome. It illuminates the choice.
The pumps are running. The dikes are holding. The water is rising.
The maintenance is ours to do.
---
The bones troubled me most.
Not the metaphorical bones of a dying industry or the figurative skeleton of an old business model. The actual bones — the ones Diamond described in the final Norse Greenland sites, mixed with the bones of the last cattle, in rooms where the doors had been sealed and the occupants had eaten their dogs before they died. I kept returning to that image while writing *The Orange Pill*, while arguing through the logic of ascending friction and the beaver's dam and the candle in the darkness, and I could not fully say why.
Now I think I know. The bones are what remains when the story you told yourself about who you are prevents you from becoming who you need to be. The Norse died with their identity intact. They were European Christians to the end — cattle farmers, church builders, iron traders. They were not Inuit. They would never be Inuit. And the cost of that commitment was everything.
Diamond's framework is the coldest comfort in this entire cycle of books. Byung-Chul Han gave me a diagnosis I could argue with. Csikszentmihalyi gave me a model of what the good days feel like. Diamond gives me a pattern — recognition, willingness, investment — and then a wall of evidence showing how rarely societies achieve all three before the window closes. The Tokugawa made it. The Tikopians made it. The Maya, the Easter Islanders, the Anasazi, the Pitcairn Islanders, and the Greenland Norse did not. The failure rate in Diamond's sample is not encouraging.
What keeps me from despair is the fifth factor: the society's own response. That is where the agency lives. Not in the technology, which will advance regardless. Not in the competitive pressures, which are structural. Not in the feedback loops, which are mathematical. In the response. In the thousand Tuesday afternoons where the choice is made — not once, dramatically, but continuously, in the accumulation of small decisions that no one notices until the trajectory becomes visible in retrospect.
I think about my engineers in Trivandrum. About the twenty-fold multiplier, about the excitement in the room, about the senior engineer who oscillated between terror and exhilaration for two days before landing on the recognition that his judgment — the thing he'd built through decades of friction — was more valuable than ever, not less. That was a moment of successful adaptation. One small moment in one room in southern India. But the pattern holds at every scale. The recognition that the environment has changed. The willingness to let go of the practice that defined you. The investment in whatever comes next, under conditions of radical uncertainty.
The bones in the Norse houses haunt me because they represent the alternative. The cost of the commitment to what you were, paid in full, with no refund and no appeal.
I am building. My team is building. We are checking the pumps and maintaining the dikes and trying to keep the water level from rising faster than the infrastructure can handle. Some days I am confident we are building well. Other days the pattern Diamond documented feels less like a warning and more like a prophecy.
But Diamond insisted — and the evidence supports him — that the pattern is a choice, not a fate. The Tokugawa chose. The Tikopians chose. We can choose.
The seventh generation is listening. What they hear depends on what we decide to build, starting now, starting on the next Tuesday afternoon.
Every collapsed civilization in Jared Diamond's archive had access to the information it needed to survive. The Norse could see the Inuit thriving. The Maya tracked their declining harvests. The Easter Islanders watched their forests shrink. Information was never the problem. The problem was translating knowledge into changed behavior — abandoning the practices that defined a society's identity when those practices became incompatible with a changed environment.
The AI transition is the fastest environmental regime shift in the history of human civilization. Diamond's five-factor framework — environmental damage, climate change, hostile neighbors, trade disruption, and the society's own response — maps onto this moment with a precision that should alarm anyone paying attention. The institutions, professional identities, and educational models built for the pre-AI economy are the cattle and churches of our Greenland.
This book applies Diamond's comparative method to the question no one in technology is asking clearly enough: Are we adapting, or are we maintaining practices that the environment no longer supports?

A reading-companion catalog of the 22 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Jared Diamond — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →