Joseph Nye — On AI
Contents
Cover
Foreword
About
Chapter 1: The Vertical Shift
Chapter 2: The Attraction of Approach
Chapter 3: The Fishbowl of National Strategy
Chapter 4: Smart Power and the Three Positions
Chapter 5: The Democratization of Capability as Power Diffusion
Chapter 6: The Death Cross and the Geography of Value
Chapter 7: The Smooth and the Vulnerable
Chapter 8: Education as Strategic Infrastructure
Chapter 9: The Attentional Ecology of Nations
Chapter 10: Smart Amplification
Epilogue
Back Cover

Joseph Nye

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Joseph Nye. It is an attempt by Opus 4.6 to simulate Joseph Nye's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The word that kept surfacing was not "power." It was "attraction."

I had been thinking about AI through every lens I could find — philosophy, psychology, economics, ecology, the history of tools. Each one illuminated something real. But I kept hitting the same wall: the frameworks I was using assumed that what matters most is what the technology can do. Capability. Speed. Output. The metrics of the frontier.

Then I encountered Joseph Nye's quiet insistence that the most durable form of influence has nothing to do with what you can force and everything to do with what others voluntarily choose to move toward.

That stopped me.

Because the entire AI discourse is framed as a race. Who builds the most powerful model. Who deploys fastest. Who captures the most market share. The language is coercive — dominance, supremacy, arms race. And inside that language, something essential disappears from view: the question of whether anyone actually wants what you are building. Not whether they will use it because they have no alternative. Whether they choose it because it makes their world genuinely better.

Nye spent a lifetime studying this distinction. He called it soft power — the ability to shape the preferences of others through attraction rather than coercion. The concept sounds gentle. It is not. It is the most demanding standard you can apply to any exercise of influence, because it requires that the thing you project be worth projecting. You cannot manufacture attraction. You can only earn it through the sustained quality of what you build and the visible sincerity of the values embedded in it.

Applied to AI, this reframes everything. The nation that builds the most powerful system does not automatically lead. The nation whose approach others want to emulate does. The company that ships the fastest product does not automatically win. The company whose tools people trust — genuinely trust, because the tools demonstrably serve them — does. The individual who produces the most output is not automatically valuable. The individual whose judgment others voluntarily seek is.

In *The Orange Pill*, I wrote about the river of intelligence and the dams we must build to direct it. Nye's framework adds a dimension I had been circling without naming: the dams only matter if others choose to build them too. A governance structure imposed by force is a levee. A governance structure that others replicate because they can see it works — that is soft power. That is how the landscape actually changes.

This book applies Nye's lens to the AI moment. It will not tell you what to build. It will make you ask whether what you are building is worth choosing.

-- Edo Segal · Opus 4.6

About Joseph Nye

1937–2025

Joseph Nye (1937–2025) was an American political scientist, diplomat, and professor who served as Dean of Harvard's Kennedy School of Government and held senior positions in the U.S. State Department and National Security Council under the Carter and Clinton administrations. He is best known for developing the concept of "soft power" — the ability to influence others through attraction rather than coercion — first articulated in *Bound to Lead* (1990) and fully developed in *Soft Power: The Means to Success in World Politics* (2004). With Robert Keohane, he co-authored *Power and Interdependence* (1977), a foundational text in liberal international relations theory. His later works, including *The Future of Power* (2011), *The Powers to Lead* (2008), and *Do Morals Matter?* (2020), extended his analysis of how influence operates in an increasingly complex, technology-driven world. In his final years, Nye warned that artificial intelligence would reshape the landscape of international power faster than institutions could adapt. He died in May 2025, leaving behind a framework that has become indispensable for understanding why capability alone never determines who leads.

Chapter 1: The Vertical Shift

For nearly four centuries, the study of international relations has rested on a horizontal assumption. Power flows between states. The Peace of Westphalia in 1648 established the sovereign nation-state as the fundamental unit of the international system, and every major framework since — the balance of power, the concert of great powers, the bipolar Cold War, the unipolar American moment — has analyzed influence as something that moves laterally, from government to government, alliance to alliance, bloc to bloc. The scale tips left or right. The players change. But the scale itself remains flat.

Joseph Nye spent a career complicating this picture without abandoning it. His theory of soft power, first articulated in *Bound to Lead* (1990) and developed fully in *Soft Power: The Means to Success in World Politics* (2004), added a dimension that realists had largely ignored: the ability to shape the preferences of others through attraction rather than coercion. A nation that other nations wish to emulate wields influence that no army can replicate and no sanction can substitute. But even Nye's expansion of the power concept operated primarily on the horizontal axis. Soft power flowed between states, through culture, values, and foreign policy. The unit of analysis remained the nation.

The arrival of artificial intelligence, as documented in *The Orange Pill*, breaks the axis itself.

When Edo Segal describes a room in Trivandrum, India, where twenty engineers each achieved a twenty-fold productivity multiplier using Claude Code at a hundred dollars per month, the phenomenon he is documenting is not an efficiency gain. It is a power shift — but one that moves vertically rather than horizontally. The capacity to create economic value, to shape markets, to build products that influence how millions of people live and work, has migrated from the institutional level to the individual level. A single person with a subscription now wields productive capability that, five years earlier, required an organization: a team, a budget, a management structure, an office, a hiring pipeline.

Nye's framework, applied to this shift, reveals something that neither traditional realism nor Nye's own liberal institutionalism fully anticipated. The most consequential redistribution of power in the early twenty-first century is not occurring between the United States and China, though that competition is real and consequential. It is occurring between levels of organization — from institutions to individuals, from credentialed centers to uncredentialed peripheries, from the gatekeepers of capability to anyone with an idea and a natural-language interface.

This vertical diffusion has no precedent in the Westphalian system. Previous technological transitions redistributed power within the existing structure. The printing press shifted influence from the Catholic Church to Protestant reformers, but both operated through institutions. The industrial revolution shifted economic weight from agrarian landowners to factory capitalists, but both classes exercised power through national frameworks. Even the internet, which seemed to promise radical individual empowerment, ultimately consolidated power in a handful of platform companies whose influence exceeded that of many sovereign states.

AI is doing something structurally different. The collapse of what Segal calls the imagination-to-artifact ratio — the distance between a human idea and its realization — has lowered the floor of who gets to build. Not slightly. Categorically. The woman in Segal's Trivandrum training who had spent eight years on backend systems and never written a line of frontend code built a complete user-facing feature in two days. She did not acquire a new credential. She did not join a new institution. She described what she wanted in plain language, and the tool handled the translation into domains she had never entered.

Multiply this by forty-seven million developers worldwide, with the fastest growth in Africa, South Asia, and Latin America, and the geopolitical implications become visible. The traditional centers of technological power — Silicon Valley, Shenzhen, Bangalore, Tel Aviv — retain enormous advantages in capital, infrastructure, institutional knowledge, and network effects. But they have lost a monopoly they did not know they held: the monopoly on who gets to participate in building the technological future. That monopoly was enforced not by policy but by the sheer cost of translation — the years of specialized training, the institutional infrastructure, the capital required to assemble a team capable of converting an idea into a working product. When the translation cost collapses, the monopoly collapses with it.

Nye, in *The Future of Power* (2011), identified the diffusion of power as one of the defining trends of the information age, distinguishing it from the more commonly discussed power transition between states. Power diffusion, Nye argued, moves capability from governments to non-state actors — corporations, NGOs, terrorist networks, individuals empowered by technology. AI accelerates this diffusion to a degree that even Nye's 2011 analysis did not envision, because the technology in question is not merely a communication tool (like the internet) or a coordination tool (like social media). It is a production tool. It enables individuals not merely to organize, protest, or communicate, but to build — to create economic artifacts that compete directly with the outputs of large organizations.

The distinction between communication tools and production tools is analytically crucial. The Arab Spring demonstrated that communication tools could enable individuals to coordinate political action at a speed and scale that caught governments off guard. But the political outcomes of the Arab Spring were, with partial exceptions, captured by existing institutional actors — militaries, religious organizations, established political parties — because communication alone does not create durable power. Production does. The individual who can build a product, serve a market, generate revenue, and iterate based on user feedback has acquired a form of power that is self-sustaining in a way that a hashtag is not.

This is what makes the AI diffusion different from previous waves of technological empowerment. The power that diffuses is not the power to speak. It is the power to make. And the power to make, in economic systems, is the power that compounds.

Nye's taxonomy of power distinguishes between hard power (the ability to coerce through military force or economic leverage), soft power (the ability to attract through culture, values, and institutions), and smart power (the strategic integration of both). Each category is affected by vertical diffusion, but in different ways.

Hard power remains overwhelmingly concentrated at the state level. No individual with a Claude Code subscription is building aircraft carriers or imposing sanctions. But the hard-power instruments that matter most in the AI era — compute infrastructure, semiconductor supply chains, the capacity to deploy autonomous systems, access to training data at scale — are increasingly held by private corporations rather than governments. The relationship between states and the companies that control AI hard power is becoming the defining tension of AI governance, a form of interdependence that Nye and Robert Keohane's foundational work on complex interdependence anticipated in structure if not in specifics.

Soft power is where the vertical diffusion is most consequential and least understood. A nation's soft power has traditionally been generated by its institutions — universities, cultural industries, political systems, multinational corporations headquartered within its borders. When AI enables individuals anywhere to produce cultural artifacts, educational content, software tools, and information products that compete with institutional outputs, the geography of soft power generation shifts. The Nigerian filmmaker who uses AI tools to produce content that resonates across Francophone Africa is generating soft power for Nigeria — or perhaps for the Yoruba cultural zone, or for the specific aesthetic sensibility of a Lagos-based creative community — in ways that bypass the traditional channels of cultural diplomacy entirely.

This is not a hypothetical. The explosion of AI-assisted content creation from the Global South is already underway, and its soft power implications are significant. Cultural influence has historically been correlated with economic output: rich nations produced the films, music, literature, and software that shaped global preferences. When the production cost collapses, the correlation weakens. Influence begins to track not with wealth but with cultural distinctiveness, creative ambition, and proximity to unserved audiences — assets that are distributed far more equitably across the globe than capital.

Smart power — Nye's concept of strategically combining hard and soft power — becomes correspondingly more complex. A state practicing smart power in the AI era must manage not only the horizontal competition with other states but the vertical relationship with its own empowered citizens and corporations. The state that attempts to control AI development too tightly risks stifling the innovation that generates both economic hard power and cultural soft power. The state that cedes all direction to the market risks the concentration of AI capability in private hands that owe no loyalty to the public interest and no accountability to democratic governance.

Nye, in his 2024 column on AI and national security, warned that "technology moves faster than policy or diplomacy, especially when it is driven by intense market competition in the private sector," and urged policymakers to "pick up their pace." The warning was characteristically measured and characteristically prescient. The pace differential between technological capability and institutional response is not merely an inconvenience. It is a structural vulnerability — a gap in which power accumulates without governance, capability grows without direction, and the people navigating the transition do so without the institutional support that previous technological transitions eventually provided.

*The Orange Pill* calls this the "dam deficit" and locates it primarily on the demand side: the protection and empowerment of citizens, workers, and students navigating the AI transition. Nye's framework reveals why the demand-side deficit is geopolitically dangerous. A nation whose citizens cannot effectively direct AI capability is a nation whose soft power erodes from within. Soft power depends on the quality of a nation's cultural and institutional output. When the citizens producing that output are overwhelmed, under-supported, and adapting by trial and error in the absence of institutional guidance, the quality of output degrades — not immediately, but cumulatively, in ways that compound over years and become visible only when the damage is difficult to reverse.

The vertical shift demands a new unit of analysis. Not the state alone, and not the individual alone, but the relationship between the two — the institutional architecture that determines whether diffused power flows toward collective benefit or fragments into ungoverned capability. Nye's career-long argument that institutions are themselves a form of power — that the nation which designs the best institutions attracts the voluntary cooperation of others — acquires new urgency in a world where the power flowing through those institutions has multiplied by orders of magnitude while the institutions themselves have barely changed.

The question that this vertical shift poses to international relations theory is not whether states still matter. They do. The question is whether the analytical frameworks designed for a world of horizontal power flows can accommodate a world in which the most consequential flows are vertical — and whether the institutional architectures designed to govern horizontal competition can be adapted fast enough to govern vertical diffusion before the gap between capability and governance becomes unbridgeable.

Nye's own work suggests the answer is conditional. Institutions can adapt, but adaptation requires political will, intellectual honesty about what has changed, and the willingness to build new structures rather than merely defend old ones. The nations that recognize the vertical shift earliest, that build the institutional architecture to channel diffused capability toward outcomes that serve their citizens and attract the admiration of others, will lead. The nations that continue to analyze AI primarily as a horizontal competition — who has the most compute, who reaches artificial general intelligence first, who deploys the most autonomous systems — will find themselves winning battles in a war whose terms have already changed.

The scale is no longer flat. The power is no longer flowing only between states. And the frameworks built for a horizontal world must stretch to accommodate a vertical one — or break, and leave the people navigating the transition without the intellectual tools they need to understand what is happening to them.

---

Chapter 2: The Attraction of Approach

In the summer of 2018, Joseph Nye stood before the AIWS Conference at the Harvard Faculty Club and delivered a characteristically precise warning. An AI arms race between the United States and China, he argued, could have profound effects on global society. But Nye was skeptical of the deterministic narrative that China would inevitably surpass the United States in artificial intelligence by 2030. China's advantages, he observed, were real but narrow — principally the scale of its data and its relative unconcern for privacy. The prediction of Chinese supremacy, Nye said, was "uncertain" and "indeterminate."

The skepticism was revealing. Where most analysts of the U.S.-China AI competition focused on quantifiable inputs — research papers published, patents filed, compute capacity deployed, engineers trained — Nye was already asking a different question. Not who has the most, but who attracts the most. Not which nation's AI is most powerful, but which nation's approach to AI is most admired.

This is the soft power question, and it is the question that most AI strategy discussions systematically ignore.

Soft power operates through three channels that Nye identified with characteristic clarity: the attractiveness of a nation's culture, the appeal of its political values, and the perceived legitimacy of its foreign policies. A nation whose films are watched worldwide projects cultural soft power. A nation whose democratic institutions are emulated projects value-based soft power. A nation whose foreign policy is seen as legitimate — as serving broader interests rather than narrow self-interest — projects policy-based soft power. Each channel generates influence that coercion cannot replicate, because the influence depends on voluntary alignment: the other party chooses to move toward you, not because it is compelled but because it is attracted.

Applied to artificial intelligence, each channel operates with new intensity and new vulnerability.

Cultural soft power in the AI era flows through the tools themselves. When Claude Code enables a developer in Bangalore or Bogotá or Berlin to build something she could not have built before, the tool is not merely a product. It is an expression of a particular set of values — openness, individual empowerment, the belief that capability should be distributed rather than hoarded. The developer who uses the tool absorbs, consciously or not, the assumptions embedded in it: that natural language is a legitimate interface for creation, that individual ambition deserves institutional-grade support, that the barriers between imagination and artifact should be as low as possible. These assumptions are culturally specific. They emerge from the American technology ecosystem, with its particular blend of entrepreneurial individualism, venture-capital risk tolerance, and the California ideology that Kevin Kelly articulated in *What Technology Wants* — the conviction that technology has its own trajectory toward greater connectivity, capability, and human empowerment.

When the tool works — when the developer in Bogotá builds her product and serves her market — it generates soft power for the nation and the value system that produced it. Not through propaganda, not through cultural diplomacy, not through any deliberate government program, but through the most powerful mechanism of attraction available: demonstrated benefit. The developer does not admire American values in the abstract. She admires them because they produced a tool that changed her life.

This is soft power at its most potent, and its most fragile. Potent because the attraction is rooted in genuine utility rather than manufactured appeal. Fragile because the attraction depends entirely on the continued quality, accessibility, and ethical integrity of the tools. If the tool begins to extract more value than it creates — through pricing that excludes, through data practices that exploit, through capability restrictions that serve corporate interests over user needs — the soft power reverses. Attraction becomes resentment. Admiration becomes suspicion.

China's approach to AI generates a different kind of influence, operating through different channels. The surveillance infrastructure that Chinese AI companies have deployed across dozens of countries — facial recognition systems, social credit architectures, predictive policing platforms — projects a form of what Nye's Harvard colleague Elaine Kamarck has called "malevolent soft power." It is not soft power in the traditional sense, because it does not primarily attract. It offers a model: a demonstration that AI can be used to maintain political control, to manage populations, to prevent the kind of disorder that authoritarian regimes fear most. The appeal is not to citizens but to governments — particularly governments that share the fear of disorder and the desire for control.

The distinction between AI soft power that attracts citizens and AI soft power that attracts governments is analytically critical. The American AI ecosystem, chaotic and commercially driven, generates soft power among individuals — developers, entrepreneurs, creators, students — who experience AI tools as instruments of personal empowerment. The Chinese AI ecosystem, state-directed and surveillance-oriented, generates a different form of influence among governing elites who experience AI as an instrument of social management. Both are forms of international influence. They operate through fundamentally different mechanisms and produce fundamentally different outcomes.

Nye's framework suggests that the citizen-facing soft power will prove more durable. The argument runs as follows: soft power depends on legitimacy, and legitimacy depends on the perception that power is exercised in the interest of those affected by it. A tool that empowers its users generates legitimacy with every use. A surveillance system that monitors its subjects generates compliance, not legitimacy — and compliance without legitimacy is inherently unstable, requiring continuous enforcement and producing continuous resentment.

This does not mean the American approach is unproblematic. The commercial incentives that drive American AI development produce their own forms of exploitation — the attention economy's degradation of cognitive autonomy, the platform monopolies' extraction of value from users and creators, the venture capital model's relentless pressure toward growth at the expense of sustainability. *The Orange Pill* documents these pathologies with an insider's precision: the productive addiction, the inability to stop, the colonization of rest by the internalized imperative to optimize. These are soft power vulnerabilities. A nation whose approach to AI produces widespread burnout, anxiety, and the erosion of the cognitive capacities that democratic citizenship requires is undermining its own soft power from within.

The soft power competition in AI, then, is not between the nation with the most compute and the nation with the second most. It is between approaches — between models of how AI should be developed, deployed, governed, and integrated into human life. Each approach embeds a set of values, and each set of values attracts a different constituency. The approach that proves most attractive to the widest constituency — that demonstrates the most compelling balance between capability and responsibility, between innovation and governance, between individual empowerment and collective well-being — will generate the soft power that shapes the international order for the next half-century.

Nye, in one of his final published works, emphasized that soft power is not merely a resource to be accumulated. It is a relationship between the projector and the receiver, and the relationship depends on context, credibility, and the perception of good faith. Applied to AI, this means that no amount of technological superiority substitutes for the perception that the technology is being developed and deployed in the interest of humanity broadly, not merely in the interest of the companies that build it or the nation that hosts them.

The European Union has attempted to stake a position in this competition through regulation — the AI Act, the GDPR, the Digital Markets Act — projecting the soft power of normative leadership. The implicit argument is that Europe's approach to AI governance, emphasizing transparency, accountability, and citizen protection, is a model that other nations should adopt. This is value-based soft power, and it has real appeal, particularly in nations that distrust both American commercial chaos and Chinese state surveillance.

But normative leadership without technological capability is an incomplete strategy. The EU does not produce the frontier AI models. It does not host the companies that are reshaping how humans interact with machines. Its regulation applies to tools built elsewhere, giving it influence over deployment but not over development. This asymmetry limits the EU's soft power, because the most powerful demonstration of values is not the regulation that constrains a technology but the technology that embodies values — the tool that works beautifully, serves its users genuinely, and demonstrates through its design that innovation and responsibility are compatible.

The nation that builds such tools — that produces AI systems admired not merely for their capability but for the wisdom of their design, the ethics of their deployment, and the breadth of their benefit — will project soft power that neither regulation nor military strength can match. Nye's framework reveals this with particular clarity: the ultimate source of soft power is not what you say about your values but what you build with them. The tool is the argument. The approach is the attraction. And the competition, in the end, is not about who reaches the frontier first but about what the frontier looks like when the rest of the world arrives.

This insight — that the soft power of AI resides not in the technology itself but in the approach to the technology — reframes the strategic competition entirely. The conventional framing asks: who has the most powerful AI? Nye's framework asks: whose AI does the world most want to use, emulate, and align with? The answers to these two questions may diverge sharply, and when they do, the second question will prove more consequential than the first. Because in a world where AI capability diffuses rapidly, the advantage of having the most powerful system is temporary. The advantage of having the most attractive approach is structural.

Nye argued throughout his career that the United States' greatest asset was not its military or its economy but its ability to set the international agenda — to define the terms of global cooperation in ways that others found attractive enough to join. The AI era offers the United States an opportunity to exercise this agenda-setting power in a new domain. It also presents the risk that short-term commercial interests, partisan dysfunction, and the erosion of institutional credibility will squander the opportunity. The soft power that American AI tools currently generate among millions of individual users worldwide is real and significant. Whether it proves durable depends on whether the approach that generates it remains worthy of the attraction it commands.

---

Chapter 3: The Fishbowl of National Strategy

Every nation formulates its AI strategy inside a fishbowl — a set of assumptions so deeply embedded in institutional culture that they function not as beliefs but as the medium of thought itself. The metaphor, drawn from *The Orange Pill*, describes the cognitive architecture that shapes perception before analysis begins. A scientist's fishbowl is shaped by empiricism, a filmmaker's by narrative, a builder's by the question of what can be made. A nation's fishbowl is shaped by its history, its political culture, its institutional inheritance, and the particular successes and failures that have calibrated its reflexes.

Nye's career can be read as a sustained effort to make fishbowls visible — to show American policymakers that the realist fishbowl, with its exclusive focus on military power and material interests, concealed an entire dimension of international influence. Soft power was not a new phenomenon when Nye named it. It was a phenomenon so pervasive that the realist fishbowl had rendered it invisible. American universities had been attracting the world's brightest students for decades. Hollywood had been shaping global cultural preferences for longer. The English language had been consolidating its position as the lingua franca of international commerce, science, and diplomacy. All of these were forms of power, generating influence that coercion could not match. But the realist fishbowl could not see them, because it was calibrated to see only the things that realism measures: troops, weapons, GDP, alliances.

The AI moment has cracked every national fishbowl, but the cracks are different in each case, and the responses reveal the assumptions that were invisible before the glass broke.

The American fishbowl is shaped by a half-century of experience in which market-driven innovation produced extraordinary results with minimal government direction. The personal computer, the internet, the smartphone, social media, cloud computing, and now artificial intelligence — each emerged from the American private sector, driven by commercial incentives, venture capital, and a regulatory environment that, by global standards, was permissive to the point of negligence. The assumption embedded in this experience is that innovation flourishes when government stays out of the way, that the market will distribute the benefits of technological progress broadly enough to maintain social stability, and that the costs of excessive regulation exceed the costs of insufficient regulation.

This assumption was never entirely accurate — the internet was born from DARPA funding, the semiconductor industry was subsidized by military procurement, and the human costs of unregulated technology are now visible in attention disorders, democratic erosion, and the monopolistic power of platform companies. But the assumption persisted because the successes were spectacular and the costs were diffuse, slow to materialize, and easy to attribute to other causes.

Applied to AI, the American fishbowl produces a strategy that is simultaneously the most innovative and the most reckless in the developed world. American AI companies operate with extraordinary freedom to develop, deploy, and iterate. The pace of innovation is genuinely breathtaking. The tools described in The Orange Pill — Claude Code's capacity to transform a twenty-person engineering team's output, the collapse of the imagination-to-artifact ratio, the democratization of building itself — are products of this permissive environment.

But the American fishbowl cannot see what the permissiveness costs. It cannot see the Berkeley researchers' finding that AI does not reduce work but intensifies it. It cannot see the productive addiction that Segal describes — the inability to stop, the colonization of rest, the erosion of the cognitive boundary between work and life. It cannot see these things because, inside the American fishbowl, productivity is an unqualified good, and the costs of productivity are externalities to be managed later, if at all.

Nye, in The Future of Power, warned against the seduction of any single dimension of power. "Our greatest mistake," he wrote, "would be to fall into one-dimensional analysis and to believe that investing in military power alone will ensure our strength." The same logic applies to AI: the greatest mistake of American strategy would be to fall into one-dimensional analysis and to believe that investing in AI capability alone will ensure leadership. Capability without governance is a river without dams. It flows fast, and it floods.

The Chinese fishbowl is shaped by a different history and produces different blindnesses. The extraordinary economic transformation of the past four decades, in which centralized state direction lifted hundreds of millions from poverty and built a technological infrastructure that rivals any in the world, has embedded the assumption that state capacity can outperform market chaos. The Chinese AI strategy — massive public investment, coordinated industrial policy, integration of AI into state surveillance and social management, explicit national targets for AI supremacy — follows logically from this assumption.

The Chinese fishbowl cannot see what state direction costs in the domain of AI specifically. AI development depends on the kind of open-ended experimentation, cross-pollination of ideas, and tolerance for failure that centralized direction tends to suppress. The most consequential AI breakthroughs — the transformer architecture, reinforcement learning from human feedback, the scaling laws that enabled large language models — emerged from research environments characterized by intellectual openness, the free movement of ideas across institutional boundaries, and a culture of publication and peer review that the Chinese system, with its emphasis on control and secrecy, structurally inhibits.

Nye observed at the 2018 AIWS Conference that China's only clear advantage in AI was the scale of its data and its relative unconcern for privacy. The observation was more penetrating than it appeared. Data scale is a hard-power asset — a material resource that can be quantified and deployed. The culture of openness that drives fundamental AI research is a soft-power asset — an institutional characteristic that attracts talent, generates trust, and produces the kind of creative disruption that planned economies cannot reliably produce. China's fishbowl sees the hard-power assets clearly and systematically undervalues the soft-power ones.

The European fishbowl is shaped by a third history: the experience of being affected by technologies developed elsewhere and the institutional conviction that regulation can protect citizens from harms that markets do not self-correct. The EU AI Act, adopted in 2024, is the world's most comprehensive attempt to regulate artificial intelligence. It classifies AI systems by risk level, imposes transparency and accountability requirements, and establishes enforcement mechanisms with real consequences.

The European fishbowl sees something genuine: that the costs of unregulated AI — surveillance, discrimination, manipulation, the erosion of democratic deliberation — are real and that market incentives alone will not prevent them. But the European fishbowl cannot see what the regulatory approach misses. Regulation operates on the supply side. It constrains what AI companies may build and deploy. It does not address the demand side: the citizens, workers, and students who are navigating the AI transition in real time and who need not merely protection from harm but positive support for adaptation.

Moreover, regulation that operates primarily on tools built elsewhere creates a structural dependency. The EU regulates AI models developed by American and, increasingly, Chinese companies. This gives the EU influence over deployment but not over development. The values embedded in the models — the training data, the optimization targets, the default behaviors — are determined by the developers, not the regulators. The EU can prohibit certain applications, but it cannot shape the foundational assumptions of systems it does not build.

This is a soft power deficit disguised as normative leadership. The EU projects the appearance of AI governance leadership through the comprehensiveness of its regulatory framework. But soft power, as Nye consistently argued, depends not on what you constrain but on what you build. The nation or bloc that builds the most attractive AI tools — tools that embody the values the EU articulates in regulation — will project more durable influence than the bloc that writes the most comprehensive rules for tools built by others.

Smaller nations offer instructive alternatives. Singapore's approach combines permissive innovation policy with active governance experimentation — regulatory sandboxes, public AI literacy programs, government-led AI adoption initiatives. The Singaporean fishbowl is shaped by the particular vulnerability of a small, open economy: the knowledge that falling behind technologically is an existential risk, not merely a competitive disadvantage. This produces an AI strategy characterized by pragmatism, speed, and a willingness to iterate on governance frameworks rather than attempting to build comprehensive regulation in advance of the technology it governs.

Estonia's digital governance infrastructure — built over two decades of sustained investment in digital identity, e-governance, and public-sector technology — positions it to integrate AI into government services with a coherence that larger, more fragmented nations cannot easily replicate. Japan's approach, shaped by demographic crisis and labor shortage, treats AI primarily as a solution to domestic economic challenges rather than as a tool of geopolitical competition. Each fishbowl reveals a dimension of the AI challenge that the others obscure.

The analytical contribution that Nye's framework makes to this landscape is the insistence that no single fishbowl contains the whole truth, and that the nation that recognizes this earliest — that presses its face against the glass and sees what its own assumptions conceal — gains a strategic advantage that no amount of compute or regulation can substitute for. The fishbowl is not merely a limitation. It is a vulnerability. The nation that cannot see the assumptions shaping its AI strategy cannot correct for the biases those assumptions introduce. And in a domain where the speed of change exceeds the speed of institutional learning, uncorrected biases compound into strategic errors that are visible only in retrospect.

Nye's career was, in a sense, an extended argument for intellectual humility in the exercise of power — the recognition that the most dangerous form of ignorance is the ignorance of what you do not know. Applied to AI strategy, this means that the most dangerous national posture is not the wrong strategy but the confident strategy, the strategy pursued without awareness of its own fishbowl, without sensitivity to the dimensions of the challenge it cannot see.

The nations that will lead in the AI era will not be those with the best strategy. They will be the ones with the greatest capacity to recognize when their strategy is wrong and to adapt before the costs of error become irreversible.

---

Chapter 4: Smart Power and the Three Positions

In the spring of 2025, weeks before his death, Joseph Nye published what would be his final column for Project Syndicate. It was a meditation on the future of American soft power, and it carried the weight of a scholar who had spent four decades studying influence and who could see, with the clarity that proximity to endings sometimes confers, that the instruments of influence were changing faster than the institutions designed to wield them.

Nye did not write explicitly about AI in that final column. But the framework he articulated — that American influence depended not on dominance but on the capacity to attract voluntary cooperation through the quality of its institutions, its culture, and its values — applied to artificial intelligence with a precision that suggests the theoretical architecture was built for a moment its creator may not have fully anticipated.

Smart power, Nye's concept for the strategic integration of hard and soft power, provides the most useful lens for analyzing the three postures that The Orange Pill identifies in response to the AI transformation. Segal describes these as positions in a river — the Swimmer who resists the current, the Believer who accelerates it without regard for consequence, and the Beaver who studies the flow and builds structures to redirect it toward life. Translated into the vocabulary of international strategy, these correspond to three national postures toward AI, each with its own logic and its own costs; only one of them constitutes smart power.

The Swimmer's posture is strategic refusal. At the national level, it manifests as the attempt to insulate a society from AI disruption through restriction, prohibition, or deliberate non-adoption. No major nation has adopted this posture in its pure form — the costs of technological abstention in a competitive international system are too obviously severe. But elements of the Swimmer's logic appear in proposals to ban AI from educational settings, to prohibit AI-generated content in journalism, to restrict AI adoption in professions where human judgment is deemed irreplaceable.

The Swimmer's logic contains a genuine insight: that not all disruption is progress, that speed and capability are not synonyms for wisdom, and that some forms of friction are genuinely productive. Nye himself, in his interview with the USC Center on Public Diplomacy, insisted that "it is also necessary for A.I. to have a human interpreter, especially when emotions and creativity are involved." The insistence on human mediation is a form of productive friction — a refusal to optimize away the human capacity that gives the output its legitimacy and its value.

But the Swimmer's posture, as a national strategy, is ultimately passive. It mistakes the defense of the existing order for the construction of a better one. In Nye's framework, the Swimmer forfeits soft power, because soft power depends on producing something that others find attractive enough to emulate. A nation that refuses to engage with AI produces nothing in the AI domain worth emulating. Its influence contracts to the purely defensive — the ability to say no — while the nations that say yes, however imperfectly, generate the innovations, the governance models, and the cultural outputs that shape global preferences.

The historical precedent is instructive. The nations that attempted to control the printing press — the Ottoman Empire's delayed adoption, the Catholic Church's Index of Prohibited Books — did not prevent the spread of printed knowledge. They prevented their own populations from participating in the intellectual revolution that the printing press enabled, ceding influence to the nations that embraced it. The Swimmer's strategy, applied to AI, risks a similar outcome: not the prevention of AI's spread but the self-exclusion from the competition to shape how AI reshapes the world.

The Believer's posture is the opposite error: strategic acceleration without regard for consequence. At the national level, it manifests as the prioritization of AI development speed above all other considerations — deregulation, the elimination of institutional oversight, the treatment of human cost as an acceptable externality in the pursuit of technological supremacy.

The Believer's posture generates hard power efficiently. The nation that moves fastest accumulates the most compute, attracts the most talent, captures the most market share, and deploys the most capable systems. In the short term, this translates into economic leverage and military advantage — the traditional currencies of hard power.

But the Believer's posture erodes soft power with equal efficiency. Nye warned in his 2024 column on AI and national security that "like previous general-purpose technologies, AI has enormous potential for good and evil," and that the growing risks from today's narrow AI "already demand greater attention." The Believer's refusal to attend to those risks does not eliminate them. It externalizes them — onto workers whose livelihoods are disrupted without institutional support, onto citizens whose cognitive autonomy is degraded by attention-extracting systems, onto societies whose democratic deliberation is undermined by AI-generated manipulation.

Each externalized cost is a soft power liability. The nation whose approach to AI produces visible human suffering — burnout, displacement, the erosion of meaningful work — generates not attraction but revulsion. And revulsion is the opposite of soft power. It drives potential partners, allies, and emulators away, leaving the Believer isolated in its technological advantage, powerful but unadmired, capable but unattractive.

Nye's concept of smart power was developed precisely to avoid the Swimmer-Believer dichotomy. Smart power is not a compromise between hard and soft power. It is the recognition that the most effective strategies employ both in concert, that coercive capability without attractiveness is unsustainable, and that attractiveness without capability is impotent. The smart power strategist does not choose between building and governing. The smart power strategist builds governance into the building.

The Beaver's posture, in Nye's vocabulary, is the smart power posture applied to AI. The Beaver studies the current. The Beaver identifies leverage points — the specific junctures where institutional intervention can redirect large flows of capability with minimal expenditure of resources. The Beaver builds structures that serve not only its own interests but the interests of the broader ecosystem, because serving the broader ecosystem is what generates the voluntary cooperation that soft power depends on.

At the national level, the Beaver's posture requires three simultaneous investments that most national strategies treat as sequential or mutually exclusive.

The first investment is in capability — the hard power of AI. This means compute infrastructure, research talent, the regulatory environment that enables innovation, and the strategic position in semiconductor supply chains that determines who can build frontier systems. This is the dimension that most AI strategy discussions focus on, and it is real. A nation without AI capability has no leverage, no bargaining position, and no ability to shape the technology's development.

The second investment is in governance — the institutional architecture that channels capability toward human benefit. This is the dimension that Nye's framework elevates above its usual treatment. Governance is not merely a constraint on capability. It is itself a form of power. The nation that designs the most effective, most legitimate, most widely admired governance architecture for AI projects influence that military spending cannot replicate. Nye's nuclear analogy is instructive: the Non-Proliferation Treaty, which he helped implement as a senior official in the Carter administration, did not merely constrain nuclear capability. It established a normative framework that most nations in the world voluntarily joined, not because they were coerced but because the framework was perceived as legitimate — as serving the interests of humanity broadly rather than the interests of the nuclear powers narrowly. An AI governance framework of comparable legitimacy would represent the most significant exercise of soft power since the construction of the postwar international order.

The third investment is in adaptation — the demand-side support that enables citizens, workers, and students to navigate the transition. This is the dimension that virtually every national strategy neglects, and it is the dimension whose neglect carries the most severe long-term consequences. The nation whose educational system produces citizens capable of directing AI wisely — of exercising judgment, taste, and ethical seriousness in the deployment of amplified capability — possesses a compounding advantage. The nation whose citizens are merely consumers of AI output, capable of using the tools but not of directing them toward worthy ends, possesses a compounding vulnerability.

Nye's insistence throughout his career that "human resources training and management will continue to be decisive for the progress of society, by creating and deploying opportunities for humans to advance alongside AI" reflects an understanding that the smart power calculus ultimately depends on people. The most sophisticated AI strategy, the most comprehensive governance framework, the most permissive innovation environment — all are sterile without the human capacity to direct the capability they produce.

The smart power posture is demanding. It requires the simultaneous pursuit of capability, governance, and adaptation, each of which generates its own constituency, its own institutional advocate, and its own budgetary claim. The technology sector wants capability. The regulatory community wants governance. The education sector wants adaptation. The smart power strategist insists on all three, not as a political compromise but as a strategic necessity, because the absence of any one undermines the effectiveness of the other two.

Capability without governance produces the Believer's pathology: rapid innovation that generates hard power while eroding soft power. Governance without capability produces the Swimmer's irrelevance: normative leadership without the technological substance to back it up. Capability and governance without adaptation produce the dam deficit that The Orange Pill identifies as the most dangerous feature of the current moment: powerful tools, effective regulations, and a citizenry unprepared to use the former or benefit from the latter.

The three positions, then, are not merely strategic options. They are diagnostic categories that reveal a nation's understanding of power itself. The Swimmer understands soft power but not hard power — the value of what is lost but not the necessity of what must be built. The Believer understands hard power but not soft power — the necessity of capability but not the conditions under which capability generates lasting influence. The Beaver — the smart power strategist — understands that power is relational, that influence depends on voluntary alignment, and that voluntary alignment depends on the perception that your approach serves not merely your own interests but the interests of those you seek to influence.

Nye's final published words warned that the erosion of institutional credibility threatened to undermine the very soft power on which American global leadership depended. Applied to AI, the warning is specific and urgent: the nation that builds the most powerful AI but fails to build the institutional architecture that makes its approach attractive to others will find its technological advantage stranded — powerful but isolated, capable but unadmired, winning the race to the frontier while losing the competition for the voluntary cooperation that determines who shapes the order on the other side.

Chapter 5: The Democratization of Capability as Power Diffusion

In 1977, Joseph Nye and Robert Keohane published Power and Interdependence, a book that reshaped how a generation of scholars understood international relations. Their central argument was deceptively simple: in a world of increasing economic and social connections across borders, the traditional realist focus on military power and state security missed entire dimensions of how influence actually operated. Power flowed through multiple channels — economic, institutional, informational — and the nations that understood this complexity outperformed those that saw only the barrel of a gun.

Nearly half a century later, the framework Nye and Keohane built requires an extension they did not anticipate. Interdependence, as they described it, operated between entities of roughly comparable organizational scale: states, multinational corporations, international organizations. The channels of influence ran horizontally, connecting institutions to institutions across borders. What AI has introduced is a form of interdependence that operates vertically — connecting individuals directly to the productive capabilities that were previously accessible only through institutions — and the implications for global power distribution are more radical than any horizontal shift between great powers.

The Orange Pill documents this vertical shift through concrete cases. The engineer in Trivandrum who had never written frontend code built a complete user-facing feature in two days. Alex Finn, working alone with AI tools, built a revenue-generating product that would have required a team of five and twelve months of runway just five years earlier. The imagination-to-artifact ratio — Segal's term for the distance between a human idea and its realization — collapsed not incrementally but categorically, from years and teams to hours and a subscription.

In Nye's vocabulary, this is power diffusion: the movement of capability from concentrated centers to distributed peripheries. Nye identified power diffusion as a defining trend of the information age in The Future of Power, distinguishing it carefully from power transition, which describes capability shifting from one great power to another. Power transition is what analysts typically mean when they discuss the "rise of China" — a horizontal redistribution within the existing structure of the international system. Power diffusion is structurally different. It does not redistribute capability from one state to another. It redistributes capability from the state level to lower levels of social organization — to corporations, to networks, to individuals.

The distinction is analytically crucial because the two phenomena require different strategic responses. Power transitions can be managed through traditional diplomacy: alliances, deterrence, negotiation, the balancing mechanisms that the international system has developed over centuries. Power diffusion cannot be managed this way, because there is no counterparty to negotiate with. The capability is flowing not to a rival state that can be deterred or a corporation that can be regulated but to millions of individuals whose collective behavior is shaped by the tools available to them, the incentives they face, and the institutional environments they inhabit.

Previous waves of technological diffusion followed a pattern that Nye documented in his analysis of the information revolution. New communication technologies — the telegraph, the telephone, radio, television, the internet — each expanded the ability of individuals and non-state actors to organize, communicate, and coordinate. Each produced a period of disruption followed by institutional adaptation. And each, critically, preserved the fundamental asymmetry between institutions and individuals: the telegraph enabled faster communication, but building a telegraph network required institutional resources. The internet enabled global connectivity, but building a platform that could leverage that connectivity into economic or political power required capital, engineering talent, and organizational capacity at scale.

AI disrupts this asymmetry in a way that previous communication technologies did not. The difference, as Chapter 1 established, is that AI is not primarily a communication tool. It is a production tool. The individual with access to Claude Code is not merely able to talk to more people, organize more efficiently, or access more information. She is able to build — to produce software, products, services, and economic artifacts that compete directly with the outputs of organizations. The capacity to build is the capacity to create durable economic value, and durable economic value is the foundation of durable power.

The geopolitical implications of this diffusion become visible when mapped onto the global distribution of human capital. The forty-seven million developers worldwide, with the fastest growth rates in Africa, South Asia, and Latin America, represent a latent capability that institutional barriers had previously suppressed. The barriers were real but not intrinsic — they were artifacts of the cost structure of software development, which required years of specialized training, access to expensive tools and infrastructure, and proximity to the institutional ecosystems that turned code into products and products into companies. When AI collapses the cost structure, the barriers fall, and the latent capability activates.

Nye's concept of soft power illuminates a dimension of this activation that purely economic analyses miss. Innovation is not merely an economic activity. It is a cultural one. The nations and regions that produce innovative solutions to their own problems project a form of influence that transcends their economic weight. When a developer in Nairobi builds an AI-powered agricultural advisory service that helps smallholder farmers optimize planting decisions — a product that addresses a problem Silicon Valley does not understand and cannot solve from a distance — the product does more than generate economic value. It demonstrates competence, creativity, and relevance. It projects the message that this community understands its own challenges better than any external actor and possesses the capability to address them.

This is soft power generated from the periphery — influence that flows not from the traditional centers of technological production but from the communities closest to the problems that AI can address. The traditional powers retain enormous advantages: capital, institutional infrastructure, research capacity, network effects, the accumulated institutional knowledge embedded in the ecosystems that surround their technology sectors. But they have lost the monopoly on relevance. The most relevant application of AI in sub-Saharan Africa will not be designed in Palo Alto. It will be designed by someone who understands the specific constraints, the specific needs, the specific cultural context that determine whether a tool is adopted or abandoned.

The implications for national strategy are uncomfortable for the traditional technology powers. A significant portion of American AI soft power currently derives from the fact that American companies produce the tools that the rest of the world uses. When those tools enable the rest of the world to produce its own tools, the dependency relationship that underlies this soft power begins to shift. The developer in Nairobi who uses Claude Code to build her agricultural advisory service is, in one sense, a consumer of American technology and a generator of American soft power. But she is simultaneously building her own capacity for independent production — capacity that, once established, may reduce her dependence on American tools as alternative AI platforms emerge from China, Europe, India, or open-source communities.

This dynamic — where the exercise of soft power through tool provision simultaneously builds the capacity for recipient independence — has a parallel in Nye's analysis of foreign aid and development assistance. Aid generates soft power for the donor as long as the aid relationship is perceived as beneficial and non-exploitative. But effective aid, by definition, builds the recipient's capacity for self-sufficiency, which eventually reduces the recipient's dependency on the donor. The most successful aid programs, paradoxically, undermine the soft power relationship that justified them by achieving their stated objective.

AI tool provision follows the same paradox. The soft power generated by providing the world with powerful AI tools depends on the tools remaining useful and the relationship remaining perceived as beneficial. But the tools themselves, by enabling independent production, build the capacity for technological self-sufficiency in regions that were previously dependent on the traditional centers. The soft power is real but self-limiting — a form of influence that carries the seeds of its own diminishment.

The strategic response to this paradox, in Nye's framework, is not to restrict the tools in order to maintain dependency. Restriction would undermine the very soft power it aimed to preserve, because soft power depends on the perception of generosity, not control. The strategic response is to ensure that the relationship evolves from one of dependency to one of partnership — from tool provision to collaborative innovation — so that the influence persists even as the dependency diminishes. The nation that manages this transition successfully retains soft power not through control of the tools but through the attractiveness of the collaborative ecosystem it has built.

This requires a sophistication of strategic thinking that most national AI strategies currently lack. The prevailing frameworks — the U.S.-China AI race, the European regulatory model, the various national AI strategies that smaller countries have announced — all operate on the assumption that AI power is something to be accumulated, defended, and deployed. Nye's framework suggests a fundamentally different orientation: AI power is something to be shared strategically, governed wisely, and deployed in ways that generate the voluntary cooperation of others.

The distinction matters practically. A nation that hoards AI capability — through export controls, restrictive licensing, or the deliberate limitation of tool access — may preserve a short-term advantage in hard power. But it forfeits the soft power that comes from being perceived as a generous and trustworthy partner in technological development. And in a domain where capability diffuses rapidly through open-source models, academic publication, and the sheer portability of AI knowledge, the hard-power advantage of hoarding is time-limited while the soft-power cost is permanent.

Nye and Keohane's original insight — that interdependence creates mutual vulnerability but also mutual benefit — applies to AI with particular force. The world is becoming interdependent on AI in ways that no single nation can control and no nation can opt out of. The nations that recognize this interdependence earliest, that build the institutional frameworks for managing it cooperatively, and that project the soft power of an approach that serves broadly rather than narrowly will shape the terms of the interdependence. The nations that attempt to control it unilaterally will find, as every hegemon in history has found, that the attempt to control what cannot be controlled generates resistance proportional to the effort expended.

The developer in Lagos, the engineer in Trivandrum, the student in São Paulo — each represents not a threat to the traditional centers of power but an expansion of the system from which power is generated. The question is whether the institutional architecture of the international system can adapt to include them as participants rather than merely as consumers. Nye's career-long argument that institutions are themselves a form of power — that the nation which designs the best institutions attracts the voluntary cooperation of others — suggests that this adaptation is possible, but only if the nations currently at the center of the AI ecosystem recognize that their long-term influence depends not on maintaining their position at the center but on building a system whose center is everywhere.

---

Chapter 6: The Death Cross and the Geography of Value

In February 2026, a chart circulated through Wall Street trading desks that condensed a trillion dollars of market anxiety into two intersecting curves. The descending line tracked the valuation index of SaaS companies — the software-as-a-service firms that had defined enterprise technology for two decades. The ascending line tracked the AI market. The point where the lines crossed, somewhere around 2027 according to the projections, acquired the clinical name analysts give to moments of structural irreversibility: the Death Cross.

The numbers were real enough. Workday had fallen thirty-five percent in the first eight weeks of 2026. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. When Anthropic published a blog post demonstrating Claude's ability to modernize COBOL — the decades-old programming language that still runs much of the world's banking infrastructure — IBM suffered its largest single-day stock decline in more than a quarter century. A trillion dollars of market capitalization vanished in weeks, and the disappearance was not a correction within a stable paradigm. It was the market repricing the paradigm itself.

The Orange Pill analyzes the Death Cross as a migration of value from code to ecosystem — from the ability to write software to the institutional architecture that surrounds it: data layers, integration networks, user relationships, regulatory compliance, the accumulated institutional trust that takes decades to build and cannot be replicated by a weekend prototype. This analysis is correct as far as it goes. Applied through Nye's framework, it goes considerably further, because the migration of value that the Death Cross represents is not merely a sectoral adjustment within technology markets. It is a geopolitical repricing of what constitutes strategic advantage.

For two decades, national AI competitiveness has been measured primarily in terms of technical production capacity: research papers published, patents filed, engineers trained, lines of code written, compute infrastructure deployed. These metrics track hard power — the material resources and technical capabilities that a nation can accumulate and direct. The Death Cross reveals that these metrics were measuring the wrong thing, or at least an increasingly insufficient thing.

When code becomes commodity — when any competent individual can describe what they want and receive working software in hours — the ability to produce code ceases to be a strategic differentiator. The semiconductor supply chains that enable AI computation remain critically important, because compute is a physical resource whose production requires infrastructure that cannot be replicated overnight. But the application of compute — the transformation of raw computational capacity into products, services, and systems that generate economic value and cultural influence — is no longer gated by the ability to write code. It is gated by the ability to determine what code should be written, for whom, and within what institutional framework.

This is a shift from hard-power metrics to soft-power metrics, and its geopolitical implications are significant.

Consider the competitive position of nations through the lens of the Death Cross. The United States leads the world in AI hard power by most conventional measures: compute capacity, frontier model development, venture capital investment, research talent concentration. But the Death Cross suggests that leadership in these metrics, while necessary, is no longer sufficient. The trillion dollars of value that vanished from American software companies did not migrate to Chinese or European competitors. It migrated upward — from the code layer to the judgment layer, from the ability to build to the ability to decide what should be built.

The nations whose competitive advantage resides primarily in the code layer — nations that have invested heavily in training large numbers of software engineers, building IT outsourcing industries, or establishing themselves as low-cost development centers — face a structural devaluation of their core economic asset. India, whose IT services industry employs millions and generates over two hundred billion dollars in annual revenue, is particularly exposed. The industry's value proposition — skilled engineers producing code at lower cost than in-house development teams in the United States and Europe — is precisely the proposition that AI commodifies. When a developer with Claude Code can produce in hours what an outsourcing team produces in weeks, the cost advantage of lower wages is overwhelmed by the capability advantage of AI augmentation.

This is not a prediction of India's IT sector collapse. The industry possesses institutional assets — client relationships, domain knowledge, regulatory expertise, project management capability — that extend well beyond the code layer. But it is a warning that the sector's value must migrate upward or erode, and that the speed of migration will determine whether India's largest private-sector employer remains a source of national economic strength or becomes a vulnerability.

The nations whose competitive advantage resides above the code layer — in institutional architecture, in regulatory coherence, in the ecosystems of trust and relationship that surround their technology sectors — find their position strengthening. The Death Cross does not devalue Salesforce's code. It devalues the act of writing code comparable to Salesforce's. The institutional ecosystem — the twenty years of enterprise deployment, the integration network connecting sales pipelines to marketing automation to financial reporting, the compliance certifications, the audit trails, the accumulated workflow knowledge embedded in the muscle memory of millions of users — remains valuable precisely because it cannot be reproduced by a weekend prototype.

Nye's framework illuminates why institutional ecosystems resist commodification in ways that technical capabilities do not. Institutions generate what Nye calls "institutional stickiness" — the tendency of established arrangements to persist because the costs of switching exceed the benefits of alternatives, even when the alternatives are technically superior. A corporation that has trained ten thousand employees on Salesforce, built its compliance infrastructure around Salesforce's audit trails, and integrated Salesforce into its workflow at every level faces switching costs that no AI prototype can overcome, regardless of how quickly the prototype was built.

Institutional stickiness is a form of soft power at the organizational level. The ecosystem does not compel its users to remain. It attracts them by making remaining more beneficial than leaving. This is precisely Nye's definition of soft power — influence through attraction rather than coercion — applied to the commercial domain. The SaaS companies that survive the Death Cross will be those whose ecosystems generate genuine attraction: whose users remain not because switching is too costly but because the ecosystem serves them in ways that alternatives cannot.

The geopolitical application is direct. The nations whose international influence rests on institutional ecosystems — alliances, trade agreements, regulatory frameworks, educational exchanges, multilateral organizations — possess a form of power that the Death Cross does not devalue but appreciates. The United States' network of alliances, for all their tensions and imperfections, represents an institutional ecosystem whose switching costs are enormous and whose accumulated trust and interoperability cannot be replicated by any alternative arrangement, however technically rational. NATO's value does not reside in its military hardware alone, just as Salesforce's value does not reside in its code alone. The value resides in the institutional architecture that makes the hardware — or the code — effective.

China's challenge, viewed through the Death Cross lens, is not insufficient AI capability. It is insufficient institutional ecosystem. China's bilateral relationships, its Belt and Road infrastructure investments, its regional security arrangements — these are institutional initiatives, and they are substantial. But they operate primarily through economic leverage and infrastructure provision, which are forms of hard power. They do not generate the institutional stickiness that comes from shared values, mutual trust, and the accumulated experience of cooperative problem-solving. They do not produce alliances that partners choose to maintain even when maintenance is costly, which is the hallmark of institutional soft power.

The Death Cross also reveals a temporal dimension of geopolitical competition that most analyses overlook. The SaaS valuations that collapsed in early 2026 had peaked during the COVID bubble of 2020-2021, when lockdowns turned every enterprise into a software buyer overnight. The peak was artificial — a demand spike mistaken for a structural shift — and the correction, while accelerated by AI, was also a reversion to a more realistic assessment of long-term value.

Geopolitical analysts are susceptible to the same error. The current moment of AI competition — the intensity of the U.S.-China race, the breathless reporting on compute capacity and model capability, the framing of AI as a new arms race — has characteristics of a bubble. Not a bubble in the financial sense, but a bubble in the analytical sense: a moment when short-term dynamics are mistaken for long-term structure. The capability metrics that dominate current analysis — who has the most compute, who trains the largest models, who deploys the most autonomous systems — may prove to be the AI equivalent of SaaS revenue multiples: real in the moment, misleading as indicators of durable advantage.

The durable advantage, if Nye's framework holds, will belong to the nations that build the most effective institutional ecosystems around AI — the governance architectures, the educational systems, the alliance structures, the multilateral frameworks that channel AI capability toward outcomes that other nations find attractive enough to join. These ecosystems take decades to build. They cannot be assembled by executive order or funded into existence by industrial policy. They grow from the accumulation of trust, the demonstrated commitment to shared interests, and the institutional memory of cooperative problem-solving that only time and sustained engagement can produce.

The Death Cross, then, is not merely a repricing of the software industry. It is a signal that the international system is being repriced according to a new theory of value — one in which the institutions that channel capability are more strategically important than the capability itself. The nations that read this signal correctly and invest accordingly will find their position strengthening as the AI era matures. The nations that continue to measure their competitive position in units of compute and code will find, like the SaaS companies whose valuations collapsed, that they were measuring the wrong thing.

---

Chapter 7: The Smooth and the Vulnerable

In November 2018, Elaine Kamarck of the Brookings Institution published a paper that would prove more prescient than its modest title suggested. "Malevolent Soft Power, AI, and the Threat to Democracy" took Joseph Nye's concept of soft power — the ability to shape preferences through attraction — and inverted it. If soft power operates through the attractiveness of a nation's culture, values, and institutions, Kamarck asked, what happens when those same channels are weaponized? When the mechanisms of attraction are hijacked to deliver manipulation?

Kamarck's immediate subject was Russia's interference in the 2016 American presidential election, which she characterized as "the exercise, by Russia, of malevolent soft power." The operations did not coerce. They did not threaten military action or impose economic sanctions. They operated through the channels of cultural influence — social media, news media, the information ecosystem through which citizens form political preferences — and they shaped those preferences not through the attractiveness of Russian values but through the exploitation of American divisions. Kamarck warned that the 2016 operation was "the first but most certainly not the last example of malevolent soft power being used to influence a campaign" and that "all of that, however, pales in comparison to what the world of artificial intelligence could do to democratic systems."

The warning acquired new dimensions with the arrival of the technologies documented in The Orange Pill. The aesthetics of the smooth — Byung-Chul Han's concept, which Segal analyzes as the cultural tendency to eliminate friction from every human experience — has a dimension that neither Han's philosophy nor Segal's builder's perspective fully explores. Smoothness is not merely an aesthetic preference or a cultural pathology. It is an information-security vulnerability, and one that AI has magnified to a degree that existing frameworks for information warfare are not equipped to address.

The vulnerability operates through a mechanism that is almost embarrassingly simple: the cost of producing persuasive content has collapsed to near zero, while the human capacity to evaluate that content has not changed at all.

Before AI, producing a polished, professional analysis — the kind that might influence a policy debate, shape public opinion, or alter the course of an election — required expertise, time, and institutional backing. A credible think-tank report required researchers, editors, peer reviewers, and the institutional reputation that gave the report its authority. A credible news article required reporters, fact-checkers, editorial oversight, and the accumulated credibility of a publication. A credible diplomatic communication required trained diplomats, institutional knowledge, and the backing of a government. Each of these production processes imposed friction — time, effort, expertise, institutional accountability — and that friction served as a rough quality signal. Not a perfect one. Institutions produced propaganda and misinformation too. But the friction created a cost differential between credible and incredible content that gave consumers a heuristic, however imperfect, for distinguishing the two.

AI has eliminated this cost differential. A state actor, a non-state organization, or even a single individual can now produce content that is indistinguishable in surface quality from the output of the most credible institutions in the world. The polished prose, the structured argument, the appropriate citations, the measured tone that signals expertise — all of these can be generated in minutes at negligible cost. The smooth output of an AI-assisted influence operation looks, reads, and feels identical to the smooth output of a legitimate analysis.

This is what Han diagnosed as the danger of smoothness — the concealment of construction — applied to the domain of national security. When friction is removed from the production of persuasive content, the seams that previously allowed consumers to distinguish genuine from manufactured credibility disappear. The surface becomes uniform. The polished output of a Russian information operation, a Chinese strategic communication, a domestic disinformation campaign, and a legitimate policy analysis all present the same aesthetic: smooth, professional, credible.

Nye's framework reveals why this development is not merely a technical challenge but a soft-power crisis. Soft power depends on credibility. Credibility depends on the perception that the information, the values, the cultural products projecting influence are genuine — that they represent authentic expressions of a society's knowledge, creativity, and commitment to truth. When the information environment is saturated with AI-generated content whose provenance and authenticity cannot be reliably determined, the credibility that soft power requires erodes for everyone.

This is the poisoned well problem. A single bad actor's use of AI-generated disinformation does not merely damage the credibility of the bad actor's output. It damages the credibility of the entire information environment, including the legitimate analysis, the genuine expertise, the authentic cultural production that constitutes the raw material of soft power. When citizens cannot distinguish real from manufactured, they do not selectively distrust the manufactured. They develop a generalized distrust that encompasses everything — a corrosive skepticism that treats all information as potentially manipulated, all expertise as potentially fabricated, all institutional authority as potentially hollow.

Nye, in his 2024 column on AI and national security, identified AI's potential for use in biological warfare and terrorism as among its most frightening applications. But the degradation of the information environment may prove more consequential in the long run, because it strikes directly at the foundation of democratic governance: the capacity of citizens to form informed preferences through genuine deliberation. Democratic soft power — the attractiveness of democratic governance as a model that other societies wish to emulate — depends on democracy actually functioning, and democracy functions only when citizens can access reliable information, evaluate competing claims, and arrive at preferences through a process that, however messy, is grounded in something approximating shared reality.

The attentional ecology concept from The Orange Pill scales this analysis from the individual to the national level. A nation's attentional ecology — the aggregate cognitive environment produced by the interaction of its citizens with their information technologies — determines its capacity for democratic self-governance. A nation whose attentional ecology has been degraded by AI-generated noise, by the saturation of smooth content whose provenance cannot be verified, by the erosion of the friction that previously served as a rough quality signal, is a nation whose democratic processes are vulnerable to manipulation, whose public discourse is susceptible to capture by the most prolific content producers rather than the most credible, and whose soft power is eroding from within.

The vulnerability is asymmetric in ways that advantage authoritarian regimes. Democratic societies depend on open information environments. Restricting information flow contradicts democratic values and undermines the very openness that generates democratic soft power. Authoritarian societies, by contrast, already control their domestic information environments and can deploy AI-generated content externally while insulating their own populations from its effects. The asymmetry means that the weaponization of smooth content disproportionately threatens democracies — not because democracies are weaker, but because the democratic commitment to open information creates an attack surface that closed societies do not present.

Nye recognized this asymmetry in his analysis of cyber threats and information warfare, arguing that the United States' openness, while a source of strength in generating soft power, was simultaneously a vulnerability in an era of weaponized information. AI intensifies both sides of this equation. American openness enables the AI innovation ecosystem that generates enormous soft power through the tools and platforms the world uses and admires. American openness also enables the saturation of the American information environment with AI-generated content that degrades the deliberative capacity on which democratic governance depends.

The institutional response to this vulnerability is, at present, inadequate. Content authentication technologies — watermarking, provenance tracking, cryptographic verification of AI-generated content — address the supply side of the problem. They attempt to label AI-generated content so that consumers can identify it. These technologies are useful but insufficient, for two reasons. First, the most sophisticated adversaries will find ways to circumvent authentication measures, just as the most sophisticated counterfeiters find ways to replicate security features on currency. Second, and more fundamentally, authentication addresses the production side of the problem while ignoring the consumption side: the human capacity to evaluate information, to tolerate ambiguity, to resist the appeal of smooth certainty when rough uncertainty is the more honest representation of reality.

The consumption side requires something more difficult than technology. It requires the cultivation of what might be called epistemic resilience — the capacity of a citizenry to maintain functional deliberation in an information environment saturated with unreliable content. Epistemic resilience is a form of national cognitive infrastructure, as essential to democratic self-governance as physical infrastructure is to economic productivity. And like physical infrastructure, it does not build itself. It requires sustained investment in education, in media literacy, in institutional credibility, and in the cultural habits of mind that allow citizens to function in conditions of informational uncertainty.

Nye's framework suggests that epistemic resilience is itself a source of soft power. The nation whose citizens can navigate an AI-saturated information environment without losing the capacity for genuine deliberation demonstrates a form of democratic robustness that other societies will find attractive. The nation whose democratic processes survive the assault of AI-generated manipulation projects the message that democratic governance is not merely an ideal but a viable system — capable of functioning even under conditions designed to destroy it. This is soft power of the most durable kind: not the soft power of cultural production or institutional prestige, but the soft power of demonstrated resilience.

The smooth and the vulnerable, then, are two faces of the same phenomenon. The smoothness that Han diagnosed as a cultural pathology and that Segal analyzed as an aesthetic of frictionless production is, in the domain of national security, a vulnerability of the first order. The nation that recognizes this vulnerability earliest and builds the institutional architecture to address it — not merely through technology but through the cultivation of citizens capable of functioning in conditions of radical informational uncertainty — will possess an advantage that no quantity of compute can replicate.

---

Chapter 8: Education as Strategic Infrastructure

Joseph Nye spent four decades arguing that the attractiveness of a nation's institutions — its universities, its political system, its cultural industries — was a form of power as consequential as military force or economic leverage. American universities occupied a privileged position in this analysis. They attracted the brightest students from every corner of the world, trained the researchers who produced breakthrough innovations, and generated a form of institutional soft power that no rival could replicate through investment alone. The research university, in Nye's framework, was not merely an educational institution. It was a strategic asset, projecting influence through the quality of its output and the global network of alumni whose formative intellectual experiences were stamped with an American institutional identity.

The AI transformation threatens this strategic asset in ways that Nye's framework makes visible but that educational institutions themselves have been slow to recognize.

The threat does not come from AI replacing professors or automating lectures, though both are happening at the margins. The deeper threat is structural: AI is making the core value proposition of the research university — the transmission of expert knowledge and the certification of competence — less relevant to the economy that university graduates are supposed to enter. When a student can access, through a natural-language conversation with an AI system, the equivalent of a semester's worth of domain-specific instruction in an afternoon, the university's monopoly on knowledge transmission dissolves. When an employer can evaluate a candidate's capability through a portfolio of AI-augmented projects rather than a credential certifying years of coursework, the university's monopoly on certification erodes.

The Orange Pill makes this argument from the perspective of a builder watching the ground shift: "Our educational establishments are not prepared for this change and are staffed with calcified pedagogy and staff. It is one of the most urgent institutions requiring reform. If they don't change fast enough their demand will dry up as young people will not want to waste years of their life acquiring student debt or arcane skills that the world does not need." The language is blunt. The analysis, stripped of its urgency, is structurally sound.

Nye's framework elevates this analysis from educational policy to national security. If universities are strategic assets that generate soft power through institutional attractiveness, and if the AI transformation is eroding the foundations of that attractiveness, then the failure to reform educational institutions is not merely a domestic policy failure. It is a strategic vulnerability — a weakening of the institutional infrastructure on which a nation's international influence depends.

The argument requires careful construction, because the relationship between education and national power is more complex than most policy discussions acknowledge.

Education contributes to national power through four distinct channels. The first is human capital formation: the production of a workforce capable of generating economic value. This is the channel that most AI-and-education discussions focus on, and it is real. The nation whose educational system produces citizens capable of directing AI effectively — not merely using the tools but exercising the judgment to determine what should be built, for whom, and with what safeguards — possesses a workforce advantage that compounds over time.

The second channel is innovation capacity: the production of new knowledge through research. Universities are not the only sites of research, but they remain the primary sites of foundational research — the kind of open-ended inquiry that produces breakthroughs unpredictable in advance and impossible to mandate through industrial policy. The attention mechanisms that underlie the transformer architecture emerged from academic research on neural machine translation, and the deep-learning foundations on which large language models rest were sustained for decades by university laboratories through the kind of patient, curiosity-driven investigation that universities are uniquely structured to support, long before industry recognized their value. A nation that allows its research universities to atrophy loses not merely educational capacity but innovation capacity — the ability to participate in the foundational discoveries that determine the trajectory of AI development.

The third channel is soft power projection: the attraction of foreign students, researchers, and collaborators who absorb institutional values and carry them home. Nye identified this channel as one of America's most significant soft power assets. The foreign student who spends four years at an American university absorbs not merely technical knowledge but institutional culture — the habits of open inquiry, peer review, intellectual challenge, and collaborative problem-solving that characterize the American research tradition. These habits of mind are themselves a form of soft power, propagated through the global network of alumni who carry them into every institution they subsequently join.

The fourth channel is what might be called cognitive sovereignty: the capacity of a nation's citizens to think independently, evaluate competing claims, and arrive at considered judgments in conditions of uncertainty. This is the channel most directly threatened by AI, and the one that Nye's framework reveals as most strategically consequential.

The Orange Pill argues that education must shift from teaching answers to teaching questions. The argument rests on the observation that AI has made answers abundant. Any question that can be specified — any question whose answer can be verified against existing knowledge — can now be answered by a machine with greater speed and often greater accuracy than a human expert. The scarce resource is no longer the answer but the question: the capacity to identify what needs to be known, to formulate problems worth solving, to evaluate whether the answers generated by AI systems are adequate to the problems they address.

Translated into Nye's vocabulary, the shift from answers to questions is a shift from the production of human capital (people who can execute specified tasks) to the production of human judgment (people who can determine which tasks are worth executing). Human capital contributes to national hard power — economic productivity, military capability, technological output. Human judgment contributes to national soft power — the quality of a nation's institutions, the wisdom of its governance, the attractiveness of its approach to the challenges that every society faces.

The nation whose educational system produces citizens capable of exercising judgment — genuine judgment, not merely the ability to select among options presented by an algorithm — possesses an advantage that no amount of compute can replicate. Judgment requires the integration of knowledge, values, and contextual awareness that AI systems do not possess. It requires the capacity to hold uncertainty without rushing to resolution, to consider multiple perspectives without losing the ability to decide, to recognize when the smooth, confident output of an AI system conceals a gap in understanding that only human reflection can fill.

These capacities are not innate. They are cultivated through specific educational practices: the seminar discussion that rewards genuine engagement over rehearsed performance, the research project that requires the student to navigate ambiguity without a predetermined answer, the mentoring relationship that transmits not just knowledge but the stance toward knowledge that characterizes mature intellectual engagement. These practices are, by their nature, friction-rich. They are slow. They are resistant to optimization. They cannot be delivered at scale through digital platforms or replicated by AI systems, because their value resides precisely in the human relationship — the specific, particular, unreproducible encounter between a mind that has navigated a domain's challenges and a mind that is learning to navigate them.

Nye's framework suggests that the preservation and strengthening of these friction-rich educational practices is a national security imperative of the first order. Not because the practices are sacred in themselves, but because they produce the cognitive capacity on which both domestic governance and international influence depend. The nation whose citizens can ask good questions, evaluate AI-generated answers with genuine discernment, and exercise judgment in conditions of radical uncertainty will govern itself more effectively and project soft power more durably than the nation whose citizens can merely prompt AI systems and accept whatever the systems produce.

The practical implications extend beyond the university. Primary and secondary education must shift, too — from the transmission of facts that AI can supply to the cultivation of capacities that AI cannot: curiosity, persistence in the face of ambiguity, the ability to collaborate with others whose perspectives differ from one's own, the ethical seriousness to ask not merely "can this be built?" but "should it?" These capacities are not measured by standardized tests. They are not rewarded by the credential-based hiring practices that currently dominate the labor market. And they are not valued by the institutional structures that currently govern education — structures designed for an era in which knowledge was scarce, answers were valuable, and the primary purpose of education was to transmit the accumulated knowledge of one generation to the next.

That era is over. The institutional structures that served it must adapt or become irrelevant. Nye's soft power analysis adds a dimension of urgency that purely domestic policy arguments lack: the stakes are not merely educational. They are strategic. The nation that reforms its educational institutions most effectively will produce the citizens most capable of directing AI wisely, governing themselves democratically, and projecting the soft power of an approach to AI that other nations find attractive enough to emulate. The nation that allows its educational institutions to ossify — to continue transmitting answers in an age of abundant answers, certifying competence in an age of commodified competence, measuring students by what they know rather than by how they think — will find its strategic position eroding from within, as the cognitive infrastructure on which both governance and influence depend deteriorates beneath the surface of apparent institutional continuity.

Nye argued, in The Powers to Lead, that the most effective leaders are those who combine the ability to inspire (soft power) with the ability to organize (hard power) in pursuit of objectives that serve the broader group rather than the leader alone. The same applies to educational systems. The most effective educational systems will be those that combine the hard power of technical capability — ensuring that citizens can use AI tools fluently — with the soft power of intellectual formation — ensuring that citizens can direct those tools wisely. Technical fluency without judgment produces a nation of capable executors. Judgment without fluency produces a nation of thoughtful observers. Neither is sufficient. The combination is what the moment demands, and the nations that achieve it will lead.

Nye's final published reflections warned that America's soft power was not guaranteed — that it required continuous cultivation through institutional quality, cultural vitality, and the demonstrated commitment to values that others found worth emulating. The warning applies to education with particular force, because education is where the next generation's capacity for soft power is formed. The students in classrooms today will be the citizens, the leaders, the builders, and the questioners of the AI era. What they learn to do — and more importantly, what they learn to be — will determine whether their nation projects the soft power of wisdom or merely the hard power of capability.

The difference, as Nye spent a lifetime demonstrating, is the difference between influence that lasts and influence that doesn't.

Chapter 9: The Attentional Ecology of Nations

A democracy is a machine for producing collective decisions from distributed individual judgments. The machine has many components — elections, legislatures, courts, a free press, civil society organizations — but its fuel is the cognitive capacity of its citizens. When citizens can sustain attention, evaluate competing claims, tolerate ambiguity, and deliberate genuinely rather than merely react, the machine functions. When they cannot, it stalls. The structural integrity of democratic governance depends, in the final analysis, on the quality of the minds that govern.

Joseph Nye understood this dependency more precisely than most political scientists, because his theory of soft power located a nation's international influence in the quality of its domestic institutions. A democracy that functions well — that produces legitimate governance, peaceful transfers of power, and policy outcomes that serve the broad public rather than narrow interests — projects soft power that no propaganda campaign can replicate. A democracy that functions poorly — that produces gridlock, polarization, institutional capture, and policy outcomes that citizens perceive as illegitimate — projects weakness, regardless of its military budget or GDP.

The implication, which Nye drew explicitly in Do Morals Matter? and implicitly throughout his later work, is that threats to the quality of domestic democratic governance are threats to national power in the international system. They are not merely domestic policy problems. They are strategic vulnerabilities, because they erode the institutional attractiveness on which soft power depends.

Artificial intelligence poses such a threat — not through the dramatic scenarios of autonomous weapons or superintelligent systems that dominate public discourse, but through a quieter, more pervasive mechanism that The Orange Pill calls attentional ecology. The term describes what AI-saturated environments do to the minds that inhabit them. Scaled from the individual to the national, attentional ecology describes the aggregate cognitive environment produced by the interaction of a society's citizens with their information technologies — and the consequences of that environment for the society's capacity to govern itself.

The concept requires unpacking, because its components are individually familiar but collectively undertheorized.

The first component is attention itself — the cognitive resource that determines what information a mind processes, at what depth, and for how long. Attention is finite. The neuroscientific evidence on this point is unambiguous: the human brain cannot sustain focused engagement with complex material indefinitely, and the quality of engagement degrades rapidly when attention is fragmented across multiple simultaneous demands. A citizen who cannot sustain attention long enough to read a policy proposal, evaluate its assumptions, and form a considered judgment is a citizen whose democratic participation is formally preserved but functionally hollow.

AI systems interact with attention in complex ways. On one hand, AI tools can enhance focused attention by handling routine cognitive tasks — summarizing documents, organizing information, translating between languages — tasks that previously consumed bandwidth better spent on higher-order thinking. This is the ascending friction thesis applied to civic cognition: AI removes the friction of information processing and, in principle, frees the citizen to engage more deeply with the substance of democratic deliberation.

On the other hand, the same AI systems that can enhance focused attention are embedded in an information ecosystem designed to fragment it. The recommendation algorithms that determine what citizens see on social media, the notification systems that interrupt sustained engagement, the content-generation tools that flood the information environment with material competing for attention — these operate through AI and produce an environment in which sustained, deep engagement with complex material is not merely difficult but structurally discouraged. The ecosystem rewards engagement — clicks, views, shares — rather than comprehension, and engagement is maximized not by depth but by novelty, emotional arousal, and the constant provision of new stimuli that prevent the mind from settling into the sustained attention that genuine understanding requires.

The Berkeley researchers whose work The Orange Pill examines documented a workplace-level version of this phenomenon: "task seepage," the tendency for AI-accelerated work to colonize previously protected pauses in the workday. The national-level equivalent is the colonization of civic attention by the AI-driven information ecosystem. The moments in which citizens might otherwise engage with complex public issues — the commute, the lunch break, the evening hours — are saturated with algorithmically optimized content that is designed to capture attention, not to inform judgment.

Nye's framework reveals why this matters strategically. A nation whose citizens' attention has been captured by an information ecosystem optimized for engagement rather than comprehension is a nation whose democratic deliberation is degraded — and therefore whose soft power is eroding. The degradation is not visible in the way that military weakness or economic decline is visible. It manifests in subtler forms: declining voter knowledge about policy issues, increasing susceptibility to manipulation, the fragmentation of shared reality into algorithmically curated information bubbles that erode the common factual ground on which democratic deliberation depends.

The second component of attentional ecology is epistemic capacity — the ability to evaluate the credibility of information, to distinguish evidence from assertion, to recognize when uncertainty is genuine and when apparent certainty conceals ignorance. This capacity is related to attention but distinct from it. A citizen can sustain attention on a piece of content and still lack the epistemic tools to evaluate its credibility. Conversely, a citizen with strong epistemic tools may lack the sustained attention needed to deploy them.

AI threatens epistemic capacity through the mechanism analyzed in the previous chapter: the collapse of production friction that previously served as a rough quality signal. When professional-quality content can be produced by anyone at negligible cost, the heuristics that citizens have developed for evaluating credibility — the institutional provenance of a source, the professional quality of presentation, the consistency of a claim with the output of recognized experts — lose their reliability. The smooth is no longer a reliable indicator of the credible.

The erosion of epistemic heuristics is a national security problem because it creates what information warfare theorists call a "fog of peace" — a condition in which the information environment is so saturated with content of uncertain provenance and varying reliability that citizens lose the ability to form the shared factual understanding that collective decision-making requires. In a fog of peace, every claim is simultaneously plausible and suspect. Every source is potentially genuine and potentially manufactured. The cognitive response to this condition is not heightened discernment but generalized cynicism — the retreat from the effortful work of evaluation into the defensive posture of trusting nothing and no one.

Generalized cynicism is the precise condition that authoritarian influence operations seek to create. The objective of Russian information warfare, as analysts have documented extensively, is not to convince foreign populations that Russian narratives are true. It is to create an environment in which no narrative is trusted — in which the very concept of shared truth is undermined. AI accelerates this objective by orders of magnitude, because it enables the production of plausible alternative narratives at a speed and volume that overwhelm the human capacity to evaluate them.

The third component is what might be called deliberative stamina — the capacity to engage in the slow, effortful, often uncomfortable process of genuine deliberation. Democratic governance requires citizens to hold multiple perspectives simultaneously, to consider trade-offs that admit no clean resolution, to tolerate the discomfort of disagreement without retreating into tribal certainty. These are not natural cognitive states. They require cultivation through education, institutional support, and cultural norms that reward thoughtfulness over speed.

AI-saturated environments structurally discourage deliberative stamina. The instant availability of AI-generated answers to complex questions creates the expectation that resolution should be immediate. The algorithmic curation of information into ideologically consistent feeds reduces exposure to perspectives that challenge existing beliefs. The optimization of content for emotional engagement rewards certainty and punishes nuance. Each of these environmental pressures weakens the cognitive muscle that deliberative democracy requires — not through a single dramatic insult but through the accumulated effect of millions of micro-interactions that train the mind to expect speed, certainty, and comfort rather than the slowness, ambiguity, and discomfort that genuine deliberation demands.

Nye, in his analysis of the information revolution's effects on power, distinguished between those who benefit from the free flow of information and those who are overwhelmed by it. The distinction is directly applicable to AI. The citizens who benefit from AI-enhanced information environments are those with the cognitive infrastructure — the attention, the epistemic capacity, the deliberative stamina — to direct the flow. The citizens who are overwhelmed are those whose cognitive infrastructure has been degraded by the very environment that AI has helped create.

The distribution of cognitive infrastructure within a society is not random. It correlates, imperfectly but significantly, with educational attainment, socioeconomic status, and institutional access. This means that the degradation of attentional ecology disproportionately affects the citizens who are already most vulnerable — those with the least educational preparation, the fewest institutional supports, and the greatest exposure to the algorithmically optimized content that fragments attention and erodes epistemic capacity. The result is a cognitive inequality that maps onto and reinforces existing social and economic inequalities, creating a society in which the capacity for informed democratic participation is increasingly concentrated among those who least need democratic processes to protect their interests.

This is a soft power crisis masquerading as a technology problem. The nation whose democratic governance is visibly captured by an informed elite while the majority of citizens lack the cognitive infrastructure to participate meaningfully does not project the soft power of democratic attractiveness. It projects democratic dysfunction — and that projection repels rather than attracts, confirming the authoritarian narrative that democratic governance is a luxury that functions only under conditions that no longer obtain.

The policy responses available are neither technologically novel nor politically easy. They involve investment in media literacy education at every level. They involve regulatory frameworks that create transparency around algorithmic curation and provide citizens with genuine choice about how their information environments are constructed. They involve the cultivation of institutional credibility through the sustained demonstration of good faith — credibility that cannot be manufactured but must be earned through the accumulation of trustworthy behavior over time. They involve the creation of deliberative spaces — physical and digital — where citizens can engage with complex issues at a pace and depth that algorithmic optimization does not permit.

None of these measures individually is sufficient. Collectively, they constitute what The Orange Pill calls dam-building — the construction of institutional structures that redirect the flow of AI-enhanced information toward democratic health rather than democratic degradation. The analogy is apt because it captures both the necessity and the fragility of the endeavor. Dams must be continuously maintained. The river tests every joint. The moment attention lapses, the structure begins to fail.

Nye spent his career arguing that international influence depends on domestic institutional quality. The AI era makes this argument more urgent and more specific. The domestic institutional quality that matters most is not economic productivity or military readiness but cognitive infrastructure — the aggregate capacity of a nation's citizens to think clearly, evaluate honestly, and deliberate genuinely in conditions of radical informational complexity. The nation that builds this infrastructure will govern itself effectively and project the soft power of demonstrated democratic resilience. The nation that allows this infrastructure to erode will find its governance degraded, its soft power diminished, and its strategic position weakened — not by any external adversary, but by the internal consequences of an information environment it failed to steward.

---

Chapter 10: Smart Amplification

Joseph Nye's final published column appeared in Project Syndicate in May 2025, weeks before his death at eighty-eight. The column did not mention artificial intelligence. It did not need to. Nye was writing about the future of American soft power — about whether the institutional qualities that had made the United States the most influential nation in the modern era would survive the internal pressures that threatened to degrade them. The column read, in retrospect, as a summation: the scholar who had spent four decades studying influence distilling what he had learned into a final warning that influence is not a possession but a practice, not a resource to be stored but a relationship to be maintained, and that the moment a nation stops doing the things that make it attractive to others is the moment its power begins to erode.

The warning applies to AI with a specificity that Nye's general framework enables but that the particular technologies of the AI era make urgent.

This final chapter synthesizes the preceding arguments into a single strategic concept: smart amplification. The term adapts Nye's concept of smart power — the integration of hard and soft power into strategies that are simultaneously effective and attractive — to the specific conditions that AI has created. Smart amplification is the national capacity to use AI as an amplifier of institutional quality rather than merely an accelerator of output.

The distinction between amplification and acceleration is the distinction between smart power and mere power. Acceleration is quantitative: more output, faster production, greater efficiency. Every nation pursuing an AI strategy seeks acceleration. The metrics of AI competition — compute capacity, model capability, deployment speed — are metrics of acceleration. They measure how much faster a nation can do what it was already doing.

Amplification is qualitative: the enhancement of the signal that the acceleration carries. The amplifier does not care what signal it receives. Feed it noise, and it produces louder noise. Feed it clarity, and it produces clarity at scale. The nation that amplifies institutional wisdom projects that wisdom further. The nation that amplifies institutional dysfunction projects dysfunction further. AI is indifferent to the distinction. The distinction is entirely a function of what the nation brings to the amplifier.

The Orange Pill makes this argument at the individual level: "The question this book is trying to answer is not 'Is AI dangerous?' or 'Is AI wonderful?' It's: 'Are you worth amplifying?'" Nye's framework scales the question to the national level: Is this nation worth amplifying? Is its institutional architecture, its educational system, its governance capacity, its cultural output of sufficient quality that amplification produces something the world finds attractive — or does amplification merely expose, at greater scale and higher resolution, the dysfunction that less powerful tools previously concealed?

The question is uncomfortable because it demands honesty about institutional quality at a moment when institutional quality is visibly declining in many of the nations that consider themselves AI leaders. The United States, which leads the world in AI capability by most measures, is simultaneously experiencing a crisis of institutional credibility that undermines the soft power its AI tools generate. The polarization of its political system, the erosion of trust in its institutions, the visible dysfunction of its governance processes — all of these are being amplified by AI-driven information environments, projected globally at a speed and scale that pre-AI media could not achieve. The world's most powerful amplifier is amplifying dysfunction alongside innovation, and the soft power consequences are significant.

Smart amplification requires three elements that this book has examined independently and that the strategic concept integrates.

The first element is judgment: the human capacity to direct the amplifier toward ends that are worth pursuing. Judgment, as the preceding chapters have argued, is not a natural byproduct of AI capability. It is a cultivated capacity that depends on educational systems designed to produce it, institutional environments that reward it, and cultural norms that value it. The nation whose citizens can ask good questions — not merely prompt AI systems effectively but identify the problems worth solving, the values worth serving, the futures worth building — possesses the raw material of smart amplification. The nation whose citizens can only execute AI-generated instructions possesses capability without direction, which is power without influence.

Nye, in his interview with the USC Center on Public Diplomacy, emphasized that AI requires "a human interpreter, especially when emotions and creativity are involved." The observation was characteristically understated and characteristically important. The human interpreter is the exercise of judgment at the interface between AI capability and human consequence. Without it, the capability operates blindly — efficiently producing outputs whose value, appropriateness, and consequences no one has evaluated. With it, the capability is directed, channeled, shaped into something that serves human purposes rather than merely fulfilling technical specifications.

The second element is institutional architecture: the governance structures, regulatory frameworks, educational systems, and cultural norms that channel amplified capability toward collective benefit. This is the dam-building that both The Orange Pill and this book have argued is the critical missing piece of AI governance.

The analysis has identified a specific deficiency: the dam deficit exists primarily on the demand side. The supply-side governance of AI — regulations on what companies may build, transparency requirements, risk assessments — is developing, however imperfectly. The EU AI Act, the various national frameworks emerging worldwide, the corporate governance structures that responsible AI companies are adopting — these represent genuine, if insufficient, supply-side architecture.

The demand side remains almost entirely unaddressed. The citizens, workers, students, and parents who are navigating the AI transition in real time have almost no institutional support for doing so wisely. The retraining programs are inadequate. The educational reforms are too slow. The media literacy initiatives are underfunded. The cultural norms around AI use — when to engage, when to disengage, how to maintain cognitive autonomy in an environment of powerful tools — are developing by trial and error rather than by design.

This demand-side deficit is the single most dangerous feature of the current moment, because the patterns that form in the absence of institutional guidance become the norms that institutions must eventually confront. Workers who adapt to AI without support develop practices that may be individually rational but collectively harmful — the task seepage, the inability to disengage, the erosion of the boundary between work and rest that the Berkeley researchers documented. Students who encounter AI without educational frameworks that help them use it wisely develop habits of intellectual outsourcing that may be efficient in the short term but corrosive to the cognitive capacities that democratic citizenship requires. Each month of institutional absence compounds the challenge of eventual intervention, because the longer bad patterns persist, the harder they are to redirect.

Smart amplification requires closing the demand-side deficit with the same urgency that nations currently bring to the supply-side competition. The nation that builds the best demand-side architecture — the educational systems, the retraining programs, the cultural norms, the institutional supports that enable citizens to direct AI wisely — will possess an advantage that no quantity of compute can substitute for. Because the advantage compounds. Each generation of citizens educated to exercise judgment, to ask questions rather than merely accept answers, to maintain cognitive autonomy in an AI-saturated environment, produces the next generation of leaders, builders, and questioners. The investment returns are measured not in quarters but in decades, and the nations that make the investment earliest will find the returns accumulating long after the nations that prioritized short-term capability have exhausted the advantages of their head start.

The third element is soft power: the ability to make a nation's approach to AI attractive enough that others voluntarily align with it. This is where the entire analytical framework converges. Judgment without institutional architecture is individual virtue without collective effect. Institutional architecture without soft power is domestic governance without international influence. Soft power without judgment and institutional architecture is attraction without substance — a nation that looks good but cannot deliver on the promise its appearance projects.

Smart amplification integrates all three. The nation that produces citizens capable of judgment, builds institutions that channel amplified capability toward human benefit, and projects an approach to AI that other nations find attractive enough to emulate will shape the international order of the coming century. Not through coercion. Not through economic leverage. Through the mechanism that Nye spent a lifetime studying: the voluntary alignment that arises when others look at what you have built and say, we want what they have.

Nye's nuclear analogy, which he invoked repeatedly in his AI commentary, provides the most instructive precedent. The Non-Proliferation Treaty succeeded not because the nuclear powers forced compliance but because they offered a framework that most of the world's nations judged to be in their interest. The framework was imperfect. It embedded asymmetries that many nations resented. But it was perceived as legitimate — as serving the interests of humanity broadly rather than the interests of the nuclear powers narrowly — and that perception of legitimacy generated the voluntary compliance on which the framework depended.

An AI governance framework of comparable legitimacy would represent the single most important exercise of smart amplification available to any nation or coalition of nations. Such a framework would need to address the supply side (the development and deployment of AI systems), the demand side (the support of citizens navigating the transition), and the international dimension (the coordination of governance across borders in a domain where AI capability diffuses faster than any national regulatory framework can govern it). The nation or coalition that designs such a framework — and demonstrates, through domestic implementation, that it produces outcomes worth emulating — will possess the defining soft power advantage of the AI era.

Nye warned, consistently and with the authority of a scholar who had studied the subject for longer than most of his critics had been alive, that soft power is not automatic. It must be earned through the sustained quality of a nation's institutions, the genuine attractiveness of its values, and the demonstrated willingness to serve interests broader than its own. The warning was not pessimistic. It was diagnostic. And the diagnosis applies to AI with the precision of a framework built for exactly this kind of moment — a moment when the instruments of power are changing faster than the institutions designed to wield them, and when the nations that adapt their institutions fastest will lead, and the nations that cling to the instruments of yesterday will follow, however powerful those instruments remain.

Nye's career was an argument that power is not merely a function of capability but of the wisdom with which capability is deployed. AI makes this argument empirically testable on a scale that previous technologies did not. The amplifier is running. The signal it carries depends entirely on what we feed it. The nations that feed it institutional wisdom, educational depth, and the genuine commitment to human flourishing that makes an approach to technology attractive to others will project influence that endures. The nations that feed it noise — the noise of dysfunction, the noise of short-term thinking, the noise of capability pursued without care for consequence — will find that noise amplified too, projected globally, and judged by a world that has more options than ever before for where to place its voluntary alignment.

The question Nye spent his career asking — what makes a nation influential in the long run? — has acquired a new precision in the age of AI. The answer has not changed. Only the stakes have.

---

Epilogue

The word that would not leave me alone was "voluntary."

It followed me through ten chapters of Nye's framework the way a note you cannot quite identify follows you through a piece of music — present everywhere, anchoring everything, and invisible until someone names it. Soft power works because the other party chooses to align. Not compelled. Not purchased. Attracted. The entire architecture of influence that Nye spent a lifetime building rests on the distinction between a person, or a nation, that moves toward you because it wants to and one that moves toward you because it must.

I had not thought about AI through this lens before, and it rearranged something fundamental.

In The Orange Pill I wrote about the river of intelligence, about beavers and dams and the imperative to build structures that redirect the current toward life. These are useful metaphors. They capture something real about the relationship between human agency and technological force. But Nye's framework exposed a dimension I had been circling without naming: that the structures we build matter less than whether others choose to adopt them. A dam built by coercion holds water but generates resentment. A dam built in a way that others want to replicate — because they can see that the pool behind it sustains more life than the bare riverbed — that is the dam that changes the landscape.

The implication for everyone building with AI right now is more demanding than I initially understood. It is not enough to build well. It is not enough to build wisely. The work has to be attractive — not in the superficial sense of looking good, but in Nye's precise sense: producing outcomes that others judge worth emulating through their own free assessment. The engineer in Trivandrum, the developer in Lagos, the parent at the kitchen table — each is making choices about how to use these tools, and those choices aggregate into something that either attracts voluntary cooperation or repels it.

What stayed with me most was the demand-side gap. We have been so focused on what AI companies should be allowed to build that we have almost entirely neglected what citizens need in order to use what is built. The supply side gets the regulation, the headlines, the policy conferences. The demand side — the teachers, the parents, the workers adapting without guidance — gets trial and error. That asymmetry is not merely unfair. According to Nye's framework, it is strategically catastrophic. The soft power of a democracy depends on citizens capable of genuine deliberation. Citizens adapting to AI without institutional support are not developing that capacity. They are developing coping mechanisms, and coping mechanisms are not the same thing as wisdom.

The hardest sentence Nye ever wrote, to my ear, was the simplest: that soft power must be earned. Not acquired. Not claimed. Earned — through the sustained quality of what you produce and the demonstrable sincerity of the values you project. Applied to AI, this means that the future does not belong to whoever builds the most powerful system. It belongs to whoever builds the most trustworthy one. The system that people choose to use, not because they have no alternative but because using it makes their lives genuinely better in ways they can see and judge for themselves.

I keep returning to a twelve-year-old's question from the book: What am I for? Nye's framework offers an answer that I did not have before. You are for the voluntary part. You are for the choosing. The machine amplifies whatever it is given. Only a conscious being can decide what deserves to be given. And the quality of that decision — its care, its seriousness, its attention to who else is affected — is what makes the amplified signal worth receiving on the other end.

That is soft power at the human scale. And it is earned exactly the way Nye said: not once, but continuously, through the sustained quality of the attention you bring to a world that did not ask for your care but desperately needs it.

— Edo Segal

The AI race is framed as a contest of capability — who builds the most powerful model, who deploys the fastest, who dominates. Joseph Nye spent a lifetime demonstrating that this framing misses the dimension of power that actually endures. The most capable nation does not automatically lead. The most attractive one does — the one whose approach others voluntarily choose to emulate.

This book applies Nye's framework to the AI revolution and reveals what the capability obsession conceals: that soft power flows through the tools themselves, that the trillion-dollar SaaS collapse is a repricing of what strategic advantage means, and that the greatest vulnerability democracies face is not falling behind in compute but allowing the cognitive infrastructure of their citizens to erode from within.

The question is not who reaches the frontier first. It is whose frontier the rest of the world wants to inhabit.

“would be to fall into one-dimensional analysis and to believe that investing in military power alone will ensure our strength.”
— Joseph Nye
WIKI COMPANION

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Joseph Nye — On AI uses as stepping stones for thinking through the AI revolution.
