Nations do not adopt technologies; they adopt stories about technologies, and the stories determine adoption. The United States adopted an internet-freedom narrative—minimal regulation, maximal private investment, platforms protected from liability. China adopted an information-control narrative—heavy regulation, tight surveillance, sovereignty over data flows. Same technology, different fictions, divergent outcomes. Harari's framework predicts AI will follow this pattern. The American fiction treats AI as an economic opportunity (maximize private investment, minimize regulatory friction). The Chinese fiction treats AI as a strategic asset (centralized coordination, state direction of research, data as a national resource). The European fiction treats AI as a rights challenge (regulate to protect privacy, democracy, equity). Each captures something real and misses something critical. None constitutes a wise global response.
The divergence is not superficial but constitutive. The American market fiction has produced rapid capability development, concentrated gains, minimal transition support for displaced workers, and world-leading AI companies. The Chinese state-capacity fiction has produced strategic deployment, centralized control, integration of AI into authoritarian governance infrastructure, and domestic champions (Baidu, Alibaba, Tencent). The European rights fiction has produced the EU AI Act (2024), strong protections for citizens, slower capability development, and dependence on American and Chinese technology. None of these is optimal. Each reflects the coordinating fiction shaping that polity's institutional response.
Harari has been particularly attentive to the asymmetric risk AI poses to governance models. In democracies, 'algorithms prioritize engagement over accuracy,' fragmenting shared reality and weakening institutional trust. In autocracies, 'AI offers unprecedented tools for surveillance and control,' strengthening the regime's capacity to monitor and manage populations. If this asymmetry holds—AI destabilizes democracies while strengthening autocracies—the geopolitical competition is not merely between nations but between governance systems, and the technology itself tilts the field. The liberal democratic model, which Harari argues is already under strain, faces a technology that amplifies its vulnerabilities (polarization, misinformation, trust erosion) while reinforcing authoritarian advantages (surveillance, control, narrative management).
The absence of a shared global fiction is the structural vulnerability Harari warns about most urgently. Previous transformative technologies—nuclear, internet, biotech—were each mediated by some international framework: the IAEA for nuclear energy, ICANN for internet naming and numbering, the WHO for health and biotechnology. AI has no equivalent. International AI governance is, as of this writing, a patchwork: bilateral agreements, voluntary commitments, declarative principles without enforcement. The gap between AI development speed (months) and international institution-building speed (years, decades) is the civilizational risk. The technology that Harari calls 'the first that can make decisions' is being developed inside competing national fictions with no overarching coordination framework.
Harari elaborated the geopolitical-fictions framework in Nexus (2024) and in numerous essays and interviews from 2023 to 2025. The approach synthesizes his longstanding thesis that nations are imagined communities (a concept from Benedict Anderson) with the insight from technology studies that the same technology produces different societies depending on institutional context (Lynn White Jr.'s work on medieval technology, Thomas Hughes's systems approach).
Three divergent fictions, three divergent outcomes. American market story (innovation speed, concentrated gains), Chinese state story (strategic deployment, centralized control), European rights story (strong protection, capability lag).
The fiction determines the response. Same technology, different narratives, materially different institutional structures—regulation, investment, distribution, governance.
Asymmetric risk to democracy. AI fragments democracies (through engagement-optimized polarization) while strengthening autocracies (through surveillance enhancement). The technology is not neutral between governance models.
No shared global fiction. Unlike nuclear (IAEA) or internet (ICANN), AI has no international governance framework. The vacuum is the vulnerability.
Speed mismatch. Technology develops in months; international institutions require years. The gap between capability and coordination is unprecedented and widening.