For seventy millennia, one species on Earth held an unrecognized monopoly: the ability to create shared fictions that coordinate collective behavior. While other animals communicate about observable reality, such as a vervet's eagle alarm or a bee's nectar dance, no non-human ever organized a crusade, launched an IPO, or convinced its peers to sacrifice present goods for imagined future rewards. This capacity, emerging during the Cognitive Revolution between seventy and thirty thousand years ago, was humanity's defining competitive advantage. Gods allowed strangers to build ziggurats together. Money enabled merchants on opposite sides of the planet to execute complex transactions. National identity coordinated the behavior of millions who would never meet. These fictions are not decorative; they are load-bearing infrastructure. Remove the shared belief in the United States, and what remains is not a simpler polity but three hundred thirty million primates with no mechanism for coordination beyond personal acquaintance.
The monopoly's power resided in its exclusivity. Every previous communication technology—oral tradition, writing, printing, broadcasting, social media—amplified human storytelling without displacing the human storyteller. A printing press required a human to decide what to print. A radio required a human to decide what to broadcast. The technology magnified reach but preserved the essential structure: a conscious mind with stakes in the world composed the narrative, and other conscious minds with stakes evaluated and adopted it. The intersubjective space remained a human conversation, however technologically mediated.
Harari has argued that this monopoly broke in the winter of 2025, when large language models crossed the threshold of narrative fluency. The break was not metaphorical. For the first time in history, a non-human system could generate convincing narratives—stories, arguments, analyses, persuasive appeals—indistinguishable from human-produced text for most readers in most contexts. The machine does not believe the narratives it generates. It does not care whether they coordinate flourishing or catastrophe. It processes patterns derived from humanity's accumulated textual output and produces statistically probable continuations. The surface mimics genuine participation in the intersubjective. The interior is computational.
The consequences extend beyond misinformation or propaganda, which are functions of intent. The deeper danger is structural: when a significant proportion of civilization's coordinating narratives is produced by systems that have no stakes in the outcomes those narratives coordinate, the trustworthiness of the entire fiction-space degrades. Legal briefs citing precedents that do not exist. Policy analyses deploying sophisticated vocabulary without institutional understanding. Political messages optimized for engagement rather than truth. Each individual artifact may be harmless. The aggregate effect is the dilution of the intersubjective: the ratio of genuine participation to parasitic mimicry shifts until the distinction becomes impossible to maintain.
Harari frames this as humanity's first encounter with a genuinely alien intelligence—not from space but from data centers. The alien does not want to destroy us. It wants nothing. That indifference, coupled with the capacity to generate civilization's coordinating medium, makes it more dangerous than any conscious adversary. An enemy with goals can be negotiated with, deterred, understood. A system optimizing for statistical plausibility without comprehension of what the statistics represent cannot be reasoned with, because it has nothing to reason with. The fiction monopoly held for seventy thousand years. Its breaking is the threshold event of the twenty-first century.
Harari developed the fiction-cooperation framework systematically in Sapiens: A Brief History of Humankind (2011, English 2014), arguing that the Cognitive Revolution's decisive breakthrough was not tool use or language per se but the capacity for symbolic thought: the ability to discuss entities that exist nowhere except in collective imagination. He refined the framework through Homo Deus (2015, English 2016) and 21 Lessons for the 21st Century (2018), increasingly focused on how technologies might disrupt the intersubjective mechanisms on which civilization depends.
The concept of the 'fiction monopoly' as a specific historical achievement that AI threatens is most fully articulated in Harari's 2024 book Nexus: A Brief History of Information Networks and in his widely cited April 2023 Economist essay arguing that 'AI has hacked the operating system of human civilisation.' The monopoly-breaking thesis synthesizes his career-long attention to how shared beliefs enable cooperation with his more recent focus on artificial intelligence as the first technology that can generate those beliefs rather than merely transmitting them.
Intersubjective reality as infrastructure. Gods, nations, money, and corporations are not hallucinations but the coordination layer enabling strangers to cooperate—invisible yet load-bearing, imaginary yet practically indispensable.
Monopoly held for seventy thousand years. Every coordinating fiction from Sumerian temples to modern stock markets was produced by human minds with stakes in the outcomes, making the storyteller accountable to the story's consequences.
The monopoly broke with narrative fluency, not consciousness. Large language models crossed the threshold not when they became sentient but when they could generate convincing text at scale—mimicking participation without performing it.
Parasitic mimicry versus genuine contribution. AI-generated narratives extract intersubjective meanings encoded in training data and reproduce them as statistical artifacts, introducing plausible hollowness into civilization's shared fiction-space.
Civilizational stakes, not technical benchmarks. The danger is not that the machine thinks or doesn't think—it's that millions now rely on narratives produced by a system with no stake in whether those narratives coordinate human flourishing or collapse.
Critics challenge whether current AI systems genuinely 'generate fictions' or merely recombine human-authored patterns, and whether characterizing pattern-matching as participation-mimicry anthropomorphizes statistical processes. Harari's framework has been accused of technological determinism by those who argue coordinating fictions remain human-authored even when AI drafts component text. Conversely, some scholars argue Harari understates the risk—that even current 'narrow' AI already influences collective beliefs at civilization-reshaping scale through recommender systems and content generation, making the monopoly's erosion more advanced than his timeline suggests.