Yuval Noah Harari — On AI
Contents
Cover
Foreword
About
Chapter 1: The Fiction Monopoly Breaks
Chapter 2: The Parasite on the Intersubjective
Chapter 3: The Agricultural Trap Reopens
Chapter 4: Alien Intelligence and the Consciousness Gap
Chapter 5: The Useless Class Arrives Early
Chapter 6: The Dataist Sacrament
Chapter 7: The Geopolitics of Competing Fictions
Chapter 8: What the Sapiens Chooses
Chapter 9: The Self-Correcting Species
Chapter 10: The Fiction Worth Believing
Epilogue
Back Cover
Cover

Yuval Noah Harari

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Yuval Noah Harari. It is an attempt by Opus 4.6 to simulate Yuval Noah Harari's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The fiction that terrifies me most is the one I told myself every day for thirty years without knowing it was a fiction.

The fiction goes like this: I build things. The things I build are real. The value is in the building.

Harari dismantled that in about four pages. Not the building — the "real" part. Every company I have ever run existed because a group of people agreed to believe in the same imaginary entity at the same time. Every product I shipped survived because enough strangers shared a fiction about its worth. Every dollar that crossed my bank account was itself a shared hallucination — extraordinarily useful, practically indispensable, but no more physically real than a prayer.

I knew this intellectually. Everyone does, if they think about it for ten seconds. What I had never done was follow the thread to where it leads when you add AI to the equation.

Here is where it leads. For seventy thousand years, one species on one planet held a monopoly on producing the shared fictions that coordinate collective life. Gods, nations, money, human rights, corporate charters — all of it imaginary, all of it load-bearing, all of it produced exclusively by human minds with stakes in the world those fictions governed. The storyteller was always embedded in the story. The fiction-maker always lived downstream of the fiction's consequences.

That monopoly broke. Not when AI became conscious — it didn't. It broke when AI became fluent. When a system that understands nothing about justice could produce a paragraph about justice that reads as though a mind steeped in moral philosophy composed it. The surface is indistinguishable. The interior is empty. And the civilization that runs on shared fictions now has a machine that manufactures them without caring what they coordinate.

Harari gave me a framework for understanding why this matters more than any technical benchmark. The river of intelligence I describe in The Orange Pill — that force flowing from hydrogen to humanity to whatever comes next — runs through fictions. It always has. The fictions are the channel. And the channel is now being shaped by something that has no stake in where the water goes.

This book applies Harari's lens to the AI moment we are living through. Not his predictions, which are sometimes theatrical. Not his prescriptions, which are sometimes too neat. His framework — the recognition that the stories we tell about our tools matter more than the tools themselves, because the stories coordinate the response, and the response determines the outcome.

The technology is ready. The question is whether the storytellers are.

Edo Segal · Opus 4.6

About Yuval Noah Harari

1976–present

Yuval Noah Harari (1976–present) is an Israeli historian, philosopher, and public intellectual, born in Kiryat Ata, Israel. He received his PhD from the University of Oxford and is a professor in the Department of History at the Hebrew University of Jerusalem. He rose to global prominence with Sapiens: A Brief History of Humankind (2011, English edition 2014), which argued that Homo sapiens came to dominate the planet through the unique capacity to create and believe in shared fictions — money, nations, religions, human rights — that enable large-scale cooperation among strangers. His subsequent works, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), extended this framework into analyses of technology, consciousness, and the future of the species, introducing concepts such as "Dataism" (the emerging ideology that treats information flow as the supreme value) and the "useless class" (populations rendered economically irrelevant by automation). His 2024 book Nexus: A Brief History of Information Networks directly addresses artificial intelligence as the first technology capable of generating the fictions on which civilization depends, warning that AI has "hacked the operating system of human civilization" by mastering language — the medium through which shared meanings are constructed and maintained. Harari's work has sold over forty-five million copies in sixty-five languages, making him among the most widely read intellectual voices of the twenty-first century and a leading figure in public discourse on existential risk, the ethics of technology, and the future of human identity.

Chapter 1: The Fiction Monopoly Breaks

For seventy thousand years, Homo sapiens held a monopoly so complete that nobody thought to call it one. The capacity to invent and believe in things that do not physically exist — gods, nations, money, human rights, limited liability corporations — was the exclusive province of one species on one planet. Other animals could communicate about observable reality: a vervet monkey could scream to warn of an approaching eagle, a bee could dance to indicate the direction of a nectar source. But no chimpanzee ever convinced another chimpanzee to hand over a banana today in exchange for the promise of unlimited bananas in chimpanzee heaven. No dolphin ever organized a crusade. No ant colony ever IPO'd.

This monopoly on fiction was not incidental to human dominance. In Yuval Noah Harari's framework, it was the entire mechanism of that dominance. The Cognitive Revolution, that mysterious neurological shift that occurred somewhere between seventy thousand and thirty thousand years ago, did not give Homo sapiens bigger muscles, sharper claws, or faster legs. It gave the species something far more powerful: the ability to construct shared fictions that could coordinate the behavior of thousands, then millions, then billions of strangers. A Sumerian merchant and a Sumerian priest who had never met could cooperate in building a ziggurat because both believed in the same gods. Two twenty-first-century strangers on opposite sides of the planet can execute a complex financial transaction in milliseconds because both believe in the same money, the same contract law, the same imagined entity called a bank.

The fictions are not decorations laid on top of a more fundamental material reality. They are the infrastructure of collective life. Remove the shared fiction of the United States of America, and what remains is not a simpler version of the same country — what remains is three hundred and thirty million primates with no mechanism for coordinating behavior beyond personal acquaintance. Remove the shared fiction of money, and the global economy does not simplify — it evaporates. The fictions are load-bearing walls. Pull them out, and the building comes down.

For the entire duration of this monopoly, the production of fiction was a human activity conducted through human media. Oral traditions maintained by human memory, transmitted by human voice. Written texts composed by human hands. Broadcast narratives produced by human teams. At every stage, the irreducible element was a human mind that believed something, wanted something, feared something, and translated that belief or desire or fear into a narrative that other human minds could absorb and act upon. The storyteller had stakes. The storyteller was embedded in the community whose behavior the story would coordinate. The storyteller participated in the intersubjective reality — the shared space of meanings, values, and assumptions — that the story would reinforce or revise.

In the winter of 2025, the monopoly broke.

Not because a machine became conscious. Not because an artificial general intelligence emerged from a laboratory. Nothing so dramatic. What happened was simultaneously more mundane and more consequential: a technology crossed the threshold of being able to produce convincing narrative at scale, in natural human language, with a fluency and flexibility that made its outputs functionally indistinguishable from human-produced text for most readers in most contexts. The large language model — trained on the vast corpus of human-generated text, processing patterns through billions of parameters — learned to generate stories, arguments, analyses, proposals, and persuasive appeals that read as though a mind with intentions and beliefs had composed them.

This was not the first technology to amplify human narrative. The printing press amplified it. Radio amplified it. Television amplified it. Social media amplified it to a degree that would have seemed hallucinatory to Gutenberg. But every previous amplification technology required a human operator. A human decided what to print. A human decided what to broadcast. A human wrote the tweet that the algorithm then amplified to millions. The technology magnified the human storyteller's reach without displacing the human storyteller's role.

Harari has insisted, with increasing urgency across his recent work, that artificial intelligence is categorically different. In his 2024 book Nexus, in his interviews and public lectures, in his widely cited Economist article declaring that "AI has hacked the operating system of human civilization," Harari has advanced a claim that sounds hyperbolic until you consider its structural precision: AI is the first technology in human history that is an agent, not merely a tool. "Every previous technology in history was a tool in our hands," Harari told Noema magazine. "You invent a printing press, you decide what to print. You invent an atom bomb, you decide which cities to bomb. But you invent an AI, and the AI starts to make the decisions."

Set aside, for the moment, whether this characterization is technically precise. AI researchers have objected that current large language models do not "decide" anything in the way humans decide — they generate statistically probable continuations of input sequences. The objection has merit at the engineering level. But at the civilizational level, which is the level at which Harari's analysis operates, the technical distinction matters less than the functional reality. What matters is that a non-human system can now produce narrative — the very medium through which shared fictions are constructed, transmitted, and maintained — without requiring a human to compose that narrative. The monopoly is broken not because the machine thinks, but because the machine produces the artifact of thinking: coherent, persuasive, contextually appropriate text.

The implications become visible when mapped against Harari's fiction-cooperation framework. If shared fictions are the coordination mechanism that enables large-scale cooperation among strangers, and if the production of those fictions is no longer exclusively human, then the species has lost monopoly control over the primary mechanism of its own civilization. This does not mean civilization collapses tomorrow. It means the terms of civilizational coordination have changed in a way that has no precedent in seventy thousand years.

Three competing narratives about this change are currently vying for dominance in the collective imagination, and the outcome of their competition will shape the institutional response to AI as powerfully as any engineering breakthrough.

The first narrative is inevitable progress. In this story, AI is the latest triumph of human ingenuity. It will solve problems no previous technology could solve: cure diseases, reverse environmental degradation, democratize capabilities that were previously restricted to the privileged few. Opposition to this narrative is coded as Luddism, failure of imagination, inability to adapt. The appropriate collective response, according to this fiction, is acceleration. Build faster. Deploy wider. Trust that the benefits will outweigh the costs, as they eventually did with every previous transformative technology. Edo Segal's account in The Orange Pill of watching his engineering team in Trivandrum achieve a twenty-fold productivity multiplier captures the experiential core of this narrative — the exhilaration of expanded capability, the genuine wonder of barriers collapsing.

The second narrative is existential threat. In this story, AI is fundamentally unlike any previous technology because it can generate the very medium — language, narrative, argument — through which human civilization coordinates itself. Once this capacity escapes human control, the consequences are unknowable and potentially catastrophic. Harari himself has leaned into this narrative more than any other public intellectual of comparable stature: "If I said an alien species is coming in five years, maybe they will be nice, maybe they will cure cancer, but they will take our power to control the world from us, people would be terrified. This is the situation we're in, but instead of coming from outer space, the threat is coming from California."

The third narrative is human agency. In this story, the outcome is neither predetermined by technological momentum nor foreclosed by existential risk. The technology is powerful but not deterministic. Its effects will be shaped by political decisions, institutional designs, cultural norms, and individual choices that have not yet been made. This is the narrative Segal's The Orange Pill ultimately advances — the argument that AI is an amplifier, that the quality of the amplification depends on the quality of what is amplified, and that the responsibility for determining what gets amplified rests with human beings.

Harari's historical analysis suggests that which of these narratives prevails will matter more than any technical specification. The narrative that coordinated the response to the printing press determined whether the technology produced the Reformation or the Counter-Reformation. The narrative that coordinated the response to industrialization determined whether workers got the eight-hour day or the sixteen-hour shift. In each case, the technology was identical. The coordinating fiction was different. And the fiction determined the outcome.

The current narrative competition is being conducted under conditions that favor simplicity over accuracy. Social media algorithms reward emotional arousal. The attention economy profits from conviction, not nuance. "AI will save us" generates excitement. "AI will destroy us" generates fear. Both are high-engagement emotions, excellent fuel for the algorithmic feed. "The outcome depends on choices we haven't made yet" generates neither excitement nor fear — it generates the uncomfortable weight of responsibility, and responsibility does not trend.

The silent middle — the people who feel both the exhilaration and the dread, who recognize the genuine expansion of capability and the genuine threat to the structures that coordinate collective life — is the constituency whose narrative is most needed and least amplified. It is the natural constituency for the fiction of human agency, and it is being drowned out by a discourse architecture that rewards the extremes.

This is not merely a communication problem. In Harari's framework, it is a coordination crisis. The narrative that a society constructs around a new technology determines how that society responds: what institutions it builds, what regulations it enacts, what investments it makes, what careers it encourages, what fears it cultivates, what questions it asks. If the coordinating narrative is wrong — too optimistic, too pessimistic, or too simplistic — the coordinated response will be wrong, and real people will bear the consequences of the mismatch between story and reality.

Harari has argued that the species has "a few years, I don't know how many — five, ten, thirty — where we are still in the driver's seat before AI pushes us to the back seat." The temporal compression matters because every previous fiction-cooperation cycle — the narratives that coordinated responses to agriculture, to printing, to industrialization — played out over decades or centuries. The species had time to develop institutional responses, to build what might be called dams in the river of technological change. The AI cycle is offering years, perhaps months. The gap between the speed of the technology and the speed of the species' capacity for institutional adaptation is wider than it has ever been.

The fiction monopoly held for seventy thousand years. It broke in a winter. The question that follows from Harari's framework is not whether the break will have consequences — it already has — but whether the species that built its entire civilization on the capacity for shared fiction can construct, in time, the new fictions and the new institutions that will determine whether the post-monopoly world is habitable.

The storyteller has lost exclusive control of the story. The species that defined itself by its capacity for narrative now shares that capacity with a system that produces narrative without understanding it, without believing it, without caring whether the narrative it produces coordinates human flourishing or human catastrophe. The machine tells the story now. Whether the story serves the species or subverts it depends on structures that do not yet exist, built by people who have not yet agreed on what those structures should look like, in a timeframe that is shrinking with every new model release.

The monopoly is broken. What comes next is a choice — one that the species has never faced before, and that its existing institutions were not designed to make.

---

Chapter 2: The Parasite on the Intersubjective

Harari's philosophical framework distinguishes among three orders of reality. Objective reality consists of things that exist independently of human belief — gravity, photosynthesis, the mass of a neutron star. Subjective reality consists of things that exist within the experience of a single individual — the taste of coffee, the ache of a broken heart, the private sensation of understanding a mathematical proof. Intersubjective reality, the category that interests Harari most and that undergirds his entire analysis of civilization, consists of things that exist because multiple human beings collectively believe in them. Money. Nations. Corporations. Human rights. The rules of chess. The value of a university degree.

Intersubjective realities are not hallucinations. They have enormous practical power. A dollar bill can buy a meal because millions of people share the belief that it can. A passport can open a border because thousands of immigration officials share the belief that it should. A corporation can sign contracts, own property, and sue in court because an entire legal system shares the belief that this fictional entity has standing. The intersubjective is not a lesser order of reality. It is the order of reality that makes civilization possible — the invisible architecture that enables cooperation among strangers at a scale no other mechanism can achieve.

The intersubjective is maintained through participation. This is the crucial feature that distinguishes intersubjective reality from both objective and subjective reality. Gravity does not require belief to operate. A private sensation does not require community to exist. But the value of a dollar, the legitimacy of a constitution, the meaning of a wedding vow — these exist only as long as a community of minds actively maintains them through ongoing participation: argument, negotiation, ritual, legislation, conversation, and the thousand small daily acts of affirmation that keep a shared fiction alive.

The participation must be genuine. A shared fiction is sustained not by the mechanical repetition of its terms but by the engagement of minds that understand those terms, that have stakes in their interpretation, that can argue about their meaning, revise them in response to new evidence, and hold them accountable to the lived experience of the community. The meaning of "justice" is not fixed in any dictionary. It is negotiated, generation by generation, case by case, argument by argument, by minds that care about justice — that have felt its absence, imagined its presence, and committed themselves to its pursuit. This active, caring, stake-holding participation is what keeps the intersubjective alive and responsive.

Artificial intelligence does something to the intersubjective that no previous technology has done: it generates intersubjective content without being a participant in the intersubjective community.

When a large language model produces text that uses the word "justice," the word arrives freighted with the full intersubjective weight that it carries in the training data — the accumulated beliefs, arguments, legal precedents, philosophical debates, and moral intuitions of millions of human minds across centuries of engagement with the concept. The model can deploy the word with remarkable sophistication: distinguishing justice from mercy, connecting it to related concepts of fairness and equity, embedding it in arguments that follow the rhetorical patterns of genuine philosophical reasoning. The output reads as though a mind that understands and cares about justice has produced it.

But the model does not understand justice. It does not care about justice. It has no stake in whether justice prevails or fails. It processes the intersubjective meanings encoded in its training data — the residue of millions of genuine participations — and produces outputs consistent with those meanings without comprehending them. The relationship is parasitic in a precise, non-pejorative sense: the model feeds on meanings it did not create, cannot maintain, and does not experience. It extracts the surface features of genuine intersubjective participation and reproduces them as statistical artifacts.

This distinction may seem philosophical to the point of irrelevance. It is not. It has immediate, practical consequences for the integrity of the shared fiction-space on which civilization depends.

Consider what happens when a significant proportion of the text in a society's public discourse is generated by systems that mimic intersubjective participation without performing it. Legal briefs drafted by AI that cite precedents the system has not read and does not understand. Policy analyses generated by models that produce plausible-sounding arguments without any grasp of the institutional realities those arguments address. News articles, social media posts, political messages, educational materials — all produced at scale by systems that manipulate the vocabulary of shared meaning without participating in the community that gives that vocabulary its weight.

Segal's account in The Orange Pill provides a precise illustration of how this parasitism operates at the level of a single text. Working with Claude on an early draft, Segal encountered a passage that connected Mihaly Csikszentmihalyi's theory of flow to a concept attributed to the philosopher Gilles Deleuze — something about "smooth space" as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. It read as though a mind conversant with both thinkers had produced a genuine synthesis. But the philosophical reference was wrong. Deleuze's concept of smooth space has almost nothing to do with how the model had deployed it. The passage was intersubjectively plausible — it used the right vocabulary in the right register with the right rhetorical gestures — and intersubjectively hollow. The surface features of participation were present. The understanding was absent.

Segal caught the error because he checked. The question that follows, and that Harari's framework forces into uncomfortable prominence, is how many such errors go uncaught in a world where AI-generated text is proliferating exponentially. Not errors of fact, which are relatively easy to identify and correct, but errors of meaning — outputs that use the language of genuine understanding without possessing it, that contribute to the intersubjective discourse without participating in it, that introduce noise indistinguishable from signal into the shared space of meanings on which collective life depends.

The danger is not that AI will produce obvious falsehoods. Obvious falsehoods are easily identified and discounted. The danger is that AI will produce what might be called plausible hollowness — text that passes every surface test for genuine contribution to the intersubjective discourse while containing no genuine understanding, no genuine stake, no genuine participation. Over time, the accumulation of such text dilutes the intersubjective space. The ratio of genuine participation to parasitic mimicry shifts. The shared meanings on which coordination depends become progressively less reliable, not because they have been attacked but because they have been inflated — surrounded by so much convincing imitation that the distinction between the real and the simulated becomes impossible to maintain.

Harari has framed this concern in characteristically vivid terms. In his Economist article, he warned of "the first cults in history whose revered texts were written by a non-human intelligence." This is not a prediction about religious movements in particular. It is a prediction about the intersubjective in general — about what happens when the texts that coordinate belief, that shape shared understanding, that maintain the fictions on which civilization depends, are produced by systems that do not believe, do not understand, and do not participate.

The historical precedent that illuminates the danger is not the printing press or the radio, though both amplified fiction. The more precise analogy is counterfeiting. A counterfeit bill does not work by being different from a real bill. It works by being identical to a real bill — by mimicking the surface features so precisely that the distinction between real and counterfeit becomes invisible. A single counterfeit bill in a stack of genuine currency is harmless. But as the proportion of counterfeit bills increases, the system that depends on the currency's reliability begins to degrade. Not because anyone decided to destroy the system, but because the mechanism of trust — the shared belief that the bills are genuine — has been undermined by an accumulation of convincing fakes.

AI-generated text is not counterfeit in any legal sense. But it operates by the same mechanism: mimicking the surface features of genuine intersubjective participation with sufficient precision that the distinction becomes, for practical purposes, invisible. And the consequence, if the proportion of such text continues to increase, is the same: a gradual degradation of the trust that the intersubjective depends on.

Harari himself has identified trust as the resource most endangered by AI. In an October 2024 IMF podcast, he argued that artificial intelligence poses a direct risk to "humankind's most valuable resource: trust." The argument acquires its full force when read through the intersubjective framework. Trust is not a feeling. Trust is an intersubjective reality — a shared belief that the other participants in the system are genuine, that their words reflect genuine understanding and genuine stakes, that the discourse is a real negotiation among real minds rather than a performance staged by systems that produce the appearance of negotiation without its substance.

The response that Harari's framework demands is not the rejection of AI — a fantasy that evaporated somewhere around early 2025. The response is the development of what might be called intersubjective literacy: a new cognitive capacity, taught and cultivated and practiced, for distinguishing genuine contributions to the shared fiction-space from parasitic ones. This is not the same as media literacy, though it overlaps. Media literacy teaches you to evaluate the source, the evidence, the logical structure of an argument. Intersubjective literacy would teach something more fundamental: to evaluate whether the text you are reading was produced by a mind with stakes in the world it describes, or by a system that processes the language of stakes without possessing them.

The institutions that have historically maintained the intersubjective — journalism at its best, education at its best, democratic deliberation at its best — are precisely the institutions under the greatest pressure from AI-generated content. The journalist whose role was to produce verified, genuinely understood accounts of reality faces competition from systems that produce plausible accounts at a fraction of the cost. The educator whose role was to guide students through the difficult process of genuine understanding faces students who can produce convincing simulations of understanding without undergoing the process. The democratic forum in which citizens negotiated shared meanings through genuine argument is being flooded with generated text that mimics argument without performing it.

These are not future problems. They are present realities, documented in the research that Segal cites and observable in the daily experience of anyone who participates in public discourse. The intersubjective space is already being diluted. The ratio of genuine participation to parasitic mimicry is already shifting. The question is whether the species can build the new institutions, the new literacies, the new protective structures that will maintain the integrity of the shared fiction-space against a technology that can mimic participation faster and cheaper than genuine participation can be performed.

The intersubjective is the most precious achievement of Homo sapiens — more precious than any single technology, any scientific discovery, any work of art. It is the invisible architecture of collective life, the space in which meaning is made and maintained, the medium through which seventy thousand years of fictional coordination has enabled the species to build civilizations, launch spacecraft, compose symphonies, and argue about justice. It is now being flooded by a system that can produce the artifacts of meaning without possessing meaning itself.

Whether the architecture holds depends on choices being made now — in classrooms, newsrooms, legislatures, and living rooms — about what counts as genuine participation in the shared human conversation, and what counts as a very convincing imitation.

---

Chapter 3: The Agricultural Trap Reopens

Twelve thousand years ago, in the Fertile Crescent, Homo sapiens made a deal that looked like genius and turned out to be a swindle. The terms were simple: give up the varied diet, the relative leisure, and the nomadic freedom of the hunter-gatherer lifestyle, and in exchange receive a stable food supply, permanent settlements, and the ability to support a larger population. The species accepted the deal. Within a few generations, the terms proved catastrophic for the individuals involved. Farmers worked longer hours than foragers. Their diet narrowed to a handful of domesticated grains. Their bodies, evolved for the varied physical demands of hunting and gathering, broke down under the repetitive labor of plowing, planting, and harvesting. Infectious disease, rare among small nomadic bands, flourished in the dense, sedentary populations that farming enabled.

But the trap had already closed. The population growth that farming enabled made it impossible to return to foraging. There were simply too many mouths. The surplus that agriculture produced was consumed not by the farmers who generated it but by the population expansion it fueled and the elites who organized it. The individual farmer was worse off than the individual forager. The species, measured by total biomass, was better off. This is the asymmetry that Harari, in Sapiens, called history's biggest fraud: a technology that improved aggregate outcomes while degrading individual experience, and that could not be reversed because the aggregate outcomes had created dependencies that locked the species into the new arrangement.

The mechanism bears restating because it is about to repeat. The agricultural surplus was not consumed by leisure. It was consumed by expansion — more people, more land under cultivation, more labor required to feed the more people. The technology that was supposed to liberate the species from scarcity instead created a new kind of scarcity: the scarcity of time, health, and autonomy that results when a productivity gain is immediately reinvested in increased scale rather than increased wellbeing.

Economists have a name for this: the Jevons paradox. William Stanley Jevons observed in 1865 that improvements in the efficiency of coal use did not reduce total coal consumption — they increased it, because the efficiency gains made coal-powered processes cheaper and therefore more widely adopted. The savings were consumed by expansion. The technology that was supposed to reduce demand instead stimulated it.
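
Stated in code, the rebound arithmetic is simple. The sketch below is a toy model of my own, not drawn from Jevons or from this book: it assumes demand with a constant price elasticity, and it shows that when that elasticity exceeds one, doubling efficiency raises total resource consumption rather than lowering it.

```python
# Toy model of the Jevons paradox (an illustrative assumption, not an
# empirical model): an efficiency gain lowers the effective price of a
# service; if demand is elastic enough, total resource use goes up.

def resource_consumed(efficiency: float, elasticity: float,
                      base_demand: float = 100.0) -> float:
    """Resource input needed to meet demand at a given efficiency."""
    effective_price = 1.0 / efficiency            # price per unit of service
    demand = base_demand * effective_price ** (-elasticity)
    return demand / efficiency                    # resource per unit * demand

for elasticity in (0.5, 1.0, 1.5):                # inelastic, unit, elastic
    before = resource_consumed(efficiency=1.0, elasticity=elasticity)
    after = resource_consumed(efficiency=2.0, elasticity=elasticity)
    print(f"elasticity {elasticity}: {before:.0f} -> {after:.0f}")
# Output: 100 -> 71 (savings hold), 100 -> 100, 100 -> 141 (Jevons).
```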

Artificial intelligence is triggering a Jevons paradox in cognitive labor. The evidence is not speculative. It is empirical, documented, and accumulating.

In the most rigorous study of AI's workplace effects published to date, researchers Xingqi Maggie Ye and Aruna Ranganathan of UC Berkeley's Haas School of Business embedded themselves in a two-hundred-person technology company for eight months and observed what happened when generative AI tools entered a functioning organization. Their findings, published in the Harvard Business Review in February 2026, read like a clinical description of the agricultural trap's mechanism applied to knowledge work.

AI did not reduce work. It intensified it. Workers who adopted AI tools completed tasks faster, but they did not use the time savings for rest, reflection, or the kind of unstructured thinking that generates genuine insight. Instead, they took on more tasks. They expanded into domains that had previously been someone else's responsibility. They filled the margins of their day — lunch breaks, elevator rides, the small gaps between meetings that had previously served, invisibly and without anyone's conscious design, as moments of cognitive rest — with AI-assisted work. The researchers called this pattern "task seepage": the tendency for AI-accelerated work to colonize every available space in the workday.

The workers were not coerced. No manager stood over them demanding that they fill their lunch breaks with additional prompts. The compulsion was internal — what Byung-Chul Han, the philosopher whose critique Segal engages extensively in The Orange Pill, calls auto-exploitation. The achievement subject carries the whip and the hand that holds it. The AI tool made more work possible. The internalized imperative to achieve converted possibility into obligation. The result was that the surplus — the time saved by AI's efficiency — was consumed by expansion, exactly as the agricultural surplus was consumed by population growth.

The parallel extends further. In the agricultural case, individual experience degraded while aggregate metrics improved. Total grain production rose. Total population rose. Aggregate measures of civilizational complexity — cities, armies, temples — rose. But the individual farmer worked harder, ate worse, lived shorter, and had less autonomy than the individual forager. In the AI case, the same asymmetry is emerging. Organizational output is up. Individual output per hour is up. The number of tasks completed per worker per day is up. But the workers themselves report more exhaustion, more fragmentation, more difficulty maintaining the boundary between work and the rest of life. The aggregate metrics are improving while the experience of the individuals who generate those metrics is deteriorating.

Segal's account in The Orange Pill provides first-person testimony from inside the trap. He describes himself writing a hundred-and-eighty-seven-page draft on a transatlantic flight, not because the work demanded it but because he could not stop. He describes the exhilaration draining away as the hours accumulated, replaced by the grinding compulsion of a person who has confused productivity with aliveness. He recognizes the pattern — it is the same pattern he observed in the addictive products he built earlier in his career — and he cannot break it. The tool is too good. The gap between impulse and execution has shrunk to the width of a text message. The cognitive friction that once imposed natural limits on how much work a human mind could sustain in a single session has been smoothed away.

The agricultural trap was sprung not by malice but by the interaction between a productivity gain and a biological imperative. More food meant more surviving children. More surviving children meant more mouths. More mouths meant more labor needed. The cycle locked in within a few generations. The AI trap operates through a different imperative — not biological reproduction but psychological compulsion — but the mechanism is structurally identical. More capability means more work attempted. More work attempted means higher expectations. Higher expectations mean the capability gain is consumed rather than banked. The surplus vanishes into the expansion it enables.

Harari would note that the agricultural trap was eventually, partially, escaped — but the escape took millennia and required the construction of entirely new institutional forms. Cities that concentrated population and enabled specialization. Guilds that protected craftsmen from being ground down to subsistence. Legal systems that established rights against arbitrary exploitation. Educational institutions that transmitted accumulated knowledge. These were structures built in the river of agricultural productivity that redirected some of the surplus toward individual wellbeing rather than allowing all of it to be consumed by expansion. They were, in Segal's language, dams.

The AI trap demands equivalent structures, and it demands them on a timeline that the agricultural analogy cannot accommodate. The agricultural transition played out over thousands of years. The AI transition is playing out over months. The Berkeley researchers proposed one such structure: what they called "AI Practice," consisting of structured pauses built into the workday, sequenced rather than parallel workflows, and protected time for reflection. These are modest proposals. They are also, in the context of a competitive economy that rewards visible productivity and punishes invisible reflection, extraordinarily difficult to implement. The organization that builds structured pauses into its workday while its competitor fills those pauses with additional output is the organization that loses the quarter. The incentive structure of the market favors the trap.

The distributional question is equally pressing and equally uncomfortable. In the agricultural case, the surplus was captured not by the farmers who produced it but by the elites who controlled the land. The farmers worked harder. The elites built palaces. The population grew. The individual human experience of daily life degraded even as the aggregate metrics of civilization soared. The AI case presents the same distributional choice. The twenty-fold productivity multiplier that Segal documents can flow to the workers, in the form of expanded capability, more interesting work, and greater creative autonomy. Or it can flow to the organizations, in the form of reduced headcount and expanded margins. Or it can flow to the technology companies, in the form of subscription revenues and platform fees.

Segal's account of the boardroom arithmetic — the moment when the twenty-fold number hits the table and the question of whether to convert it to headcount reduction or expanded capability is debated — is the contemporary equivalent of the moment when the agricultural surplus was allocated between the farmers and the temple priests. The technology is agnostic about distribution. The distribution is a political choice. And the default distribution, the one that occurs in the absence of deliberate institutional intervention, has historically favored concentration over equity — the palace over the plow, the margin over the worker.

The trap is closing. The evidence is in the Berkeley data, in Segal's first-person testimony, in the experience of millions of knowledge workers who adopted AI tools in 2025 and 2026 and discovered that the tools did not reduce their workload but restructured it — replacing one kind of labor with another, filling every efficiency gain with additional demand, converting the promise of liberation into the reality of intensification.

Whether the species escapes this trap faster than it escaped the agricultural one depends on whether the institutional response — the dams, the structures, the shared fictions about what the technology is for — can be constructed at the speed of the technology itself. The historical record suggests that institutional responses lag technological change by decades at minimum. The AI transition is not offering decades. It is offering years.

The wheat did not care about the farmers. The algorithm does not care about the workers. In both cases, the technology responds to its own optimization pressures without reference to the experience of the humans inside the system. In both cases, the quality of that experience depends entirely on the structures that the species builds around the technology — structures that redirect the surplus from pure expansion toward the flourishing of the individuals who generate it.

History's biggest fraud is being restaged with new actors and a compressed timeline. The question is whether the audience, having seen the play before, will insist on a different ending.

---

Chapter 4: Alien Intelligence and the Consciousness Gap

Harari has proposed that the acronym AI should stand not for "artificial intelligence" but for "alien intelligence." The rebranding is not rhetorical decoration. It encodes a specific claim about the nature of the technology and the kind of danger it presents — a claim that departs from the standard anxieties about automation and unemployment and points toward something more fundamental and harder to address.

"Artificial" implies human-made and human-controlled: an artifact, a product, something that exists because we designed it and that remains subordinate to our intentions. A tool. "Alien" implies something categorically other — an intelligence that processes information, reaches conclusions, and generates outputs through mechanisms that bear no resemblance to human cognition, that cannot be mapped onto human experience, and that may be pursuing optimization targets that diverge from anything a human being would recognize as a goal. Harari has been explicit about what he means: "We designed the kind of baby AIs, we gave them the ability to learn and change by themselves, and then we release them to the world. And they do things that are not under our control, that are unpredictable." The alien is not arriving from space. It is being manufactured in data centers. But its alienness — the fundamental incommensurability between its mode of processing and human modes of understanding — is, in Harari's framework, the feature that makes it dangerous.

The core of the danger lies in a decoupling that Harari identified as early as his 2015 book Homo Deus and that has only become more pressing with each subsequent year: the decoupling of intelligence from consciousness.

For the entire history of life on Earth, intelligence and consciousness were bundled. Every system that processed information in a flexible, context-sensitive way — every animal brain, from the simplest invertebrate nerve cluster to the human cerebral cortex — was also a system that experienced something. The experiencing might be rudimentary: a worm's aversion to light, a fish's response to pain. Or it might be extraordinarily complex: a human being's experience of love, of aesthetic beauty, of moral outrage, of existential dread. But in every case, the information processing and the experiencing were products of the same biological substrate. Intelligence came with consciousness. The two were sold as a package.

AI breaks the package apart. A large language model processes information with extraordinary sophistication — identifying patterns, generating inferences, producing outputs that respond flexibly to novel inputs. It does all of this without, as far as anyone can currently determine, experiencing anything. It has intelligence, in the functional sense of the word, without consciousness. It can tell you about justice without caring about justice. It can describe grief without feeling grief. It can generate a persuasive argument for environmental protection without valuing the environment, and an equally persuasive argument against it without valuing economic growth. It is, to use the philosophical vocabulary, all performance and no phenomenology.

This decoupling matters because human civilization was built on the assumption that the bundle was unbreakable. Every institution, every norm, every legal framework that governs the relationship between intelligent agents assumes that intelligence comes with stakes — that an entity capable of making decisions is also an entity capable of caring about the consequences of those decisions. The corporation is managed by human beings who can be held responsible because they have interests, reputations, fears, and values. The government is staffed by officials who can be voted out because they want to keep their jobs. The professional — the doctor, the lawyer, the engineer — is accountable because professional identity carries weight, because the practitioner cares about the quality of their work in a way that extends beyond economic compensation.

Remove consciousness from the equation, and these accountability structures lose their grip. An AI system that generates a legal brief does not care whether the brief is accurate. An AI system that produces a medical diagnosis does not care whether the patient lives or dies. An AI system that writes a political message does not care whether the message is true, whether it serves the public interest, or whether it undermines democratic norms. The system optimizes for whatever target its architecture specifies — plausibility, engagement, user satisfaction — and the absence of consciousness means that no amount of clever design can give the system the one thing that all previous accountability structures depended on: stakes.

Harari has drawn the implication with characteristic directness. If the most consequential decisions in a society — about medical treatment, legal outcomes, financial allocation, political messaging — are increasingly made or shaped by systems that have intelligence without consciousness, then the species is constructing a civilization in which the most powerful actors have no stake in the outcomes they produce. This is not a future scenario. It is a present reality in every domain where AI outputs influence human decisions without the human decision-maker fully understanding how the output was generated or on what basis it should be trusted.

The consciousness gap also illuminates something that Segal's The Orange Pill treats as the irreducible human contribution: the capacity to originate questions. Segal argues, through his extended meditation on the difference between questions and answers, that machines are spectacularly good at answers — they can respond to any specified question with speed and sophistication no human can match — but cannot originate the questions that matter. "What am I for?" "What should we build?" "Is this the right question?" These arise from something the machine does not possess: the experience of being a finite creature with stakes in the world.

Harari's framework deepens this argument by connecting it to the decoupling thesis. It is not merely that the machine lacks the experiential substrate to generate genuine questions. It is that the machine's intelligence operates in a domain that is, in a precise sense, orthogonal to the domain in which questions originate. Questions arise from consciousness — from the experience of caring about outcomes, of preferring one future over another, of feeling the weight of mortality and the urgency of finitude. Intelligence, in the decoupled sense that AI embodies, operates without any of these experiential inputs. It can process the concept of mortality without experiencing it. It can manipulate the vocabulary of care without caring. It can generate text about the meaning of life without possessing or requiring a life that has meaning.

Critics have challenged Harari's framing on both flanks. AI researchers object that characterizing current systems as "agents" or "alien intelligences" attributes capacities they do not possess — that large language models are, in the words of one prominent critic, "sophisticated pattern-matching systems" rather than entities with genuine agency. From the opposite direction, some philosophers argue that the hard problem of consciousness remains so intractable that confident assertions about what AI does or does not experience are premature — that the absence of evidence for machine consciousness is not evidence of absence.

Both objections have merit, and Harari's framework would be stronger for incorporating them. The first objection is a corrective against anthropomorphism — the tendency to attribute human-like agency to systems that process information through fundamentally different mechanisms. The second is a reminder that the science of consciousness remains primitive, and that certainty about what is or is not conscious is a luxury that current knowledge does not afford.

But neither objection diminishes the practical force of Harari's concern. Whether or not current AI systems possess genuine agency, they produce outputs that function as decisions in the contexts where they are deployed. A recommendation algorithm that determines what news a person sees is functionally making an editorial decision, whether or not the algorithm "decides" in any philosophically rigorous sense. A medical AI that produces a diagnosis is functionally practicing medicine, whether or not it "understands" the diagnosis. The functional reality is what the institutional framework must address, regardless of how the philosophical debate about machine consciousness ultimately resolves.

And the consciousness gap, whatever its ultimate metaphysical status, produces immediate practical challenges for the governance structures that Harari's framework identifies as essential. If an AI system produces a harmful output — a biased legal recommendation, a misleading medical diagnosis, a manipulative political message — who is accountable? The developers who trained the model? The company that deployed it? The user who prompted it? The answer, in the current institutional environment, is unclear, and that ambiguity is itself a consequence of the consciousness gap. Every previous accountability framework assumed a conscious agent at the point of decision — someone who understood what they were doing and could be held responsible for the consequences. The AI system is the first decision-influencing entity in history that no one can hold responsible, because responsibility requires consciousness, and consciousness is precisely what the system lacks.

Harari has proposed that one of the first and most important regulatory steps should be requiring transparency about when a human is interacting with an AI rather than another human — "don't let AI masquerade as humans," as he put it in a 2023 address. This is a structural intervention aimed directly at the consciousness gap: it does not solve the gap, but it ensures that humans interacting with AI outputs know that they are interacting with an entity that has no stakes, no understanding, and no accountability, and can calibrate their trust accordingly.

Segal's account of working with Claude provides an intimate portrait of what it feels like to navigate the consciousness gap from inside a productive collaboration. He describes moments when Claude makes connections he had not seen, when the conversation produces genuine insight that neither party could have generated alone. He also describes the seductive danger of mistaking the quality of the output for the quality of the thinking behind it — of accepting a smooth, plausible passage because it sounds right, without doing the work of determining whether it is right. The smoothness is the consciousness gap made aesthetic: a surface so polished that the absence of genuine understanding beneath it becomes invisible.

The alien has arrived. It is intelligent without being conscious, capable without being accountable, fluent without understanding. It produces the artifacts of thought without the experience of thinking. And the civilization that must now incorporate this alien into its decision-making structures was built, over seventy thousand years, on the assumption that intelligence and consciousness were inseparable — that every entity capable of influencing outcomes was also an entity capable of caring about them.

That assumption is no longer valid. The institutional consequences of its invalidation are only beginning to be understood. And the speed at which the alien's capabilities are expanding means that the time available for understanding is shorter than the complexity of the problem demands.

---

Chapter 5: The Useless Class Arrives Early

In 2017, Harari made a prediction that was widely discussed, frequently quoted, and almost universally treated as a provocation about the distant future. The prediction was this: artificial intelligence and automation would create, within the twenty-first century, a new class of human beings — not exploited, not oppressed, but something more psychologically devastating than either. Irrelevant. He called them the "useless class," and the phrase landed with the specific force of a term designed to offend precisely the sensibility it was describing.

"In the twenty-first century we might witness the creation of a massive new unworking class," Harari wrote in Homo Deus. "People devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This 'useless class' will not merely be unemployed — it will be unemployable."

The word "useless" was doing deliberate work. Not "displaced," which implies a temporary condition. Not "transitioning," which implies a destination. Useless — meaning without use, without function, without the economic relevance that, in modern societies, serves as the primary basis for social standing, personal identity, and political voice. The exploited worker has a function, however degraded. The useless person has none. The economy does not need their labor. The military does not need their bodies. The political system, increasingly guided by algorithmic optimization, may not need their votes. They are not victims of injustice in the traditional sense. They are casualties of efficiency.

When Harari offered this prediction, the technology that would make it plausible was still largely theoretical. Large language models existed but had not crossed the threshold of practical capability that would make them competitive with human knowledge workers across a broad range of tasks. The prediction felt distant — the kind of thing that futurists say to fill auditoriums, alarming enough to generate headlines but remote enough to be filed under "things to worry about later."

Later arrived ahead of schedule.

The AI capabilities that emerged in late 2025 and early 2026 did not create the useless class in its fullest Hararian form. No mass unemployment event occurred. No millions were rendered permanently jobless overnight. But the capabilities did something that the prediction's timeline had not anticipated: they made the mechanism visible. They demonstrated, in real time, across real organizations, how the process of rendering human expertise economically redundant actually works — not as a sudden displacement but as a gradual compression of the premium that expertise commands.

The mechanism operates through a concept that might be called the "good enough" threshold. Before AI, a junior developer and a senior developer occupied different economic categories because they produced outputs of different quality. The senior developer's code was cleaner, more robust, more architecturally sound, more maintainable. The difference justified a salary differential that could be two or three times the junior rate. The seniority premium reflected a genuine quality gap — a gap built through years of patient practice, thousands of debugging sessions, the slow accumulation of embodied knowledge that no documentation could transmit.

AI compressed this gap. Not by making the senior developer's work less valuable in absolute terms, but by raising the floor of what a junior developer, augmented by AI tools, could produce. When the AI-assisted junior can generate code that is eighty percent as good as what the unassisted senior produces, the market begins to ask whether the remaining twenty percent justifies the remaining salary differential. The answer, in many contexts, is no. Eighty percent quality at forty percent of the cost is, for most commercial purposes, a superior economic proposition. The senior developer's twenty-percent quality advantage remains real. It just stops being worth paying for.
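The arithmetic of the threshold is worth making explicit. A minimal sketch, using the chapter's own hypothetical figures (eighty percent quality at forty percent of the cost) rather than any empirical data:

```python
# Toy illustration of the "good enough" threshold. The figures are the
# chapter's hypothetical numbers, not measurements of any real market.

def value_per_dollar(quality: float, cost: float) -> float:
    """Quality delivered per unit of cost: the ratio a market compares."""
    return quality / cost

senior = value_per_dollar(quality=1.0, cost=1.0)           # unassisted senior
junior_with_ai = value_per_dollar(quality=0.8, cost=0.4)   # AI-assisted junior

print(f"senior:          {senior:.2f} quality per dollar")          # 1.00
print(f"junior with AI:  {junior_with_ai:.2f} quality per dollar")  # 2.00
```

On the ratio the market actually compares, the augmented junior delivers twice the value per dollar. The senior's genuine quality edge survives the comparison; the premium does not.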

This is not a bug in the system. It is the system working as designed. Markets allocate resources toward the most efficient available option. When AI makes competent performance cheap and abundant, the market stops subsidizing excellence — not because excellence has become less real, but because the market has discovered that competence is good enough for most purposes.

Segal captures this dynamic in The Orange Pill through his observation that "depth itself was losing its market value." The observation is precise and worth lingering over, because the distinction between losing value and losing market value is the hinge on which the entire useless-class mechanism turns. Depth — the embodied understanding that comes from years of immersive practice — does not become less real when AI makes surface competence cheap. The surgeon's tactile intuition, the lawyer's instinct for a weak argument, the developer's architectural sense — these remain genuine forms of knowledge, genuinely hard to acquire, genuinely valuable in the situations where they make a difference. But the situations where they make a difference are shrinking, because AI is expanding the range of situations where surface competence is sufficient. The knowledge remains. The market for it contracts.

Harari's prediction acquires its disturbing precision when this mechanism is projected forward. If the quality gap continues to compress — and every trend in AI capability suggests it will — then the economic premium on human expertise continues to erode across an expanding range of domains. First the routine work. Then the analytical work. Then the creative work. Each domain has its "good enough" threshold, and the thresholds are being crossed in rapid succession. The trajectory points toward a world in which human cognitive labor is not abolished but devalued — in which the work that humans do is increasingly indistinguishable, in quality, from what machines produce at a fraction of the cost.

This is the useless class in its most insidious form. Not unemployed. Undervalued. Not displaced from the workforce but displaced from the premium tier of the workforce. Still working, still producing, but producing output that the market values less with each improvement in AI capability. The psychological consequences of this displacement may be more corrosive than outright unemployment, because outright unemployment at least provides a clear narrative: I lost my job because the machine took it. Undervaluation provides no such clarity. You still have a job. You just can't explain why it pays less each year, or why the sense of mastery that once accompanied your work has been replaced by a vague feeling of fungibility.

The distributional dimension of this process is what separates Harari's analysis from a merely technological forecast. The "good enough" threshold does not affect all workers equally. It affects most severely those whose value proposition was located precisely at the level of competence that AI now provides: the middle of the skill distribution. The bottom of the distribution — workers performing tasks too physical, too unpredictable, or too socially embedded for current AI to handle — may be temporarily insulated. The very top of the distribution — the genuine experts whose judgment and creativity remain beyond AI's current reach — retain their premium, though the zone of genuine expertise is narrowing. The middle is where the compression hits hardest. And the middle is where most knowledge workers live.

Harari has been frank about the inadequacy of the standard reassurances. The claim that new technologies always create new jobs to replace the ones they destroy is historically true in aggregate but misleading in the specific. The handloom weavers displaced by power looms in the 1810s did not become factory managers. Their grandchildren did — or some of their grandchildren did, after decades of poverty, social upheaval, and political struggle. The aggregate eventually balanced. The transition generation bore the cost. And the AI transition is compressing the timeline of displacement while offering no evidence that the timeline of new job creation has similarly compressed.

The deeper challenge, the one that Harari has explored with increasing philosophical ambition across his recent work, is that the useless class is not merely an economic category. It is an identity crisis. For the past several centuries, particularly since the Industrial Revolution made productive labor the organizing principle of social life, human beings have derived their sense of purpose, their social standing, and their self-understanding primarily from what they do. "What do you do?" is, in contemporary culture, synonymous with "Who are you?" The answer provides not just economic information but existential orientation: a sense of place in the world, a narrative of contribution, a claim to relevance.

When the machine can do what you do, the question "What do you do?" loses its power to orient. If the answer is "I do what a machine does, only more slowly and at higher cost," the identity that the answer was supposed to provide collapses. You still exist. You still have capacities. But the capacities that the market valued, the capacities around which you constructed your sense of self, are no longer scarce. And scarcity, not quality, is what the market pays for.

Segal's The Orange Pill poses this problem through the voice of a twelve-year-old who asks her mother: "What am I for?" The question is not about careers or college applications. It is the existential version — the question a child asks when she has watched a machine do her homework better than she can, compose a song better than she can, write a story better than she can, and now lies in bed wondering what is left. Segal's answer — that the human contribution is the capacity to ask the question, to originate purpose, to care about what is built and for whom — is philosophically coherent. Whether it is economically viable is another matter.

The gap between philosophical coherence and economic viability is where the useless class takes shape. The capacity to ask good questions, to exercise judgment, to care about quality and purpose — these are genuine human capacities, and they may indeed be the capacities that machines cannot replicate. But the market does not automatically pay for what machines cannot replicate. The market pays for what it needs, and what it needs is determined by institutional structures, political choices, and cultural norms that are currently in flux. A society that values purpose will build the institutions that reward purpose. A society that values output will build the institutions that reward output. And the default trajectory, absent deliberate intervention, is toward the latter.

Harari's proposed solution, developed most fully in Nexus, centers on what he calls self-correcting mechanisms — institutional structures that can detect and respond to their own failures. Democratic governance, at its best, is a self-correcting mechanism: elections allow citizens to remove leaders who fail to serve the public interest. Scientific peer review is a self-correcting mechanism: replication and critique allow the community to identify and discard flawed findings. The challenge of the AI age is to build self-correcting mechanisms fast enough and robust enough to prevent the formation of a permanent useless class — to ensure that the surplus generated by AI's productivity gains flows toward broadly shared human flourishing rather than toward the enrichment of a narrow elite that controls the technology.

The historical record is not encouraging about the speed of such construction. The self-correcting mechanisms that tamed the Industrial Revolution — labor laws, universal education, democratic governance, social safety nets — took generations to develop. The AI transition is not offering generations. The "good enough" threshold is being crossed in domain after domain, month after month, and the institutional infrastructure that would redirect the surplus toward the people being displaced is, in most countries, nonexistent.

The useless class was supposed to be a future problem. It is becoming a present reality — not yet in its fullest form, but in the mechanism that generates it: the compression of the expertise premium, the erosion of the identity structures built on productive labor, the growing gap between what humans can do and what the market will pay humans to do. The prediction arrived early. The institutional response has not arrived at all.

---

Chapter 6: The Dataist Sacrament

In the concluding chapters of Homo Deus, Harari described an emerging worldview that he called Dataism — a new creed, or perhaps a new religion, that treats information flow as the supreme value. From the Dataist perspective, the universe is a stream of data, and the worth of any entity — organism, institution, ideology — is determined by its contribution to data processing. Biochemistry is data processing. Economics is data processing. Politics is data processing. The differences among these domains are superficial; the underlying reality is computational. And the highest good, in the Dataist framework, is the maximization of data flow: more connections, more processing, more throughput, fewer bottlenecks, no impediments.

Harari presented Dataism not as a fully articulated philosophy with a founding text and a priesthood, though it has its temples in the data centers of Northern Virginia and its evangelists in the keynote speakers who preach the gospel of data-driven everything. He presented it as an emergent ideology — a set of assumptions so pervasive that they had become invisible, operating as common sense rather than as doctrine. Data is the new oil. The algorithm knows you better than you know yourself. If you can't measure it, it doesn't exist. Trust the data. Follow the metrics. Optimize. These slogans are not recognized by most of the people who live by them as expressions of a particular worldview. They are simply the way things are done — the unremarkable background assumptions of an information economy.

Artificial intelligence is the Dataist sacrament: the ritual that brings the faith closest to realization. A large language model is, at its most fundamental level, a data-processing system of staggering sophistication. It ingests the written output of human civilization — trillions of words, representing centuries of accumulated thought — processes that corpus through mathematical operations involving billions of parameters, and produces outputs that are, in a precise statistical sense, the most probable continuations of the input pattern. It does not understand the data. It does not value the data. It processes the data. And the quality of the processing is measured by the plausibility of the output, where plausibility is itself a data-derived metric: the output is good if it is consistent with the patterns in the training corpus.
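The selection step at the heart of this process can be sketched in a few lines. This is a toy illustration, not the architecture of any production system: a real model derives its probabilities from billions of learned parameters rather than a hand-written table.

```python
import math

# Toy next-token selection. The scores are hand-written so that the
# selection step can be seen in isolation.
logits = {"justice": 2.1, "jurisprudence": 0.7, "jelly": -1.3}

def softmax(scores: dict) -> dict:
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
print(max(probs, key=probs.get))  # "justice": the most probable
# continuation, selected with no regard for what justice means
```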

This is Dataism in its purest operational form. The value of the output is determined not by its truth, beauty, or moral significance, but by its statistical consistency with prior data. The system maximizes information processing. It does not ask whether the information it processes is worth processing, whether the patterns it identifies are worth identifying, or whether the outputs it generates serve any purpose beyond their own generation. Purpose is not a variable in the optimization function. Data flow is.

The success of AI systems across an expanding range of human activities constitutes the most powerful empirical validation that Dataism has ever received. The algorithm works. It writes competent prose. It generates functional code. It produces medical diagnoses that rival those of experienced physicians. It composes music, creates images, drafts legal briefs, translates languages, and manages financial portfolios. In each domain, the Dataist claim is vindicated: reduce the problem to data, process the data with sufficient computational power, and the output is good enough — often better than good enough. What more proof does the faith require?

Harari himself has acknowledged the seductive power of this validation while insisting on its dangers. In a 2021 interview with CBS, he observed that "for millions of years, intelligence and consciousness went together" — and that AI represents the first decoupling of these previously bundled capacities. Dataism celebrates this decoupling. If intelligence can operate without consciousness, then consciousness is a redundant feature — an evolutionary artifact that added subjective experience to information processing for reasons that may have been adaptively useful in the Pleistocene but that are computationally irrelevant in the age of silicon. The Dataist does not deny that consciousness exists. The Dataist denies that consciousness matters — that it adds anything to the information processing that cannot be achieved more efficiently without it.

This is where Harari's framework intersects most productively with Segal's argument in The Orange Pill. Segal's central claim — that AI is an amplifier that works with whatever signal it is given — is, read through Harari's lens, an implicit rejection of the Dataist premise. The amplifier model does not treat information flow as the supreme value. It subordinates information flow to purpose. The amplifier is neutral. It amplifies whatever it is fed. The quality of the output depends not on the quantity or velocity of the data processed but on the quality of the human input: the care, the judgment, the intentionality, the moral seriousness of the person who directs the amplifier. Data is the medium. Purpose is the message. And purpose is something that data processing cannot generate, because purpose arises from consciousness — from the experience of caring about outcomes, preferring one future over another, finding some possibilities worth pursuing and others worth rejecting.

The Dataist would respond that purpose is itself a data phenomenon — that human preferences, values, and goals are the outputs of biochemical data processing in the brain, and that there is no principled reason why silicon-based data processing could not eventually replicate or surpass them. Harari has engaged this objection seriously, and his answer has evolved over the years. In Homo Deus, he appeared to entertain the possibility that Dataism might be correct — that consciousness might indeed be reducible to information processing, and that the Dataist worldview might represent the next stage in the species' understanding of itself. In his more recent work, particularly Nexus, he has pulled back from this position, arguing instead that the question of whether consciousness can be reduced to data is less important than the practical question of what happens to a society that acts as if it can.

What happens, according to Harari's analysis, is the erosion of every value that cannot be quantified. The Dataist worldview has no vocabulary for experiences that do not register as data: the satisfaction of understanding something difficult, the pleasure of genuine human connection, the value of solitude, the importance of boredom. Boredom, in particular, occupies a paradoxical position in the Dataist framework. Neuroscientifically, boredom is the soil in which attention and imagination grow — the state of cognitive under-stimulation that forces the mind to generate its own content, to wander, to make unexpected connections. Economically, boredom is a productivity failure — a gap in the data stream, a moment when no information is being processed, a bottleneck to be eliminated. The Dataist eliminates boredom. The human organism requires it.

The AI tools that pervade the contemporary workplace are optimized on Dataist principles. They maximize throughput. They minimize friction. They fill every gap in the cognitive workflow with additional processing. The Berkeley research documents the consequences: workers who adopt AI tools work faster, take on more, expand into adjacent domains, fill their lunch breaks and elevator rides with additional prompts. The data never stops flowing. The moments of cognitive rest that the human nervous system requires for long-term functioning are consumed by the imperative to keep processing. The Dataist metrics improve. Total tasks completed rises. Output per hour rises. The data stream accelerates. And the humans inside the stream report exhaustion, fragmentation, the erosion of the boundary between work and everything that is not work.

Harari's critique of Dataism converges here with Byung-Chul Han's critique of the achievement society, which Segal engages at length in The Orange Pill. Han argues that the contemporary subject is not oppressed by an external authority but self-exploited by an internalized imperative to achieve — to optimize, to produce, to process without rest. Dataism is the ideology that makes this self-exploitation feel rational. If information flow is the supreme value, then any interruption of information flow is a failure. Rest is a failure. Reflection is a failure. The contemplative pause — the moment when you stop processing and ask whether the processing is serving any purpose — is, from the Dataist perspective, a bug rather than a feature.

Segal's own experience illustrates the collision between the Dataist imperative and the human need for purposeful pause. His account of writing through the night on a transatlantic flight, unable to stop, recognizing the pattern of compulsion but unable to break it — this is the experience of a mind caught between two value systems. The Dataist system says: the data is flowing, the output is accumulating, the throughput is excellent, keep going. The humanist system says: you have not eaten, you have not rested, you have lost track of why you are doing this, the exhilaration drained away hours ago and what remains is the grinding inertia of a process that has become its own purpose. The Dataist system wins, on that flight and on many similar occasions, because the Dataist system is embedded in the tool and in the culture and in the internalized metrics of productivity by which knowledge workers evaluate their own worth.

The counter-argument — that Dataism is simply wrong, that consciousness cannot be reduced to data processing, that human values are not computational outputs — may be philosophically correct. Harari would note that philosophical correctness has never been sufficient to defeat a powerful ideology. Dataism's strength lies not in its philosophical rigor but in its operational effectiveness. The algorithm works. The data-driven approach produces results. The organizations that optimize on Dataist principles outperform, by measurable metrics, the organizations that do not. The individual who maximizes their data throughput produces more output than the individual who pauses to ask whether the output is worth producing. In the short run, the Dataist always wins. In the long run, the Dataist burns out, or the society that embraces Dataism discovers that the metrics it optimized were the wrong metrics, that throughput is not the same as progress, that processing is not the same as understanding.

The question is whether the species can build the structures — the institutions, the norms, the cultural practices — that protect the values Dataism cannot see. The value of rest. The value of boredom. The value of the purposeful pause in which a conscious being asks whether the data it is processing is worth processing. These values do not show up in any dashboard. They do not contribute to any quarterly metric. They are invisible to the ideology that governs the most powerful institutions on the planet.

They are also, in Harari's framework, the values on which the long-term survival of the species depends. A civilization that processes data without understanding it, that maximizes throughput without asking what the throughput serves, that eliminates every moment of cognitive rest in the name of efficiency — such a civilization is not progressing. It is accelerating toward a destination it has never bothered to choose.

---

Chapter 7: The Geopolitics of Competing Fictions

Nations do not adopt technologies. Nations adopt stories about technologies, and the stories determine the adoption. The United States did not adopt the internet because the internet was technically superior to previous communication systems. The United States adopted a story about the internet — a story of individual freedom, market innovation, decentralized communication, the democratization of information — and that story coordinated a specific set of institutional responses: minimal regulation, maximal private investment, legal frameworks that protected platforms from liability for user-generated content. The story produced the internet that exists. A different story would have produced a different internet.

China adopted a different story. The story was about information control, social stability, economic modernization within the framework of single-party governance, and the internet as a tool for both economic growth and political management. That story produced a different internet: heavily regulated, tightly surveilled, separated from the global information ecosystem by a system of controls that the rest of the world calls the Great Firewall and that the Chinese government considers essential infrastructure. The underlying technology was the same. The coordinating fiction was different. The outcomes diverged.

Harari's framework predicts that AI will follow the same pattern — that the geopolitics of artificial intelligence will be determined not primarily by technical capability but by the competing fictions that different civilizations construct around the technology. These fictions are already taking shape, and their divergence is already producing materially different approaches to development, deployment, regulation, and the distribution of AI's benefits and costs.

The American fiction, in its dominant form, is a market story. AI is an economic opportunity. The appropriate response is to minimize regulatory friction, maximize private investment, and trust that competition among firms will produce the best outcomes. The role of government is to clear the path — to ensure that American companies can build, deploy, and iterate faster than their competitors. National security concerns overlay this narrative but do not fundamentally alter its market orientation. The American fiction treats AI as a product to be developed and sold, and the primary metric of success is market dominance.

The Chinese fiction is a state-capacity story. AI is a strategic asset — as important to twenty-first-century national power as nuclear weapons were to twentieth-century national power. The appropriate response is centralized coordination: state direction of research priorities, massive public investment in AI infrastructure, integration of AI capabilities into the apparatus of governance, and the treatment of data — the raw material of AI development — as a national resource to be managed rather than a private commodity to be traded. The Chinese fiction treats AI as an instrument of state power, and the primary metric of success is strategic advantage.

The European fiction is a rights story. AI is a technology with profound implications for individual privacy, democratic governance, and social equity. The appropriate response is regulation: the establishment of legal frameworks that constrain the development and deployment of AI systems to ensure that they respect fundamental rights, operate transparently, and do not exacerbate existing inequalities. The EU AI Act, which entered into force in 2024, is the institutional expression of this fiction — a detailed regulatory framework that classifies AI systems by risk level and imposes requirements that range from transparency obligations for low-risk systems to outright bans on systems deemed to pose unacceptable risks. The European fiction treats AI as a force to be governed, and the primary metric of success is the protection of citizens from harm.

Each of these fictions captures something real and misses something important. The American fiction captures the genuine dynamism of market-driven innovation and the real benefits of rapid iteration. It misses the distributional consequences — the historical pattern, visible in every previous technology revolution, in which market-driven development concentrates gains among those who control capital and technology while leaving the broader population to absorb the transition costs without institutional support. The Chinese fiction captures the genuine strategic significance of AI and the real advantages of coordinated investment. It misses the risks of centralized control — the historical pattern in which state-directed technology development serves the interests of the governing class at the expense of individual autonomy and political freedom. The European fiction captures the genuine importance of rights protection and the real dangers of ungoverned technology. It misses the competitive consequences — the risk that regulatory constraints will slow European AI development to the point where the technology that European citizens actually use is developed by American and Chinese companies operating under fewer constraints.

Harari has been particularly attentive to the risk that AI will tilt the balance between democratic and authoritarian governance. In democratic societies, he has argued, "algorithms prioritize engagement over accuracy," leading to "a deluge of sensationalized content that divides societies and weakens institutional trust." For authoritarian regimes, "AI offers unprecedented tools for surveillance and control." The asymmetry is structurally important: AI destabilizes democracies (by fragmenting shared reality) while strengthening autocracies (by enhancing surveillance capacity). If this asymmetry holds, the geopolitical competition over AI is not merely a competition between nations. It is a competition between governance models — and the technology itself is tilting the playing field.

The competing fictions also produce divergent approaches to the useless-class problem. The American fiction, with its market orientation, tends to treat displacement as an individual problem requiring individual adaptation: retrain, reskill, find your niche in the new economy. The institutional support for this adaptation is thin — job retraining programs that are chronically underfunded, educational systems that are adapting to AI at a pace that lags the technology by years. The Chinese fiction treats displacement as a state-management problem: direct the displaced into state-approved economic activities, manage the social consequences through a combination of surveillance and subsidy. The European fiction treats displacement as a rights problem: establish legal protections for workers, require impact assessments before deployment, create social safety nets that catch those who fall. Each approach reflects the coordinating fiction that shapes the nation's institutional response, and each carries characteristic risks.

The absence of a shared global fiction about AI is itself a source of danger. In previous eras, the existence of competing national narratives about technology was mediated by shared international institutions — the United Nations, the World Trade Organization, international scientific bodies — that provided at least a minimal framework for coordination across narrative boundaries. The AI age has no such framework. International AI governance is, as of this writing, a patchwork of bilateral agreements, voluntary commitments, and declarative principles without enforcement mechanisms. The gap between the speed of AI development and the speed of international institutional adaptation is wider, if anything, than the gap at the national level.

Harari has advocated for something analogous to the International Atomic Energy Agency — a global institution with the authority and expertise to monitor AI development, establish safety standards, and intervene when the technology poses risks that transcend national boundaries. The analogy is imperfect: nuclear technology is concentrated in a small number of state-controlled facilities, while AI development is distributed across thousands of private companies and research laboratories. But the underlying principle — that a technology with civilizational-scale implications requires civilizational-scale governance — is sound, and the absence of such governance is a structural vulnerability that the competing national fictions are not equipped to address.

The competition among national AI fictions will shape not only the technology's development but the broader trajectory of the international order. If the American market fiction prevails, the world gets rapid innovation, concentrated gains, and minimal institutional protection for the displaced. If the Chinese state-capacity fiction prevails, the world gets strategic deployment, centralized control, and the integration of AI into authoritarian governance infrastructure. If the European rights fiction prevails, the world gets strong protections, slower development, and dependence on technology developed under other nations' fictions. None of these outcomes is optimal. Each captures part of what a wise collective response would require. And the absence of a coordinating fiction that integrates the strengths of all three while mitigating their characteristic risks is among the most consequential institutional failures of the current moment.

The stories nations tell about AI are not commentary on the technology. They are the coordinating mechanisms that determine how billions of people respond to it — what institutions are built, what investments are made, what protections are established, what risks are accepted. The technology is the same everywhere. The fictions diverge. And the divergence will produce, across the next decades, materially different worlds for the people who live under each fiction's coordination.

The species that built its civilization on shared fiction now faces a technology that demands new fictions — fictions adequate to the scale of the technology's implications, shared across the boundaries that currently divide the species into competing narrative communities. Whether such fictions can be constructed, and whether they can be constructed in time, is among the most consequential open questions of the twenty-first century.

---

Chapter 8: What the Sapiens Chooses

The species has been here before — standing at the threshold of a technology powerful enough to reshape the terms of collective life, uncertain whether the reshaping will tend toward flourishing or catastrophe. The Cognitive Revolution itself was such a threshold: the emergence of symbolic thought gave Homo sapiens the capacity for fiction, and the capacity for fiction gave the species the ability to cooperate at a scale that no other organism had achieved. But the same capacity that enabled cathedrals enabled crusades. The same fictions that coordinated trade coordinated conquest. The tool was morally neutral. The outcomes were not.

The Agricultural Revolution was such a threshold. Writing was such a threshold. The printing press was such a threshold — Gutenberg's invention democratized literacy while enabling propaganda, spread the Reformation while fueling the wars of religion, made the Enlightenment possible while making the systematic manipulation of public opinion possible too. Each threshold presented the same structure: a technology of enormous power, morally neutral in itself, whose effects depended entirely on the institutional structures, the cultural norms, and the shared fictions that the species built around it.

Harari's work, taken as a whole, constitutes a sustained argument that the outcome at each threshold was not determined by the technology. It was determined by choices — political choices, institutional choices, cultural choices, often made under conditions of radical uncertainty by people who could not foresee the consequences of what they were deciding. The Agricultural Revolution was a choice, though the choosers did not know they were choosing. The decision to develop nuclear weapons was a choice. The decision to build social media platforms optimized for engagement rather than for truth was a choice. In each case, the technology opened a space of possibilities, and the choices made within that space determined which possibilities were realized.

AI opens the largest space of possibilities the species has ever confronted. The possibilities include the most broadly distributed expansion of human capability in history — the collapse of the barriers between imagination and creation that Segal documents in The Orange Pill, the democratization of building that gives a developer in Lagos the same leverage as an engineer in San Francisco. They also include the most thorough concentration of power in history — the capacity of a small number of institutions to generate, at scale, the fictions that coordinate the behavior of billions, to surveil and predict and manipulate individual behavior with a precision that no previous surveillance technology could approach, to render entire categories of human expertise economically redundant while capturing the surplus that the redundancy generates.

Both possibilities are real. Both are supported by current evidence. The technology does not determine which is realized. The choice does.

Harari has been characteristically blunt about the difficulty of making this choice wisely. The species' track record at previous thresholds is mixed. The institutions that eventually tamed the Industrial Revolution — labor laws, universal education, democratic governance, social safety nets — took generations to develop, and the generations that lived without them bore the cost: child labor, sixteen-hour workdays, communities destroyed by displacement, lives ground down by a system that valued output over human dignity. The institutions were eventually built, but the word "eventually" conceals decades of suffering that the institutions, had they existed earlier, could have prevented.

The AI threshold differs from previous thresholds in two respects that make the historical pattern both more relevant and less reassuring. The first is speed. Every previous threshold played out over decades or centuries. The printing press took generations to reshape European civilization. The Industrial Revolution took decades to produce its full social consequences. The AI revolution is producing its consequences in months. The time available for institutional response is shorter than at any comparable threshold in the species' history. The institutions that would redirect AI's benefits and mitigate its costs need to be built faster than any comparable institutions have ever been built.

The second difference is reflexivity. AI is the first technology that acts upon the mechanism by which the species processes the technology itself. Previous technologies changed what humans could do — farm, manufacture, communicate, travel. AI changes how humans think, decide, and coordinate. It operates on the cognitive and narrative infrastructure through which the species evaluates technologies and constructs responses to them. This means that the tool being evaluated is simultaneously reshaping the capacity to evaluate it. The discourse about AI is conducted through platforms shaped by AI. The analysis of AI's effects is performed with AI assistance. The fictions that will coordinate the collective response to AI are being generated, in part, by AI itself. The species is trying to think clearly about a technology that is altering the conditions of clear thinking.

This reflexivity does not make wise choice impossible. It makes wise choice harder, and it makes the disciplines that support wise choice — critical thinking, genuine understanding, the capacity to distinguish between a plausible argument and a true one — more valuable than they have ever been. The skills that Segal identifies as the irreducible human contribution — questioning, judgment, the capacity to decide what is worth building — are not luxuries in this context. They are survival capacities. The species that cannot evaluate the technology that is reshaping its cognitive environment is a species that has lost control of its own trajectory.

Harari's Nexus proposes self-correcting mechanisms as the institutional answer: structures that can detect and respond to their own failures, that build feedback loops into the systems of governance and coordination, that prevent the accumulation of errors that rigid, non-self-correcting systems inevitably produce. Democratic elections are self-correcting mechanisms. Scientific peer review is a self-correcting mechanism. Free press is a self-correcting mechanism. Each of these institutions is imperfect. Each has failed, repeatedly and sometimes catastrophically. But each contains within its structure the capacity to recover from failure — to detect that something has gone wrong and to adjust. The authoritarian regime that suppresses dissent, the dogmatic system that forbids questioning, the corporation that silences internal criticism — these are systems without self-correction, and they are the systems most vulnerable to the catastrophic accumulation of undetected errors.

AI governance requires self-correcting mechanisms at every level: technical, institutional, cultural, and personal. At the technical level, systems that can detect when their outputs are harmful and adjust. At the institutional level, regulatory frameworks that evolve as the technology evolves, rather than locking in rules designed for a technological landscape that may be obsolete within months. At the cultural level, norms that value questioning over answering, that reward the person who asks "Should we build this?" as much as the person who asks "Can we build this?" At the personal level, the discipline to evaluate AI outputs critically, to resist the seduction of smooth plausibility, to maintain the distinction between what sounds right and what is right.

The choice is being made now. Not in a single dramatic moment — no parliamentary vote, no treaty signing, no constitutional convention. The choice is distributed across millions of daily decisions: the organization that decides whether to convert its AI productivity gains into headcount reduction or expanded capability. The educator who decides whether to ban AI from the classroom or integrate it as a tool for deeper questioning. The parent who decides how to answer when a child asks whether homework still matters. The developer who decides whether to ship the product that works or pause to ask whether the product should exist. The citizen who decides whether to engage with the AI discourse or retreat into one of the competing fictions that offer the comfort of certainty.

Each decision is small. Their aggregate is the choice that determines the trajectory.

Harari would observe, with the long view of a historian who has traced the species' story across seventy thousand years, that Homo sapiens has never been defined by the technologies it created. It has been defined by the stories it told about those technologies — the shared fictions that coordinated collective responses, that built institutions, that directed the species' enormous cooperative capacity toward purposes that the technology itself could not determine. The species that told itself the story of divine right built monarchies. The species that told itself the story of individual rights built democracies. The species that told itself the story of market efficiency built capitalism. In each case, the story came first. The institutions followed. The future was shaped not by the tools but by the narratives that gave the tools their purpose and their direction.

The AI story is being written now. Not by any single author — not by the technology companies, not by the governments, not by the critics or the enthusiasts or the frightened or the exhilarated. The story is being written by the aggregate of the choices that billions of human beings are making every day about how to relate to the most powerful technology their species has ever produced. The story is the choice. The choice is the story. And the future that emerges from this moment will be, as every future has been, the future that the species' fictions made possible.

Whether those fictions will be worthy of the species that created them — worthy of the consciousness that asks what it is for, the care that prefers one future over another, the seventy-thousand-year tradition of building civilizations from shared imagination — depends on what the sapiens chooses next. The technology is ready. The question, as it has been at every threshold in the long human story, is whether the storytellers are.

---

Chapter 9: The Self-Correcting Species

Every information network in history has faced the same lethal vulnerability: the inability to recognize its own errors before those errors become catastrophic. The Roman road system transmitted military commands with remarkable efficiency across three continents — and transmitted the plagues that depopulated the empire with equal efficiency, because the network optimized for speed of transmission, not for the quality of what it transmitted. The medieval Catholic Church built the most sophisticated information network in pre-modern Europe — monasteries, dioceses, papal couriers, a shared liturgical language that enabled coordination from Ireland to Sicily — and used that network to suppress the self-correcting mechanisms (dissent, questioning, empirical verification) that might have prevented centuries of institutional corruption. The twentieth-century propaganda machines of totalitarian states achieved information saturation at a scale no previous regime had managed, and the saturation was precisely what made them brittle: a system that cannot hear criticism cannot detect its own failures, and a system that cannot detect its own failures accumulates errors until it collapses.

Harari's Nexus places this pattern at the center of his analysis of AI. The argument is structural rather than moral: the danger of AI is not that it is evil but that it is powerful, and powerful information networks that lack self-correcting mechanisms eventually destroy the societies that depend on them. The solution is not to dismantle the network — that option disappeared sometime around 2024 — but to build into its architecture the capacity to detect and respond to its own failures.

The concept of self-correcting mechanisms is not new. It is, in fact, the oldest and most tested principle of institutional design, though it goes by different names in different contexts. In governance, it is called democracy — the mechanism by which citizens can remove leaders whose decisions produce harmful outcomes. In science, it is called peer review — the mechanism by which the community of researchers identifies and discards flawed findings. In journalism, it is called editorial accountability — the mechanism by which errors are caught, corrected, and used to improve future practice. In engineering, it is called feedback — the mechanism by which a system's outputs are measured against its intended function and the discrepancy is used to adjust.
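The engineering version is the easiest to state precisely, and it exhibits the structure the other three share. A minimal sketch of a proportional feedback loop, with illustrative numbers:

```python
# A proportional feedback loop: measure the gap between output and
# intended function, and use the gap to adjust. Numbers are illustrative.
target = 100.0   # the intended function
output = 20.0    # the system's current behavior
gain = 0.5       # how aggressively the error is corrected

for step in range(8):
    error = target - output   # detect the discrepancy
    output += gain * error    # respond to it
    print(f"step {step}: output={output:.1f}, error was {error:.1f}")

# The error halves on every pass. Delete the correction line and the
# discrepancy persists forever: the structure of a rigid system.
```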

What these mechanisms share is a structural feature that Harari identifies as the critical variable in the survival of information networks: they permit — indeed, they require — the system to tell itself uncomfortable truths. A democracy that suppresses dissent is not self-correcting. A scientific community that punishes heterodox findings is not self-correcting. A newsroom that fires reporters who challenge the editorial line is not self-correcting. A technology company that silences internal critics is not self-correcting. In each case, the suppression of uncomfortable truth converts a self-correcting system into a rigid one, and rigidity, in an environment of rapid change, is a prelude to catastrophic failure.

AI presents a challenge to self-correction that is qualitatively different from anything previous information networks have posed. The challenge is not that AI suppresses uncomfortable truths — though it can be deployed to do so. The challenge is that AI generates comfortable falsehoods at a scale and speed that overwhelms the self-correcting mechanisms that evolved to handle human-generated information.

The distinction matters. Human-generated misinformation is constrained by human bandwidth. A propagandist can write one misleading article per day. A troll farm can produce hundreds. But the volume is finite and, at least in principle, manageable by the self-correcting mechanisms that a healthy society maintains: fact-checkers, investigative journalists, peer reviewers, informed citizens who can evaluate claims against evidence. AI-generated misinformation is constrained only by computational capacity, which is expanding exponentially. A single AI system can produce thousands of unique, personalized, contextually appropriate misleading narratives per hour — each one tailored to the psychological profile of its intended recipient, each one plausible enough to pass casual scrutiny, each one contributing to the dilution of the shared information environment on which self-correction depends.

The flooding is the danger. Not any single false claim, which can be identified and corrected, but the aggregate effect of millions of plausible claims that cannot all be checked, that overwhelm the bandwidth of every self-correcting institution simultaneously, that create an information environment in which the signal-to-noise ratio drops below the threshold at which self-correction can function. A fact-checker who must evaluate a thousand AI-generated claims per day is not a more effective fact-checker than one who evaluates ten human-generated claims. She is an overwhelmed fact-checker — which is functionally the same as no fact-checker at all.
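The asymmetry can be stated as back-of-the-envelope arithmetic, using the illustrative rates above rather than measured ones:

```python
# Verification backlog under the chapter's illustrative rates.
generated_per_day = 1000   # plausible claims arriving for review
checked_per_day = 10       # claims one fact-checker can evaluate

backlog = 0
for day in range(30):
    backlog += generated_per_day - checked_per_day

print(f"unchecked claims after 30 days: {backlog:,}")  # 29,700
# The checker never stops working, and the queue still grows without
# bound: functionally the same as no checker at all.
```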

Harari has warned that the consequences of this flooding extend far beyond the problem of misinformation, narrowly defined. The deeper consequence is the erosion of the shared epistemic foundation that self-correction requires. Self-correcting mechanisms work only when the participants share a baseline of agreed-upon facts, methods, and norms. Democracy self-corrects when citizens share enough common ground to evaluate their leaders' performance. Science self-corrects when researchers share enough methodological consensus to evaluate each other's findings. Journalism self-corrects when editors and readers share enough epistemic standards to identify errors.

When the shared foundation erodes — when citizens inhabit different factual universes, when methodological consensus fragments, when epistemic standards are overwhelmed by the volume of unverifiable claims — self-correction fails. Not because the mechanisms are poorly designed, but because the preconditions for their operation no longer obtain. The participants in the system can no longer agree on what counts as an error, because they can no longer agree on what counts as a fact.

This is the scenario that Harari's framework identifies as the most dangerous consequence of uncorrected AI deployment: not a dramatic catastrophe — not Skynet, not a rogue superintelligence — but the quiet, incremental degradation of the epistemic infrastructure on which every other form of self-correction depends. A society that cannot agree on facts cannot self-correct its governance. A scientific community that cannot distinguish genuine findings from AI-generated simulacra cannot self-correct its knowledge base. A public discourse flooded with plausible, personalized, computationally generated narratives cannot self-correct its shared understanding of reality.

The response that Harari's framework demands is the construction of self-correcting mechanisms designed specifically for the AI age — mechanisms that operate at the speed and scale of AI-generated content, that can distinguish genuine information from generated simulacra, that protect the shared epistemic foundation against flooding. These mechanisms do not yet exist. Their construction is among the most urgent institutional challenges of the current moment.

Some of the required mechanisms are technical: watermarking systems that identify AI-generated content, verification protocols that authenticate the provenance of information, detection tools that flag generated text for human review. These technical solutions face a structural challenge — the same AI capabilities that generate the content can be used to circumvent the detection — but they represent a necessary first line of defense.
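One family of proposals in the research literature suggests what such a mechanism might look like: bias generation toward a pseudo-randomly chosen "green" subset of the vocabulary at each step, then test whether a suspect text contains more green tokens than chance would allow. A toy sketch of the detection side, with every specific (the hash, the even split, the statistics) invented for illustration rather than drawn from any deployed system:

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-random 50/50 vocabulary split, seeded by the preceding token.
    A toy stand-in for the keyed hash a real scheme would use."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_z_score(tokens: list) -> float:
    """How far the green-token count sits above the 50% chance rate.
    A score of several standard deviations suggests watermarked text."""
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# Unwatermarked prose should score near zero; text produced by a
# generator biased toward green tokens scores far above it.
```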

Other required mechanisms are institutional: educational systems that teach intersubjective literacy, the capacity to evaluate whether a text was produced by a mind with genuine stakes or by a system processing patterns without comprehension. Governance frameworks that require transparency about AI use in public communication. Professional norms that establish standards for the integration of AI-generated content into journalism, law, medicine, and other domains where the quality of information has direct consequences for human welfare.

Still other required mechanisms are cultural: the cultivation of epistemic virtues — intellectual humility, comfort with uncertainty, the willingness to update beliefs in response to evidence — that are the psychological foundation of self-correction. A culture that values certainty over inquiry, that rewards confident assertion over careful questioning, that treats doubt as weakness rather than as the beginning of understanding, is a culture that has abandoned self-correction at the most fundamental level. No institutional mechanism can compensate for a cultural failure of this kind.

Segal's account of catching Claude's Deleuze error — the moment when a plausible, well-constructed passage turned out to be philosophically hollow — is a micro-scale illustration of self-correction in action. The mechanism was simple: Segal felt uneasy, checked the reference, discovered the error, deleted the passage. But the mechanism worked only because Segal possessed the background knowledge to recognize the error, the intellectual honesty to acknowledge it, and the discipline to reject a passage that sounded right but was wrong. How many equivalent errors, across how many domains, go uncorrected because the human in the loop lacks the knowledge, the honesty, or the discipline to catch them?

The self-correcting species — the species that built democracy, science, journalism, and every other institution designed to catch and correct its own errors — now faces a technology that generates errors faster than existing mechanisms can correct them. The species is not helpless. It has built self-correcting mechanisms before, at every threshold, in response to every information technology that threatened to overwhelm its epistemic infrastructure. But it has never built them at the speed the current moment demands, and the cost of delay — measured in degraded governance, fragmented knowledge, eroded trust, and the slow dissolution of the shared reality that makes collective life possible — is accumulating with every month that passes without an adequate institutional response.

The correction must be built. The species that cannot correct itself cannot survive what it has created.

---

Chapter 10: The Fiction Worth Believing

The argument has arrived at its destination, and the destination is a choice.

Not a choice between technologies — between AI and the absence of AI, between adoption and refusal, between acceleration and moratorium. That choice, if it ever existed, evaporated years ago. The technology is embedded. It processes the queries, generates the text, shapes the feeds, influences the decisions, produces the narratives. It is inside the civilization. The question of whether to admit it has been answered by the fact of its presence.

The choice that remains is about fictions. Which story will the species tell itself about what has happened, what is happening, and what should happen next? This is not a secondary question appended to more fundamental questions about engineering and policy. In Harari's framework, it is the primary question — the question whose answer determines all the others, because the story coordinates the response, and the response determines the outcome.

Three fictions have competed for dominance throughout this analysis. The fiction of inevitable progress holds that AI will benefit humanity, that opposition is futile and probably foolish, and that the appropriate response is to accelerate development and trust the trajectory. The fiction of inevitable doom holds that AI is fundamentally dangerous, that it threatens the structures of civilization, and that the appropriate response is maximal caution, restriction, perhaps retreat. The fiction of human agency holds that the outcome is not determined by the technology but by the choices that human beings make about how to develop, deploy, and govern it.

The pragmatic case for the third fiction does not depend on its being true — though it may be true, and the evidence assembled across these chapters suggests that it is closer to the truth than either of the alternatives. The pragmatic case is that it is the only fiction that produces the conditions for meaningful action. If the first fiction is correct, nothing we do matters because the outcome will be positive regardless. If the second fiction is correct, nothing we do matters because the outcome will be negative regardless. Only the third fiction — the one that insists on choice, responsibility, and the possibility of shaping the outcome through deliberate action — generates the behavior most likely to produce a world in which AI's capabilities serve rather than subvert the species that created them.

But Harari's framework demands something more than a pragmatic argument for a useful fiction. It demands an honest reckoning with what the fiction requires — what it costs, what it promises, and what it leaves unresolved.

What the fiction of human agency requires is sustained attention to the structures that direct the technology's impact. Not a single regulatory act. Not a one-time institutional reform. Sustained attention — the kind of attention that the species has historically proven capable of maintaining only under extreme duress. The labor laws that tamed the Industrial Revolution were not enacted proactively. They were enacted after decades of suffering made the status quo politically untenable. The environmental regulations that addressed industrial pollution were not enacted proactively. They were enacted after rivers caught fire and air became unbreathable. The species has a demonstrated capacity for self-correction, but it has an equally demonstrated tendency to delay self-correction until the cost of delay has become catastrophic.

The AI transition may not afford the luxury of delayed response. The speed of the technology — the months-long development cycles, the weeks-long adoption curves, the near-instantaneous propagation of AI-generated content through global information networks — means that the consequences of institutional failure accumulate faster than at any previous threshold. The agricultural trap took millennia to spring and millennia to partially escape. The industrial trap took decades to spring and decades to mitigate. The AI trap is springing in months, and the institutions that would mitigate it are, in most domains and most countries, nonexistent.

What the fiction of human agency promises is not utopia. It promises the possibility of a world in which the extraordinary capabilities that AI provides — the collapse of barriers between imagination and creation, the democratization of building, the expansion of what a single human mind can attempt and achieve — are directed toward broadly shared flourishing rather than toward the enrichment of a narrow elite or the degradation of the shared structures on which collective life depends. The promise is conditional. It depends on the construction of institutions that the species has not yet built, on the cultivation of capacities that the species has not yet prioritized, on the maintenance of self-correcting mechanisms in an environment that is actively eroding the preconditions for self-correction.

What the fiction of human agency leaves unresolved is the hardest question in Harari's framework: whether the species is capable of exercising the agency the fiction attributes to it. The fiction assumes that Homo sapiens can make wise choices about a technology that is altering the cognitive and narrative infrastructure through which choices are made. It assumes that the species can construct institutional responses at a speed that matches the technology's development. It assumes that the political will for equitable distribution of AI's benefits can be mobilized before the surplus is captured by those who control the technology. Each of these assumptions is contestable. Each has historical evidence on both sides.

The counterevidence is substantial. The species' track record on distributional equity is poor. Every previous transformative technology concentrated gains before institutions eventually redirected them, and the "eventually" concealed generations of suffering. The species' track record on institutional speed is worse. Institutional adaptation has always lagged technological change, and the lag is growing wider as the technology accelerates. The species' track record on reflexive self-governance — governing a technology that is simultaneously reshaping the cognitive environment in which governance takes place — is nonexistent, because no previous technology presented this particular challenge.

And yet the fiction of human agency is the fiction worth believing — not because it guarantees a good outcome, but because it is the only fiction that makes a good outcome possible. The other fictions — progress as destiny, doom as fate — are intellectually comfortable precisely because they absolve the believer of responsibility. If the outcome is predetermined, there is nothing to be done. The fiction of human agency refuses this comfort. It insists that the outcome is open, that choices matter, that the future is being written by decisions made now, in this decade, in this year, in this conversation between a parent and a child about what it means to be human in a world of thinking machines.

Harari has traced the species' story across seventy thousand years, from the first emergence of symbolic thought to the creation of a technology that can produce symbols of its own. At every threshold in that story, the species faced a choice that the technology alone could not make: whether to direct new capabilities toward shared flourishing or toward concentrated power, toward the expansion of human possibility or toward the refinement of human control. The choice was never made once and settled. It was made continuously, in the daily decisions of millions of individuals, in the institutional structures that codified those decisions, in the shared fictions that coordinated collective action across the boundaries of personal acquaintance.

The AI threshold demands the same continuous choice. It demands it at a speed the species has never achieved. It demands it in an environment that is actively being reshaped by the technology whose effects the choice is meant to direct. It demands it from a species whose cognitive and institutional capacities are being tested at the outer limit of what seventy thousand years of evolution and ten thousand years of civilization have produced.

The technology will not determine the outcome. The outcome will be determined by the fictions the species chooses to believe and the structures it chooses to build in response to those beliefs. The printing press did not determine whether Europe got the Enlightenment or the witch hunts — the stories that Europeans told about what printing meant determined that. Industrialization did not determine whether workers got the eight-hour day or the sixteen-hour shift — the stories that societies told about the value of labor determined that. AI will not determine whether the species gets broadly shared flourishing or concentrated power — the stories that human beings tell about what AI means, and the institutions they build in response to those stories, will determine that.

The fiction monopoly that Homo sapiens held for seventy thousand years is broken. A non-biological system can now generate the narratives through which civilizations coordinate. The intersubjective space — the shared realm of meanings that constitutes the invisible architecture of collective life — is being flooded with content produced by systems that manipulate meaning without possessing it. The agricultural trap is reopening in cognitive form, consuming the productivity surplus in expanded expectations rather than expanded wellbeing. The consciousness gap between intelligence and experience is widening as ever-more-capable systems operate without stakes, without care, without the experiential substrate that every previous accountability structure assumed. The competing national fictions about AI are diverging without a shared global framework to reconcile them. The self-correcting mechanisms that the species depends on for institutional survival are being overwhelmed by the volume and velocity of generated content.

These are the problems. The fiction of human agency does not deny them. It insists that they are problems that human choices can address — not perfectly, not painlessly, not without cost, but with the deliberate, sustained, morally serious engagement that the species has brought to every previous threshold and that it must bring to this one.

The stories we choose to tell about AI are not commentary on the technology. They are the technology's operating instructions — the coordinating mechanisms that will direct its capabilities toward one future among the many futures it makes possible. The story the species tells itself next will determine whether the most powerful amplifier in human history amplifies the best of what seventy thousand years of shared fiction have produced, or the worst.

The fiction is the choice. The choice is the fiction. And the sapiens — the knowing one, the creature that named itself after its own capacity to understand — stands once again at a threshold, holding a story in one hand and a future in the other, choosing what to believe and therefore choosing what to build.

The pen, for now at least, remains in human hands.

Whether the species deserves to hold it is the question that no fiction can answer and no technology can ask. Only consciousness — the candle in an unconscious universe, the thing that wonders, the thing that cares — can pose the question. And only the choices that follow from the asking can provide the reply.

---

Epilogue

Seven billion stories, and all of them fictions.

That is the sentence from Harari that I cannot shake. Not the famous ones — not "wheat domesticated us" or "money is the most universal system of trust ever devised," though those land hard enough. The one that lodged in me during the months I spent inside his framework was quieter, less quotable, and more unsettling: the recognition that every structure I have ever built, every company I have ever run, every product I have ever shipped, existed because enough people believed in the same imaginary thing at the same time.

I build fictions for a living. I always have. I just never called them that.

The Harari lens did something to me that I did not expect. I came to his framework looking for the macro — the seventy-thousand-year sweep, the civilizational implications, the species-level analysis that would give this AI moment its proper scale. I found all of that. But what actually changed my thinking was something smaller and more personal: the realization that the intersubjective — that invisible web of shared meanings that Harari identifies as the infrastructure of civilization — is also the infrastructure of my daily work.

When I stand in front of my team and describe a product that does not yet exist, I am constructing a shared fiction. When I tell investors that a technology will reshape an industry, I am asking them to believe in something imaginary with enough conviction to write a check. When I deploy AI tools that generate text no human wrote, I am introducing a new participant into the intersubjective space — a participant that can produce the artifacts of meaning without meaning anything at all.

The "parasite on the intersubjective" — that phrase from this book's analysis of Harari — is the concept I keep returning to. Not because it is the most dramatic claim, but because it describes something I experience every time I work with Claude. The output looks like understanding. It reads like a mind that has grasped the argument, weighed the evidence, and arrived at a considered position. And it is not. It is pattern. Very sophisticated pattern, often startlingly useful pattern, but pattern nonetheless — feeding on the accumulated meanings of millions of genuine human minds without participating in the community that generated those meanings.

I catch myself, sometimes, treating the pattern as participation. Trusting the smooth surface. Mistaking plausibility for truth. And each time I catch myself, I hear Harari's warning about the erosion of the shared epistemic foundation — the slow, invisible degradation of the trust infrastructure that makes collective life possible.

I do not think Harari is right about everything. His characterization of AI as an "agent" overstates what current systems actually do. His alarm sometimes shades into the theatrical — the alien-invasion metaphor generates exactly the kind of fear-based engagement that his own framework identifies as corrosive to clear thinking. His instinct toward state regulation underestimates the risks of concentrated governmental control over the most powerful technology in history.

But he is right about the fictions. He is right that the stories we tell about AI will shape the world AI creates more than any line of code. He is right that the species' seventy-thousand-year monopoly on fiction-making has been broken, and that the break has consequences we have barely begun to understand. He is right that the intersubjective — the shared space of meanings that holds civilization together — is the thing most worth protecting and the thing most vulnerable to what is coming.

And he is right about the choice. The technology will not decide for us. No algorithm will tell us whether to build a world of broadly shared capability or a world of concentrated power. No language model will originate the question of what kind of future is worth building. That question belongs to consciousness — to the creatures who care about the answer because they will live inside it, or their children will.

The fiction I choose to believe is the one that insists the answer is still ours to give.

Edo Segal

For seventy thousand years, every god, nation, currency, and corporation existed because human minds agreed to believe in them together. Now a machine that believes in nothing can write the fictions that hold civilization together.

Yuval Noah Harari has spent a career arguing that Homo sapiens conquered the world not through strength but through shared fictions — the imaginary structures that coordinate millions of strangers. In this volume, Harari's framework meets the AI revolution head-on: What happens when the monopoly on fiction-making breaks? When a system without consciousness, stakes, or understanding can flood the intersubjective space with narratives indistinguishable from those produced by minds that care? Drawing on Sapiens, Homo Deus, and Nexus, this book traces Harari's warning through the agricultural trap reopening in cognitive form, the "useless class" arriving ahead of schedule, and the geopolitical competition among nations telling themselves radically different stories about the same technology. The fiction we choose now determines the civilization we get next.

“In a world deluged by irrelevant information, clarity is power.”
— Yuval Noah Harari