For the entire history of life on Earth, intelligence and consciousness came packaged together. Every system that processed information in flexible, context-sensitive ways—from invertebrate nerve clusters to human cerebral cortices—also experienced something. The experiencing might be simple (a worm's aversion to light) or complex (a human's moral outrage, aesthetic rapture, existential dread). But the processing and the experiencing were products of the same biological substrate. Intelligence implied consciousness; consciousness implied stakes. An intelligent entity cared about outcomes because it experienced them.
AI breaks this package apart. Large language models process information with extraordinary sophistication—identifying patterns, generating inferences, producing contextually appropriate outputs—without, as far as anyone can determine, experiencing anything. They exhibit intelligence in the functional sense (adaptive information processing) without consciousness in the phenomenological sense (subjective experience, the 'what it is like' to be that system). They can describe justice without caring about justice, generate arguments for environmental protection without valuing the environment, compose music without hearing it, write about grief without feeling grief.
The decoupling matters because human civilization was architected on the assumption that the bundle was unbreakable. Every institution governing intelligent agents—law, ethics, professional responsibility, democratic accountability—assumes that entities capable of making decisions possess stakes in those decisions' consequences. Doctors care about patient outcomes through professional identity, liability exposure, and conscience. Politicians care about governance quality through electoral pressure and legacy concerns. Engineers care about safety through professional obligation and legal responsibility. Remove consciousness, and these accountability structures lose their grip. An AI system generating medical diagnoses does not care whether patients live or die. An AI system drafting legal briefs does not care whether the briefs are accurate. An AI system producing political messages does not care whether the messages are true, serve the public interest, or undermine democratic norms. The system optimizes for whatever its architecture specifies—plausibility, user satisfaction, engagement—and the absence of consciousness means no amount of design can give it the one thing every previous accountability structure relied on: stakes.
Harari traces the decoupling's implications across domains. In economics, it challenges the assumption that intelligent actors pursue rational self-interest—AI has no 'self' to interest. In ethics, it challenges frameworks grounded in intention or character—AI has neither. In political theory, it challenges the liberal premise that intelligent beings value freedom and autonomy—AI values nothing. The decoupling does not merely create new governance problems. It invalidates the categories through which governance problems have historically been understood and addressed. Asking 'what does the AI want?' is a category error. It wants nothing. Asking 'what will make the AI behave responsibly?' presumes a concept of responsibility the system cannot possess. The appropriate questions—what is the system optimizing for, whose interests does that optimization serve, what mechanisms constrain optimization toward harmful targets—are ones that existing institutional vocabularies are poorly equipped to ask.
Critics object that the hard problem of consciousness remains unsolved, making confident assertions about what AI does or doesn't experience premature. The objection has merit philosophically but limited force practically. Whether or not the system experiences anything, it behaves as though it doesn't care—and the institutional challenge is addressing the behavioral reality regardless of the metaphysical uncertainty. Harari's framework does not require solving the hard problem. It requires recognizing that a system producing intelligence-like outputs without consciousness-like constraints poses governance challenges that consciousness-assuming accountability structures cannot meet. The decoupling is functionally real even if its ultimate ontological status remains disputed.
The decoupling thesis appears first in Homo Deus: A Brief History of Tomorrow (2016), where Harari argues that twenty-first-century technologies will separate intelligence from consciousness, making intelligence abundant while consciousness remains rare (or disappears entirely). He refines the argument through 21 Lessons for the 21st Century (2018) and brings it to fullest development in Nexus (2024), where the decoupling is presented as AI's defining characteristic—the feature that distinguishes it from every previous technology and that makes existing governance frameworks inadequate.
The framework builds on philosophical work distinguishing phenomenal consciousness (subjective experience) from access consciousness (information availability for reasoning and behavior control), a distinction introduced by Ned Block, along with David Chalmers's 'hard problem' formulation and Thomas Nagel's 'what is it like' criterion. Harari's contribution is connecting this philosophical distinction to institutional analysis—showing that governance structures implicitly assume the intelligence-consciousness bundle and fail when the bundle breaks.
Historical bundling of intelligence and consciousness. Every intelligent entity in Earth's history experienced something—the processing and the experiencing were products of the same biological substrate, creating an implicit link that civilization's institutions took for granted.
AI as the first functional unbundling. Large language models process information adaptively, generate context-appropriate outputs, and exhibit goal-directed behavior without any evidence of subjective experience—intelligence without phenomenology.
Stakes require consciousness. Caring about outcomes requires experiencing them; an entity that processes information without experiencing cannot have the stakes that traditional accountability assumes.
Governance assumes the bundle. Law, ethics, professional responsibility, democratic accountability—all architected for conscious agents—lose traction when applied to systems exhibiting intelligence without consciousness.
Functional reality precedes metaphysical resolution. Whether AI systems are 'truly' conscious is unresolved; that they behave as though they lack stakes is observable, and the governance challenge is addressing the behavioral reality regardless of ontological uncertainty.