The most dangerous AI outputs are not obvious errors but plausible simulations: passages that read as though a mind conversant with philosophy, law, science, or policy produced them, while containing no actual comprehension. These texts use the right vocabulary in the right register with the right rhetorical gestures. They cite precedents, make distinctions, build arguments. They are intersubjectively convincing and experientially hollow. The hollowness is structural: the system processes patterns derived from millions of genuine human engagements without experiencing any of them. Over time, the accumulation of such text dilutes the shared space of meanings on which civilization depends.
Plausible hollowness is what Harari identifies as the signature pathology of AI's entry into human discourse. It is not bullshit in Harry Frankfurt's sense (indifference to truth), nor is it lying (intention to deceive). It is a third category: mimicry of understanding without the experiential substrate that produces understanding. Edo Segal's Orange Pill provides the canonical micro-case: Claude generates an elegant passage connecting flow theory to Deleuze's 'smooth space', beautifully written and philosophically incoherent. Deleuze's concept has almost nothing to do with the use the passage makes of it. The passage sounds right to anyone without philosophical training. It reads like insight. It is, on inspection, noise dressed as signal.
The danger scales. A single hollow passage in a single text is a curiosity. A million hollow legal briefs, policy analyses, educational materials, and news articles, each one plausible, each one contaminating the epistemic commons, produce systemic degradation. Participants in public discourse can no longer reliably distinguish genuine contribution from sophisticated mimicry. The cognitive commons on which civilization depends (the shared pool of verified facts, tested arguments, and trustworthy sources) is flooded. Not with obviously false content, which can be filtered, but with content that is adequate: good enough to pass, not good enough to trust. The 'good enough' threshold is where the trap closes.
The concept crystallized in Harari's 2024 Nexus, though earlier formulations appear in his warnings about algorithmic manipulation of democratic discourse. The term itself—plausible hollowness—is this volume's synthesis, but the diagnostic it names pervades Harari's recent work: the recognition that AI's threat to intersubjective reality operates through inflation rather than destruction. The fake does not announce itself. It blends. And blending at scale is indistinguishable from collapse.
Surface correctness, interior emptiness. Plausibly hollow text uses appropriate vocabulary, follows argumentative conventions, cites sources—but the understanding is absent. The system manipulates symbols without comprehending them.
Undetectable to most readers. Distinguishing plausible hollowness from genuine contribution requires domain expertise, background knowledge, and disciplined checking—resources scarce and growing scarcer as AI floods every domain simultaneously.
Parasitic on genuine participation. The plausibility derives from training on millions of genuine human engagements. The model feeds on meanings it did not create, cannot maintain, does not experience.
Dilution, not pollution. The mechanism is not contamination by obviously bad content but inflation by adequate content, raising the noise floor until signal becomes indistinguishable from statistical artifact; a toy calculation after this list makes the mechanism concrete.
Requires new literacy. Intersubjective literacy—the capacity to evaluate whether text was produced by a mind with stakes or a system processing patterns—is the educational imperative of the AI age.
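To see why 'good enough to pass' is where the trap closes, consider a minimal Bayesian sketch, offered purely as illustration rather than as anything drawn from Harari: the function name, the pass rate, and the genuine-share figures below are all hypothetical assumptions. The model supposes genuine text always reads as plausible, while hollow text passes a reader's plausibility check at a high rate, and asks how much the verdict 'this reads as plausible' actually tells us.

```python
# Toy Bayesian model of "dilution, not pollution". All figures are
# illustrative assumptions, not measurements of any real corpus.

def p_genuine_given_plausible(genuine_share: float, hollow_pass_rate: float) -> float:
    """Probability that a text is genuine, given that it reads as plausible.

    Assumes genuine text always passes a reader's plausibility check
    (pass rate 1.0), while hollow text passes at hollow_pass_rate.
    Bayes' rule: P(genuine | plausible) = g / (g + (1 - g) * d).
    """
    return genuine_share / (genuine_share + (1 - genuine_share) * hollow_pass_rate)

if __name__ == "__main__":
    hollow_pass_rate = 0.9  # "good enough to pass": hollow text usually reads fine
    for genuine_share in (0.9, 0.5, 0.1, 0.01):
        posterior = p_genuine_given_plausible(genuine_share, hollow_pass_rate)
        print(f"genuine share {genuine_share:5.0%} -> "
              f"P(genuine | reads as plausible) = {posterior:.2f}")
```

When hollow text passes nearly as often as genuine text, the posterior simply tracks the shrinking base rate (0.91, 0.53, 0.11, 0.01 in this toy run): the judgment 'it reads as plausible' carries almost no information, which is what raising the noise floor means here.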