The evaluative gap is this book's name for the structural distance between two capabilities that are easily confused: processing information about consequences, and feeling the weight of consequences. AI systems are capable of the first to an extraordinary degree; they are architecturally incapable of the second. Damasio's clinical work shows that humans with ventromedial prefrontal damage retain the first capacity while losing the second — and that the loss is catastrophic for practical judgment. The evaluative gap is where wisdom lives, and the question of the AI age is whether human judges will maintain the somatic conditions under which they can close the gap on the machines' behalf.
The gap is not a deficit in AI capability that future engineering will close. It is a structural feature of the distinction between processing and feeling. A system that processes without having stakes is not an imperfect version of a system that feels; it is a different kind of system entirely, with different strengths, limits, and roles.
The gap manifests in specific patterns. A medical diagnostic system identifies a tumor without experiencing the gravity of the identification. A financial model simulates market scenarios without feeling the weight of potential human consequences. A legal analysis tool surveys case law without the somatic awareness that certain precedents carry a residue of historical injustice that the dataset alone cannot encode.
The most dangerous scenario the gap enables is not AI making wrong decisions — wrong decisions can be detected and corrected. The most dangerous scenario is AI making plausible decisions without stakes, and humans — seduced by the plausibility and the efficiency — ceasing to provide the felt engagement that would distinguish plausibility from wisdom.
The gap connects directly to Byung-Chul Han's critique of smoothness: the polish of AI output is precisely the quality that suppresses human somatic markers, making the human reviewer less likely to notice that the output lacks evaluative weight. Smoothness is the gap's concealment mechanism. Friction is its revelation.
The Orange Pill's account of the "Deleuze error" — the author almost keeping a confident passage that turned out to be false — illustrates the gap in miniature. The machine had processed elegantly; it had not felt whether its output was true. The human almost failed to feel whether the output was true either, because the smoothness had suppressed his own somatic signal. The gap, in that moment, was nearly unbridged.
The phrase itself is this book's coinage, but the underlying concept is implicit in Damasio's entire corpus from Descartes' Error forward, and is echoed in critiques of AI from Hubert Dreyfus, John Searle, and the enactivist tradition. Framing it as a specifically structural feature of AI deployment synthesizes Damasio's clinical framework with contemporary observations about how AI tools function in practice.
- The gap is structural: not a bug to be fixed by better training but a feature of the distinction between processing and feeling.
- The clinical parallel is precise: AI systems instantiate, by design, the architecture that ventromedial prefrontal damage produces by lesion.
- Plausible wrongness is the signature risk: the gap manifests not as obvious failure but as smooth outputs that lack the felt weight that would flag them as consequential.
- Human judgment is the bridge: the gap must be closed by feeling organisms; when humans defer uncritically to AI outputs, the gap stays open and outputs accumulate without evaluative filter.
- Smoothness conceals the gap: the polish of AI output is the feature most likely to suppress the human somatic signals that would otherwise alert a reviewer to what the output lacks.