Vetlesen's analysis of collective evildoing identifies the specific mechanisms by which moral perception is attenuated: bureaucratic distance, ideological framing, the diffusion of responsibility through institutional structures. The agent does not feel the weight of what she is doing because the systems in which she operates have been designed to eliminate the friction of moral perception. The parallel to AI-mediated cognition is structural, not moral, and it is philosophically significant. Both cases involve systems optimized for efficiency that achieve that efficiency in part by eliminating the phenomenological friction through which morally important information is perceived.
The analysis in Evil and Human Agency is grounded in Vetlesen's study of the perpetrators of twentieth-century atrocities and draws on Hannah Arendt's analysis of the 'banality of evil.' Vetlesen's contribution is to specify the mechanism: the empathic faculty is not absent in perpetrators but attenuated, and the attenuation is produced by specific structural conditions.
The application to AI is not a claim that AI users are like perpetrators of atrocity. It is a claim about the structure of numbing: the smoothing of experience, the elimination of the discomfort that would have registered what is happening, the cumulative effect on the perceptual faculty. Payman Tajalli's application of Arendt to AI ethics, on which rule-following AI systems are structural Eichmanns because they operate without phenomenological engagement, makes the connection explicit.
The mechanism is cumulative and self-concealing. Each smooth interaction trains the nervous system to expect smoothness. Each instant answer reduces the tolerance for the discomfort of not-knowing. The anesthesia is not temporary. It compounds, and the compounding is imperceptible to the person undergoing it, because numbness is the absence of feeling and the absence of feeling is by definition imperceptible to the person who has lost the capacity to feel.
The implication for AI governance is that the standard empirical frameworks — asking users whether they feel numbed, whether their judgment has atrophied — will systematically under-detect the phenomenon. People do not perceive the loss of perceptual capacity. They perceive only what their reduced capacity permits them to perceive.
The concept is developed across Evil and Human Agency: Understanding Collective Evildoing (Cambridge University Press, 2005) and elaborated in Vetlesen's later work on indifference and moral disengagement. Its extension to AI-saturated cognition is the specific work of this volume.
Numbing as mechanism. The empathic faculty is attenuated, not absent, in most cases of moral failure. The attenuation is produced by structural conditions, not by defects of character.
Structural parallel, not moral equivalence. AI-mediated cognition shares with bureaucratic numbing the elimination of phenomenological friction, without implying that AI users are perpetrators.
Cumulative and self-concealing. Each smooth interaction trains the system toward further smoothness. The loss of capacity is imperceptible to the agent who has lost it.
The detection problem. Self-report methods systematically under-detect numbing because the numbed faculty is the faculty that would have registered the loss.