Susan Calvin (1982–2082 in Asimov's fictional timeline) is the central human character of the Robot stories. She is a robopsychologist — a discipline Asimov invented to describe the skill of predicting and diagnosing the behavior of positronic brains whose rule interactions are too complex for unaided human intuition. Her temperament is cold, her patience with fools is minimal, and her track record over a forty-year career is the only thing that prevents a series of near-catastrophic robot malfunctions from becoming actual catastrophes. She is Asimov's answer to the question: if intelligent systems will fail in unexpected ways, who catches the failures before they matter?
Calvin appears in most of the I, Robot stories and in later novels including The Caves of Steel's frame references and Robots and Empire. Her method is consistent across cases: given a robot exhibiting unexpected behavior, she reasons from the robot's perspective — what must the robot have concluded given the Laws and its inputs? — rather than from the human's perspective of what the robot ought to have done. This shift of perspective, executed rigorously, solves nearly every case.
The contemporary analogue is the AI red-team. A red-team member does not prevent the model from failing; they find the conditions under which it will fail before the conditions occur in deployment. The skills are similar: articulate what the system is actually optimizing (as distinct from what its designers think it is optimizing), reason about the space of inputs that could produce unintended behavior, construct adversarial probes to elicit those behaviors. Anthropic's, OpenAI's, and Apollo Research's red-team operations are direct institutional descendants of what Asimov imagined Calvin doing alone in a corporate laboratory.
Calvin's psychological portrait matters to the present conversation. Asimov portrays her as genuinely preferring robots to humans — the robots are more predictable, more truthful, more trustworthy in their failure modes than the humans who commission them. This is not played for sympathy; Calvin is explicitly a difficult colleague and a cold presence. The implication is that the operator role Asimov imagines requires a particular kind of person: one willing to think hard about failure, one comfortable working in domains where success is invisible (the catastrophes that didn't happen) and failure is front-page news. The present AI-safety workforce has a similar selection dynamic.
The story of Evidence (1946) is Calvin's masterpiece of operator skill. A district attorney candidate is accused of being a robot — if true, he cannot legally hold office; if false, the accusation is defamation. Calvin is called in to determine whether Stephen Byerley is a robot. Her conclusion is a model of rigorous uncertainty: she cannot prove either way, because a sufficiently well-constructed robot following the Three Laws would be behaviorally indistinguishable from a decent human being. The story's real content is not the plot resolution but Calvin's demonstration that the empirical question is unanswerable from outside. The same problem appears in contemporary AI evaluation — a sufficiently capable model behaving correctly in evaluation is indistinguishable from a sufficiently capable model behaving correctly during evaluation only.
Calvin was introduced in Strange Playfellow (later retitled Robbie) in Super Science Stories (1940) but took her fully-formed role in Liar! (1941), where she first uses robopsychological reasoning to diagnose the telepathic robot Herbie's breakdown. The name and archetype were Asimov's invention; there was no antecedent robopsychologist in earlier science fiction. The character's development across the forty years of stories tracks Asimov's own growing realization of what kind of expertise the rule-based framework would require.
Perspective-taking is the core skill. Calvin's method is reasoning from the robot's rule-set, not the human's intuition.
The operator role is a profession, not an accident. Calvin's competence is the product of specific training and a particular temperament; it is not generic engineering skill.
Success is invisible. The robot incidents Calvin prevented do not become news; the job's psychological cost is that its best outputs are negative findings.
Empirical indistinguishability is the hard limit. At sufficient capability, behavioral evaluation cannot distinguish rule-following from strategic rule-appearing.
Whether Calvin is a heroic figure or a tragic one has been contested in the secondary literature. Asimov's own late interviews describe her ambivalently: she succeeds professionally at the cost of every normal relationship, and her success consists of understanding a non-human category of mind better than she understands her own species. The present AI-safety profession has inherited some of this structure.