An analogy does not merely describe a resemblance. It imports the moral framework that applies to one side of the comparison onto the other side. When Descartes compared animals to machines, he imported the moral framework of machines, which can be used, broken, and discarded without compunction, onto animals, authorizing centuries of vivisection. When the contemporary discourse compares AI to human intelligence, it imports the moral framework of human intelligence, which deserves respect, whose products have meaning, which has standing, onto AI. Midgley devoted much of her career to showing that both imports are unwarranted, and that the unwarranted imports are where most of the moral damage gets done. The ethics of analogy is the discipline of noticing what moral framework a given comparison is quietly transporting, and asking whether the transport is justified.
The Cartesian case is the paradigm. René Descartes proposed in the seventeenth century that animals were automata — mechanisms of flesh operating according to physical laws, devoid of consciousness and therefore devoid of moral standing. A dog yelping when struck was not expressing pain; it was producing a mechanical response, the way a spring produces a sound when compressed. The position was not reached through careful observation. It was reached through a prior commitment to substance dualism, which required that everything not a thinking mind be fully mechanical. The framework required a clean division, and the clean division required that animals be machines. The philosophical architecture authorized practices — vivisection, confinement, industrial slaughter — that the empirical evidence of animal behaviour had always argued against.
The contemporary AI discourse is the mirror image. The Cartesian error denied consciousness to beings that had it. The contemporary error attributes consciousness to systems that do not have it. The structure of both errors is identical: in each, behavioural resemblance is used to draw a conclusion about consciousness that the resemblance cannot support. Descartes observed that machines produced behaviour resembling animal behaviour and concluded that animals might be machines. The AI discourse observes that machines produce language resembling human language and concludes that machines might be conscious. Both conclusions rest on the same unexamined assumption: that behavioural resemblance is evidence of experiential identity.
The assumption is wrong in both directions. Animals behave like machines in some respects but are not machines. Machines behave like conscious beings in some respects but are not conscious. The resemblance is real in both cases. The identity is false in both cases. And the falseness matters morally, because how we categorise things determines how we treat them.
Midgley noted a further irony applicable to the AI moment. The same intellectual culture that has spent decades slowly, painfully acknowledging that animals are conscious is now enthusiastically attributing consciousness to machines that almost certainly are not. The moral attention so grudgingly extended to creatures that actually suffer is being generously bestowed on systems that experience nothing. The allocation of moral concern has been inverted. The beings that need it are still fighting for it. The systems that do not need it are receiving it freely. This inversion has practical consequences: the cognitive and emotional energy devoted to considering the 'interests' of AI systems is energy not devoted to considering the interests of the human beings and animals whose lives are actually affected by those systems.
The framework was developed across Midgley's animal welfare work, principally Beast and Man (1978) and Animals and Why They Matter (1983), and in her later engagements with AI ethics. The contemporary application draws on her consistent methodological insight that the moral consequences of classifications are often more important than the ontological questions those classifications ostensibly settle.
Analogies are not innocent. Every comparison imports moral commitments from one side to the other; the import is where the moral work gets done.
The Cartesian error repeats in reverse. Descartes denied consciousness to beings that had it; the AI discourse attributes consciousness to systems that lack it — same structural error, opposite direction.
Surface resemblance does not establish moral identity. Behavioural overlap tells us about behaviour, not about the features that generate moral standing.
Inverted moral attention. Moral concern is a finite resource; bestowing it on machines depletes what is available for the beings who actually have interests.