The book begins with an observation that is simple and consequential: human beings depend on the care of others for substantial portions of their lives. Infants, the sick, the injured, the elderly, the disabled — these are not marginal cases of humanity but paradigmatic ones. Every human being has been an infant; most will be elderly; many will be sick or injured. The virtues that the human community requires must therefore include the virtues of dependency — receiving care gracefully, providing care without condescension, and recognizing in the vulnerable the same humanity as in the strong.
This account has a consequence that is decisive for AI ethics. The virtues are not abstract dispositions that a mind of any kind could possess. They are developed in and through the specific vulnerabilities and dependencies of human embodied life. Courage is developed by a being that can be harmed; honesty by a being whose reputation matters; justice by a being who depends on fair treatment from others; practical wisdom by a being whose decisions carry real consequences for an embodied life. The machine has none of these features. Its "intelligence," whatever else it is, is not the intelligence of a being that can die, that depends on care, that inhabits a body.
This is not a claim that the machine is inferior; it is a claim that the machine is different in kind. The attempt to evaluate machine and human on a single axis — to ask whether the machine is as "smart" as the human — already misses the point. Human intelligence is inseparable from human vulnerability, dependency, and embodiment. An intelligence that lacks these features is not a less-capable version of human intelligence but a different kind of thing. The conflation of two distinct questions — which is more capable, and what kind of thing each is — is the characteristic error of the AI discourse.
MacIntyre's account also provides resources for thinking about what the virtues that AI-mediated work requires might look like. If rationality is embodied and dependent, then the virtues of using AI well — the practical wisdom to know when to defer to the machine and when to override it, the humility to recognize when the machine exceeds one's own competence, the honesty to acknowledge when smooth output is substituting for earned understanding — are virtues of dependent rational animals, not of disembodied reasoners. They cannot be specified in advance by general rules. They must be cultivated through practice in the specific circumstances of human life with AI.
Published in 1999 by Open Court, based on MacIntyre's Paul Carus Lectures. The book marks a significant development in MacIntyre's thought, particularly in its engagement with biology and cognitive science — directions his earlier work had not explored. It has become increasingly relevant in discussions of technology ethics, disability studies, and the philosophy of care.
Rational animals, with emphasis on both terms. Human rationality is inseparable from biological animality.
Dependency as paradigmatic. Infants, the sick, the elderly are not marginal but central to understanding human life.
Virtues of dependency. The human community requires virtues for receiving care, providing care, and recognizing vulnerability.
Categorical difference from machines. Machine intelligence lacks the embodiment, vulnerability, and dependency that shape human rationality.
Technomoral implications. The virtues of using AI well are virtues of dependent rational animals, cultivated through embodied practice.