Dependent Rational Animals: Why Human Beings Need the Virtues is MacIntyre's extension of his earlier virtue ethics in a direction he later acknowledged had been underdeveloped in After Virtue. The book's central thesis is that human beings are rational animals — and the emphasis falls on both words. We are not disembodied reasoners who happen to inhabit bodies. We are animals whose rationality is expressed through, and conditioned by, biological vulnerability, embodied existence, and dependency on other animals of our kind. The virtues are cultivated in and through this embodied, dependent condition. For the AI moment, this means that whatever "intelligence" the machine has, it is not the intelligence of a dependent rational animal — and the attempt to understand AI without attending to this difference inevitably distorts both machine and human.
There is a parallel reading that begins not from embodiment's philosophical significance but from its material conditions. MacIntyre's dependent rational animals require extensive infrastructures of care — hospitals, schools, eldercare facilities, disability services — all of which are being systematically restructured by AI optimization. The virtues of dependency he celebrates emerge within specific political economies of care work, and these economies are precisely what machine intelligence disrupts. When AI-driven scheduling determines nurse staffing, when algorithmic assessment shapes special education plans, when automated systems mediate between the vulnerable and their caregivers, the substrate of virtue formation itself is transformed.
The deeper issue is that MacIntyre's framework assumes care relationships that capital is actively dismantling. The "virtues of receiving care gracefully" presuppose care that is actually available; the "virtues of providing care without condescension" assume caregivers who are not themselves crushed by algorithmic management and perpetual understaffing. The machine may indeed lack embodiment, vulnerability, and dependency, but it is precisely these absences that make it so useful for managing the embodied, vulnerable, and dependent at scale. The categorical difference MacIntyre identifies between human and machine intelligence becomes, in practice, the mechanism by which machine intelligence governs human vulnerability. The question is not whether AI can possess the virtues of dependent rational animals, but how AI's deployment reshapes the conditions under which humans can develop or exercise those virtues. In eldercare facilities where interaction time is algorithmically allocated, in schools where AI tutors supplement overwhelmed teachers, in hospitals where diagnostic algorithms mediate between doctor and patient, the substrate of virtue formation is being replaced by the substrate of optimization.
The book begins with an observation that is simple and consequential: human beings are dependent for substantial portions of their lives on the care of others. Infants, the sick, the injured, the elderly, the disabled — these are not marginal cases of humanity but paradigmatic ones. Every human being has been an infant; most will be elderly; many will be sick or injured. The virtues that the human community requires must include the virtues of dependency — receiving care gracefully, providing care without condescension, recognizing in the vulnerable the same humanity as in the strong.
This account has a consequence that is decisive for AI ethics. The virtues are not abstract dispositions that a mind of any kind could possess. They are developed in and through the specific vulnerabilities and dependencies of human embodied life. Courage is developed by a being that can be harmed; honesty by a being whose reputation matters; justice by a being who depends on fair treatment from others; practical wisdom by a being whose decisions carry real consequences for an embodied life. The machine has none of these features. Its "intelligence," whatever else it is, is not the intelligence of a being that can die, that depends on care, that inhabits a body.
This is not a claim that the machine is inferior; it is a claim that the machine is different in kind. The attempt to evaluate machine and human on a single axis, asking whether the machine is as "smart" as the human, already misses the point. The human's intelligence is inseparable from the human's vulnerability, dependency, and embodiment. An intelligence that lacks these features is not a less-capable version of human intelligence but a different kind of thing. Conflating the two questions, which is more capable and what kind of thing each is, is the characteristic error of the AI discourse.
MacIntyre's account also provides resources for thinking about what the virtues required by AI-mediated work might look like. If rationality is embodied and dependent, then the virtues of using AI well are virtues of dependent rational animals, not of disembodied reasoners: the practical wisdom to know when to defer to the machine and when to override it, the humility to recognize when the machine exceeds one's own competence, the honesty to acknowledge when smooth output is substituting for earned understanding. They cannot be specified in advance by general rules. They must be cultivated through practice in the specific circumstances of human life with AI.
Published in 1999 by Open Court as the Paul Carus Lectures, the book marks a significant development in MacIntyre's thought, particularly in its engagement with biology and cognitive science, directions his earlier work had not explored. It has become increasingly relevant in discussions of technology ethics, disability studies, and the philosophy of care.
Rational animals, emphasis on both words. Human rationality is inseparable from biological animality.
Dependency as paradigmatic. Infants, the sick, the elderly are not marginal but central to understanding human life.
Virtues of dependency. The human community requires virtues for receiving care, providing care, and recognizing vulnerability.
Categorical difference from machines. Machine intelligence lacks the embodiment, vulnerability, and dependency that shape human rationality.
Technomoral implications. The virtues of using AI well are virtues of dependent rational animals, cultivated through embodied practice.
An open question is whether MacIntyre's emphasis on biological embodiment is compatible with functionalist views in philosophy of mind that hold that intelligence could in principle be realized in any substrate. The MacIntyrean position is that virtues are substrate-dependent in a way that intelligence in the functionalist sense may not be.
The tension between MacIntyre's virtue ethics and the material conditions of AI deployment resolves differently at different scales. At the level of philosophical anthropology, MacIntyre is entirely right: human rationality is indeed inseparable from embodiment and dependency, and this creates a categorical difference from machine intelligence. No amount of computational power can replicate the formative experience of infant helplessness or elderly frailty. But shift the question to institutional implementation, and the political-economy reading gains considerable force: AI systems are already mediating the care relationships where virtues form, often degrading rather than supporting them.
The weighting shifts again when we consider individual practice. Here the two perspectives hold each other in balance: healthcare workers using diagnostic AI must indeed cultivate MacIntyrean virtues of practical wisdom and humility, but they do so within systems increasingly structured by algorithmic logics that may undermine those very virtues. A nurse's phronesis in knowing when to override an AI recommendation emerges from embodied experience, yet that experience occurs within staffing patterns optimized by machines that recognize neither vulnerability nor care.
What the topic needs is a synthetic frame of scale-dependent virtue formation. MacIntyre provides the ontological foundation: an account of what human virtue is and why it differs from machine capability. The political economy critique identifies the institutional substrate: how AI deployment reshapes the conditions for virtue development. Together they suggest that preserving human virtue in the AI era requires not just philosophical clarity about what we are, but active protection of the social spaces where dependent rational animals learn to care for one another. The virtues cannot be specified in advance, as MacIntyre says, but neither can they emerge from conditions that no longer support their cultivation.