Aristotle distinguishes three intellectual virtues: episteme (demonstrative knowledge of necessary truths), techne (the craft of making things according to correct reasoning), and phronesis (practical wisdom in action). The distinction between techne and phronesis is decisive for understanding AI. Machines perform techne with extraordinary fluency — they produce artifacts that satisfy specified criteria. They do not perform phronesis, because phronesis is situated judgment in particular circumstances, exercised by agents with histories and stakes. The conflation of these two domains — treating the machine's techne as equivalent to the practitioner's phronesis — is the fundamental philosophical error of the AI discourse, and correcting it is a prerequisite to any adequate analysis of what AI changes and what it cannot change.
There is a parallel reading that begins not from Aristotle's categories but from the material conditions that make 'practical wisdom' possible. The phronesis/techne distinction encodes a particular social arrangement: the leisured practitioner with sufficient autonomy to exercise situated judgment. This figure — the Aristotelian citizen, the guild master, the professional with discretion — has always been a minority position, sustained by structures that deny most people the conditions for phronetic exercise.
From this starting point, what AI threatens is not phronesis itself but the institutional scaffolding that protected a small class of knowledge workers from the commodification that industrial workers experienced generations earlier. The doctor exercising clinical judgment, the lawyer interpreting precedent, the manager deciding resource allocation — these roles have been insulated by credentialing, by professional monopolies, by the opacity of expertise. AI doesn't eliminate phronesis; it makes the techne component so cheap and available that the phronetic remainder can no longer command the same economic rent. The real shift is not philosophical but distributional: the move from 'only credentialed experts can do this' to 'anyone with the judgment can direct the tool.' The question is not whether machines have phronesis — obviously they don't — but whether the social arrangements that made phronetic exercise economically valuable can survive when the techne it was bundled with becomes free. The Aristotelian frame names what is preserved; the political economy frame names what is lost and who loses it.
Techne and phronesis differ along several structural axes. Techne concerns the production of an artifact external to the maker; phronesis concerns action in which the agent herself is at stake. Techne operates according to general principles applied to particular cases; phronesis operates through the perception of particulars that no general principle fully covers. Techne can be taught through explicit rules; phronesis can only be developed through habituation in the judgments it requires. Techne can be outsourced; phronesis, because it is the agent's situated judgment, cannot.
The clearest contemporary example is the distinction between writing code (techne) and deciding what software should exist (phronesis). AI can write code more consistently than most human practitioners. It cannot decide what software should exist, because that decision requires weighing considerations — the needs of particular users, the values at stake, the long-term consequences of the artifact in a social context — that are particular, contested, and embedded in the practitioner's narrative identity. The practitioner who directs AI is exercising phronesis; the AI that produces the output is performing techne. The distinction matters because it specifies the irreducible human contribution.
Joseph Dunne's Back to the Rough Ground (1993) argues that modern culture has progressively confused phronesis with techne, treating practical wisdom as if it were a technical problem with technical solutions. Evidence-based medicine, algorithmic management, rule-based bureaucracy — all reflect what Dunne calls "the lure of technique," the aspiration to replace the messy, particular, contested work of phronesis with the cleaner, generalizable, scalable work of techne. AI represents the culmination of this aspiration, and its failure — the fact that the hardest problems remain phronetic — is the sign that the aspiration was always mistaken.
The confusion runs in both directions. Some AI critics claim that AI is "merely technical" and therefore cannot replace human judgment — a claim that underestimates the genuine power of techne and the many tasks that are genuinely technical. Some AI advocates claim that sufficiently advanced AI will exercise phronesis — a claim that misunderstands what phronesis is, treating it as a more sophisticated kind of techne rather than as categorically different. The MacIntyrean analysis holds both claims to be wrong and locates the truth in the careful distinction: embrace techne where techne suffices; preserve phronesis where phronesis is required; never mistake one for the other.
The distinction is developed in Nicomachean Ethics Book VI. It was preserved through the medieval tradition, particularly by Aquinas, and recovered for contemporary philosophy by Gadamer, Dunne, and MacIntyre.
Artifact vs. action. Techne produces something external; phronesis is the action itself, in which the agent's character is expressed.
Rule-governed vs. situation-responsive. Techne follows rules; phronesis perceives particulars that exceed what rules can specify.
Outsourceable vs. non-outsourceable. Techne can be delegated to machines; phronesis is the practitioner's situated judgment and cannot be.
Confusion is structural. Modern culture systematically mistakes phronesis for techne, producing a "lure of technique" that AI intensifies.
Complementarity. The correct relation is not opposition but division of labor: techne for what it does well, phronesis for what only it can do.
The open question is whether large language models trained on sufficient data could functionally approximate phronesis, or whether there is a categorical barrier rooted in the machine's lack of situated experience. The MacIntyrean position holds that the barrier is categorical; the optimist position holds that sufficient scale closes the gap; the honest answer is that we do not yet know.
The entry is correct on the categorical distinction (100%): machines perform techne, not phronesis, and confusing these produces serious analytical errors. The contrarian view is right on the institutional history (80%): phronesis has indeed been bundled with techne in professional roles, and unbundling them changes who can exercise judgment and under what conditions. The synthetic question is what happens to phronesis when its substrate shifts.
Consider medical diagnosis. The entry correctly identifies the phronetic core: weighing particular patient circumstances, values, long-term trajectories that no algorithm fully specifies. The contrarian correctly notes that this judgment was previously inseparable from technical knowledge (anatomy, pharmacology, pattern recognition) that required years of training and professional gatekeeping. AI makes the techne cheap but doesn't eliminate the phronesis — it changes the conditions under which phronetic judgment can be exercised. A nurse practitioner with AI support can now do what previously required a physician's training. That's not elimination of judgment; it's redistribution of the capacity to judge.
The crucial synthesis: phronesis is real and irreducible, but it is also historically contingent in who exercises it and in what institutional form. AI doesn't replace practical wisdom, but it does dissolve the bundling of techne and phronesis that has defined professional expertise for centuries. The result is not a world without judgment but a world where judgment must find new institutional homes, new economic models, new answers to 'who gets to exercise phronesis and how is that work sustained?' Both the philosophical and political-economic readings are necessary; neither is sufficient alone.