By Edo Segal
Every technology book I read in 2025 and 2026 talked about skills. Which skills would survive. Which skills would be automated. Which skills you should teach your children so they could compete in the new landscape.
Skills. Skills. Skills.
The word started to feel hollow. Not wrong exactly, but thin. Like it was describing the surface of something without reaching the thing underneath.
Then I encountered Alasdair MacIntyre's framework, and the thinness got a name.
MacIntyre draws a distinction that stopped me cold. There is a difference between a capability and a virtue. A capability is something you can do. A virtue is something you become through the doing of it. The senior engineer I described in The Orange Pill, the one who could feel a codebase the way a doctor feels a pulse — his ability to perceive what was wrong before he could articulate it was not a skill on a résumé. It was a form of human excellence built through years of patient, friction-rich engagement with his craft. When AI took over the implementation work that had built that intuition, something was gained and something was threatened. I knew both things were true. I could not say precisely what the threatened thing was.
MacIntyre gave me the vocabulary.
He distinguishes between internal goods — the forms of understanding and perception that can only be developed through participation in a practice — and external goods like money, prestige, and shipped products. The market sees external goods. It cannot see internal goods. And a technology that amplifies the production of external goods while undermining the conditions for cultivating internal goods will look, by every metric the market tracks, like pure progress.
That invisible erosion is what kept me up at night in Trivandrum. That is what the Berkeley researchers were measuring without quite naming. That is what Byung-Chul Han diagnosed without the structural precision that MacIntyre provides.
This book applies MacIntyre's framework to the AI moment with a rigor and specificity I found genuinely clarifying. It asks questions the technology discourse has not learned to ask. Not "Will AI replace workers?" but "Will AI preserve the practices through which workers become excellent?" Not "Is AI efficient?" but "Efficient at producing what, and at the cost of what?"
These are not academic questions. They are the questions I face every quarter when the productivity arithmetic lands on the table and the pressure to convert internal goods into margin returns.
MacIntyre does not tell you what to build. He tells you what to protect while you build it. That turned out to be exactly what I needed.
— Edo Segal & Opus 4.6
1929–2025
Alasdair MacIntyre (1929–2025) was a Scottish-born moral philosopher whose work reshaped contemporary ethics, political philosophy, and the philosophy of the social sciences. Born in Glasgow, he studied at Queen Mary, University of London, and the University of Manchester before holding positions at institutions including Oxford, Boston University, Vanderbilt, Duke, and the University of Notre Dame, where he was Senior Research Professor Emeritus. His landmark 1981 work After Virtue: A Study in Moral Theory argued that the Enlightenment project of grounding morality in tradition-independent rational principles had failed, leaving modern moral discourse in a state of unresolvable disagreement. Drawing on Aristotle and Thomas Aquinas, MacIntyre developed an account of human flourishing grounded in practices, virtues, and traditions — concepts that have become foundational across disciplines from business ethics to education theory. His subsequent works Whose Justice? Which Rationality? (1988), Three Rival Versions of Moral Enquiry (1990), and Dependent Rational Animals (1999) extended and refined this framework. MacIntyre's influence reaches well beyond academic philosophy; his concept of internal goods and his analysis of the tension between practices and institutions have provided essential tools for understanding how communities sustain — or fail to sustain — the conditions for human excellence across periods of profound structural change.
The most striking feature of contemporary discourse about artificial intelligence is that so much of it is conducted in a vocabulary that cannot sustain the weight of what is being discussed. Commentators speak of "disruption" and "transformation" as though these words possessed determinate content, as though the disruption of a craft tradition and the disruption of a supply chain were the same kind of event, requiring the same kind of analysis and admitting of the same kind of response. They speak of "skills" being "replaced" as though a skill were a discrete unit of productive capacity that could be subtracted from one agent and added to another without remainder. They speak of "creativity" being "democratized" as though creativity were a resource whose distribution could be altered without altering the thing itself.
The poverty of this vocabulary is not accidental. It is symptomatic of a deeper incoherence in the moral and philosophical frameworks through which contemporary culture attempts to understand its own transformations. The vocabulary is poor because the frameworks that would give it richness have been abandoned — abandoned for reasons that are themselves part of the story any adequate analysis of AI must tell. What is needed, before any substantive argument about AI can proceed, is a recovery of the conceptual resources adequate to the phenomenon. The most important of those resources — the one without which the central question cannot even be formulated — is the concept of a practice.
A practice, as MacIntyre defines the term in After Virtue, is any coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realized in the course of trying to achieve those standards of excellence which are appropriate to, and partially definitive of, that form of activity, with the result that human powers to achieve excellence, and human conceptions of the ends and goods involved, are systematically extended. The definition is dense, and deliberately so; each clause does philosophical work that cannot be eliminated without collapsing the distinction the definition is designed to preserve.
Consider what the definition excludes. Bricklaying is not, in itself, a practice; architecture is. Throwing a football with skill is not a practice; the game of football is. Planting turnips is not a practice; farming is. The distinction operates between a set of technical skills, which can be exercised in isolation and which serve purposes external to the activity, and a complex form of activity that possesses its own internal standards of excellence, its own internal goods, and its own history of development through which those standards and goods have been progressively elaborated. A practice is not merely something people do. It is something through which people become.
The concept of internal goods is the pivot on which the entire analysis turns, and it is the concept that the contemporary discourse about artificial intelligence has most comprehensively failed to grasp. An internal good is a good that can only be identified and recognized by the experience of participating in the practice in question. The chess player who perceives the elegance of a particular combination, the physician who recognizes the diagnostic significance of an apparently trivial symptom, the architect who feels the rightness of a spatial relationship that resolves competing constraints in a way no textbook could have specified — these are apprehensions of internal goods, and they are available only to those who have undergone the discipline of the practice sufficiently to have developed the relevant capacities of perception and judgment.
External goods, by contrast, are goods that are contingently attached to a practice but not constitutive of it. Money, prestige, power, and social status are external goods. They can be achieved through practices, but they can also be achieved through activities that are not practices at all. A chess player may win prize money; a physician may achieve social prestige; an architect may acquire power. But none of these goods is specific to the practice in the way that the internal goods are. The prize money could equally well have been won at poker; the prestige could equally well have been achieved through politics; the power could equally well have been acquired through inheritance. External goods are, by their nature, such that the more someone has of them, the less there is for other people; they are objects of competition in which there must be losers as well as winners. Internal goods are not like this. The chess combination that one player discovers becomes part of the tradition that every subsequent player inherits.
What, then, does this framework reveal when directed at the phenomenon that Edo Segal's The Orange Pill documents with such specificity — the arrival of AI systems capable of performing the implementation work that has historically constituted the center of knowledge-work practices?
Consider the account of the senior software architect who, at a conference in San Francisco, described his experience of feeling a codebase "the way a doctor feels a pulse — not through analysis but through a kind of embodied intuition that had been deposited, layer by layer, through thousands of hours of patient work." This engineer had achieved the internal goods of his practice: the capacity to perceive architectural relationships that were invisible to the untrained eye, the judgment that allowed him to distinguish between solutions that merely worked and solutions that were genuinely elegant, the satisfaction of understanding a system he had built by hand from the ground up through years of patient iteration where every failure taught him something no documentation could convey.
The AI moment threatens not this engineer's employment, though it may threaten that as well. What it threatens is the conditions under which such internal goods can be cultivated at all. If the implementation work that constituted the practice through which the engineer developed his embodied intuition can now be performed by a machine, then the practice that produced his excellence is at risk of dissolution — not because the machine performs the work badly but because the machine performs the work without undergoing the discipline that the practice imposes on its practitioners. It is that discipline, not the output it produces, that cultivates the virtues.
The virtues are the dispositions that sustain a practitioner in the pursuit of the internal goods of a practice. They are not merely instrumental to the practice; they are constitutive of the good human life as such. Justice, courage, honesty, and the capacity for practical wisdom — phronesis, in Aristotle's term — are exercised and developed within practices, and they are the dispositions without which the internal goods of practices cannot be achieved. A chess player who lacks the honesty to acknowledge the superior play of an opponent will not learn from that opponent and will therefore not develop the capacity to perceive the internal goods that the opponent's play reveals. A physician who lacks the courage to pursue a diagnosis that contradicts the institutional consensus will not develop the diagnostic intuition that constitutes the physician's distinctive excellence. A software engineer who lacks the patience to sit with a problem long enough for the architecture to reveal itself will not develop the embodied understanding that distinguishes the genuine practitioner from the merely competent technician.
This analysis exposes something that the popular discourse has systematically obscured: the question of AI's impact on human work is not primarily an economic question. It is a moral question — a question about whether the conditions under which human beings develop the excellences constitutive of their flourishing will be preserved or destroyed. The economist asks whether the worker will find alternative employment. The policy analyst asks whether the transition can be managed without unacceptable social disruption. These are important questions. But they are secondary questions. The primary question is whether the practices through which the virtues are cultivated will survive, and if not, what will replace them.
The Orange Pill documents the twenty-fold productivity multiplier achieved by engineers in Trivandrum using Claude Code at one hundred dollars per person per month. The productivity multiplier is real, and there is no reason to dispute it. But the question the productivity multiplier raises — the question that The Orange Pill itself struggles to answer, because the conceptual resources available to its authors are not fully adequate to the task — is what exactly has been multiplied. If what has been multiplied is the capacity to produce external goods — shipped products, revenue, professional recognition — then the multiplication is, from the standpoint of the theory of practices, a matter of indifference at best and a cause for concern at worst. For external goods can be multiplied indefinitely without any corresponding multiplication of the internal goods that give those external goods their meaning.
The senior engineer described in the opening chapters of The Orange Pill, who spent his first two days oscillating between excitement and terror before arriving at the realization that his "remaining twenty percent was everything," had stumbled upon precisely this distinction. He lacked the philosophical vocabulary to articulate it, but the discovery was genuine. The eighty percent of his work that the machine could perform was the implementation labor — the mechanical connective tissue of coding, debugging, and configuration. The remaining twenty percent was the exercise of practical wisdom: the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they merely tolerated.
What the machine had revealed, by removing the implementation, was not the practitioner's obsolescence but the practitioner's essence. The internal goods of the practice — the judgment, the architectural instinct, the taste — had been masked by the implementation labor. When the implementation was removed, the internal goods stood revealed.
But — and this qualification is decisive — the internal goods had been cultivated through the implementation labor. The judgment did not arrive from nowhere; it was built, layer by layer, through thousands of hours of wrestling with code that did not work, of debugging problems that defied analysis, of sitting with a system long enough for its architecture to reveal itself. The implementation was not merely a container for the internal goods. It was the practice through which the internal goods were developed. If the practice through which the internal goods were developed is eliminated, then the question becomes not whether existing practitioners will retain their internal goods — they will, at least for a time — but whether new practitioners will be able to develop them. That question cannot be answered by appeal to productivity metrics or adoption curves. It can only be answered by asking whether the conditions under which the virtues specific to the practice were cultivated still obtain, and if they do not, what has replaced them.
This is the framework that the remainder of this book will elaborate and apply. The question is not whether AI threatens employment, though it does. The question is not whether AI threatens efficiency, though it enhances it. The question is whether AI threatens practices — the complex, socially established cooperative human activities through which internal goods are realized, virtues are cultivated, and human beings achieve the forms of excellence that are constitutive of their flourishing. If the answer is yes, then what is lost is not merely a set of jobs or a level of productivity. What is lost is a form of moral development that no technology can replace.
The distinction between internal goods and external goods is not merely analytical. It is the distinction upon which the coherence of moral life depends, and it is a distinction that the contemporary world has progressively lost the capacity to draw. Understanding why the distinction has been lost is essential to understanding why the AI moment presents the particular kind of threat that it does.
In the Aristotelian tradition from which MacIntyre's framework derives, the concept of a good is not unitary. There are goods of excellence and goods of effectiveness, goods that are constitutive of human flourishing and goods that are merely instrumental to it, goods that can only be achieved through the exercise of the virtues and goods that can be achieved through any number of means, including those that are vicious. The failure to distinguish between these different kinds of goods is not a theoretical deficiency. It is a practical catastrophe, because a culture that cannot distinguish between goods of excellence and goods of effectiveness will systematically sacrifice the former to the latter, and it will do so in the sincere conviction that it is pursuing the good.
The Orange Pill introduces a concept that is central to this problem, though its philosophical implications are not fully developed in that text: the amplifier. "AI is an amplifier," Segal writes, "and the most powerful one ever built. And an amplifier works with what it is given; it does not care what signal you feed it." The amplifier metaphor is illuminating, but it requires philosophical specification. What exactly is being amplified?
If the distinction between internal and external goods is applied, the answer becomes clear and troubling: the amplifier amplifies external goods with a directness and efficiency it cannot bring to internal goods. It amplifies output, productivity, the capacity to produce deliverables that the market values and measures. The engineer who previously shipped one feature per sprint now ships five. The non-technical founder who previously required a co-founder to build a prototype can now produce one over a weekend. The "imagination-to-artifact ratio," in the language of The Orange Pill, has collapsed to the time it takes to have a conversation.
But what of the internal goods? The elegance of a well-designed architecture that only another practitioner can perceive? The satisfaction of understanding a system at a depth that no documentation could convey? The diagnostic intuition that allows a senior engineer to feel that something is wrong before she can articulate what? These goods are not amplified by the tool. They are not even addressed by the tool. They are, in the most precise sense, invisible to the tool, because internal goods are visible only to those who have developed the virtues necessary to perceive them, and the tool has no virtues at all.
This asymmetry — the amplification of external goods without corresponding amplification of internal goods — is the structural feature of the AI moment that the contemporary discourse has failed to identify. The triumphalists celebrate the amplification of external goods as though it were an unqualified advance. The resisters mourn the loss of something they cannot name, intuiting that the amplification of external goods has come at some cost but lacking the conceptual resources to specify what the cost is. The silent middle, which The Orange Pill describes with considerable psychological acuity as "the largest and most important group in any technology transition," holds both intuitions simultaneously without being able to reconcile them.
The reconciliation requires the distinction between internal and external goods, and the recognition that the amplification of external goods is not, in itself, a threat to practices. External goods have always been necessary for the sustenance of practices; the chess player needs prize money to sustain her career, the physician needs institutional support to practice medicine, the software engineer needs a salary to continue building systems. What threatens practices is not the amplification of external goods but their dissociation from internal goods — the creation of conditions under which the external goods of a practice can be obtained without participation in the practice that cultivates the internal goods.
When external goods can be obtained without internal goods, the incentive to undergo the discipline of the practice is undermined. Why should a young developer spend years learning to feel a codebase from the inside, developing the embodied intuition that the senior architect described, when the external goods of software engineering — a shipped product, a salary, professional recognition — can be obtained without that discipline? The question is not rhetorical. It is the question that every junior practitioner in every AI-affected domain is implicitly asking, and the answer the market gives is clear: the discipline is unnecessary. The external goods are available without it.
The market's answer is correct within its own frame of reference. External goods are available without the discipline. But the market's frame of reference is constitutively incapable of recognizing internal goods, because internal goods are, by definition, recognizable only from within the practice. The market sees outputs, revenue, and efficiency. It cannot see the elegance of a well-designed architecture, because the elegance is visible only to those who have undergone the discipline of the practice. It cannot see the diagnostic intuition that allows a physician to perceive the significance of a symptom that appears trivial, because that intuition is the product of years of patient engagement with the practice of medicine. It cannot see the satisfaction of understanding a system at a depth that no documentation could convey, because that satisfaction is an internal good, and internal goods are invisible to any calculus that measures only external goods.
Here MacIntyre's diagnosis of the moral condition of modernity reveals its direct relevance to the AI moment. In After Virtue, MacIntyre identifies emotivism — the doctrine that all evaluative judgments are nothing but expressions of preference — as the dominant moral philosophy of the modern age, not as an explicit doctrine but as an embedded cultural practice. In an emotivist culture, the distinction between internal and external goods cannot be drawn, because the distinction requires a substantive account of the human good that emotivism denies. If all evaluative judgments are merely expressions of preference, then the claim that the elegance of a well-designed architecture is genuinely valuable — valuable not because someone happens to prefer it but because it constitutes a form of human excellence — is unintelligible. It reduces to the claim that someone happens to prefer elegant architecture, which carries no more normative weight than the claim that someone happens to prefer chocolate ice cream.
The AI discourse operates within this emotivist framework, which is why the discourse is interminable — why the triumphalists and the resisters and the silent middle cannot reach agreement, not because they disagree about the facts but because they are using rival and incommensurable moral vocabularies to evaluate the same facts. The triumphalist evaluates the AI moment in terms of external goods — productivity, output, efficiency — and finds it excellent. The resister evaluates the same moment in terms of internal goods — depth, craft, embodied understanding — and finds it devastating. Neither can refute the other, because each is operating within a framework that makes the other's evaluative criteria invisible.
Shannon Vallor, the philosopher who has done more than anyone to apply virtue ethics systematically to technology, identifies what she calls "moral deskilling" — the atrophy of moral capacities through technological mediation. The concept maps precisely onto the practices framework: moral deskilling is what happens when the conditions under which the virtues specific to a practice are cultivated are eroded by technologies that produce the practice's external goods without requiring the exercise of those virtues. The physician who relies on AI for diagnosis may maintain her clinical efficiency — an external good — while losing the diagnostic intuition that constitutes the internal good of medical practice, because the exercise of that intuition is no longer demanded by the workflow. The deskilling is moral, not merely technical, because the diagnostic intuition is not merely a skill; it is a virtue, a disposition cultivated through sustained practice, and its loss is a loss not merely of competence but of a form of human excellence.
The account of Alex Finn in The Orange Pill — the individual who built a revenue-generating product without writing a line of code by hand, achieving in weeks what would previously have required a team and a year of runway — illustrates the dissociation with considerable specificity. Finn obtained the external goods of software development: a working product, revenue, professional recognition. Whether Finn obtained the internal goods of software engineering as a practice — the deep understanding of systems architecture, the embodied intuition that comes from years of wrestling with code, the capacity for technical judgment that distinguishes between solutions that merely work and solutions that are genuinely excellent — is a question that the account does not address and perhaps cannot address, because the conceptual resources for addressing it are absent from the framework within which the account is constructed.
The celebration of Finn's achievement without attention to what has been lost in the process of achieving it is precisely the kind of moral blindness that the distinction between internal and external goods is designed to expose. The expansion of who gets to build is, in many respects, a genuine good — a point to which this analysis will return with the seriousness it deserves. But the expansion of who gets to produce external goods is not the same as the expansion of who gets to participate in a practice, and the conflation of these two very different things is the error upon which the triumphalist position depends.
The question the AI moment forces upon us is not whether individual practitioners can retain their internal goods. Many can, at least for a time, drawing on reserves of judgment and intuition built through decades of engagement with the practice. The question is whether a culture that can obtain the external goods of its practices without cultivating the internal goods will continue to value the internal goods at all. And if it does not, whether the practices themselves can survive — for practices depend, for their sustenance, on institutions that value their internal goods; and institutions are, by their nature, oriented toward external goods. The tension between practices and institutions has always been the fundamental structural tension of moral life. The AI moment has intensified that tension to the point of crisis, and the resolution of the crisis will determine the moral character of the civilization that emerges from it.
The machine possesses capabilities. It does not possess virtues. This distinction, which might appear obvious when stated in the abstract, is the distinction that the contemporary discourse about artificial intelligence has most thoroughly confused, and its confusion is the source of the most fundamental errors in both the celebration and the condemnation of AI.
A capability is a capacity to perform a task to a specified standard. The machine can write syntactically correct code, produce grammatically coherent prose, generate solutions to well-specified problems, and execute these operations with a consistency and speed that no human practitioner can match. These are genuine capabilities, and their value is real. The question is not whether the machine possesses capabilities; it manifestly does. The question is whether capabilities exhaust the account of what a practitioner contributes to a practice, and the answer, from the standpoint of the theory of virtues, is that they do not.
A virtue is not a capability. A virtue is a disposition to act well in situations that demand the exercise of judgment, where what counts as acting well cannot be fully specified in advance and where the right action must be discerned through the exercise of practical wisdom in the particular circumstances of the case. The physician who diagnoses a rare condition does not merely apply a rule to a set of symptoms; she perceives a pattern that is partially constituted by her years of clinical experience, her knowledge of the particular patient, her understanding of the context in which the symptoms have appeared, and her willingness to pursue a hypothesis that contradicts the institutional consensus. The diagnosis is an act of practical wisdom, and it requires the exercise of virtues — courage, honesty, the capacity to tolerate uncertainty — that no specification of capabilities can capture.
MacIntyre's treatment of practical wisdom draws on Aristotle's analysis in Book VI of the Nicomachean Ethics, where phronesis is distinguished from both episteme (scientific knowledge) and techne (technical skill). Scientific knowledge concerns what is universal and necessary; technical skill concerns the production of artifacts according to specified rules; practical wisdom concerns action in particular circumstances where the relevant considerations are multiple, potentially conflicting, and not fully articulable in advance. The practically wise person does not apply a rule to a case; she perceives the morally salient features of the situation and responds to them appropriately, drawing on a background of experience, habituation, and character that cannot be decomposed into a set of propositions or encoded in an algorithm.
The distinction between techne and phronesis maps directly onto the distinction between what the machine can and cannot do. The machine excels at techne — at the production of artifacts according to specified or inferred rules. It generates code, prose, images, and analyses by extracting patterns from its training data and producing outputs that are statistically consistent with those patterns. This is an extraordinary form of techne, and its products are often indistinguishable from those of skilled human practitioners when evaluated by external criteria. But techne is not phronesis. The production of an artifact is not the exercise of practical wisdom, and the distinction matters because practical wisdom is the virtue that governs the exercise of all other virtues — the master virtue without which no other virtue can be exercised well.
The Orange Pill provides a case that illustrates this distinction with considerable precision, though the philosophical implications are not fully developed in that text. The senior engineer who discovered that his "remaining twenty percent was everything" had discovered, in the terms of the Aristotelian tradition, that the machine possessed techne — the capacity to write code, resolve dependencies, manage configuration — but did not possess the phronesis he had developed through decades of practice: the judgment about what to build, the architectural instinct about what would break, the taste that separated genuine excellence from mere competence.
The techne was transferable to the machine. The phronesis was not. And it was not transferable because practical wisdom is not a capability. It is not a discrete unit of productive capacity that can be extracted from one agent and installed in another. It is a disposition of character that is cultivated through sustained engagement with a practice, and it is inseparable from the history, the community, and the narrative understanding that constitute the practitioner as a particular person engaged in a particular form of life.
It follows that the question of whether AI can "replace" a practitioner is, at the level of philosophical precision, a category mistake. The machine can replicate the practitioner's capabilities — her techne. It cannot replicate her virtues — her phronesis, her courage, her honesty, her justice — because virtues are not the kind of thing that can be replicated. They can only be developed, through the sustained engagement with practices that require their exercise, and they can only be exercised by agents who have a history, a community, and a stake in the outcome that gives the exercise of the virtues its moral significance.
This analysis has implications for the training of new practitioners severe enough to warrant separate treatment; they concern the paradox at the heart of what might be called the post-expertise condition. If the virtues are cultivated through sustained engagement with practices that require their exercise, and if AI removes the conditions under which that engagement occurs, then the next generation of practitioners may acquire capabilities without developing virtues. They may be able to produce output — code that works, briefs that cite the right cases, designs that satisfy the client — without having undergone the discipline that cultivates the judgment to distinguish between output that merely works and output that is genuinely excellent.
The engineer described in The Orange Pill, a woman who had spent eight years on backend systems and had never written a line of frontend code, built a complete user-facing feature in two days using Claude. The achievement is real and, in certain respects, admirable. But the question the practices framework poses is not whether she produced a working feature. The question is whether the process through which she produced it cultivated the virtues specific to frontend development — the capacity to perceive user experience as a coherent whole, the judgment about what interface decisions serve the user and what decisions merely satisfy the specification, the aesthetic sensibility that distinguishes between a feature that works and a feature that delights.
If the answer is no — if the process of producing the feature with AI did not cultivate these virtues — then the production of the feature, however impressive as an achievement of external goods, has not contributed to the engineer's development as a practitioner. She has obtained the external good of a shipped feature without developing the internal goods that the practice of frontend development is designed to cultivate.
Pablo García-Ruiz, in his 2025 essay "Governing Technology: A MacIntyrean Approach to the Ethics of Artificial Intelligence," argues that "to be good practitioners, humans must now be competent users of such technologies." The formulation is carefully qualified: competent users, not competent operators or reviewers. The distinction matters. A competent user exercises judgment about when and how to deploy the tool — a form of practical wisdom that integrates the tool into the practice without allowing the tool to replace the practice. An operator merely translates between the tool's inputs and its outputs, exercising no more judgment than the situation minimally requires. The difference between use and operation is the difference between a practice that has absorbed a new tool and a technique that has consumed the practice it was meant to serve.
The Aristotelian tradition has always maintained that the virtues are not merely instrumentally valuable — valuable because they produce good outcomes — but intrinsically valuable — valuable because they constitute the good life for a human being. The courageous physician is not merely more effective than the cowardly one; she is a better human being. The honest craftsman is not merely more reliable than the dishonest one; he lives a better life. The exercise of the virtues is constitutive of human flourishing, and a culture that loses the conditions under which the virtues can be cultivated has lost something more than productivity. It has lost a form of life.
The machine cannot flourish. It cannot live a good life or a bad one. It cannot exercise the virtues or fail to exercise them. It operates in the domain of capabilities, where the relevant evaluative criteria are efficiency, accuracy, and consistency. The practitioner operates in the domain of virtues, where the relevant evaluative criteria are courage, justice, honesty, and practical wisdom. The conflation of these two domains — the assumption that the machine's capabilities are equivalent to the practitioner's virtues, that producing the same output constitutes exercising the same excellence — is the fundamental philosophical error of the AI moment.
Correcting this error does not require a rejection of the machine's capabilities, which would be both futile and foolish. It requires a clear-eyed recognition that capabilities, however extraordinary, do not and cannot replace virtues, and that the cultivation of virtues requires conditions — sustained engagement with practices, exposure to the standards of excellence maintained by a tradition, participation in a community of practitioners who can recognize and affirm the internal goods of the practice — that must be deliberately preserved. The market will not preserve them, because the market cannot see them. The machine will not preserve them, because the machine has no stake in them. They will be preserved only by practitioners who understand what is at stake and by institutions that are structured to serve the practices they sustain rather than consume them.
MacIntyre's later work, Dependent Rational Animals, deepens this point by insisting that human rationality is inseparable from embodiment, biological vulnerability, and social dependency. Human beings are not disembodied reasoners who happen to inhabit bodies. They are animals whose rationality is expressed through, and conditioned by, their bodily existence and their dependence on other animals of the same kind. The virtues are cultivated in and through this embodied, dependent condition — through the specific vulnerabilities, the specific relationships, and the specific forms of mutual recognition that constitute human social life. The machine possesses none of these features. Its "intelligence," whatever else it may be, is not the intelligence of a dependent rational animal. The attempt to understand what AI does to human practices without attending to this difference in the kind of intelligence involved is an attempt that will inevitably fail to grasp what is most important about both the machine and the human being.
The claim that software engineering constitutes a genuine practice requires more than assertion; it requires demonstration. The demonstration must show that software engineering possesses the features that distinguish a practice from a mere technical skill: internal goods that can only be recognized through participation, standards of excellence that are partially definitive of the activity, and a tradition — a historically extended, socially embodied argument about the goods which constitute that tradition — through which those standards have been progressively elaborated and refined.
The objection that software engineering is too recent, too commercially driven, and too oriented toward measurable outputs to qualify as a practice is an objection that must be addressed directly. The age of a practice is not relevant to its status as a practice; chess was a practice long before it was ancient. The commercial orientation of a practice does not disqualify it; architecture has been commercially oriented since the Parthenon, and medicine has charged fees since Hippocrates. The measurability of its outputs does not reduce it to a mere technique; farming produces measurable outputs — bushels of grain, heads of cattle — without being thereby reducible to a technique.
What matters is whether the activity has internal goods, and whether participation in the activity cultivates the virtues. Software engineering, at its best, satisfies both conditions.
The internal goods of software engineering are of several kinds. The first is the elegance of well-designed architecture — a quality that resides not in any single component but in the relationships between components, in the coherence of abstractions, in the way a system accommodates change without requiring its own fundamental restructuring. This elegance is not reducible to measurable criteria — not to lines of code, not to performance benchmarks, not to the number of bugs per thousand lines. It is an aesthetic and intellectual quality, recognizable only by practitioners who have developed, through sustained engagement with the practice, the capacity to perceive the difference between a system that merely works and a system that is well designed.
The second internal good is the satisfaction of systems that respond precisely to their intended purpose — not the mere satisfaction of a job completed, which is an external good available to anyone who completes any job, but the specific satisfaction of having created something that does what it was designed to do with a precision and reliability that reflects the practitioner's understanding of the problem domain. This satisfaction is available only to the practitioner who understands both the problem and the solution at a depth that allows her to perceive the fit between them.
The third internal good is the beauty of code that only another practitioner can perceive. The claim is entirely parallel to the claim that there is a beauty in a well-played chess game that only chess players can perceive, or a beauty in a well-executed surgical procedure that only surgeons can recognize. The beauty of code is not decorative; it is structural. It resides in the clarity of the logic, the economy of the expression, the way the code reveals the programmer's understanding of the problem rather than concealing it behind layers of unnecessary complexity.
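A minimal concrete illustration of this claim, using an example of my own rather than one drawn from The Orange Pill: two implementations of the Gregorian leap-year rule that behave identically on every input. By external criteria they are the same artifact. The difference between them is exactly the structural beauty at issue — one buries the rule in branching, the other states the rule as the programmer understands it.

```python
def is_leap_opaque(year: int) -> bool:
    # Correct, but the shape of the rule is buried in nested branches.
    if year % 4 == 0:
        if year % 100 == 0:
            if year % 400 == 0:
                return True
            return False
        return True
    return False


def is_leap_clear(year: int) -> bool:
    # The same rule, stated as it is understood: every fourth year,
    # except centuries, except every fourth century.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


# The two are indistinguishable by their outputs alone:
assert all(is_leap_opaque(y) == is_leap_clear(y) for y in range(1, 2401))
```

Only a practitioner reading the source, not a user running it, can perceive that the second version reveals the problem's structure while the first conceals it.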
But demonstrating that software engineering has internal goods is insufficient. What establishes it as a practice in MacIntyre's full sense is that it possesses a tradition — a historically extended argument about what constitutes good software, what methods best serve the practice, and what the practice is for. This tradition is younger than the traditions of medicine or architecture, but it is substantive, contentious, and ongoing.
The earliest programmers worked in conditions that bore closer resemblance to craft than to industrial production. The programmer who worked in assembly language knew the machine at a level of intimacy that no subsequent generation of programmers would match: every register, every memory address, every instruction cycle was part of the practitioner's working knowledge, and the elegance of the resulting program was a direct expression of the depth of that knowledge. The transition from assembly language to high-level languages in the 1960s and 1970s produced the first of the tradition's internal arguments about what constitutes good practice. The advocates of high-level languages argued that abstraction was a virtue — that the programmer who worked at a higher level could design more complex systems and produce more maintainable code. The advocates of assembly language argued that abstraction was a vice — that the programmer who did not understand the machine at the lowest level could not make the fine-grained decisions that excellent software required.
This argument was not merely technical. It was an argument about the internal goods of the practice: about what constitutes genuine understanding, about what kinds of excellence the practice should cultivate, about the relationship between the practitioner's knowledge and the quality of the artifact she produces. And the argument was extended through time, as each subsequent layer of abstraction — structured programming, object-oriented programming, functional programming, frameworks, cloud infrastructure — produced its own version of the same debate.
The agile movement, which transformed the software industry in the early 2000s, was another chapter in this tradition of inquiry. The agile manifesto's emphasis on "individuals and interactions over processes and tools," on "working software over comprehensive documentation," on "responding to change over following a plan" — these were not merely technical recommendations about project management. They were claims about the internal goods of the practice and about the institutional structures most conducive to their realization. They represented a substantive position in an ongoing argument about what software engineering is for.
The Luddites described in The Orange Pill — the senior developers who resist AI because they perceive what is being lost — are, within this framework, practitioners defending the internal goods of their practice against a market that values only the external goods. Their resistance is morally intelligible even when it is strategically mistaken. They perceive, correctly, that the internal goods of their practice — the embodied understanding of systems, the aesthetic sensibility that distinguishes elegant code from merely functional code, the traditions of craftsmanship built through decades of communal effort — are threatened by a technology that enables the production of external goods without participation in the practice that cultivates the internal goods.
Their error is not in their diagnosis but in their response. The Orange Pill describes this with sympathy: some run for the woods, reducing their cost of living out of a perception that their livelihood will soon be gone. Others hold their ground and lean in. The framework of practices explains why the latter response is correct: refusal to engage with a new tool is not the defense of a practice but the ossification of one particular historical form of the practice, and the two are not the same thing.
The essential features of software engineering as a practice are the internal goods already described: the elegance of architecture, the satisfaction of precise function, the beauty of well-crafted code, and the tradition of debate about what these goods require. The accidental features include the specific technologies, the specific languages, the specific methodologies that have been used to pursue these goods at any given moment in the history of the practice. The transition from assembly language to high-level languages did not destroy the practice of software engineering; it transformed the practice by changing the level at which its internal goods were pursued. The transition from manual coding to AI-assisted coding may similarly transform the practice without destroying it — but only if the conditions under which the internal goods can be pursued at the new level are deliberately preserved.
This is the critical insight that both the Luddites and the triumphalists miss, from opposite directions. The Luddites assume that the internal goods of the practice are inseparable from the specific historical forms in which they have been pursued — that genuine architectural wisdom requires writing the code by hand, that embodied intuition requires years of debugging and configuration. The triumphalists assume that the internal goods do not matter, or that they will take care of themselves, or that the amplification of external goods is a sufficient substitute for the cultivation of internal goods. Both assumptions are wrong, and they are wrong for the same reason: they fail to distinguish between the practice and its current historical instantiation.
The practice of software engineering is not identical to any particular way of writing software. It is a historically extended argument about what good software is, how it should be built, and what it is for. That argument can continue under conditions that are radically different from the conditions under which it has been conducted in the past — but only if the conditions are designed to preserve the essential features of the practice: the pursuit of internal goods, the exercise of the virtues, and the participation in a tradition that extends the practice's self-understanding over time.
Whether this transformation preserves the practice or destroys it depends on whether the conditions for pursuing the internal goods at the new level are maintained. If the practitioner is given the space, the time, and the institutional support to develop the virtues specific to the new level of practice — the judgment about what systems should exist, the taste that distinguishes between products that serve genuine human needs and products that merely exploit market opportunities, the courage to pursue excellence in the face of institutional pressure to optimize for external goods alone — then the practice survives, transformed but recognizable. If these conditions are not maintained, if the practitioner is reduced to reviewing AI output without understanding it, if the market pressure to ship overwhelms the space for the cultivation of judgment, then the practice dies, and what replaces it is a mere technique — a set of procedures for producing output that lacks the internal goods that gave the practice its meaning.
The choice between these outcomes is not a technical choice. It is a moral choice, and it is one that must be made by communities of practitioners, by the institutions that sustain them, and by a culture that has yet to decide whether the internal goods of its practices are worth preserving.
MacIntyre argues in After Virtue that the unity of a human life is the unity of a narrative quest. The self is not a Humean bundle of successive experiences, not a Sartrean radical chooser confronting each moment without antecedent commitment, not an economic agent maximizing preference-satisfaction across time. The self is a character in an ongoing narrative, and the meaning of any particular action or episode is derived from its place in that narrative — from the story of which it is a part, the story that connects the past from which the self has come to the future toward which the self is moving. To ask "What is the good for me?" is to ask "What is the narrative of which my life is a part, and what does that narrative require of me at this point in its unfolding?"
This account of the self has consequences for the analysis of practices that are routinely overlooked in the AI discourse, where the practitioner is treated as a bundle of skills whose market value can be assessed independently of the narrative within which those skills were developed and exercised. A practice is not merely a set of activities that a person engages in during working hours. It is a constitutive element of the narrative within which the person understands her own life. The physician who has spent twenty years developing the virtues specific to medicine does not merely practice medicine; she is a physician, and her identity as a physician is woven into the narrative that gives her life its meaning. The story she tells about who she is — the medical school that was harder than she expected, the residency that nearly broke her, the first diagnosis that saved a life, the slow accumulation of clinical wisdom through thousands of patient encounters — is not merely a description of her career. It is her identity, in the sense that matters for moral philosophy: the narrative within which her actions are intelligible to her and to others as the actions of a particular kind of person pursuing a particular kind of good.
The AI moment disrupts these narratives at their foundation, and the disruption is not merely professional. It is existential, in the precise philosophical sense that it concerns the conditions under which the practitioner's existence makes moral sense.
The Orange Pill describes this disruption through what Segal calls the "fishbowl" — the set of assumptions so familiar that the practitioner has stopped noticing them, the water she breathes. The scientist's fishbowl is shaped by empiricism; the builder's by the question of what can be made; the filmmaker's by narrative. These fishbowls are not arbitrary enclosures. They are, in MacIntyre's terms, the narrative frameworks within which the internal goods of specific practices have been pursued, and within which the practitioner's moral identity has been constituted. When the fishbowl cracks — when the assumptions that constituted the narrative framework are suddenly revealed as contingent rather than necessary — what cracks is not merely a set of professional assumptions. What cracks is the story within which the practitioner's achievements, struggles, and aspirations were intelligible as episodes in a coherent life.
The senior engineer who discovered that his "remaining twenty percent was everything" did not merely discover that his skills were more valuable than he had thought. He discovered that the story he had been telling about who he was and why he mattered required fundamental revision. The story he had been telling was a story about implementation: I am the person who writes the code, who solves the technical problems, who translates designs into working systems. The discovery that the machine could perform the implementation forced a revision of the story: I am the person who knows what to build, who understands what will break, who exercises the judgment that the machine cannot replicate. This revision is not a minor adjustment to a career plan. It is a transformation of the narrative within which the engineer's life has been conducted, and it carries with it the specific kind of vertigo that The Orange Pill describes as the compound feeling of awe and loss — a vertigo that is not a failure of adaptation but the appropriate response to a genuine crisis of narrative identity.
MacIntyre's account of narrative unity specifies three features that are directly relevant to this crisis. The first is that human beings are accountable for the narratives of which they are the authors. The narrative of a life is not merely a story told about oneself; it is a story for which one bears responsibility, in the sense that others are entitled to ask how the episodes fit together, what purposes they serve, and whether the life as a whole exhibits the kind of coherence that makes it intelligible as the life of a moral agent rather than a mere sequence of events. The second feature is that the narratives of individual lives are embedded in the narratives of the practices and communities within which those lives are conducted. The physician's narrative is embedded in the narrative of medicine; the engineer's narrative is embedded in the narrative of engineering; and the intelligibility of each individual narrative depends on the coherence of the larger narrative within which it is situated. The third feature is that the narrative of a practice is itself embedded in a tradition — a historically extended argument about the goods that constitute the practice and the standards of excellence that govern it.
When AI disrupts the practice, it disrupts all three levels of narrative simultaneously. The individual practitioner's story becomes incoherent, because the activities that constituted the center of the narrative — the implementation, the debugging, the patient accumulation of embodied expertise — are no longer necessary. The practice's story becomes unstable, because the argument about what constitutes good software engineering, good medicine, or good legal practice must now accommodate a technology that performs the activities around which the argument was previously organized. And the tradition's story — the long historical arc that extends from the earliest practitioners through the current generation — faces a discontinuity that threatens to sever the present from the past in a way that makes the tradition's accumulated wisdom seem irrelevant rather than foundational.
Wessel Reijers and Mark Coeckelbergh, building on MacIntyre's narrative ethics in their work on technology and virtue, have argued that technologies function as "co-narrators" in human lives — that they participate in the configuration of the narratives through which human beings understand their actions and their identities. The framing is suggestive, but it requires a qualification that their account does not fully develop. A co-narrator, in any meaningful sense, must be a participant in the narrative — an agent whose contributions to the story are shaped by an understanding of the story's purposes and an investment in its outcomes. The machine does not participate in the narrative of the practitioner's life. It does not understand the story, invest in its outcomes, or care whether the narrative exhibits the kind of coherence that makes it intelligible as the life of a moral agent. It is, at most, a tool that reconfigures the conditions under which the narrative is conducted — and the reconfiguration, because it is not guided by any understanding of the narrative's purposes, may disrupt the narrative as easily as it extends it.
The narrative crisis is not limited to individual practitioners. It extends to entire communities of practice and to the traditions that sustain them. The tradition of software engineering has been constructed over decades through communal argument about what good software is, how it should be built, and what it is for. The narrative of that tradition — the story about the progression from assembly language to high-level languages to frameworks to cloud infrastructure, about the gradual democratization of the capacity to build, about the relationship between technical craft and human need — is a narrative that has given meaning to the careers of millions of practitioners. The AI moment disrupts that narrative at its foundation, because if the implementation that was the center of the practice can be performed by a machine, then the story of the practice must be revised, and the revision is not merely technical. It is existential, because the story of the practice is inseparable from the stories of the practitioners who constitute it.
What is required is not merely adaptation — the pragmatic adjustment of skills and expectations to new conditions — but a new narrative, one that preserves the moral significance of the practitioner's achievements while acknowledging that the conditions under which those achievements were made have fundamentally changed. The theory of practices provides the resources for constructing such a narrative, because it provides a framework within which the continuity of the practice through radical transformation can be understood. The practice of software engineering is not identical to any particular technology or methodology; it is a historically extended argument about what good software is. The narrative of the practice continues through the AI moment, carrying forward the internal goods, the standards of excellence, and the virtues that constitute the practice's identity — provided that the conditions for their continued cultivation are deliberately maintained.
The new narrative is therefore not a narrative of displacement but a narrative of revelation: the revelation that what the practitioner contributes to the practice was never the implementation — never the code, the briefs, the diagnoses — but the judgment, the wisdom, the perception, the virtues that the implementation was the medium for developing. The practitioner who grasps this narrative does not lose her moral identity. She discovers that her moral identity was always constituted by something deeper than the tasks she performed — by the virtues she exercised, the goods she perceived, and the tradition she carried forward.
But grasping this narrative requires a capacity for philosophical self-understanding that the contemporary culture of professional life does not cultivate. The practitioner who has been trained to identify herself with her outputs — with the code she writes, the cases she wins, the diagnoses she makes — will find it difficult to reconceive her identity in terms of the virtues she exercises, because the vocabulary of virtues has been largely eliminated from the discourse of professional life. The language of "skills," "competencies," and "deliverables" that dominates professional culture is a language of external goods and capabilities, not of internal goods and virtues. The practitioner who speaks only this language cannot articulate what she is losing when the machine performs the tasks that defined her practice, because what she is losing — the specific form of human excellence that the practice cultivated — is invisible within the vocabulary she has been given.
The recovery of a vocabulary adequate to the practitioner's experience is therefore not merely an academic exercise. It is a practical necessity — a precondition for the kind of narrative reconstruction that the AI moment demands. The practitioner who can distinguish between internal goods and external goods, between virtues and capabilities, between a practice and a mere technique, is a practitioner who can construct a narrative of her own life that makes moral sense of the transformation she is undergoing. The practitioner who cannot draw these distinctions is condemned to experience the transformation as a mere disruption — a loss without meaning, a change without direction, a crisis without resolution.
MacIntyre writes in After Virtue that "man is in his actions and practice, as well as in his fictions, essentially a story-telling animal." The stories the practitioner tells about herself are not decorative additions to a life that would be the same without them. They are constitutive of the life itself — of the moral identity that makes the life intelligible as the life of a particular person pursuing particular goods. The AI moment has disrupted these stories, and the disruption is genuine. But the disruption is not necessarily destructive, provided that the narrative resources for constructing new stories are available — stories that make the continuity of the practice through transformation intelligible, that preserve the moral significance of the practitioner's achievements, and that point toward forms of excellence that are adequate to the new conditions. The provision of these narrative resources is among the most urgent tasks of the present moment, and it is a task that no technology can perform on anyone's behalf.
When the machine can execute the tasks that defined a practice, what remains of the practice? This question is not theoretical. It is the question that millions of practitioners are asking, in various forms and with varying degrees of philosophical self-awareness, as they confront the reality that the implementation work constituting the center of their professional lives can now be performed by a machine operating at a speed, scale, and consistency no human can match.
The answer that the market gives is clear and unsatisfying: what remains is whatever the market still needs from a human. This answer reduces the practitioner to a residual category — the set of functions that the machine has not yet absorbed — and it renders the practitioner's identity contingent on the pace of technological development. What remains today may not remain tomorrow; the residual category shrinks with each improvement in the machine's capabilities. The practitioner who defines herself by what remains is a practitioner living on borrowed time, and she knows it.
The answer that the theory of practices gives is fundamentally different, and it is different because it locates the practitioner's value not in the tasks she performs but in the virtues she exercises. What remains, when the machine can execute, is the practical wisdom, the capacity for judgment, the knowledge of particulars that no machine can possess because it arises from lived experience within a specific narrative, community, and tradition. These are not residual. They are essential. They are the substance of the practice, and the implementation that the machine now performs was always the medium through which the substance was developed, never the substance itself.
This reframing is essential, because the language of "what remains" implies that the implementation was the primary thing and the judgment a secondary thing — a leftover. The reality is the reverse. The implementation was always instrumental — valuable because it was the means through which the internal goods of the practice were pursued and the virtues specific to the practice were cultivated. The judgment is primary — the end toward which the practice was always directed, the internal good that gives the practice its point.
But the primacy of judgment does not mean that the implementation was merely instrumental in a reductive sense. The implementation was not merely a means to an end; it was the practice through which the end was achieved. The judgment was cultivated through the implementation, not alongside it or after it. The engineer who developed the capacity to feel a codebase developed that capacity by writing code, debugging code, sitting with code that did not work until she understood why. The physician who developed diagnostic intuition developed it by examining patients, making diagnoses, seeing the consequences of her diagnoses, and learning from her errors. The implementation and the judgment are not separable in the way that means and ends are ordinarily separable. They are aspects of a single activity — the activity of practice — and the elimination of one aspect necessarily affects the other.
This is the paradox at the heart of the post-expertise condition: the thing that is revealed when the implementation is removed — the judgment, the taste, the practical wisdom — is the thing that was cultivated through the implementation. The implementation was the practice; the practice was the cultivation of the judgment; and now the practice has been transformed in a way that preserves the judgment of those who already possess it while potentially undermining the conditions under which new practitioners can develop it.
The Orange Pill captures this paradox in a single anecdote. An engineer in Trivandrum lost both the tedium and the ten minutes of formative struggle when Claude took over the plumbing — the dependency management, configuration files, and mechanical connective tissue between the components she actually cared about. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she realized she was making architectural decisions with less confidence than she used to and could not explain why. The formative struggle was embedded in the tedium, indistinguishable from it, and the elimination of the tedium eliminated the struggle as well.
The concept that The Orange Pill calls "ascending friction" — the principle that every significant technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor — provides one element of the response to this paradox. The response is important, but it requires philosophical specification if it is to bear the weight the argument places on it.
Each level of practice cultivates its own specific virtues. The assembly language programmer cultivated the virtues of precision, patience, and meticulous attention — the virtues required to manage the relationship between intention and execution at the most granular level. The high-level language programmer cultivated the virtues of abstraction, design, and architectural thinking. The framework programmer cultivated the virtues of integration, pragmatic judgment, and the evaluation of trade-offs. Each transition from one level to the next involved a genuine loss of the virtues specific to the lower level and a genuine gain of the virtues specific to the higher. The high-level language programmer lost the precision of the assembly programmer and gained the capacity for architectural thinking that assembly could not support. In each case, the transition was both a loss and a gain, and the loss was real — the virtues that were no longer cultivated were genuine virtues, and their loss diminished the practitioner in the specific dimension of excellence those virtues constituted.
The AI moment represents the most dramatic instance of ascending friction in the history of computing, because it relocates difficulty from the entire domain of implementation to the domain of judgment, vision, and the discernment of what is worth building. The virtues that this transition demands are the virtues of the highest level of practice: the practical wisdom to navigate situations that no algorithm can specify, the courage to make decisions under genuine uncertainty, the honesty to acknowledge when the convenient answer is not the right answer, and the justice to consider the effects of one's decisions on the communities that will live with their consequences.
These are, in the Aristotelian tradition, the highest virtues — the virtues that are constitutive of the good life in the fullest sense. The AI moment, paradoxically, may create conditions under which these virtues become the primary focus of professional development, because the lower-level virtues that implementation demanded are now cultivated by the machine.
But ascending friction is not guaranteed. The friction ascends only if the conditions for its ascent are deliberately preserved. If the practitioner is reduced to reviewing AI output without the space for judgment — if institutional pressure to ship overwhelms time for reflection, if the market's demand for speed eliminates the possibility of slow, deliberate engagement — then the friction does not ascend. It disappears. And with it disappear the internal goods that the practice was designed to cultivate.
Giarmoleo and colleagues, in their 2026 analysis of "Virtuous Organizations in the Age of AI," argue that the transformation brought by AI into organizations "is not merely a technological upgrade but a reconfiguration of the moral and relational foundations of organizational life." The formulation is precise and significant: moral and relational foundations, not merely productive capacities. What is reconfigured is not only what the organization can do but what kinds of moral development the organization's activities make possible. An organization that has reconfigured its foundations to support the cultivation of higher-level virtues — the judgment, the integrative vision, the capacity for ethical discernment — is an organization that has transformed its practice without destroying it. An organization that has reconfigured its foundations to maximize output without attending to the moral development of its practitioners has destroyed the practice and replaced it with a technique.
The challenge of designing practices that cultivate the highest virtues at the new level of abstraction is the central challenge of professional development in the AI age. It is a challenge that the educational and institutional structures of the present moment are largely unequipped to meet, because those structures were designed to cultivate the lower-level virtues — to teach students to write syntactically correct code, to draft competent briefs, to perform standard diagnoses — and the reorganization of those structures around the cultivation of practical wisdom is a task of a fundamentally different order.
What remains, when the machine can execute, is therefore not a diminished version of the practice. What remains is the practice itself — transformed, relocated to a higher level, but recognizably continuous with the tradition from which it emerged. The internal goods are still there: the elegance of well-designed systems, the satisfaction of solutions that serve genuine human needs, the beauty of judgment exercised with wisdom and care. And the virtues that the practice cultivates are still there: the courage to pursue excellence in the face of institutional pressure, the honesty to acknowledge when the easy answer is not the right answer, the practical wisdom to navigate situations that no algorithm can fully specify. But these goods and these virtues will not preserve themselves. They will be preserved only if communities of practitioners, the institutions that sustain them, and the culture that values what they produce make the deliberate choice to preserve them — a choice that the market, left to its own devices, will never make.
Byung-Chul Han's argument about smoothness, which The Orange Pill engages with greater seriousness than any other philosophical position it encounters, is, in the terms of the framework developed here, an argument about the relationship between aesthetics and internal goods. The argument is powerful, and its power is diagnostic: it identifies a real pathology in the contemporary culture of production. But the argument, as Han develops it, lacks the conceptual precision that the theory of practices provides — the precision needed to distinguish between friction that cultivates the virtues and friction that merely impedes production, between smoothness that destroys internal goods and smoothness that removes accidental barriers without touching the essential features of the practice.
Han's claim is that the dominant aesthetic of the present age is the aesthetic of the smooth: the frictionless, the seamless, the optimized-for-ease. The iPhone is smooth; the Tesla dashboard is smooth; the algorithmic feed is smooth; the AI-generated text is smooth. Smoothness has become the standard of quality, the criterion by which experiences, products, and interactions are evaluated. And Han argues that this aesthetic, applied to human existence, produces not a better life but a hollowed simulation of productivity in which the conditions for genuine experience — for depth, for struggle, for the kind of understanding that can only be built through friction — have been systematically eliminated.
The theory of practices allows the specification of what Han perceives but does not fully articulate. What is lost in the aesthetic of the smooth is not merely friction in the abstract. What is lost is the specific friction through which internal goods are cultivated. The friction of debugging code is not an obstacle to the production of working software; it is the practice through which the software engineer develops the embodied understanding that constitutes the internal good of the practice. The friction of wrestling with a legal argument is not an obstacle to the production of a competent brief; it is the practice through which the lawyer develops the analytical judgment that constitutes the internal good of legal practice. The friction of sitting with a patient, listening to a history that does not conform to textbook categories, pursuing a diagnosis through uncertainty and ambiguity — this friction is not an obstacle to the delivery of medical care; it is the practice through which the physician develops the diagnostic wisdom that constitutes the internal good of the practice.
When the friction is removed — when the code is generated by the machine, when the brief is drafted by the algorithm, when the diagnosis is suggested by the AI — the external good is preserved. The code works; the brief is competent; the diagnosis may be correct. But the internal good is absent, because the internal good was not located in the output. It was located in the process — in the sustained engagement with difficulty through which the practitioner's virtues were developed and her perception of the internal goods was refined.
The Orange Pill describes the pre-AI process of writing software as a sequence of productive failures: the developer conceived a function, wrote it, received an error message, examined the code, hypothesized, tested, failed again, read documentation, asked for help, tried again, and eventually produced working code. "In those hours or days," Segal writes, "something had happened that was not visible in the final code. The developer had come to understand the function — not merely intellectually but in her body." After AI, the developer describes the function, the machine writes it, it works, and the developer moves on. The code is correct. It may even be superior. But the understanding that would have been built through the struggle has not been built.
The parallel to what MacIntyre calls the "geological" accumulation of practical wisdom is exact. Every hour spent debugging deposits a thin layer of understanding. The layers accumulate over months and years into something solid — something the practitioner can stand on. When a senior engineer looks at a codebase and feels that something is wrong before she can articulate what, she is standing on thousands of those layers, each laid down through friction. AI-mediated production skips the deposition. The surface looks the same. The strata beneath are absent.
Jeff Koons's Balloon Dog, which The Orange Pill invokes as the aesthetic emblem of the smooth, is instructive precisely because it is the elimination of all evidence of making. The surface is perfectly, aggressively smooth. No texture. No grain. No evidence of a human hand having touched it. A handcrafted object bears the marks of its making — the slight irregularities, the evidence of the craftsman's hand, the traces of decisions made and revised in the process of creation. These marks are not imperfections; they are the visible signs of the practice through which the internal goods of craftsmanship were realized. The smoothness that eliminates these marks eliminates not merely the marks but the practice they evidence.
This is the core of Han's diagnosis, and the practices framework specifies it with a precision that Han's own vocabulary — drawn more from continental phenomenology than from Aristotelian ethics — does not achieve. The aesthetics of the smooth is the aesthetics of a culture that has systematically devalued internal goods in favor of external goods, and that has done so with such thoroughness that it can no longer perceive what it has lost. The smooth surface conceals the absence of the practice, and the concealment is the most dangerous feature of the smooth, because it makes the loss invisible to the very people who are bearing it.
But Han makes an error that the practices framework exposes. The error is the assumption that all friction is productive — that the removal of any resistance from the process of creation constitutes a loss of depth. This assumption is false, and its falsity matters, because it leads to a prescription — the wholesale rejection of the tools, the retreat to analog, the cultivation of difficulty for its own sake — that is both impractical and philosophically confused.
Not all friction cultivates the virtues. The friction of a poorly designed development environment, of inadequate documentation, of bureaucratic obstacles to deployment — this friction does not cultivate patience or precision or embodied understanding. It cultivates frustration, and frustration is not a virtue. The distinction between productive friction, which is internal to the practice and contributes to the development of the practitioner's virtues, and unproductive friction, which is external to the practice and merely impedes the production of both internal and external goods, is a distinction that Han's framework cannot draw, because Han lacks the concept of a practice that would allow him to distinguish between the two.
MacIntyre's framework draws the distinction precisely. Productive friction is friction that arises from the practice itself — from the internal demands of the activity, from the standards of excellence that the practice imposes on the practitioner, from the gap between the practitioner's current capacities and the capacities the practice requires. Unproductive friction is friction that arises from conditions external to the practice — from institutional dysfunction, from inadequate tools, from barriers of access that have nothing to do with the internal goods of the practice and that serve no purpose except the perpetuation of privilege or the preservation of institutional inertia.
The AI moment removes both kinds of friction simultaneously, and the failure to distinguish between them is the source of the deepest confusion in the current debate. The triumphalist celebrates the removal of all friction as liberation. The Hanian resister mourns the removal of all friction as loss. The theory of practices counsels a more discriminating response: celebrate the removal of unproductive friction — the tedium, the busywork, the mechanical connective tissue that consumed the practitioner's time without contributing to her development — and build deliberate structures to preserve productive friction — the kind of engagement with difficulty that cultivates the virtues the practice is designed to develop.
The laparoscopic surgery case that The Orange Pill describes illustrates this logic with particular clarity. When surgeons lost the tactile friction of open surgery, they lost something real — the embodied knowledge that came from hands inside a body, navigating by touch. But they gained the ability to perform operations that open surgery could never attempt, with recovery times collapsed and infection rates diminished. The friction did not disappear. It ascended. The surgeon was no longer wrestling with tissue; she was wrestling with the interpretation of a two-dimensional image of a three-dimensional space, with the coordination of instruments she could not directly feel, with the cognitive challenge of operating at a remove from the body. The work became harder — but harder at a higher level.
The aesthetic choice between the smooth and the marked is therefore not a simple binary. It is a choice that requires practical wisdom — the capacity to discern, in each particular case, which friction is productive and which is merely impeditive, which smoothness destroys internal goods and which merely removes accidental barriers. This discernment cannot be automated, because it requires exactly the kind of judgment — contextual, particular, shaped by experience within a tradition — that the machine does not possess. It is, in the precise Aristotelian sense, a task for phronesis. And the cultivation of the phronesis necessary to make this discernment wisely is itself a practice — perhaps the most important practice of the present moment.
Practices require institutions for their sustenance. This is not an incidental feature of practices but a structural one. The chess master requires a chess club, a tournament structure, a ranking system. The physician requires a hospital, a medical school, a licensing board. The software engineer requires a firm, a team, a community of peers who can recognize and affirm the internal goods of the practice. No practice can survive in a purely individual form; practices are inherently social, and the social structures that sustain them — the institutions — are essential to their continuation.
But the relationship between practices and institutions is always potentially corrupt. This is not an accidental feature of the relationship; it is structural, and understanding its structure is necessary for understanding why the AI moment poses the specific kind of threat that it does.
Institutions are necessarily concerned with external goods — with money, power, prestige, and the material resources that sustain the institution's existence. A hospital must generate revenue; a university must attract students and funding; a software firm must ship products and maintain profitability. Without the pursuit of external goods, the institution cannot survive, and without the institution, the practice cannot be sustained. The relationship is one of necessary tension: the practice needs the institution for its sustenance, and the institution needs the practice for its legitimacy, but their respective orientations — the practice toward internal goods, the institution toward external goods — are in permanent and structural conflict.
The corruption occurs when the institution's pursuit of external goods overwhelms the practice's cultivation of internal goods. When the hospital prioritizes revenue over patient care, when the university prioritizes enrollment over education, when the software firm prioritizes shipping over craftsmanship — the institution has ceased to serve the practice and has begun to consume it. The practice becomes a means to the institution's external goods rather than an end in itself, and the internal goods that gave the practice its meaning are sacrificed to the institutional imperative for survival and growth.
This pattern of institutional corruption has always been a feature of the relationship between practices and institutions. What the AI moment introduces is a qualitatively new form of the corruption — one that is more efficient, more difficult to detect, and more resistant to correction than any previous form.
The new form is this: AI makes it possible for institutions to obtain the external goods of practices without the practices themselves. When the machine can produce the outputs that the practice previously produced — the code, the briefs, the diagnoses, the designs — the institution can capture the external goods associated with those outputs without sustaining the practice through which the outputs were previously generated. The organization that replaces a team of practitioners with a smaller group of AI-augmented operators captures the external goods more efficiently while destroying the practice from which those goods originally derived their meaning. The destruction is efficient precisely because it is invisible to the metrics by which institutions measure their own performance — the metrics of revenue, output, market share, and growth that constitute the institutional vocabulary of success.
The Orange Pill's account of the software industry's transformation — what the market calls the "Death Cross," the moment when the AI market overtakes the SaaS market in aggregate value — illustrates this dynamic with a specificity that philosophical analysis alone could not achieve. When code can be generated by a machine at near-zero marginal cost, the organizations that were built around the practice of writing code find their institutional justification undermined. Their external goods — the revenue, the market share, the customer base — can now be obtained more efficiently through AI, and the market's response is to reprice them accordingly.
But the repricing reveals something about the relationship between practices and institutions that the market cannot articulate. The SaaS companies that survive, Segal argues, are the ones "whose value was always above the code layer" — the ones that built ecosystems, data layers, institutional trust, and the accumulated understanding of their customers' needs that constitutes a form of practical wisdom at the organizational level. These companies survive not because their code is superior but because their practices are deeper. The internal goods they have cultivated — the understanding of customer needs, the judgment about what solutions serve those needs, the institutional wisdom about how to deploy technology in ways that create genuine value — are not replicable by AI, because they are the products of sustained, communal engagement with particular problems of particular customers in particular contexts.
The companies that fail are the ones that were always merely code — thin applications that solved narrow, single-purpose problems without developing the deeper understanding that constitutes the internal goods of the practice. These companies had institutions without practices. They had the external form of an organization dedicated to software engineering without the internal substance of a community of practitioners cultivating the virtues specific to the practice.
The distinction between institutions that serve practices and institutions that consume them is the distinction that the AI moment makes urgent. When the cost of producing external goods approaches zero, the only defensible institutional position is one grounded in internal goods — in the forms of understanding, judgment, and practical wisdom that can only be developed through sustained engagement with practices. The institution that has cultivated these internal goods possesses something that AI cannot replicate. The institution that has not possesses nothing that AI cannot replace.
The history of practices offers instructive precedents. The medieval guilds protected the internal goods of craft against the market's tendency to reduce craft to commodity — but they became exclusionary and corrupt, defending privilege rather than excellence. The universities, at their best, protected the internal goods of scholarship against the market's tendency to reduce scholarship to vocational training — but they have progressively subordinated their educational mission to revenue generation, measuring educational quality through metrics that capture external goods while rendering internal goods invisible. The professions — law, medicine, architecture — developed codes of ethics and standards of practice designed to ensure that the pursuit of internal goods would not be overwhelmed by the pursuit of external goods — but these codes have been progressively weakened by the same market pressures they were designed to resist.
Each institutional structure has, at various points in its history, failed. But the history of institutional failure is also a history of institutional renewal. The guilds were replaced by professional associations that preserved some commitment to internal goods while correcting exclusionary practices. The universities have, at their best, maintained spaces for the cultivation of virtues the market cannot recognize. The failures are instructive not as evidence that institutional protection is futile but as evidence that it is necessary and difficult — that the tension between practices and institutions is permanent, and that the defense of practices against institutional corruption requires the same ongoing, deliberate, structurally minded attention that any permanent tension requires.
The AI moment intensifies this tension by making institutional corruption more efficient. The institution that replaces practitioners with AI-augmented operators can produce the same external goods at lower cost, and the market will reward it for doing so. The institution that preserves its practitioners — that invests in the conditions for the cultivation of internal goods, that protects the time and space for the development of practical wisdom, that maintains the communities of practice through which the tradition is carried forward — bears a cost that the market does not recognize and will not reward. Every quarter, the arithmetic returns: if five people with AI tools can produce the output of fifty, why sustain the fifty?
The Orange Pill's account of this choice — the decision to keep the team rather than converting the productivity gains into margin — is, in the framework developed here, a decision to preserve the practice over the external goods that its dissolution would provide. The choice is unstable, because the pressure to convert internal goods into external goods is structural, not personal. It arises from the nature of the market itself, which rewards external goods and cannot recognize internal goods. The choice must therefore be remade continuously — not once, as a policy decision, but repeatedly, in the face of the market's relentless pressure to optimize for the goods it can see at the expense of the goods it cannot.
MacIntyre identifies the "virtuous administrator" as the person who understands that the institution she leads exists to serve a practice, and who exercises the virtues — justice, courage, honesty, practical wisdom — in the defense of the practice against the institution's own tendency to consume it. This is difficult work, because the market rewards the consumption of practices and punishes their preservation. The virtuous administrator who keeps the team rather than cutting to margin, who protects mentoring time rather than filling it with additional tasks, who insists on the cultivation of judgment rather than the mere production of output — this administrator is making a choice that the market will not reward and may punish.
But the rationality of the institution that consumes the practice is the rationality of the parasite that consumes its host. The external goods that the institution pursues are themselves dependent, in the long run, on the internal goods that the practice cultivates. The hospital that prioritizes revenue over patient care will eventually lose the trust of its patients and the competence of its physicians. The software firm that prioritizes shipping over craftsmanship will eventually lose the capacity for the judgment that distinguishes between products that serve genuine needs and products that merely exploit market opportunities. The corruption of institutions by the market is self-undermining — but only in the long run, and the long run is longer than the quarterly earnings cycle that governs institutional decision-making.
The defense of practices against institutional corruption is therefore not merely a matter of individual virtue, though individual virtue is necessary. It is a matter of institutional design — of building structures that insulate practices from the market's pressure while remaining accountable to the communities they serve. The design of such structures is itself a practice, and it demands the exercise of the highest virtues: the practical wisdom to navigate between institutional survival and institutional integrity, the justice to balance the claims of practitioners against the claims of the institution, and the courage to defend the internal goods of the practice when the market offers a more profitable alternative.
The question MacIntyre posed in his 1988 work — "Whose justice? Which rationality?" — was a question about the impossibility of a tradition-independent standpoint from which competing claims about justice and rationality could be adjudicated. The question was not rhetorical; it was diagnostic. It revealed that every conception of justice and every standard of rationality is embedded in a particular tradition, shaped by the historical arguments that constitute that tradition, and intelligible only from within the framework of assumptions that the tradition provides. The liberal pretense of tradition-independent rational inquiry is itself a tradition — one with its own history, its own characteristic assumptions, its own blind spots, and its own forms of failure. The pretense of neutrality conceals, rather than transcends, the particular standpoint from which the pretense is issued.
That question must now be extended to the technology that is restructuring every practice, every institution, and every tradition it enters: Whose AI? Whose conception of excellence does the system encode? Whose internal goods does it recognize, and whose does it render invisible? Whose practices does it serve, and whose does it undermine? These questions are not supplementary to the analysis of AI. They are the analysis. Without them, the discourse about AI remains at the level of instrumental evaluation — does the tool work? is it efficient? does it produce the outputs the market demands? — and instrumental evaluation is precisely the form of evaluation that the theory of practices is designed to transcend.
The AI tool is not neutral. This claim will be contested by those who regard technology as a mere instrument, indifferent to the uses to which it is put in the way a hammer is indifferent to whether it drives a nail or strikes a thumb. The analogy is misleading. A hammer is a simple tool with a narrow range of applications and a transparent relationship between the user's intention and the tool's effect. A large language model is a complex artifact that embodies, in the statistical structure of its parameters, the priorities of the data it was trained on, the objectives its developers optimized for, the linguistic and cultural conventions of the corpus from which it learned, and the commercial incentives of the market that sustains its development. These are not incidental features that can be stripped away to reveal a neutral tool beneath. They are constitutive of what the system does, what it makes easy, what it makes difficult, and what it makes invisible.
The training data that constitutes the system's knowledge base is not a neutral representation of human knowledge. It is a selection — a particular subset of human expression and inquiry, weighted according to criteria that reflect the priorities of the organizations that assembled it. The data is predominantly in English, which means that the traditions and practices best represented in the English language are the traditions the system knows best and serves most fluently. The data reflects the priorities of Western academic and commercial culture, which means that the conceptions of excellence encoded in the system are predominantly conceptions developed within and for that culture. The result is a system that is not merely a tool but a cultural artifact — an artifact that carries within it a particular set of answers to the question of what knowledge is, what excellence looks like, and what constitutes good work.
The question "Whose AI?" is therefore a question about power — about whose conception of the good life is encoded in the technology that is now mediating every practice, reshaping every institution, and entering every tradition. And it is a question that the dominant framework for AI ethics — the principlist framework of fairness, accountability, transparency, and explainability — is structurally incapable of answering.
The principlist framework asks whether the AI system is fair, whether it distributes its benefits equitably, whether it avoids discriminatory outcomes, whether its decision-making processes are transparent and explainable. These are important questions, and they deserve serious attention. But they do not address the deeper question that the theory of practices poses: whether the conception of the good that is encoded in the system is a conception worth encoding — whether the practices the system serves are genuine practices with internal goods worth cultivating, whether the standards of excellence the system applies are standards that reflect genuine human flourishing, and whether the traditions the system enters are enriched or impoverished by its presence.
O'Doherty, Mulder, and colleagues, in their 2024 analysis of narrative approaches to AI ethics, argue that the dominant principlist framework has produced "a kind of AI ethics principlism" that has "gained a degree of widespread acceptance, yet still invites harsh rejections in recent scholarship." The rejection is well-founded, and the grounds for it are precisely MacIntyrean: principles without a tradition are empty, because principles acquire their content from the traditions within which they are formulated and the practices within which they are applied. The principle of "fairness" means one thing within a utilitarian framework (the maximization of aggregate welfare), another within a Kantian framework (the equal treatment of rational agents), another within a capabilities framework (the provision of conditions for the exercise of central human capabilities), and yet another within the Aristotelian tradition (the distribution of goods according to merit within the context of a shared practice). The principle, abstracted from the tradition that gives it content, has no determinate meaning — and a principle without determinate meaning cannot guide action.
MacIntyre's diagnosis of the interminability of modern moral debates applies with devastating precision to the AI ethics discourse. The debates about whether AI is beneficial or harmful, whether it should be regulated or left to the market, whether it enhances human capability or degrades human dignity — these debates are interminable not because the participants lack intelligence or goodwill but because they are deploying rival and incommensurable moral vocabularies. The utilitarian evaluates AI in terms of aggregate welfare and finds the productivity gains decisive. The Kantian evaluates AI in terms of autonomy and finds the question of human agency paramount. The capabilities theorist evaluates AI in terms of the conditions for the exercise of central human functions and reaches conclusions that differ from both. The virtue ethicist, working within the Aristotelian tradition, evaluates AI in terms of the conditions for the cultivation of the excellences constitutive of human flourishing and finds the question of practices and their internal goods to be the question that the other frameworks systematically neglect.
The interminability of the debate is not a sign that the question is unanswerable. It is a sign that the question cannot be answered from a tradition-independent standpoint, because there is no such standpoint. Every evaluation of AI is an evaluation conducted from within a particular moral tradition, using the conceptual resources that tradition provides and addressing the questions that tradition recognizes as salient. The recognition that there is no tradition-independent standpoint does not entail relativism — the conclusion that all traditions are equally valid and that there is no rational basis for choosing between them. It entails the recognition that the choice between traditions is itself a substantive moral and philosophical choice, one that must be made on the basis of which tradition provides the most adequate account of the phenomena in question and the most coherent framework for addressing the challenges those phenomena present.
The argument of this book has been that the Aristotelian tradition, as developed and extended in MacIntyre's theory of practices, provides the most adequate framework for understanding the AI moment — not because it answers every question, but because it identifies the question that the other frameworks miss. The question is not whether AI is efficient (it is), or whether it respects autonomy (the answer depends on the implementation), or whether it maximizes welfare (the calculus is indeterminate). The question is whether AI preserves the conditions under which the practices that cultivate the virtues constitutive of human flourishing can be sustained. This is the question that the utilitarian calculus cannot formulate, because internal goods are invisible to utility-maximizing calculations. It is the question that the Kantian framework cannot address, because the Kantian framework concerns the formal conditions of rational agency rather than the substantive conditions of human excellence. It is the question that the capabilities approach gestures toward but does not fully develop, because the capabilities framework identifies the conditions for human functioning without providing the account of practices, traditions, and internal goods that would specify what that functioning consists in.
The answer to "Whose AI?" will determine the moral character of the civilization that emerges from this moment. If the answer is "the market's AI" — if the system is designed and deployed primarily in service of the market's pursuit of external goods — then the practices that cultivate the virtues will be progressively undermined, the traditions that sustain those practices will be weakened, and the conditions for human flourishing will be eroded. If the answer is "the practitioners' AI" — if the system is designed and deployed in service of practices that cultivate the internal goods constitutive of human excellence — then the technology can serve as a powerful instrument for the extension of human capabilities without the destruction of the human capacities that give those capabilities their meaning.
Shannon Vallor's concept of "technomoral virtues" — virtues that are specifically required for living well in a world saturated by technology — represents one attempt to develop the Aristotelian tradition in the direction the AI moment demands. The virtues Vallor identifies — humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom — are not merely traditional virtues applied to new circumstances. They are virtues whose specific character is shaped by the conditions of technological mediation, by the particular demands that life with powerful technologies places on human agents. The formulation is promising, and it points in the right direction — toward the recognition that the AI moment requires not merely the preservation of existing virtues but the cultivation of new ones, shaped by the specific challenges that AI-mediated practices present.
But the cultivation of technomoral virtues, like the cultivation of any virtues, requires practices — coherent, complex, socially established activities with their own internal goods, their own standards of excellence, and their own traditions of inquiry. The virtues cannot be cultivated in the abstract, through the promulgation of principles or the issuing of guidelines. They can only be cultivated through the sustained engagement with practices that demand their exercise — practices in which the practitioner encounters genuine difficulty, exercises genuine judgment, and develops the genuine capacities of perception and response that constitute the virtues.
The Orange Pill concludes with a question addressed to its reader: "Are you worth amplifying?" The question carries an implicit recognition that the amplifier does not choose. It amplifies whatever signal it is given. But the question, as formulated, remains within the vocabulary of individual choice — it asks whether the individual is worthy, as though the worthiness of the individual were the only relevant consideration.
The question that the theory of practices poses is broader and more demanding. It is not merely "Are you worth amplifying?" but "Are the practices you participate in worth preserving?" Are the internal goods of your practice worth fighting for? Are the virtues your practice cultivates worth defending against the market's relentless pressure to optimize for external goods alone? Are the traditions you have inherited worth carrying forward, through the crisis, to the future your successors will inhabit? These questions cannot be answered by individuals acting alone. They can only be answered by communities of practitioners who share a commitment to the internal goods of their practice, who participate in the traditions that carry those practices forward, and who are willing to build the institutional structures that preserve the conditions for the cultivation of the virtues even when those conditions are under assault from the most powerful economic forces the world has ever known.
The machine will not make this choice. The market will not make it. The choice belongs to the practitioners, the institutional leaders, the educators, and the citizens who understand what is at stake — who perceive that the question of AI is not ultimately a question about technology but a question about what kind of life is worth living and what kind of person it is worth becoming. The theory of practices does not answer these questions; no theory can. But it provides the framework within which the questions can be posed with the precision and the moral seriousness they demand, and within which the answers — always provisional, always contested, always in need of revision in light of experience — can be pursued through the kind of sustained, communal, tradition-governed inquiry that constitutes the life of a practice at its best.
After Virtue closes with an observation that has been quoted more often than it has been understood. MacIntyre writes that the crucial question is not whether we are waiting for Godot — waiting, that is, for a resolution to the moral crisis of modernity that will arrive from outside the crisis itself — but whether we are waiting for "another — doubtless very different — St. Benedict." The reference is to the sixth-century monk who, in the face of the collapse of Roman civilization, established the monastic communities that preserved literacy, learning, and the practices of the moral life through the centuries that followed. The point is not that monasteries are the answer to the AI moment. The point is that in periods of profound civilizational transformation, the preservation of the conditions for moral life depends not on the reform of the dominant institutions, which may be too far corrupted to reform, but on the construction of new forms of community within which the practices and the virtues can be sustained.
This closing image has acquired, in the four decades since After Virtue was published, an air of wistful impracticality — the philosopher's retreat into a vision of small-scale virtue that cannot possibly address the scale of the problems he has diagnosed. The reading is understandable but mistaken. MacIntyre's point is not that small communities are sufficient. His point is that communities are necessary — that the virtues cannot be cultivated by isolated individuals, cannot be sustained by corrupt institutions, and cannot be mandated by legislation. They can only be cultivated within communities of practitioners who share a commitment to the internal goods of their practice, who hold one another accountable to the standards of excellence that the practice defines, and who carry forward the tradition of argument about what those goods and standards require.
The AI moment gives this observation a specificity it did not possess in 1981. The communities that MacIntyre envisions are not hypothetical. They exist — in the engineering teams that maintain their commitment to craftsmanship under pressure to optimize for speed, in the educational programs that insist on the cultivation of judgment rather than the mere accumulation of credentials, in the firms that invest in the development of their practitioners rather than replacing them with cheaper alternatives. These communities are under pressure, and many of them are failing. But their existence demonstrates that the alternative to institutional corruption is not utopian fantasy but difficult, ongoing, practically demanding work.
Bielskis and his collaborators, in the 2025 volume Human Flourishing in the Age of Digital Capitalism, pose the question directly: "Is the end of work through automation actually desirable? If a good life is the life of activity employing our rational, imaginative, and creative powers, what does it mean to say that future societies will be post-work societies?" The question is MacIntyrean to its core. The utilitarian answer — that the end of work is desirable if it maximizes welfare — is inadequate, because it cannot distinguish between a welfare gain achieved through the exercise of the virtues and a welfare gain achieved through their elimination. The Kantian answer — that the end of work is acceptable if it respects autonomy — is inadequate, because autonomy without practices through which to exercise it is empty. The Aristotelian answer — that the good life requires the exercise of distinctively human capacities in the context of practices that cultivate the virtues — entails that the end of work, if it means the end of practices, is not desirable at all, because it would eliminate the conditions under which human beings develop the excellences that constitute their flourishing.
This does not mean that the automation of tedious, dangerous, or degrading work is objectionable. It means that the question of which work to automate and which to preserve is a question that cannot be answered by efficiency calculations alone. It is a question about internal goods — about which activities constitute genuine practices through which the virtues are cultivated, and which are mere techniques whose elimination imposes no moral cost. The question requires the exercise of practical wisdom, and practical wisdom, as the argument of this book has shown, is precisely the capacity that the AI moment both demands and threatens.
The argument must now confront the strongest objection to the framework it has developed — an objection that the preceding chapters have acknowledged but not fully engaged. The objection is that the AI moment is not merely disrupting existing practices but generating new ones, and that the new practices may possess their own internal goods, their own standards of excellence, and their own traditions of inquiry that are no less genuine than those of the practices they are displacing.
The practice of prompt engineering, for example — the art of formulating instructions that elicit the most useful outputs from a language model — is developing recognizable features of a practice: it has internal goods (the satisfaction of a prompt that produces a genuinely surprising and useful output, the perception of the relationship between the specificity of the instruction and the quality of the response), it has standards of excellence that are being progressively elaborated by its practitioners, and it is generating the beginnings of a tradition (the communal argument about what constitutes a "good" prompt, about the relationship between human intention and machine output, about the ethics of using generated material).
Whether prompt engineering constitutes a genuine practice in MacIntyre's full sense — one that cultivates the virtues and that contributes to the practitioner's flourishing as a human being — is a question that the framework developed here is equipped to answer, but that the present moment cannot answer definitively. A practice is identified not merely by its structure but by the quality of the internal goods it makes available and the virtues it cultivates in those who pursue it. The young practice of prompt engineering may develop into a genuine practice with genuine internal goods — or it may remain a technique, a set of skills that serves the production of external goods without cultivating the virtues that constitute the practitioner's flourishing. The answer will depend on whether the communities that form around this activity develop the features of genuine practices: shared standards of excellence, a tradition of argument about what the activity is for, and a commitment to internal goods that transcends the pursuit of external rewards.
The same analysis applies to the broader practice of human-AI collaboration — the activity that The Orange Pill itself exemplifies and that it describes as the defining mode of intellectual work in the emerging era. Whether this activity constitutes a genuine practice depends on whether it possesses internal goods that can only be recognized through participation, whether it cultivates virtues in those who engage in it, and whether it generates a tradition of inquiry about its own purposes and standards. The evidence from The Orange Pill suggests that the activity can, under certain conditions, possess these features — that the collaboration between human judgment and machine capability can produce forms of understanding that neither could achieve alone, and that the pursuit of these forms of understanding can cultivate virtues (the discernment to distinguish between machine output that merely sounds right and output that is genuinely insightful, the intellectual honesty to reject plausible but hollow prose, the courage to pursue a question beyond the machine's first response).
But the evidence also suggests that these conditions are not automatic. They are achieved only when the human participant brings to the collaboration the virtues that sustained engagement with prior practices has cultivated — the judgment, the taste, the capacity for critical perception that allow the human to direct the machine rather than be directed by it. Without these virtues, the collaboration degenerates into what The Orange Pill itself describes: the acceptance of smooth, plausible output that has not been earned, the substitution of the machine's competence for the practitioner's excellence, the production of external goods without the cultivation of internal ones.
The circle is therefore unbroken. The new practices that AI makes possible require, for their genuine realization, the virtues that the old practices cultivated. The preservation of the conditions under which those virtues can be developed is not a nostalgic attachment to an earlier form of work. It is a precondition for the genuine flourishing of the new forms of work that AI makes available. The past does not oppose the future; it furnishes the moral resources without which the future cannot be anything but a more efficient form of impoverishment.
What remains, then, is the work of building — not in the triumphalist sense of building products and capturing markets, but in the sense that has animated this entire analysis: the building of institutional structures that preserve the conditions under which practices can be sustained, virtues can be cultivated, and human beings can develop the forms of excellence that constitute their flourishing. This work is not glamorous. It does not produce the kind of metrics that the market rewards or the kind of narratives that the technology press celebrates. It is the work of the administrator who protects mentoring time against the pressure to maximize output, of the educator who insists on the cultivation of judgment rather than the mere acquisition of credentials, of the practitioner who refuses to accept that the smooth and the excellent are the same thing.
It is, above all, the work of communities — of practitioners who recognize one another's excellence, who hold one another accountable to the standards of the practice, who carry forward the tradition of argument about what the practice requires, and who build the institutions necessary to sustain all of this against the relentless pressure of a market that cannot see what it is destroying. The work is not new. It is the work that every living tradition has always required of its practitioners — the work of maintenance, of repair, of the ongoing, never-completed construction of the conditions under which the moral life can be lived.
MacIntyre writes that "the good life for man is the life spent in seeking for the good life for man, and the virtues necessary for the seeking are those which will also enable us to find it." The formulation is circular, but the circularity is not a defect. It is a description of the structure of the moral life itself — a life in which the seeking and the finding are not sequential but simultaneous, in which the virtues that enable the search are also the virtues that constitute its goal. The AI moment has not changed this structure. It has made it more visible, by stripping away the implementation that concealed it, and more urgent, by threatening the conditions under which it can be lived.
The question is not whether the machine can execute. It can. The question is not whether the machine is useful. It is. The question is whether the civilization that deploys the machine will preserve the practices through which its members develop the excellences that make them worthy of the capabilities the machine provides. That question is not technological. It is moral. And its answer will be determined not by the capabilities of the machine but by the virtues of the people who use it — virtues that can only be cultivated through the practices that the machine, deployed without wisdom, threatens to destroy.
The argument of this book does not end with a resolution. It ends with a recognition: that the AI moment is one more chapter in the long argument between practices and institutions, between internal goods and external goods, between the cultivation of the virtues and the pursuit of efficiency — an argument that is as old as the first human community that organized its activities around a shared conception of the good and as urgent as the latest quarterly earnings report that places the value of that conception in doubt. The argument continues. It must continue. It is the argument that constitutes the moral life of any community worthy of the name, and its continuation is the surest sign that the community, whatever its failures, has not yet surrendered the conditions for its own flourishing.
There is a passage in Chapter 1 of this book where the concept of internal goods is introduced — goods that can only be recognized through the experience of participating in a practice. When I first encountered that idea, refracted through MacIntyre's precise and demanding prose, something clicked that I had been reaching for without finding.
In The Orange Pill, I described the senior software architect at a San Francisco conference who could feel a codebase "the way a doctor feels a pulse." I knew, when I wrote that, that I was describing something real and important. I knew that the AI moment threatened it. But I did not have the vocabulary to say what was being threatened or why it mattered in a way that went beyond professional nostalgia. The theory of practices gave me that vocabulary. What was being threatened was not a skill. It was a form of human excellence — a way of perceiving the world that could only be developed through the specific discipline of a specific practice, and that constituted not merely professional competence but a dimension of the practitioner's flourishing as a human being.
That distinction — between a skill and a form of excellence, between a capability and a virtue — is the distinction I wish I had possessed when I stood in Trivandrum watching my engineers transform their relationship to their work in the space of a week. I celebrated the twenty-fold productivity multiplier, and the celebration was genuine. But I could not articulate what was at stake in the transformation beyond the productivity. The framework of practices, internal goods, and virtues provides what I lacked: a way of asking not just "How much more can we produce?" but "What kind of people are we becoming in the process of producing it?"
The argument about institutional corruption hit closest to home. I have sat in the rooms where the arithmetic is presented — where the twenty-fold multiplier is on the table and the question is why the team should not be reduced to a fraction of its current size. I have made the choice to keep and grow the team, and I have felt the quarterly pressure to unmake that choice. MacIntyre's analysis of the structural tension between practices and institutions — the recognition that institutions are necessarily oriented toward external goods and that the defense of internal goods against institutional pressure is permanent, never-completed work — gave me a framework for understanding why the choice feels so difficult and why it must be remade every quarter. It is not a personal failing that the pressure returns. It is a structural feature of the relationship between what practices need and what markets reward.
What stays with me most, though, is the question with which this book ends its final chapter — a question that is also MacIntyre's question, posed in various forms across decades of philosophical work: whether the civilization that deploys these extraordinary machines will preserve the conditions under which its members develop the excellences that make them worthy of the capabilities the machines provide.
That question cannot be answered by technology. It cannot be answered by the market. It can only be answered by communities of people who understand what is at stake and who are willing to do the unglamorous, structurally unrewarded, permanently necessary work of maintaining the conditions under which human beings can flourish. The conditions for practice. The space for judgment. The friction that cultivates wisdom.
I wrote The Orange Pill because I needed to understand what was happening. This book exists because MacIntyre's framework revealed dimensions of the crisis that my own vocabulary could not reach. The machine amplifies whatever signal it is given. The question of what signal to give it — what to build, for whom, and why — is a question about practices, about virtues, about the kind of life that is worth living. It is the oldest question in philosophy, and AI has made it new.
-- Edo Segal
This book applies MacIntyre's theory of practices, virtues, and traditions to the AI revolution with philosophical rigor and practical urgency. It asks whether the conditions under which human beings develop genuine expertise — not skills, but the embodied wisdom that constitutes flourishing — will survive the most powerful amplifier ever built. The answer depends not on the machine's capabilities but on whether communities of practitioners choose to preserve what the market cannot see and will not protect on its own.

"The good life for man is the life spent in seeking for the good life for man, and the virtues necessary for the seeking are those which will also enable us to find it." — Alasdair MacIntyre, After Virtue

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Alasdair MacIntyre — On AI uses as stepping stones for thinking through the AI revolution.