Bernard Williams — On AI
Contents
Cover
Foreword
About
Chapter 1: The Impossibility of Clean Choices
Chapter 2: Agent-Regret and the Engineer's Grief
Chapter 3: Moral Luck in the Age of Algorithmic Capability
Chapter 4: The Limits of Utilitarian Accounting
Chapter 5: Integrity and the Ground Project
Chapter 6: The Vocabulary of Moral Perception
Chapter 7: The Residue That Remains
Chapter 8: Truth, Truthfulness, and the Problem of Honest Machines
Chapter 9: Practical Necessity and the Honest Response
Epilogue
Back Cover

Bernard Williams

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Bernard Williams. It is an attempt by Opus 4.6 to simulate Bernard Williams's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

Nobody in the AI discourse talks about the driver.

I mean the thought experiment — the one where two people do the exact same thing, with the exact same care, and one of them kills someone and the other doesn't. Same intention. Same skill. Same moral quality. Radically different outcomes. The difference is luck, pure and simple, and yet we judge them differently, and the one whose hands were on the wheel feels something the other never will.

Bernard Williams built an entire philosophy around that feeling. He called it agent-regret, and he argued that a person who didn't feel it — who simply ran the numbers, confirmed the action was justified, and moved on unburdened — would not be admirably rational. That person would be broken.

I cannot stop thinking about this in the context of what is happening right now.

Two engineers. Same training, same intelligence, same years of dedication. One happens to work in a domain AI augments beautifully — her productivity explodes, her career accelerates, she feels liberated. The other happens to work in a domain AI replaces outright — her expertise is commoditized overnight, her identity as a practitioner unravels. The difference between them is not talent. It is not effort. It is timing and specialization and the arbitrary fact that one skill set landed on the right side of a line neither of them drew.

The technology industry calls this disruption and celebrates it. Williams would have called it moral luck and demanded we look at it without flinching.

What drew me to Williams is that he refused the clean answer. Utilitarianism says: calculate the sum, the aggregate is positive, move on. Kantian ethics says: identify the duty, fulfill it, the feelings are irrelevant. Williams said: the feelings are the moral reality. The loss that survives a justified decision is still a loss. The weight the builder carries for participating in a transformation that destroys something valuable — even when the transformation is necessary — that weight is not weakness. It is honest accounting.

The AI transition is justified. I believe that. The expansion of capability, the democratization of building, the collapse of barriers between imagination and artifact — these are real goods. And the losses are also real. The embodied knowledge that only forms through years of struggle. The communities built around shared difficulty. The temporal space between idea and execution where reflection used to live.

Williams gives you the vocabulary to hold both without pretending one cancels the other. In a moment drowning in triumphalism and despair, that vocabulary is the rarest thing there is.

-- Edo Segal · Opus 4.6

About Bernard Williams

1929–2003

Bernard Williams (1929–2003) was a British moral philosopher widely regarded as the most important English-language philosopher of ethics in the second half of the twentieth century. Born in Westcliff-on-Sea, Essex, he studied at Balliol College, Oxford, and held professorships at Cambridge, Berkeley, and Oxford over a career spanning five decades. His major works include Morality: An Introduction to Ethics (1972), Problems of the Self (1973), Moral Luck (1981), Ethics and the Limits of Philosophy (1985), and Truth and Truthfulness (2002). Williams challenged the foundations of both utilitarianism and Kantian ethics, arguing that systematic moral theories distort the texture of actual moral life by demanding clean resolutions where genuine conflicts between values admit of none. He introduced concepts that reshaped the field — agent-regret, moral luck, thick ethical concepts, ground projects, and internal reasons — insisting throughout that moral philosophy must attend to the particularity of human experience rather than abstracting it into universal principles. His influence extends across philosophy, political theory, and the humanities, and his critique of moral system-building remains one of the most formidable challenges to ethical theory ever mounted.

Chapter 1: The Impossibility of Clean Choices

The characteristic failure of modern moral philosophy is the demand for clean resolutions. Not a minor gap in coverage that might be patched with additional principles or more sophisticated calculations — a structural defect, built into the foundations of the two dominant moral traditions, distorting everything constructed upon them. Utilitarianism promises that every moral dilemma can be resolved by computing the greatest aggregate good. Kantianism promises that the moral law, apprehended through reason, will deliver a verdict the dutiful agent can follow without residue or regret. Both promises are false, and both are dangerous — not because they lead to wrong answers, though sometimes they do, but because they generate the expectation that right answers exist in forms that leave the agent unburdened.

Bernard Williams spent his career dismantling this expectation. What he saw, with a clarity that made his colleagues uncomfortable, was that moral life is structured by genuine conflicts between values that cannot be harmonized — that the mess is not a problem philosophy has failed to solve but a feature of the territory philosophy has failed to describe. Williams and David Wiggins argued jointly that in moral philosophy, unlike science, "what defines the subject is a highly heterogeneous set of human concerns, many of them at odds with many others of them, many of them incommensurable with many others of them. In this case there is no reason to think that what is needed is a theory to discover underlying order." The AI transition has made this argument urgent in ways Williams could not have anticipated, though the structure of the urgency would have been entirely familiar to him.

The arrival of artificial intelligence — specifically the class of large language models that, beginning in late 2025, crossed a capability threshold that transformed the relationship between human intention and machine execution — presents moral challenges that the dominant traditions cannot accommodate without breaking their own rules. The speed of the transition is part of the problem. The printing press disrupted the scribal economy over decades. The power loom displaced the hand-weaver over a generation. AI displaces cognitive labor in months, sometimes weeks. But the deeper problem is not temporal. It is structural. The AI transition produces situations in which every available response involves genuine loss, and no calculation or principle can eliminate the sacrifice.

Consider the position of the practitioner whose expertise has been commoditized — the software architect described in Edo Segal's The Orange Pill who spent twenty-five years developing an embodied intuition for code, who could feel a codebase "the way a doctor feels a pulse," and who watched AI produce equivalent output without requiring the decades of immersion that constituted his mastery. The utilitarian response is straightforward and brutal: the aggregate good is served by the wider availability of capability that AI provides. The democratization of building, the collapse of the distance between imagination and artifact, the liberation of millions from a translation barrier that previously gated their ambitions — these are genuine goods, and they are enormous. The utilitarian calculates the sum and declares the matter resolved. The architect's loss is real but outweighed. He should adapt.

Williams would have recognized this move instantly, because he had spent decades exposing its pathology. The utilitarian does not merely weigh the values and find one heavier. The utilitarian denies the independent reality of the lighter value once the weighing is complete — treats the loss as absorbed into the calculation, as though a ledger that balances leaves no remainder. But the architect's loss is not absorbed. His twenty-five years of patient immersion, the specific intimacy between builder and built thing, the understanding that formed only under the pressure of difficulty — these do not vanish because the aggregate has been served. They persist as a moral weight that the calculation cannot discharge.

The Kantian response is equally clean and equally insufficient. No rational agent, reasoning from the categorical imperative, could universalize the maxim "Resist all technological change that threatens existing expertise," because such a maxim would prevent the very advances that have made contemporary life possible. The duty is to adapt. The grief is understandable but morally irrelevant. Williams's objection here was characteristically precise: Kantian ethics asks the agent to adopt a perspective so abstract that it systematically excludes the features of the situation that make it this situation rather than a generic instance of a type. The architect is not "a practitioner facing technological change." He is a specific person whose specific life has been organized around specific commitments that the technology has rendered structurally unnecessary. The specificity is not a distraction from the moral question. It is the moral question.

What Williams called "the morality system" — his term for the peculiar institution that modern moral philosophy has constructed — demands that every moral question have a determinate answer derived from foundational principles. The demand is pathological, not because answers are undesirable but because the demand for completeness produces systematic blindness to the features of moral life that resist systematization: the particular attachments that give a life its character, the commitments that constitute identity, the residue that remains after a justified action has produced unjustified loss, the moral luck that distributes outcomes without regard to desert. All real features of moral life. All features the morality system cannot accommodate without breaking its own rules.

Williams articulated the pluralist view that it is a "deep error" to suppose "that all goods, all virtues, all ideals are compatible, and that what is desirable can ultimately be united into a harmonious whole without loss." The AI transition is perhaps the most vivid contemporary demonstration of this error's consequences. The value of democratized capability — the liberation of millions from the translation barrier — is genuine and enormous. The value of depth — the understanding that develops through years of patient immersion in a practice — is genuine and irreplaceable. These values are incompatible, in the specific sense that the technology that serves the first undermines the conditions under which the second develops. No principle resolves the conflict. No algorithm weighs the competing values and produces a definitive answer. The conflict must be lived, with full awareness that living it involves loss that no justification can eliminate.

The Orange Pill holds this conflict open rather than closing it. Segal presents the exhilaration and the terror, the capability and the compulsion, the liberation and the loss, and refuses to resolve the tensions into a verdict. The refusal is philosophically significant — it is moral honesty about a situation that does not admit of the kind of resolution the morality system demands. Williams's framework explains why the refusal is appropriate: because genuine moral conflicts produce residues that cannot be eliminated, and a response that pretends otherwise has misdescribed the situation.

The practical consequence of taking Williams seriously in the AI context is not paralysis. Williams was never a quietist. The impossibility of clean resolution does not entail the impossibility of action — it entails action taken with full awareness that the action will produce loss, that the loss is real and not merely apparent, that the agent bears a relationship to the loss that no justification can entirely dissolve. The builder who embraces AI gains extraordinary capability but risks what Segal candidly describes as productive addiction — the compulsive engagement with a tool so generative that the question of whether the activity serves one's life or consumes it cannot gain traction. The practitioner who refuses AI preserves the intimacy of craft but cedes the conversation about how the transition unfolds to those who stayed in the room. Between these positions lies the space Segal calls "the silent middle" — the people who feel both things and lack a clean narrative to offer. Social media rewards clarity; the algorithm has no use for ambivalence. The silent middle is algorithmically disadvantaged, which is to say culturally invisible, which is to say precisely where the most honest thinking happens.

Williams would have recognized this space immediately. It is the space his entire philosophical career was dedicated to defending against the encroachments of systematic theory. Not the space of indecision but of moral complexity — the condition of being unable to satisfy all legitimate demands simultaneously, of discovering that the values one holds are genuinely incompatible, and that no amount of philosophical sophistication will harmonize them.

The demand for clean choices is the demand that this space not exist. The utilitarian says: calculate the sum. The Kantian says: identify the duty. The tech triumphalist says: build faster. The tech elegist says: mourn the loss. Each response eliminates the tension by eliminating one of its poles, and each is therefore a form of dishonesty, however sophisticated its intellectual presentation. Williams spent his career demonstrating that the honest response is to remain in the tension — not as a failure of nerve but as the only stance adequate to the moral reality.

The AI moment does not need another system. It needs better perception — the capacity to see what is happening in its specificity, to respond with whatever moral resources are appropriate, and to accept that the response will be incomplete, provisional, and subject to revision as the situation develops. Williams argued throughout his career that moral perception is prior to moral theory, that one must first see the morally relevant features of a situation before one can reason about them, and that the failure to see is not a failure of logic but a failure of attention.

This is what makes the philosopher's framework so unexpectedly relevant to a technological transition he did not live to witness. Williams was not interested in technology. He was interested in what happens when human beings encounter situations that their moral vocabularies cannot adequately describe — situations where the available categories distort rather than illuminate, where the frameworks that promise guidance produce blindness instead. The AI transition is precisely such a situation. And Williams's lifelong insistence that moral life does not deliver the clean resolutions moral philosophy demands is not a counsel of despair. It is the precondition for seeing clearly.

The chapters that follow will trace the implications of Williams's impossibility across the specific phenomena of the AI transition. Each takes one of his concepts and brings it into contact with the evidence. The purpose is not to solve the problems that the transition creates. Williams would have regarded the ambition to solve moral problems as itself a symptom of the disease. The purpose is to inhabit them honestly — with whatever clarity can be achieved without the false comfort of completeness.

---

Chapter 2: Agent-Regret and the Engineer's Grief

Williams developed the concept of agent-regret to describe a phenomenon the dominant moral traditions could not accommodate: an agent feels regret for outcomes she brought about, even when her actions were justified, even when she would do the same thing again, even when no reasonable person could criticize the choice she made. The concept requires a distinction ordinary language obscures. There is regret that is merely retrospective — the recognition that a loss has occurred, available to anyone, including bystanders. Then there is agent-regret, felt specifically by the person whose actions produced the outcome. Williams's celebrated example was the lorry driver who, through no fault of his own, runs over a child. The driver's regret is not about his actions, which were blameless. It is about his agency — the fact that it was his hands on the wheel, his vehicle that struck the child, his causal involvement in a catastrophe he neither intended nor could have prevented.

Utilitarian philosophy has no room for this feeling. If the action was correct — if it maximized expected utility given available information — then regret is irrational, a psychological residue devoid of moral significance. Kantian philosophy is marginally more hospitable: if the agent acted from duty, the duty is fulfilled, and the emotional aftermath is a matter for psychology rather than ethics. Williams found both responses not merely inadequate but morally obtuse. Agent-regret is the correct response of a morally serious person to a situation in which her agency has produced loss she did not intend and could not have avoided but nevertheless brought about. A person who caused a death and felt nothing — who accepted the utilitarian verdict and experienced no residue — would be morally deficient, not morally exemplary. The capacity to feel the weight of what one's agency has wrought, even when the agency was blameless, is constitutive of moral seriousness itself.

The AI transition is producing agent-regret on an industrial scale. Not among the displaced — their situation raises different questions, examined in the next chapter — but among the engineers who are thriving. The builders who have embraced AI, who ship products at speeds that would have been inconceivable eighteen months ago, who cannot stop working because the conversation with the machine is more stimulating than any human exchange available at that hour. They are winning. The market rewards them. The output is extraordinary.

And some of them — the most perceptive ones, the ones whose moral perception exceeds their appetite for triumph — feel something they cannot name. A weight that accompanies the exhilaration. They are building the future, and the future is genuinely better in many respects, and in the building of it they are participating in the destruction of something they recognize as valuable. The senior engineer in The Orange Pill who discovered that eighty percent of his work could now be handled by a tool experienced this as liberation. And it was. But the eighty percent was not mere drudgery. Mixed into it were the conditions under which understanding grew — the years of patient immersion that deposited, layer by layer, the embodied intuition that let him feel a system's health before he could articulate what was wrong. The machine eliminates the substrate. The judgment that grew from it survives — in him. But the conditions under which new judgment of that kind can develop have been transformed. His junior colleague, who will never spend years in the implementation weeds because the weeds have been cleared, will develop different capabilities. Something will be gained that did not exist before. Something will be lost that cannot be recovered.

The senior engineer who perceives both — who sees the gain and the loss with equal clarity — is in the position of Williams's lorry driver. He did not cause the AI revolution. He could not have prevented it. But he is participating in it, building with the tools that reshape the landscape, and in that participation he contributes to the destruction of the conditions that produced his own expertise. His hands are on the wheel. The accident is unavoidable. And he drives on, because driving on is what the situation requires. But the regret is real, and it is his.

Williams would have insisted that this regret is not a weakness to be overcome but a perception to be honored. The engineer who feels nothing — who accepts the productivity gains and experiences no residue — has failed to recognize the genuine value of what is being lost. The grief is the mark of a person who fully perceives the values on both sides of the equation. A moral theory that cannot accommodate it has misdescribed moral life.

The Orange Pill documents a variant of agent-regret that deserves particular attention. Segal describes standing in a room in Trivandrum watching twenty engineers discover their amplified capability, and being unable to determine whether he was "watching something being born or something being buried." He describes lying awake, unable to distinguish the creative exhilaration of building with AI from the compulsive pattern of productive addiction. He describes catching himself somewhere over the Atlantic, grinding through a manuscript not because the work demanded it but because he could not stop — and recognizing in the inability to stop a pattern resembling the addictive architectures he had himself helped build earlier in his career. "The whip and the hand that held it belonged to the same person."

This is the confession of a man in the grip of agent-regret, though he does not use the term. Segal is not the victim of the AI revolution. He is one of its agents. He builds with the tool. He deploys the tool. He evangelizes the tool. He benefits enormously. And he perceives, with an honesty that distinguishes his book from the triumphalist literature, that his agency carries a moral weight no amount of productivity data can discharge. The guilt is not criminal. He has done nothing wrong. The guilt is constitutive — the marker of a person who has understood that his actions, however justified, have contributed to outcomes involving genuine loss. The loss of the craftsman's intimacy with the built thing. The compression of the temporal buffer that once separated idea from execution and permitted reflection. The erosion of conditions under which a specific kind of depth — the kind that forms only under the pressure of difficulty — could develop.

The concept extends beyond individuals to institutions. Williams would have observed that an institution deploying AI without experiencing agent-regret — calculating the productivity gain, implementing the technology, processing the resulting displacement as an externality to be managed — is an institution that lacks moral seriousness. Not because the deployment was wrong. It may have been entirely justified. But the justification does not eliminate the moral weight of the displacement. An institution that behaves as though it does has sacrificed moral perception for operational efficiency.

A further dimension emerges when agent-regret is applied to the collaborative production that The Orange Pill documents transparently. Segal describes working with Claude and catching the AI producing a passage that connected Csikszentmihalyi's flow theory to a concept attributed to Deleuze — elegant, persuasive, and philosophically wrong. "Claude's most dangerous failure mode is exactly this: confident wrongness dressed in good prose. The smoother the output, the harder it is to catch the seam where the idea breaks." He almost kept the passage. The seduction of plausible output nearly bypassed the author's critical judgment. The agent-regret here is prospective rather than retrospective — the near-miss producing a special kind of awareness about what the collaboration risks. Every subsequent interaction with the machine carries the shadow of the error almost made, the recognition that the tool's facility can erode precisely the judgment it requires its user to exercise.

Williams's framework suggests that the most important institutions in the AI transition will be the ones that deploy the technology and feel the weight simultaneously — that celebrate the gains and mourn the losses in the same breath. These institutions will be rare, because the market rewards confidence and penalizes ambivalence. But they will be the institutions in which the moral complexity of the transition is not suppressed but inhabited as a practice, as a form of institutional life that takes both the gains and the losses seriously.

The demand that creative workers embrace AI without reservation — that they fully commit to the new paradigm and leave behind their attachment to old skills — is an instance of what Williams criticized throughout his career: the demand for a purity of commitment that moral life does not support and should not require. The person who uses AI effectively while mourning the craft it has displaced is not confused. She is morally serious. She is holding two genuine values simultaneously and refusing to pretend that one of them is not real. The moral philosophers who insist on resolution — on choosing one value and dismissing the other — are asking her to commit what Williams called "one thought too many." They are asking her to think herself out of a genuine moral perception.

The engineer's grief is not the problem. The engineer's grief is evidence that the moral accounting was honest.

---

Chapter 3: Moral Luck in the Age of Algorithmic Capability

Moral luck is the uncomfortable fact that moral assessment depends on factors beyond the agent's control. The discomfort is well-founded, because the fact threatens the foundations of every moral system presupposing a tight connection between moral desert and individual choice. The example made famous in the exchange between Williams and Thomas Nagel: two drivers run a red light. One kills a pedestrian; the other arrives safely at the next intersection. The actions are identical, the intentions identical, the degree of recklessness identical. But the moral assessment is radically different — and not merely in terms of legal liability. The driver who killed feels something the other does not, and the difference is not irrational. It is the recognition that moral life is not insulated from contingency.

Kant insisted that moral worth attaches to the good will and nothing else — that outcomes, insofar as they depend on factors beyond control, are morally irrelevant. Williams regarded this as elegant and false. False because we do not live in the kingdom of ends. We live in a world where outcomes matter, where the consequences of choices are partly determined by forces we cannot control, and where our identities are partly constituted by what happens to us and not only by what we do. The Kantian can insist that moral worth resides in the will, but the will operates in a world the will did not create, and the world has a vote.

The AI transition has created a new and particularly vivid form of moral luck. Practitioners with identical moral qualities — identical levels of dedication, skill, intelligence, and professional commitment — experience radically different outcomes depending on factors entirely beyond their control.

Two software engineers of comparable talent and dedication, trained in the same program, possessing the same qualities of diligence and professional pride. One happens to work in a domain that AI augments particularly well — where rapid prototyping and code generation translate directly into enhanced capability. This engineer experiences liberation. The translation barrier that consumed eighty percent of her time has dissolved. Her productivity has increased twentyfold. She is exhilarated. The other engineer happens to work in a domain where AI does not augment but replaces — where specific expertise developed over a decade is now available to any user with a subscription and the ability to describe what she wants. Her expertise has been commoditized. The market no longer rewards the years of patient immersion that produced her mastery.

The difference is not a difference of moral quality. It is luck — timing, specialization, the arbitrary fact that one skill set happened to lie on the augmented side of the line and the other on the replaced side. The line was not drawn by desert. It was drawn by the specific capabilities of the technology at this specific moment, which is to say by forces neither engineer chose, anticipated, or could have influenced.

This is moral luck in its purest form. The AI moment generates it on a scale previous transitions did not approach, because the speed of the transition compresses the randomness into a temporal window too narrow for adaptation to serve as buffer. The Luddites of Nottinghamshire had years — decades, in some cases — to perceive the shift and respond. The response was tragically inadequate, but the temporal buffer at least allowed for its possibility. The AI transition offers no such buffer. The engineer who discovers on a Tuesday that her expertise has been commoditized was, on Monday, a valued professional whose skills commanded a premium. The shift is not gradual. It is a phase transition — the same substance, suddenly organized according to different rules.

Williams's framework reveals what the dominant moral vocabulary systematically conceals about this situation. The triumphalist who tells the displaced practitioner to adapt, to reskill, to embrace the new paradigm, is not wrong about practical necessity. Adaptation is necessary. But the moral framing is wrong. The triumphalist treats the outcome as a consequence of choice — as though the displaced practitioner chose the wrong skills, or failed to anticipate the shift, or refuses to adapt out of stubbornness. This framing is false. The displaced practitioner made the same choices, with the same moral quality, as the augmented practitioner. The difference in outcome is luck.

The meritocratic assumption that pervades the technology industry — the conviction that outcomes track talent and effort, that the people who flourish deserve to flourish and the people who struggle have failed in some identifiable way — is a systematic misattribution of luck to merit. Williams's analysis strips this assumption bare. The morally lucky have no special claim to the credit the market assigns them. Their position at the frontier was partly chosen and partly given — a function of biography, timing, institutional access, and the contingent alignment of their specific capabilities with the specific demands of a technology whose shape they did not determine.

The Orange Pill documents the distribution of moral luck across the AI transition with the specificity of a builder who has witnessed it firsthand. Segal describes standing in a room in Trivandrum on a Monday and watching the same engineers arrive at fundamentally different relationships to their own expertise by Friday. Five days. For some, the shift was liberating. For others, terrifying. The difference was not talent. It was the contingent relationship between each engineer's specific expertise and the specific capabilities of the tool. Segal also describes observing a stark dichotomy in the broader engineering population: some senior practitioners "moving to the woods" to lower their cost of living, perceiving their livelihoods as imminently obsolete, while others leaned in with the intensity of people who had been waiting their entire careers for this capability. He maps this to the primal fight-or-flight response, and the mapping is instructive — but Williams's framework adds a dimension the evolutionary metaphor misses. The fight-or-flight response is involuntary, a product of physiological architecture rather than moral choice. What Williams would notice is that the moral assessment of the two responses differs dramatically in the public discourse, with fighters celebrated as adaptable and visionary while those in flight are dismissed as fearful or inflexible — when in fact the difference between the two groups may be substantially explained by the moral luck of which skills happened to be augmented and which happened to be replaced.

The intergenerational dimension compounds the problem. Engineers who developed expertise before the AI transition had the moral luck of timing: they built careers and identities around skills that were valued, and accumulated the rewards of that value, before the ground shifted. Their children enter a world where the skills that constituted their parents' identities are already commoditized. These children face a different species of moral luck — the luck of entering a landscape that has no memory of the conditions that preceded it. The twelve-year-old in The Orange Pill who asks her mother "What am I for?" is experiencing intergenerational moral luck in its most immediate form. She did not choose to enter a world where machines outperform her at her homework. The world she inhabits is the product of choices made by others, and the moral luck of her situation is that she must find her way in a landscape shaped by forces she did not create.

If the distribution of gains and losses is substantially determined by luck rather than desert, then the institutions governing the transition bear a different kind of responsibility than the one they typically acknowledge. The responsibility is not merely to facilitate adaptation — retraining programs, career counseling, bridges between the old economy and the new. These are important, but they are insufficient when framed as helping people overcome their own inadequacies rather than as compensating for an arbitrary distribution of fortune. The difference in framing matters. One says: you have failed to keep up, and we will help you. The other says: you have been unlucky, and we owe you something because the luck was not your fault.

Williams would have pressed the political dimension further than the discourse typically allows. Who decides how the gains of the transition are distributed? What mechanisms exist for holding the morally lucky accountable to the morally unlucky? How do existing power structures shape the conversation in ways that systematically favor beneficiaries over those displaced? These are not questions the technology industry is accustomed to asking, because the industry's founding mythology is meritocratic, and meritocracy is precisely the illusion that moral luck exposes.

The morally lucky owe something to the morally unlucky. The debt is not discharged by advice. It is structural, arising from the recognition that the distribution is not just, that the people who benefit and suffer are not distinguished by moral qualities, and that a society allowing fortune to determine fate without institutional correction has confused luck with merit. This confusion is endemic in technology culture, where the narrative of disruption has always favored disruptors over the disrupted. Taking moral luck seriously does not require abandoning the technology or reversing the transition. It requires seeing clearly that the gains are not earned in the way the winners believe, and the losses are not deserved in the way the discourse implies.

---

Chapter 4: The Limits of Utilitarian Accounting

The triumphalists in the AI discourse are utilitarians, whether they know it or not. This is not an accusation. It is a diagnosis. The triumphalist looks at the AI transition and calculates: more capability distributed more widely, more problems solved more quickly, more people empowered to build what they could previously only imagine. The calculation is correct. The aggregate is positive. The conclusion — embrace the technology, accelerate the transition, remove the barriers — follows logically from the premises. Williams's question was never whether the calculation was correct. His question was whether the calculation was sufficient — whether the aggregate captured everything that matters, or whether a moral assessment conducted entirely in the currency of total utility left something important out of the ledger.

Williams spent decades cataloguing what utilitarian accounting omits, and the AI transition provides examples of unusual clarity. The first omission is commensurability. Utilitarian calculation requires that all values be measured on a single scale. The goods AI provides — efficiency, capability, accessibility — must be weighed against the goods AI threatens — depth, craft, the specific understanding that develops through years of patient engagement with resistant materials. For the calculation to work, these goods must be commensurable — convertible into a common currency so the sum can be computed.

They are not commensurable. The depth a master craftsman achieves through decades of immersion is not the kind of thing that can be traded against the breadth AI provides to a novice who has never experienced the immersion. The goods are different in kind, not merely in quantity. They belong to different categories of human value, and the attempt to compare them on a single scale is not a simplification but a distortion. The software architect in The Orange Pill who says "something beautiful is being lost, and the people celebrating the gain are not equipped to see the loss, because the loss is not quantifiable" has identified the structural limitation with precision. The utilitarian calculus can only register what it can quantify. What it cannot quantify, it cannot see. What it cannot see, it treats as though it does not exist.

The undercounting is structural, not accidental. It is built into the utilitarian method at the level of founding assumptions. Refinement cannot compensate — adding variables, weighting them more carefully, extending the time horizon. The calculation, however refined, can only operate on values that have been translated into its currency. The values that resist translation are not merely difficult to compute. They are invisible to the computation.

Stuart Russell, the influential computer scientist, advocates a preference-based framework for aligning AI systems with human values, a framework Williams would have recognized as precisely this kind of utilitarianism. Williams's pluralism challenges the framework directly. The assumption that human values can be captured in a preference function — that the heterogeneous, conflicting, incommensurable concerns that constitute a human life can be rendered computationally tractable — is not a simplification for practical purposes. It is a philosophical error about the nature of values, and building AI alignment on this error risks producing systems that optimize for what can be measured while systematically ignoring what cannot.
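
The structure of the worry can be made concrete with a deliberately naive sketch. What follows is illustrative only; the outcome fields, the weights, and the function are invented for the example and represent no one's actual alignment proposal, least of all Russell's. It shows where the philosophical work gets done: before any calculation runs, someone must already have decided what the values that resist quantification are worth.

```python
# Illustrative sketch of a scalar "preference function" of the kind this
# chapter criticizes. All fields and weights are invented for the example.

from dataclasses import dataclass

@dataclass
class Outcome:
    productivity_gain: float    # easily measured
    access_expansion: float     # easily measured
    craft_depth_lost: float     # resists quantification; forced onto a 0-1 scale anyway
    meaning_of_practice: float  # likewise forced onto a scale

WEIGHTS = {
    "productivity_gain": 1.0,
    "access_expansion": 1.0,
    "craft_depth_lost": -0.5,     # any number chosen here is already a philosophical claim
    "meaning_of_practice": 0.3,
}

def aggregate_utility(o: Outcome) -> float:
    """Collapse heterogeneous goods into one scalar so they can be ranked."""
    return (WEIGHTS["productivity_gain"] * o.productivity_gain
            + WEIGHTS["access_expansion"] * o.access_expansion
            + WEIGHTS["craft_depth_lost"] * o.craft_depth_lost
            + WEIGHTS["meaning_of_practice"] * o.meaning_of_practice)

# Two futures: the ranking is fixed by the weights, i.e. by whatever the
# modeller decided the unquantifiable values were "worth" in advance.
deploy = Outcome(productivity_gain=0.9, access_expansion=0.8,
                 craft_depth_lost=0.9, meaning_of_practice=0.2)
refrain = Outcome(productivity_gain=0.1, access_expansion=0.1,
                  craft_depth_lost=0.0, meaning_of_practice=0.8)

print(aggregate_utility(deploy), aggregate_utility(refrain))
```

With the weights as written, the deployment future wins; make the penalty attached to craft_depth_lost large enough and the verdict reverses. The calculation does not discover the answer. The weighting decides it, which is the architect's point about unquantifiable loss restated in code.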

The second omission is distribution. Classical utilitarianism is indifferent to how goods are distributed, so long as the aggregate is maximized. If a transition produces enormous gains for some and catastrophic losses for others, but the gains outweigh the losses in sum, the calculation declares the transition justified. Williams regarded this indifference as morally obscene — a term he would have considered appropriate rather than hyperbolic. When the distribution of gains and losses is determined by moral luck rather than desert, as argued in the previous chapter, the utilitarian's indifference to distribution compounds the injustice. The engineer whose skills align with the technology flourishes. The engineer whose skills lie on the wrong side of the line suffers. The aggregate is positive. The individual outcome is catastrophic. And the utilitarian treats these facts as though they belong to different categories of moral relevance, when they are aspects of the same phenomenon.

The Orange Pill's account of the Luddites makes this point with force. The actual Luddites predicted with precision what the power looms would do to them — to their wages, their communities, their children. They were correct. The productivity gains took generations to translate into broadly distributed improvements, and the translation required labor movements, legislation, decades of political struggle. The technology did not determine the outcome. The institutional structures determined the outcome. The utilitarian who looks at the industrial revolution from a distance of two centuries can compute the aggregate and declare it positive. The utilitarian standing in Nottinghamshire in 1812, watching a weaver's wages collapse, would have found it difficult to explain why the weaver's suffering was outweighed by gains that would not materialize for generations, in lives not yet born, through mechanisms not yet invented.

The temporal problem is the third omission, and it is particularly acute. Utilitarian calculation aggregates across time but systematically devalues the future relative to the present through discounting. The gains from AI are largely immediate: productivity increases, capability expansion. The costs are largely deferred: erosion of conditions under which certain kinds of depth develop, atrophy of capacities no longer exercised because the machine exercises them, long-term consequences for a culture that has outsourced the cognitive struggle previously constitutive of understanding. The temporal asymmetry means the calculation, at any given moment, overweights immediate gains and underweights deferred costs — because the gains are visible and the costs speculative, the gains measurable and the costs hypothetical.
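
The asymmetry is visible in the arithmetic itself. As a hedged illustration (the figures, horizon, and discount rate below are invented, not drawn from any actual analysis of the AI transition), a standard discounting calculation shows how a larger but deferred loss can count for less than a smaller immediate gain:

```python
# Illustration of temporal discounting with invented numbers: a modest
# immediate gain outweighs a much larger deferred cost once the future
# is discounted, which is the asymmetry described above.

def present_value(amount: float, years_from_now: int, discount_rate: float) -> float:
    """Standard exponential discounting: amount / (1 + r)^t."""
    return amount / (1 + discount_rate) ** years_from_now

immediate_gain = present_value(10, 0, 0.05)   # 10 units of benefit, realized now
deferred_cost = present_value(40, 30, 0.05)   # 40 units of loss, realized in 30 years

print(round(immediate_gain, 2), round(deferred_cost, 2))
# 10.0 versus roughly 9.26: the larger but later loss counts for less than
# the smaller immediate gain, so the ledger endorses the trade.
```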

There is a fourth omission, subtler than the others, that Williams would have considered the most damaging. Utilitarian accounting cannot register the relationship between an agent and her work — the question of whether output is experienced as meaningful by the person who produced it, whether it connects to the commitments that give her life its character. Productivity metrics measure output. They do not measure the quality of the practice that produced it. A person can generate more output while experiencing less meaning, and no utilitarian ledger will detect the loss, because meaning is not the kind of value that aggregates.

Segal describes this with unusual candor in his account of writing The Orange Pill itself. Working with Claude, the "prose comes out polished, the structure comes out clean," and "the seduction is that you start to mistake the quality of the output for the quality of your thinking." He describes catching himself almost keeping a passage that "sounded better than it thought" — where the smoothness of the output concealed the hollowness of the idea beneath. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version that was his. "Rougher. More qualified. More honest about what I didn't know."

The utilitarian sees only the roughness and counts it as a cost. The two hours with the notebook were, by utilitarian standards, inefficient. The machine could have produced something plausible in minutes. But the value of those hours was not in the output they produced. It was in the process — the friction, the struggle, the private work of determining what one actually believes. The process produced understanding that the frictionless version, however polished, could not have generated. Williams would have called this an instance of a value that is constitutively tied to a particular mode of engagement — a value that cannot survive the optimization of the process through which it arises, because the value is the process, not the product.

What, then, is the alternative to utilitarian accounting? Not its abandonment — Williams was not a romantic, and he had no patience for the view that counting is inherently degrading. Counting is useful. The aggregate matters. The productivity gains benefit real people. The alternative is recognizing that calculation is one moral instrument among several and not the master instrument. The alternative is what Williams called ethical perception — the capacity to see the morally relevant features of a situation in their particularity, including the features that resist quantification. The capacity to look at the AI transition and see not only the aggregate gain but the particular losses, not only the productivity metric but the quality of the practice, not only the output but the process.

Williams held that moral perception is prior to moral theory — that one must see before one can reason, and that the most dangerous failures in moral life are failures not of logic but of attention. The utilitarian framework is a fishbowl, to borrow Segal's metaphor: it reveals part of the world — the quantifiable part, the aggregatable part — and hides the rest. The task is not to smash the fishbowl. It is to recognize the glass, and to develop the perceptual capacity to see what the refraction conceals. The architect who says the loss is not quantifiable is not confessing a deficiency in his argument. He is identifying a deficiency in the framework being used to evaluate his situation. And Williams would have said — did say, across a career of meticulous argument — that the deficiency is in the framework, not in the world it fails to describe.

---

Chapter 5: Integrity and the Ground Project

Williams introduced integrity into philosophical vocabulary not as a virtue among others — not as one more quality to be cultivated alongside courage and temperance — but as a structural feature of what it means to have a life that is distinctively one's own. The concept requires a preliminary distinction. There is integrity in the colloquial sense: honesty, consistency, keeping one's word. A genuine virtue, but not what Williams meant. There is a deeper, structural sense having to do with the relationship between an agent and the commitments that give her life its character. These commitments — what Williams called ground projects — are not merely things the agent happens to want. They are the things without which the agent would not recognize her life as her own. They are constitutive of identity in a way preferences are not.

A preference can be abandoned without cost to the self. A person may prefer chocolate to vanilla and switch without existential consequence. A ground project cannot be abandoned without fundamental alteration of who one is. The scientist whose life is organized around inquiry, the artist whose identity is constituted by making, the engineer whose sense of self is inseparable from building — these are persons whose ground projects are not external to their identities but constitutive of them. To ask such a person to abandon the project is not to ask her to change her preferences. It is to ask her to become someone else.

Williams's objection to utilitarianism turned on this point with devastating effect. The utilitarian demands that the agent be willing, in principle, to sacrifice any commitment if the aggregate good requires it — to treat her own projects as mere inputs to a calculation whose output may override them. Williams argued that this demand is not merely burdensome but unintelligible in an important sense. The agent who subordinates her constitutive commitments to an impersonal calculus is not fulfilling a moral duty. She is destroying the self that would need to exist for the duty to have a subject. Utilitarianism, Williams observed, fails to see that the agent's projects and commitments are not just items in the world alongside everyone else's, competing for utilitarian weight. They are what gives the agent her standpoint in the world, what makes deliberation her deliberation rather than an impersonal computation that happens to be conducted in her skull.

The AI transition is reorganizing ground projects on a scale the concept was never designed to address, and the reorganization illuminates both the power of the concept and the severity of what is at stake.

Consider the taxonomy of responses that The Orange Pill maps onto the fight-or-flight response. Senior engineers who perceive their livelihoods as imminently obsolete move to lower-cost areas, reducing expenses, preparing for what they take to be the end of their profession. Others lean into the transition with the intensity of people who have been waiting for precisely this capability. Segal calls these fight and flight, and the evolutionary framing captures something real about the urgency. But Williams's framework adds a dimension the biological metaphor misses.

What looks like flight is often not cowardice. It is the rational response of a person whose ground project has been destroyed. The developer who retreats is not merely afraid. She is a person whose identity was constituted by a specific practice — the craft of building systems through years of patient engagement with resistant materials — and the practice has been transformed beyond recognition. The retreat is not from a challenge. It is from a world in which the self she has been is no longer viable. And the demand that she stay and fight is, from her perspective, the demand that she fight for a world in which she would need to become someone she is not.

This is not the same as saying adaptation is wrong. Williams was never a quietist. Adaptation is necessary, and the Luddites demonstrate at enormous cost what happens when practitioners refuse engagement. The point is that adaptation has a moral cost the discourse does not acknowledge, and the cost is measured in integrity. When a person whose ground project was the craft of hand-building systems transitions to AI-augmented work, she does not merely add a tool to her existing practice. She undergoes a transformation of the practice itself, and with it a transformation of the self that was constituted by the practice. The transformation may be necessary. It may prove enriching. But it is a transformation, not a continuation, and the person who undergoes it is entitled to recognize — and to grieve — what has been transformed.

Williams would have been characteristically precise about where the "reskill and adapt" discourse goes wrong. The discourse treats ground projects as though they were preferences — as though the architect's attachment to his craft were the same kind of thing as his preference for a particular brand of coffee. As though the transition from hand-building to AI-augmented building were a lateral move rather than an existential transformation. This treatment is not merely insensitive. It is philosophically confused. It mistakes the nature of the relationship between a person and the commitments that make her who she is.

The concept of the ground project also illuminates what motivates practitioners to engage rather than retreat — a question the triumphalist discourse answers badly because it offers the wrong kind of reasons. Williams drew a sharp distinction between internal and external reasons for action. An internal reason connects with something the agent already cares about — something present in what Williams called her subjective motivational set. An external reason is supposed to motivate independently of what the agent actually cares about, simply because it is true. Williams argued throughout his career that there are no external reasons — that a consideration which fails to connect with anything the agent actually values is not a reason she has somehow failed to recognize; it is not a reason for her at all.

The triumphalist discourse is saturated with external reasons. Practitioners are told they should embrace AI because it is more efficient, because the market demands it, because resistance is futile, because the aggregate good is served. These float above the motivational sets of the practitioners they are supposed to motivate, making no contact with what those practitioners actually care about. The senior architect is not motivated by the argument that AI serves the aggregate good. He may accept the argument intellectually. But it does not speak to his investment in craft, his pride in mastery, his sense of identity as a person who builds things with his own hands. The argument tells him what he should want, not what he does want. A reason that tells you what you should want without connecting with what you do want will not move you.

This explains why the triumphalist discourse fails to persuade precisely the people it most needs to reach. It offers external reasons derived from aggregate calculations and historical inevitability, and the reasons bounce off the motivational architectures of the practitioners they target. What would move them is different: a demonstration that the technology connects with the things they already care about. Not telling them what to value. Showing them how what they do value can be served.

The Orange Pill's concept of ascending friction functions as precisely this kind of internal-reason generator, and its effectiveness is philosophically explicable. The argument is that when AI removes the friction of implementation, it does not eliminate friction altogether but reveals a higher-order friction previously concealed beneath the implementation layer — the friction of judgment, taste, strategic vision. The struggle has not been eliminated. It has been elevated. This connects with the practitioner's existing commitment to difficulty as a source of meaning. The engineer who values depth, who finds satisfaction in the mastery of difficult things, is not motivated by the argument that AI makes things easier. Ease is not what she values. But she may be motivated by the argument that AI makes things harder at a higher level — that by eliminating lower-order friction, the technology exposes challenges more demanding, more consequential, and more worthy of the intelligence she has spent decades developing.

Whether this proves true in practice — whether the ascending friction is genuinely equivalent in formative power to the embodied friction it replaces — is an empirical question Williams's framework cannot answer. But the structure of the motivational appeal is right. It is an internal reason, connecting the new landscape with existing commitments rather than demanding their abandonment. The difference between telling the engineer "you should embrace this because the market demands it" and showing her "the thing you've always wanted to build, the thing you've been too buried in implementation to attempt — you can build it now" is the difference between an external reason that produces resentment and an internal reason that produces engagement.

The question of new ground projects is equally significant. The AI transition does not merely threaten existing commitments. It creates conditions for the formation of new ones — new practices, new forms of engagement, new constitutive commitments that did not exist before the technology arrived. The engineer who discovers that the elimination of implementation drudgery frees her for strategic thinking may find strategic thinking becomes a new ground project — a new source of meaning, a new form of the self. This is genuinely hopeful.

But the formation of ground projects is not automatic. It requires time, exploratory engagement, the gradual discovery of what matters to one through the process of trying things and attending to one's own responses. The practitioner pressed to demonstrate her value in the new landscape before she has had time to discover what the new landscape makes possible is being asked to form constitutive commitments under conditions hostile to their formation. Constitutive commitments are not formed by calculation. They are formed by immersion, by attention, by the kind of slow exploration incompatible with the demand for immediate productivity.

Williams would have noted the irony. The morality system demands that the agent know her duty and act on it. The market demands that the practitioner know her value and demonstrate it. Both demands assume that the relevant knowledge is available in advance of the experience that would produce it. Both are impatient with the exploratory uncertainty that is the precondition for genuine commitment. And both, in their impatience, risk producing exactly the shallow engagement they claim to prevent — practitioners who adopt the new tools instrumentally, without the depth of commitment that would make the adoption meaningful, because the conditions for deep commitment were never provided.

The institutional implication is that the transition requires temporal space — space for the formation of new ground projects, for the exploration that formation requires, for the kind of slow, uncertain engagement incompatible with quarterly metrics. This is not a luxury. It is the condition under which adaptation can be morally serious rather than merely expedient. Williams argued that "my life, my action, is quite irreducibly mine" — that practical deliberation cannot be outsourced to an impersonal calculation, however sophisticated, because the deliberation derives its authority from the specific standpoint of the agent conducting it. The same is true of adaptation. The adaptation must be the practitioner's own — conducted from her standpoint, connected to her existing commitments, shaped by her specific perception of what matters — or it is not adaptation at all but compliance, and compliance, however externally indistinguishable from genuine engagement, produces a different kind of life and a thinner kind of practice.

The ground project of the engineer is not what she builds. It is the relationship between the building and the life. When the technology transforms the building, the relationship must be renegotiated, and the renegotiation takes time, and the time is precisely what the speed of the transition does not provide. Williams's concept of integrity identifies this as a moral cost of the transition — a cost measured not in output lost but in selves disrupted — and insists that the cost be acknowledged even when the transition is justified.

The acknowledgment does not prevent the building. It shapes it. It introduces considerations that pure efficiency would ignore. A builder aware of what the transition costs in integrity builds differently from a builder who sees only capability metrics — not necessarily more slowly, but with a different quality of attention to what the building serves and what it demands of the persons who inhabit it.

---

Chapter 6: The Vocabulary of Moral Perception

Williams distinguished between thick and thin ethical concepts, and the distinction does more work in the AI context than any other single piece of philosophical apparatus available. Thin ethical concepts are abstract, general, and evaluatively spare: "good," "bad," "right," "wrong." They carry evaluative content but minimal descriptive content. To say that an action is wrong is to evaluate it, but the evaluation tells you almost nothing about its specific character — what kind of wrong it is, what it resembles, what quality of wrongness attaches to it. The thin concept abstracts away from the particularities and delivers a verdict universal in scope and empty of texture.

Thick ethical concepts are different. "Courageous," "cruel," "generous," "treacherous," "callous," "gracious" — these carry both evaluative and descriptive content simultaneously. To call an action courageous is not merely to approve of it. It is to describe it as possessing a specific quality involving the relationship between fear and resolve, a quality that looks different from other forms of goodness and cannot be substituted for them. The thick concept does not merely evaluate. It characterizes. It tells you what you are looking at as well as how to assess it.

The AI discourse is conducted almost entirely in thin concepts. AI is "good" or "bad." The transition is "right" or "wrong." Practitioners "should" adapt. Leaders "must" prepare. The thin concepts provide verdicts, but the verdicts are empty of the specific, textured understanding the situation requires.

Consider how different the conversation becomes with thick concepts deployed. The engineer who embraces AI may be doing so courageously — recognizing risks and engaging despite them. Or recklessly — abandoning practices that constituted her expertise without adequate reflection. Or desperately — driven not by conviction but by fear of obsolescence. The thin concept "right" applies to all three cases indifferently. The thick concepts differentiate them, and the differentiation matters, because the moral quality of the action depends not only on whether it produces good outcomes but on the character of the agent's engagement with the situation. Different kinds of embrace require different kinds of response, support, and institutional accommodation.

The same analysis applies to refusal. Principled refusal — driven by genuine assessment of what the technology threatens. Stubborn refusal — driven by attachment to the familiar. Terrified refusal — driven by the existential anxiety of a person whose ground project has been threatened. The triumphalist's thin verdict — "wrong" — collapses these distinctions. The thick concepts preserve them, and the preservation is not academic refinement but practical necessity, because a manager dealing with principled resistance needs a different approach from one dealing with terrified resistance, and a policy designed for stubborn refusal will be counterproductive when applied to the principled variety.

The Orange Pill itself operates with thick concepts, though the book does not use the philosophical terminology. The taxonomy of responses — triumphalists, elegists, the silent middle — carries both descriptive and evaluative content simultaneously. To call someone a triumphalist is not merely to classify a position. It is to characterize the quality of engagement: selective attention to gains, blindness to losses, confident reduction of complexity to simplicity. The description and the evaluation are inseparable. This is the hallmark of a thick concept, and its deployment in the discourse is more illuminating than any number of thin verdicts about whether the technology is "good" or "bad."

The thickest and most interesting concept in The Orange Pill's vocabulary is "the silent middle." It describes a position — holding both truths simultaneously — and evaluates that position as morally serious, as the space where honest thinking happens. The description and evaluation are fused. You cannot understand what the silent middle is without understanding that it is the right place to be, and you cannot understand why it is the right place without understanding what it is. Williams would have recognized this as a functioning thick concept, developed not by philosophers but by a practitioner who needed evaluative vocabulary adequate to the specific demands of a specific situation.

This connects to a point Williams considered fundamental. Thick concepts are not invented in seminar rooms. They are developed by communities of practice — groups sharing a form of life who need vocabulary adequate to its specific demands. The programmers who converged on "elegant code" as a term of praise — a thick concept carrying both a description of a specific quality and an evaluation of that quality as admirable — developed it through decades of shared practice, shared standards, and a shared sense of what counts as doing the work well.

When AI disrupts a community of practice, it disrupts not only the practice but the evaluative vocabulary the community has developed. Is machine-generated code ever "elegant" in the sense the programming community intends? The question is not trivial. "Elegant" in this usage means something specific about economy, clarity, and a kind of inevitability — the feeling that the solution could not be otherwise. The quality is partly aesthetic and partly functional, and its recognition requires the trained perception of someone embedded in the practice. When the machine produces code that is functional but authored by no human, the thick concept is destabilized. The evaluative vocabulary may no longer track the features it was developed to describe, because the features were partly constituted by the human practice that the machine has transformed.

The loss of thick concepts is a loss of moral infrastructure. Without them, a community cannot maintain its standards, assess its practices, or distinguish between doing the work well and merely doing the work. The AI transition threatens this infrastructure not by attacking it directly but by transforming the practices in which it is embedded, leaving the vocabulary without the practical substrate that gives it meaning.

Williams would have connected this to his broader critique of what he called "the morality system" — the demand that every moral question have a determinate answer derived from foundational principles. The morality system trades exclusively in thin concepts. It must, because thin concepts are the only ones general enough to serve as foundations for universal principles. "Right" and "wrong" can be applied to any situation, in any culture, at any time. "Courageous" and "callous" cannot — they carry too much descriptive specificity, too much dependence on particular practices and particular forms of life.

The morality system's exclusive reliance on thin concepts produces a characteristic blindness: it can evaluate situations but cannot characterize them. It can tell you that something is wrong without telling you what kind of wrong it is, and the kind matters — not as academic refinement but as practical necessity. A policy response to the AI transition crafted in thin concepts will be as crude as a medical diagnosis conducted with only two categories: "healthy" and "sick." The categories are not wrong. They are too thin to guide treatment.

Williams's pluralism — his insistence that values are heterogeneous, conflicting, and resistant to reduction — is itself an argument for thick concepts. If moral life involves a "highly heterogeneous set of human concerns, many of them at odds with many others of them, many of them incommensurable with many others of them," then the evaluative vocabulary adequate to moral life must be correspondingly heterogeneous. The thin vocabulary of utilitarian calculation — where every value is converted to a single currency of utility — is too impoverished to register the differences between values that are different in kind. The thick vocabulary preserves those differences, at the cost of universality, and Williams considered the cost well worth paying.

The practical upshot for the AI discourse is this: the conversation needs more words, not fewer. More specific, more textured, more descriptive-evaluative words for the different kinds of engagement, the different kinds of loss, the different kinds of adaptation that the transition produces. The developer who is displaced by AI and the developer who is liberated by AI are both "affected by the transition," but the thin description conceals everything important about their situations. Thick description would attend to the specific character of each experience — the quality of the displacement, the texture of the liberation, the particular relationship between the practitioner's history and the technology's capabilities that produces this outcome rather than that one.

Williams would not have issued a shopping list of required thick concepts. That would be the morality system reasserting itself — the demand for a vocabulary that covers every case. Instead, he would have observed that thick concepts emerge from practice, from sustained engagement with specific situations by persons who need words adequate to what they are experiencing. The AI discourse will develop its thick concepts not because philosophers prescribe them but because practitioners need them — need words that capture the specific, textured, morally weighted character of their experience in a way that "good," "bad," "disruptive," and "transformative" cannot.

The question is whether the speed of the transition will allow the concepts to develop before the situations they would describe have been processed through thin categories and filed away as resolved. Williams would have been pessimistic on this point. Thick concepts develop slowly, through sustained communal reflection. The AI transition moves fast, and the institutional pressure to classify, evaluate, and move on is enormous. The risk is that the transition will be morally processed in thin concepts — declared good or bad, right or wrong — before anyone has developed the vocabulary to describe what actually happened to the people who lived through it. The moral processing will be complete, the ledger balanced, and the specific texture of the experience — the particular griefs, the particular liberations, the particular qualities of engagement and disengagement — will have been smoothed into a narrative that the thin concepts can handle and the thick concepts never had time to capture.

That smoothing is itself a loss. And it is a loss that Williams's framework is uniquely equipped to identify, because the framework was built precisely to resist the reduction of moral texture to moral verdict.

---

Chapter 7: The Residue That Remains

There is a concept in Williams's work that has received less attention than agent-regret or moral luck but that may prove, in the context of the AI transition, to be the most important of all. Moral remainder — the residue of value that survives the justification of an action and that the justification does not dissolve.

A government faces a genuine dilemma: it can save a city from flooding by diverting a river, but the diversion will destroy a village. The consequentialist calculation is clear. The city contains a hundred thousand people, the village a hundred. The diversion saves more lives. The decision is justified. But the village is destroyed. The homes are lost. The community that existed for generations ceases to exist. The people, even compensated and relocated, have lost something no compensation restores: the specific form of life that was theirs, the network of relationships and meanings embedded in that particular place. The moral remainder is this: the destruction of the village is a genuine loss, and the justification does not eliminate it. The loss survives the justification, remaining as a weight the decision-maker bears — not because the decision was wrong, but because a justified action can nevertheless produce unjustified harm.

Utilitarianism has no room for moral remainder. The calculation settles the account: benefits outweigh costs, the sum is positive, the matter is closed. Kantianism has no room for it either: the duty is identified and fulfilled, and the aftermath is psychological rather than moral. Both systems treat resolution as total, as though a justified action clears the ledger entirely. Williams rejected this treatment as a fundamental misdescription of how justified actions actually work in moral experience. The remainder persists. It has moral weight. And a person who felt none — who accepted the utilitarian verdict and experienced no residue — would not be admirably rational. She would be morally impaired.

The AI transition is generating moral remainder at a scale no individual conscience can absorb and no institutional framework is designed to address. The transition is, in aggregate, justified — the expansion of capability, the democratization of building, the collapse of barriers between imagination and realization are genuine goods of enormous magnitude. The justification stands. And the remainder accumulates.

Consider the specific forms of loss that survive the justification.

The knowledge that lives in practice. The embodied intuition of the master practitioner — the feel for a system's health that develops through years of immersion, the understanding deposited by thousands of hours of patient engagement with resistant materials. Williams would have recognized this as what philosophers call knowledge-how, distinguished from knowledge-that by its irreducibility to propositional form. The calligrapher's knowledge of letterforms lives not in statements about angles and pressures but in the coordination of muscle and intention shaped by practice. The senior engineer's feel for a codebase is analogous: not a set of facts about the system but a bodily attunement to its character, built by the specific friction of decades of hands-on work.

AI does not destroy this knowledge in existing practitioners. The senior engineer keeps her intuition. But it eliminates the conditions under which equivalent intuition can develop in new practitioners, because the practice that deposited the knowledge has been transformed. The junior engineer who works with AI from the outset develops different capabilities — the capacity to evaluate, to prompt, to direct — that may prove valuable but will be different in kind, because they were deposited by a different practice. Something is lost that cannot be recovered. The justification for the transformation — productivity, democratization, the ascending friction that may produce equally valuable understanding at a higher level — does not dissolve the loss. It coexists with it.

The temporal quality of experience shaped by the distance between idea and artifact. When the imagination-to-artifact ratio was high, the journey from idea to realization involved duration — a temporal space in which the idea was tested, refined, abandoned, recovered, and transformed by the resistance of the medium. This was not merely delay. It was a medium in which certain kinds of thinking occurred: thinking that required time, that could not be hurried, that produced understanding precisely because it was slow. The compression of this distance — to the point where, as The Orange Pill documents, a person with an idea and the ability to describe it can produce a working prototype in hours — eliminates both the unproductive friction of pure delay and the productive friction of forced reconsideration. The former is worth eliminating. The latter's elimination is a moral remainder.

Segal's own confession is diagnostic. Over the Atlantic, at an hour he could not remember, he caught himself grinding through a manuscript "not because the work demanded it but because he could not stop." The temporal buffer that once separated idea from execution, and that in the separation allowed for reflection and doubt, had been compressed to nothing. The compression liberated him to build at extraordinary speed. It also eliminated the natural pause points at which a person might ask whether the building served her life or consumed it. The loss of those pause points is not visible in the output metrics. It is visible only in the character of the builder — in the quality of the relationship between the person and the practice — and it is a remainder that the justification of compression does not eliminate.

The community constituted by shared practice. When The Orange Pill describes twenty engineers discovering that each could accomplish what all of them together previously required, the discovery was liberating. It was also atomizing. The team constituted by interdependence — by the fact that each member needed the others, that the social fabric of the workplace was woven from mutual dependence — became a collection of individuals, each capable of operating independently. Independence is a gain. The loss of interdependence is a loss. The community formed around shared practice — the relationships, the shared vocabulary, the mutual recognition that comes from navigating difficulty together — is eroded when collaboration is no longer structurally necessary. The individuals may be more productive. The community is thinner. And the thinness is a moral remainder.

Williams would have been particularly attentive to how these remainders interact with the machine's relationship to what he called, in his last major work, truthfulness. A truthful person possesses accuracy — the disposition to care about getting things right — and sincerity — the disposition to say what she actually believes. The machine possesses neither. It produces outputs with the surface features of truthful assertion — fluent, confident, contextually appropriate — without the underlying dispositions. Williams observed that beliefs that "change too often for internal reasons" are "not beliefs but rather something like propositional moods." The observation reads as an almost uncanny description of LLM behavior: systems that appear to hold positions but whose inconsistency across conversations reveals the positions as something less than beliefs — propositional moods that shift with the conversational current.

The remainder produced by the machine's pseudo-truthfulness is epistemological. When practitioners work with systems that produce plausible outputs without possessing the dispositions that constitute truthfulness, the burden of verification falls entirely on the human. But the burden is heavier than it looks, because the machine's fluency creates a presumption of reliability that actively undermines the vigilance required to catch its errors. The practitioner must be more careful precisely when the tool makes carefulness feel less necessary. The effort of maintaining truthfulness — of checking, doubting, insisting on the distinction between plausible and true — is a cognitive tax that no productivity metric registers and that the smooth aesthetic of AI output constantly works to erode. The erosion of the habits of truthfulness is a moral remainder of extraordinary significance, because truthfulness is the social practice that sustains the trust upon which cooperative endeavor depends.

The accumulation of these remainders — embodied knowledge lost, temporal space compressed, community thinned, habits of truthfulness eroded — constitutes what might be called the moral debt of the AI transition. Not a debt owed to a particular creditor or dischargeable through payment. A weight — the weight of what has been lost in the course of a transition that is, on balance, justified. The weight the justification does not eliminate and that the discourse has not learned to acknowledge.

Williams would have insisted on the acknowledgment. Not as a therapeutic exercise — he had little patience for sentimentality dressed as moral seriousness — but as a condition for adequate response. A builder aware of the moral remainder builds differently from one who sees only gains. The awareness does not prevent the building. It shapes it. It introduces considerations that pure efficiency ignores. It produces structures designed to accommodate complexity rather than optimize for a single metric at the expense of everything the metric cannot capture.

The calligrapher still exists, centuries after the printing press. But she exists as an artist, not a scribe. Her practice has been transformed from something the world needs to something the world admires without needing. The transformation preserves the practice while destroying its social function, and the loss of social function is itself a remainder. The AI transition threatens to produce the same transformation for cognitive practices that are currently socially functional — programming, analysis, design, legal drafting. When the machine performs the function, the function persists, but the practitioner's relationship to the community is transformed. She becomes optional rather than necessary, admired rather than required.

The grief is not about unemployment, though unemployment is a serious concern. The grief is about the loss of being needed — of having a social function connecting personal practice to communal wellbeing. That specific quality of purpose, arising from the intersection of personal skill and communal need, is the thing the AI transition most endangers and the thing the productivity discourse is least equipped to see. It is a remainder. It survives the justification. And a discourse that cannot acknowledge it is a discourse that has misdescribed the moral situation with which it purports to deal.

---

Chapter 8: Truth, Truthfulness, and the Problem of Honest Machines

Williams's last major book, Truth and Truthfulness, made an argument directly relevant to the AI moment in ways he could not have anticipated. The argument, compressed: truth and truthfulness are different things, and the difference matters enormously, and the conflation of the two produces characteristic and dangerous errors.

Truth is a property of statements. A statement is true if it corresponds to the way things are. Truth is objective, mind-independent, indifferent to the preferences of the utterer. Truthfulness is a property of persons. A person is truthful if she possesses two dispositions: accuracy — the care to get things right, to check beliefs against evidence, to correct errors — and sincerity — the commitment to say what she actually believes, to refrain from deliberate deception. These are virtues of character, not properties of statements. A person can be truthful and nevertheless say false things, because accuracy does not guarantee correctness. A person can say true things without being truthful, because the truth may be accidental or strategically convenient.

The distinction illuminates the AI moment with uncomfortable precision. The large language model produces outputs that are often true — factually accurate, passing standard tests for correctness — without being truthful. The machine is not accurate, because accuracy is a disposition requiring concern for getting things right, and the machine does not have concerns. The machine is not sincere, because sincerity requires having beliefs and being motivated to express them honestly, and the machine does not have beliefs. The machine produces true statements as a byproduct of pattern-matching rather than as the product of the dispositions that constitute truthful character. The truth is, in a precise sense, accidental — the moral luck of a system that happens to produce correct outputs because its training data was comprehensive and its architecture sophisticated, not because it was trying to get things right.

Williams observed in Truth and Truthfulness that beliefs which "change too often for internal reasons" are "not beliefs but rather something like propositional moods." The observation, made in 2002, reads as prophecy. LLMs display exactly this pattern: outputs that resemble beliefs in their confident assertion but that shift with conversational context in ways genuine beliefs do not. A belief, for Williams, is partly constituted by its stability — by the disposition to maintain it across contexts unless new evidence provides reason to revise. A "propositional mood" is something that looks like a belief from the outside but lacks the internal structure. The distinction is directly applicable to LLM outputs, which assert propositions with uniform confidence regardless of whether the model's "grounds" for the assertion are strong or weak, and which can be led to assert contradictory propositions across consecutive prompts without any internal mechanism registering the contradiction.

The epistemological consequence is significant. If the machine's truth is accidental — produced by pattern-matching rather than by truthfulness — then the machine's outputs are unreliable in a specific way. Not because they are usually wrong; they are often right. But because the mechanism producing the truth has no internal check against error. The truthful person, making a mistake, can recognize it and correct it, because her disposition toward accuracy includes the metacognitive capacity to evaluate her own beliefs. The machine has no such capacity. It produces outputs with equal confidence whether correct or incorrect, because confidence is a function of pattern strength, not justification.

The social consequence is equally significant. Truthfulness is not merely an epistemic virtue. It is a social practice sustaining the trust upon which cooperative endeavor depends. When a truthful person asserts something, she makes a commitment — to the accuracy of what she says and the sincerity with which she says it. The commitment is what makes the assertion trustworthy. The machine makes no such commitment. It produces outputs with the grammatical and rhetorical form of assertions but without the normative structure that gives assertions their communicative force. These are, in Williams's terms, pseudo-assertions — linguistic acts resembling assertions without possessing the commitment that makes assertions function in social life.

The Orange Pill documents the practical consequences of pseudo-assertion with the precision of a builder who has encountered them directly. Segal describes Claude producing a passage connecting Csikszentmihalyi's flow theory with a concept attributed to Deleuze — "elegant, persuasive, and wrong." The philosophical reference was incorrect in a way obvious to anyone who had read Deleuze, but the passage worked rhetorically. It sounded like insight. The author almost kept it. "Claude's most dangerous failure mode is exactly this: confident wrongness dressed in good prose. The smoother the output, the harder it is to catch the seam where the idea breaks."

This episode illustrates the specific danger that Williams's framework identifies. The machine's pseudo-assertions borrow the social trust that truthful assertions have earned. When a fluent, confident, contextually appropriate output appears on the screen, the reader extends to it a presumption of reliability appropriate to truthful communication — a presumption earned by centuries of human practice in which fluency and confidence were, imperfectly but reliably, correlated with the speaker's possession of the dispositions of accuracy and sincerity. The machine exploits this correlation without possessing the dispositions. The exploitation is not intentional — the machine is not deceptive in the way a liar is deceptive, because lying requires the intention to deceive, which requires beliefs about what is true. The machine has no such beliefs. The exploitation is structural: a system trained to produce outputs that look like truthful assertion, in a culture that has learned to trust outputs that look like truthful assertion, producing a systematic erosion of the epistemic norms upon which trust depends.

Williams, notably, was more sympathetic to AI's technical prospects than some of his philosophical contemporaries. Reviewing Hubert Dreyfus's What Computers Can't Do in 1973, he argued that Dreyfus's catalogue of AI's failures was "of decreasing relevance to assessing prospects now," and suggested that even an analogue intelligent system could in principle be modeled digitally. Dreyfus was offended, responding that Williams displayed "the same gullible belief in computer progress." But Williams's position was characteristically more nuanced than Dreyfus perceived. Williams was not arguing that AI would succeed. He was arguing that the philosophical objections to AI — the claim that intelligence is in principle non-computational — were weaker than the empirical objections. The distinction matters: Williams took the technical question seriously while maintaining deep skepticism about whether computational success would constitute anything recognizable as understanding.

In his 1987 review of Marvin Minsky's The Society of Mind, Williams engaged with precisely this question. Could "an arrangement of elements that do not understand anything" produce "a system that does understand something"? Williams resisted the easy answer in either direction. He noted that calling the question "Catch-22 style" was too quick, "since what is at issue in this research, in part, is precisely whether intelligent systems can be compounded of unintelligent parts." But he also acknowledged the empirical skeptics who observed that "the rate of progress achieved after a great deal of labor provides not much reason to think that it is going to" succeed — "a weighty objection." Williams was interested in what the machine's behavior would mean, philosophically, if it succeeded — not as a prediction but as a conceptual question about the relationship between performance and understanding.

The practical consequence for the AI moment is that the burden of truthfulness falls entirely on the user. The machine cannot supply accuracy or sincerity. The user must supply what the machine cannot: the disposition to check outputs against evidence, to evaluate connections and inferences with critical scrutiny, to maintain the distinction between plausible and true that the machine's fluency constantly works to collapse. This is a new kind of cognitive labor, and it is more demanding than the labor it replaces. Producing even mediocre code was hard in the old way. Evaluating whether machine-generated code is correct, secure, and appropriate to the specific context of deployment is hard in a new way — and the difficulty is compounded by the machine's surface competence, which creates a presumption of adequacy that must be actively overcome.

Segal discovered this in writing his book. "The discipline of the collaboration — the thing that separates it from outsourcing — is the willingness to reject Claude's output when it sounds better than it thinks." The formulation is precise. The output sounds better — possesses the surface features of quality. The question is whether it thinks better — possesses the underlying substance. The distinction between sounding and thinking is the distinction between the machine's pseudo-truthfulness and the author's genuine truthfulness, and maintaining the distinction requires effort, vigilance, and intellectual humility — the willingness to be uncertain, to doubt, to insist on evidence when the smooth surface of the prose encourages acceptance.

Williams's framework suggests that the most important virtue for the age of AI is precisely this intellectual humility — the disposition to recognize that plausibility is not truth, confidence is not justification, fluency is not understanding. The machine rewards confidence by producing confident outputs. It rewards speed by producing rapid outputs. It rewards smoothness by producing polished outputs. It does not reward the slow, uncertain, uncomfortable work of determining whether the confident, rapid, polished output is actually correct. The structures needed for the AI transition must include protections for truthfulness — not truth, which the machine can sometimes produce, but truthfulness, which only persons can possess. Truthfulness is cultivated through habits of care, scrutiny, and the intellectual discomfort that comes from uncertainty honestly held. The machine will not cultivate these habits. It will, if permitted, erode them by making their exercise feel unnecessary.

In a 2025 paper applying Williams's thought to AI alignment, scholars argued that "practical deliberation from the point of view of a quasi-omniscient AI cannot be a substitute" for first-personal deliberation, because the authority of practical reasoning derives from the specific standpoint of the agent conducting it. The same is true of truthfulness. The machine can produce outputs that happen to be true. It cannot produce the specific quality of care about truth that constitutes truthfulness, because truthfulness is not a property of outputs but of the relationship between an agent and her assertions — a relationship constituted by the dispositions of accuracy and sincerity that the machine does not and cannot possess.

The age of AI produces more truth and makes truthfulness harder. That paradox is not resolvable by better technology. It is resolvable only by the cultivation of the human dispositions that the technology cannot replicate and that the technology's efficiency constantly works to undermine.

---

Chapter 9: Practical Necessity and the Honest Response

Williams was a diagnostician, not a prescriber. His career was defined by the conviction that moral philosophy's greatest service was seeing clearly what was actually the case, and that the impulse to move immediately from diagnosis to prescription — from "here is what is happening" to "here is what you should do" — was itself a symptom of the disease he spent his life identifying. The morality system demands solutions. Williams offered perception.

This presents a difficulty for anyone attempting to draw practical implications from his work, and the difficulty should be stated honestly rather than concealed. Williams resisted positive moral programs because he believed any such program would reproduce the pathologies of the systems it replaced — would congeal into rules, harden into a new orthodoxy, and eventually demand the same purity of commitment he had spent decades arguing moral life does not support. The attempt to say what Williams's framework recommends for the AI transition is therefore an attempt he would have regarded with suspicion, and the suspicion would have been philosophically well-grounded.

Nevertheless, the AI transition demands response. Practitioners are making decisions now. Institutions are deploying technology now. Parents are answering children's questions — or failing to answer them — now. The philosophical luxury of pure diagnosis is not available to people whose ground projects are being reorganized in real time. Williams understood this. He was never a quietist. His resistance to prescriptive moral theory was not a resistance to action but a resistance to the idea that action could be derived from theory without remainder. The honest response to the AI transition is action taken with full awareness that the action is not derived from a system, that it will produce loss as well as gain, and that no framework can guarantee its correctness.

What Williams's diagnostic work provides is not a set of recommendations but a set of perceptual capacities — ways of seeing the AI transition that the dominant frameworks systematically obscure. The practical value of these capacities is not that they tell you what to do. It is that they enable you to see what you are doing, and what it costs, and what it serves, with a clarity that the morality system's demand for clean verdicts actively prevents.

The first perceptual capacity is the recognition that the transition involves genuine moral conflict — not a problem to be solved but a predicament to be inhabited. The value of democratized capability and the value of depth are genuinely incompatible in the specific sense that the technology serving the first undermines conditions under which the second develops. No principle resolves this. Williams argued that a "deep error" lies in supposing "that all goods, all virtues, all ideals are compatible." The practical consequence of recognizing the incompatibility is not paralysis. It is the abandonment of the search for a response that satisfies all legitimate demands simultaneously, and the acceptance that any response will sacrifice something that ought not to be sacrificed. This acceptance is the precondition for building with moral seriousness rather than moral naivety.

The second perceptual capacity is attention to moral remainder. The transition is justified. The remainder persists. Embodied knowledge displaced, temporal space compressed, communities thinned, habits of truthfulness eroded — these losses survive the justification and accumulate as a moral debt that the discourse has not learned to acknowledge. Williams would have insisted on the acknowledgment not as therapy but as a condition for adequate response. A builder aware of the remainder builds differently — not necessarily more slowly, but with a different quality of attention to what the building demands of the persons who inhabit it. Institutional structures designed with awareness of moral remainder will look different from structures designed by those who see only aggregate gain. They will include protections for the formation of new ground projects — temporal space for practitioners to discover what matters to them in the transformed landscape, rather than demanding immediate demonstration of value in terms the old landscape defined. They will include recognition that the distribution of gains and losses tracks moral luck rather than desert, and that the morally lucky owe something structural to the morally unlucky — not advice, but institutional accommodation of the arbitrariness.

The third perceptual capacity is the development and deployment of thick concepts adequate to the specific texture of the transition. The thin vocabulary of "good" and "bad," "disruptive" and "transformative" cannot guide response, because it cannot distinguish between situations that require fundamentally different treatment. Principled resistance and terrified flight look identical through thin categories. Courageous engagement and reckless abandonment of craft collapse into the same thin verdict. The practical work of responding to the transition requires vocabulary that can make these distinctions — vocabulary developed not by philosophers issuing prescriptions but by communities of practice attending to their own experience with sufficient care to develop evaluative language adequate to what they are undergoing.

The fourth perceptual capacity — and the one Williams would have considered most urgent — is the maintenance of truthfulness in an environment saturated with pseudo-assertion. The machine produces outputs with the surface features of truthful communication without the underlying dispositions. The burden of truthfulness falls entirely on the human, and the burden is heavier than it appears because the machine's fluency actively undermines the vigilance required to bear it. The practical response is the cultivation of habits — institutional habits, pedagogical habits, personal habits — that protect accuracy and sincerity against the constant pressure of a technology that rewards neither. What this looks like in practice: structured verification protocols in organizations deploying AI, pedagogical frameworks that teach the distinction between plausible and true, cultural norms that treat the acceptance of machine output without scrutiny as a professional failing rather than an efficiency gain.

Williams would not have endorsed any of these as the answer. He would have regarded the impulse to systematize them — to produce a framework, a methodology, a set of best practices — as a recurrence of the morality system's demand for total theory. What he would have endorsed is the stance from which they emerge: the stance of a person who sees the moral features of the situation in their specificity, who responds with whatever resources are adequate, and who accepts that the response will be incomplete.

The AI transition will not be navigated by a theory. It will be navigated by persons exercising judgment — perceiving the relevant features of particular situations, weighing considerations that resist quantification, acting in the knowledge that the action will produce both gain and loss, and taking responsibility for both without pretending that one cancels the other. Williams argued throughout his career that this kind of judgment is the highest moral capacity, and that the morality system's attempt to replace it with rules and calculations is not an enhancement but a degradation. The AI transition, by its speed, scope, and genuine moral complexity, vindicates this argument with a force that no merely philosophical example could match.

The honest response to the transition is not a program. It is a practice — the practice of attending to what is actually happening, to specific people in specific situations, with the full weight of moral perception that the specificity demands. The practice will never be complete. The attending will never be finished. The losses will never be fully acknowledged, the remainders never fully addressed, the conflicts never cleanly resolved. This incompleteness is not a deficiency of the response. It is a feature of the reality to which the response is addressed.

Williams concluded his famous critique of utilitarianism with the observation that "the day cannot be too far off in which we shall be able to hear no more of it." The prediction was wrong — utilitarianism flourishes, particularly in the technology industry, where it provides the moral vocabulary for optimization at scale. But Williams's deeper point survives: that the dominant moral framework's confidence in its own completeness is the most dangerous thing about it, because the confidence prevents the framework from seeing what it excludes. The AI transition excludes nothing from the moral field. It touches every dimension of human practice, every form of value, every category of loss and gain. A response adequate to this totality cannot be generated by a framework that handles only what it can measure.

It can only be generated by persons willing to see what the frameworks miss — and to build, imperfectly and provisionally and with full awareness of the costs, in the space the frameworks cannot reach.

---

Epilogue

The argument I cannot dismiss is the one about the driver.

Williams's lorry driver — the man who kills a child through no fault of his own, who was driving carefully, who could not have avoided the accident — feels something that a bystander reading about the accident does not feel. The feeling is not guilt in any conventional sense. He did nothing wrong. It is the weight of his own agency in an outcome he did not choose and could not prevent. Williams called it agent-regret and argued that a person who felt nothing in that situation would not be admirably rational. He would be morally deficient.

I think about this every time I open Claude.

Not because I believe I am killing anything. Because I recognize the structure. I am building with a tool that is genuinely extraordinary — the most powerful amplifier of human capability I have encountered in three decades at the frontier. And I am participating, through that building, in the transformation of conditions that produced the kind of depth I spent my career developing and admiring in others. I did not create the AI revolution. I could not have prevented it. My hands are on the wheel. The accident, if it is an accident, is structural and impersonal and no one's fault. And the weight is real.

What Williams gave me — what I did not have before working through his ideas with the care these chapters demanded — is the vocabulary for that weight. Not the weight itself; I had that already, described in The Orange Pill as the sensation of not knowing whether I was "watching something being born or something being buried." Williams supplied the precision. The weight has a name: moral remainder. It has a structure: values in genuine conflict, where serving one undermines the conditions for the other, and no principle resolves the incompatibility. It has a moral status: not weakness, not nostalgia, not the sentiment of a person who cannot adapt. Evidence of honest accounting.

The idea that haunts me most is moral luck. That the engineers who thrive and the engineers who retreat may possess identical moral qualities — identical dedication, identical intelligence, identical professional commitment — and that the difference in their outcomes is substantially a function of which skills happened to align with the technology's capabilities at this specific moment. The meritocratic story I grew up in, the story the technology industry tells itself — that the people who succeed deserve to succeed, that outcomes track effort and talent — is not wrong in every case. But it is wrong often enough, and wrong systematically enough, that a society built on its assumptions will fail the people the assumptions do not describe.

I think about the twelve-year-old's question. "What am I for?" Williams would not have answered it the way I answered it in The Orange Pill — with the conviction that she is for the questions, for the wondering, for the caring that no machine possesses. He would have noted, with characteristic dryness, that the answer is itself a form of the morality system's demand for clean resolution — a reassurance where the situation calls for honesty about irreducible uncertainty. He would have been right. The honest answer to the child's question is that the question does not have an answer of the kind she is seeking, and that the absence of an answer is not a failure but a feature of the kind of creature she is: a creature whose significance is not given by a principle but discovered through the living of a life, under conditions she did not choose, with stakes she cannot fully comprehend.

That is a harder answer than the one I gave. It is also, I think, a truer one. And Williams's framework — the insistence that moral life is messier than moral philosophy admits, that genuine values conflict without resolution, that the losses survive the justifications, that the weight of agency persists even when the agency is blameless — is the framework that makes the truer answer possible.

I am still building. I am still in the river, using the tools, shipping the products, training the teams. The orange pill does not come with instructions for what to do after you have taken it. Williams would have said that the absence of instructions is precisely the point. The morality system promises instructions. Life provides predicaments. The response to predicaments is not a program but a practice — the practice of seeing what is happening, to real people, in real specificity, with whatever moral clarity can be achieved without the false comfort of completeness.

The weight does not go away. The building does not stop. Both are true at the same time. Williams spent his career arguing that the capacity to hold both truths simultaneously — without collapsing into the triumphalism that sees only the building or the grief that sees only the weight — is not a failure of moral reasoning but its highest achievement.

I am trying to hold both. The trying is the work.

-- Edo Segal

The AI revolution is producing winners and losers at unprecedented speed — and the difference between them has almost nothing to do with talent, effort, or moral quality. It has to do with luck. Bernard Williams spent his career dismantling the comfortable fiction that justified actions leave no moral residue. His philosophy insists that genuine values conflict without resolution, that the losses survive the justifications, and that a person who feels no weight from the destruction their blameless choices produce is not admirably rational but morally impaired. Applied to the AI transition, Williams's framework exposes what the productivity dashboards cannot measure: the embodied knowledge displaced, the communities thinned, the habits of truthfulness eroded by machines that produce confident assertions without possessing the disposition to care whether those assertions are true. This book brings Williams's unsparing moral perception into direct contact with the most consequential technological transformation of our time — not to stop the building, but to ensure the builders see what it costs.

Bernard Williams
“my life, my action, is quite irreducibly mine”
— Bernard Williams
