By Edo Segal
The passage I almost did not delete is the one that brought me to Glover.
I have described it elsewhere in *The Orange Pill* — the moment Claude produced a paragraph about democratization that was eloquent, well-structured, and hit every note I wanted to hit. I nearly kept it. Then I realized I could not tell whether I believed the argument or just liked how it sounded. I deleted it, went to a coffee shop, wrote by hand for two hours until I found the version that was mine.
That episode troubled me for weeks. Not because of the authorship question — I had already made my peace with the collaboration. What troubled me was the ease of the acceptance. How close I came to letting someone else's fluency substitute for my own conviction. How natural it felt. How little resistance I encountered inside myself before I caught the drift.
I started asking a different question. Not "What can this tool do?" but "What is this tool doing to me?" Not to my productivity. To the connection between what I say and what I actually think. To the kind of person I am becoming through a thousand small decisions about whether to examine or approve, whether to struggle or accept, whether to sit with discomfort or let the machine smooth it away.
That question led me to Jonathan Glover.
Glover spent four decades studying how ordinary people — not monsters, not sociopaths, people who loved their families and listened to music — came to participate in the worst things human beings have done to each other. His answer was not a theory of evil. It was a map of erosion. The specific, identifiable, replicable mechanisms through which moral restraints loosen. Distance from consequences. Diffusion of responsibility. The incremental slide where each small concession makes the next one easier. The institutional architectures that filter out the faces of the people affected by your work and replace them with metrics.
None of this requires atrocity to be relevant. It requires only a tool that amplifies whatever you feed it, at a speed and scale that compresses the space where conscience operates. It requires only builders who are moving fast enough that the question "Should this exist?" gets answered by the fact that it already does.
Glover does not tell you what to build. He tells you what happens to the person doing the building when the friction that once created moments of moral encounter is engineered away. That diagnosis matters now more than it ever has. The amplifier does not care what signal you feed it. Glover helps you understand what determines the quality of that signal — and what erodes it so gradually you never notice the change.
— Edo Segal ^ Opus 4.6
Jonathan Glover (b. 1941) is a British moral philosopher whose career has been devoted to the intersection of ethics, psychology, and the institutional conditions that enable or prevent human cruelty. Educated at Oxford, he spent decades as a fellow and tutor at New College before holding the chair in Ethics at King's College London. His landmark work *Causing Death and Saving Lives* (1977) reshaped debates in applied ethics around questions of life, death, and moral responsibility. His masterwork, *Humanity: A Moral History of the Twentieth Century* (1999), examined the psychological and institutional mechanisms — distance, diffusion of responsibility, the erosion of moral identity, and the suppression of what he termed "the human response" — that allowed ordinary people to participate in genocide, torture, and mass violence. Across works including *What Sort of People Should There Be?* (1984) and *Alien Landscapes?* (2014), Glover has consistently argued that moral identity is not a fixed possession but an ongoing construction, maintained through the daily exercise of sympathy, self-knowledge, and the willingness to see other persons as persons. His influence spans moral philosophy, bioethics, political theory, and the emerging field of technology ethics.
In 1941, the year Jonathan Glover was born, the bureaucrats of the Third Reich were refining the administrative machinery that would make the Holocaust possible. They were not, most of them, sadists. They were builders. They built systems of classification, transport logistics, resource allocation. They optimized processes. They measured outputs. They filed reports that tracked efficiency gains. The moral catastrophe they administered was not produced by an excess of evil but by a deficit of a specific human capacity — the capacity to see the persons affected by one's work as persons, and to feel the weight of that recognition in one's body, in one's choices, in the kind of self one was constructing through the act of building.
Glover spent the better part of four decades studying this deficit. His masterwork, Humanity: A Moral History of the Twentieth Century, is not a history of atrocity in the conventional sense. It is a diagnostic manual. It asks not what happened — we know what happened — but what psychological and institutional conditions allowed ordinary people, people who loved their children and fed their dogs and listened to Beethoven, to participate in the systematic destruction of other human beings. The answer Glover arrived at was not a theory of evil. It was a taxonomy of erosion: the specific, identifiable, replicable mechanisms through which the moral restraints that normally prevent cruelty loosened, contracted, and finally broke.
What makes Glover's work uniquely relevant to the artificial intelligence moment described in The Orange Pill is not his catalog of atrocities. Other historians compiled those. It is his insistence on a concept that sounds deceptively simple but carries enormous analytical weight: moral identity.
Moral identity, in Glover's framework, is not a fixed attribute. A person does not possess moral identity the way one possesses a characteristic or a credential. Moral identity is a process — the ongoing construction of a self through choices that express and reinforce the kind of person one is becoming. The guard who refuses to beat a prisoner is not simply obeying a rule. He is constructing a self — the kind of person who does not beat prisoners, who will not become the kind of person who beats prisoners, who experiences the prospect of becoming that kind of person as a violation of something essential. The guard who complies is also constructing a self, though he may not recognize it. Each act of compliance deposits a thin layer of moral sediment. The layers accumulate. The self that emerges from a thousand small compliances is a different self from the one that entered the first.
Glover documented this process with clinical precision across dozens of historical cases. The perpetrators of the Rwandan genocide did not wake up one morning as murderers. They were moved, step by step, through a sequence of increasingly extreme actions, each one only marginally worse than the last, each one made possible by the moral sediment deposited by the previous one. The first step was language — the categorization of Tutsis as "cockroaches." Language preceded violence because language altered moral identity. The person who has learned to categorize another human being as vermin has already begun constructing a self for whom violence against that category is thinkable.
This is the framework that must be brought to bear on the question The Orange Pill poses in its final chapter: "Are you worth amplifying?"
The question sounds motivational. In Glover's hands, it becomes diagnostic. The amplifier — artificial intelligence as Segal describes it — does not create the signal. It carries whatever signal is fed into it, at a scale and speed that no previous technology permitted. The quality of the signal is determined by the moral identity of the person who produces it. Their capacity for care. Their willingness to take responsibility for consequences. Their ability to see the people downstream of their work as persons rather than abstractions, data points, or engagement statistics.
Consider two engineers working with Claude Code on a Tuesday afternoon. Both are building recommendation systems. Both are competent. Both are using the same tool, generating code at the same speed, deploying with the same efficiency.
The first engineer has spent years cultivating a specific relationship with her work. She thinks about the people who will use the system she builds. Not abstractly — she has spoken with them, watched them interact with earlier versions, noticed the thirteen-year-old who could not stop scrolling, the elderly man who seemed confused by the interface, the mother who complained that her daughter had become unreachable. These encounters have deposited layers of moral sediment. She carries them into her work the way a surgeon carries the memory of the patient whose artery she nicked in her third year of residency — not as guilt, but as weight. The weight shapes her choices. She builds engagement features that include natural stopping points. She designs the algorithm to surface variety rather than reinforcing existing preferences. She accepts that these choices will reduce certain metrics and has made her peace with the professional cost.
The second engineer has not cultivated this relationship. Not because he is wicked. Because the organizational culture he inhabits does not reward it, does not model it, does not create the conditions in which it might develop. He has never spoken with a user. He has seen dashboards. He has seen engagement curves. He has been praised for making curves go up. The persons affected by his work exist for him as numerical abstractions, and the distance between his intention and its human consequences is mediated entirely by metrics. He builds what the metrics reward. The system he produces is effective — engagement increases, time-on-platform grows, the quarterly report glows. The thirteen-year-old scrolls deeper into a feed calibrated to exploit the neurochemistry of adolescent loneliness.
The same tool. The same speed. The same technical capability. Radically different moral outputs.
Glover's framework explains why. The first engineer has constructed a moral identity through the accumulation of choices that express care for the persons affected by her work. The second has constructed a moral identity through the accumulation of choices that prioritize metrics over persons — not through a single dramatic decision but through the slow deposition of organizational habits, professional norms, and the invisible moral assumptions that Segal, borrowing from Glover's logic without naming it, calls the fishbowl.
AI amplifies both identities with identical power. The tool does not discriminate. It does not evaluate the moral quality of the signal it carries. It amplifies the first engineer's care, carrying it to millions of users whose experience is shaped by her attention to their humanity. And it amplifies the second engineer's carelessness with equal fidelity, carrying it to millions of users whose attention is captured, whose autonomy is eroded, whose loneliness is exploited — all at a scale that the pre-AI world would have required a team of fifty to achieve.
This is what Glover's concept of moral identity means in the age of amplification. Every act of building is an act of moral self-creation, and AI has compressed the distance between self-creation and consequence to near-zero. The builder who once had months between design and deployment — months in which moral friction might produce a pause, a question, a revision — now has hours. The speed does not change the moral logic. It changes the stakes. A moral identity built on carelessness, which in the pre-AI world might have produced harm at the scale of a single product or a single company, now produces harm at the scale of a platform touching millions of lives.
Glover would note, as he noted in his study of the twentieth century's worst chapters, that the perpetrators of the largest-scale harms were rarely aware of themselves as perpetrators. They experienced themselves as professionals doing their jobs, optimizing the systems they had been given, following the incentive structures that rewarded them. The camp administrator who processed transport manifests experienced himself as an efficient bureaucrat. The engineer who optimizes engagement without considering the thirteen-year-old experiences himself as a competent professional. In both cases, the moral identity is constructed not through a moment of conscious evil but through the accumulation of small choices that, taken individually, seem entirely reasonable.
Glover called this the "incremental slide." Not a cliff but a slope. Each step looks manageable from inside. The landscape looks level. Only from a distance does the gradient become visible, and by then the person standing on the slope has been reshaped by it.
The incremental slide is precisely what The Orange Pill documents in its most confessional passage: Segal's admission that he built products he knew were addictive by design. He understood the engagement loops, the dopamine mechanics, the variable reward schedules. He was not ignorant. He was careless — in the specific Gloverian sense that his understanding was not accompanied by the moral weight that would have converted knowledge into restraint. He knew what the systems did. He did not feel what they did, because the distance between builder and user was mediated by metrics, by organizational structure, by the intoxication of building at the frontier.
Glover would recognize this confession immediately. It is the testimony of a person who constructed a moral identity through the accumulated choices of a professional culture that rewarded building and did not reward pausing to ask whether the thing being built deserved to exist. Segal's confession is valuable not because it is unusual but because it is common. It describes the default moral trajectory of every builder who operates inside an institutional culture that measures output without measuring consequence.
The question, then, is not whether AI will amplify moral identity. It already does. The question is what kind of moral identity the builders bring to the amplifier. And that question cannot be answered by the tool. It can only be answered by the person holding it — through the ongoing, effortful, often uncomfortable process of constructing a self that is worth amplifying.
Glover's prescription was never abstract. He did not issue moral commandments from a philosopher's remove. He studied the specific conditions under which moral identity holds and the specific conditions under which it erodes. The conditions that held were invariably personal — the moment when a perpetrator saw an individual face and could not continue, the encounter with a specific person that made the category dissolve into a human being. The conditions that eroded were invariably structural — the distance, the bureaucracy, the language, the metrics, the institutional architecture that replaced faces with numbers.
AI, as Segal describes it, can serve either set of conditions. It can compress the distance between builder and user by making it trivially easy to test a product with real people in real time, to hear their responses, to see their faces. Or it can expand the distance by making it possible to build and deploy without any human encounter at all — to produce at scale without ever confronting the scale's consequences.
The tool does not choose. The builder chooses. And the choice is an act of moral self-construction that reverberates through every amplified output.
Glover would insist — as he insisted throughout his career, with the quiet, devastating precision that made his work impossible to dismiss — that the choice is not made once. Moral identity is not achieved and then maintained automatically, the way a credential is earned and then carried. It is maintained through continuous attention, continuous friction, continuous willingness to ask whether the self you are constructing through today's choices is the self you want to inhabit tomorrow.
In the age of the amplifier, tomorrow arrives faster than it ever has. The self you are building today will be amplified by morning. The question of whether that self is worth amplifying is not rhetorical. It is, in the most precise sense Glover gave the term, a matter of life and death — not always literally, but always morally. The moral death that Glover diagnosed in his study of the twentieth century was not the death of the body. It was the death of the capacity to see another person as a person and to feel the weight of that recognition in one's choices. That capacity is what moral identity protects. And that capacity is what the age of amplification places under unprecedented pressure.
Glover returned, across all his major works, to a single observation that anchored his moral philosophy more firmly than any abstract principle: the moments when atrocity was resisted were almost never moments of philosophical deliberation. They were moments of involuntary recognition. A guard at a camp looked into a prisoner's eyes and could not continue. A soldier ordered to fire saw a specific face — not a category, not an abstraction, but a face — and his finger froze on the trigger. A bureaucrat processing deportation orders encountered a name he recognized from his own neighborhood and felt something shift in his chest.
Glover called this the human response. Not a theory. Not an ethical framework. Something prior to both — the involuntary, visceral recognition of another person's humanity that makes cruelty psychologically difficult. The human response is what happens before deliberation, before the weighing of consequences, before the application of moral principles. It is the catch in the throat. The flinch. The sudden, unbidden awareness that the thing you are about to do, or the thing the system you serve is about to do, will land on a person who is as real as you are.
The critical word is involuntary. The human response is not summoned by willpower or produced by moral education, though education can strengthen the conditions under which it occurs. It arises when the psychological distance between the agent and the affected person is small enough for recognition to operate. Distance kills it. Abstraction kills it. The categorization of persons into groups — the linguistic machinery that converts individuals into statistics — kills it. Any mechanism that interposes a conceptual or physical barrier between the agent and the face of the person affected by the agent's actions suppresses the response that would otherwise make harm difficult.
This is the lens through which Glover's work illuminates the AI moment with a precision that no other ethical framework quite achieves.
Every technological mediation in the history of human conflict and human production has altered the distance between agent and affected person. The longbow increased the distance between the archer and the person the arrow struck. The telegraph increased the distance between the general who issued the order and the soldiers who carried it out. The factory increased the distance between the owner who set the wages and the child whose fingers were caught in the loom. In each case, the increase in distance produced a corresponding decrease in the human response. The archer who kills at two hundred yards does not see the face. The general who telegraphs the order does not hear the screaming. The factory owner who reads the production report does not smell the blood.
Glover traced this pattern through the industrialization of warfare in the twentieth century. The bomber pilot at twenty thousand feet does not experience the human response because the distance has exceeded the threshold at which recognition operates. The bureaucrat who processes deportation manifests does not experience it because the persons affected have been converted into numbers on a form. The distance is not always physical. It can be conceptual — the use of euphemism ("special treatment," "enhanced interrogation"), the replacement of names with categories, any linguistic practice that converts a person into something that does not trigger the involuntary recognition of shared humanity.
Artificial intelligence introduces a new species of distance. Not physical. Not exactly conceptual. Functional. The distance between intention and consequence mediated by a tool that handles the implementation.
Before AI, the builder of a software product encountered the consequences of her design decisions through the friction of implementation. Writing the engagement loop by hand required hours of work. During those hours, the function existed in the builder's mind not as an abstraction but as a set of specific decisions — how long the delay before the next notification, how much dopamine-triggering unpredictability to introduce, how to calibrate the variable reward schedule to maximize the user's return. Each decision was a small encounter with the downstream effect. Not a face — the builder still did not see the user — but a granular awareness of what the system would do to a person who used it. The friction of implementation created opportunities for the human response to surface. A developer writing a notification system by hand might, in the third hour of coding the timing algorithm, feel a flicker of discomfort — an awareness that what she was building was designed to interrupt, to exploit, to capture attention in ways the user had not consented to. That flicker was the human response, operating at a distance, attenuated but present.
Claude Code removes this friction. The builder describes the engagement loop in natural language. Claude produces the implementation. The builder reviews the output, approves it, and deploys. The entire cycle takes minutes. The granular, hour-by-hour encounter with the specific decisions that determine how the system will affect a person has been compressed to a conversation. The flicker of discomfort that might have surfaced in the third hour of hand-coding does not surface in the third minute of reviewing Claude's output, because the third minute does not contain the same density of moral encounter.
The human response requires proximity. Proximity requires time. The compression of time compresses proximity. And the compression of proximity suppresses the response that would otherwise make carelessness more difficult.
This is not a speculative concern. The Berkeley study that The Orange Pill describes in its eleventh chapter documents the phenomenon empirically, though the researchers framed it in terms of work intensification rather than moral psychology. Workers using AI tools took on more tasks, expanded into adjacent domains, filled previously protected pauses with additional prompts. The researchers measured behavior — hours, tasks, boundaries crossed. What they did not measure, because it falls outside the methodology of organizational behavior research, is what happened to the workers' relationship with the persons affected by their expanded output.
But Glover's framework allows the inference. When work intensifies without a corresponding deepening of connection to the people affected by that work, the result is production without sympathy. More output, reaching more people, shaped by less moral encounter with any of them. The builder who once spent weeks on a feature, weeks that included conversations with users, testing sessions where she watched people struggle with her interface, moments of recognition that produced adjustments — that builder now ships in hours. The conversations did not happen. The testing sessions were truncated. The moments of recognition were not available because the timeline did not accommodate them.
Glover would identify a triple abstraction operating in AI-mediated work. The first layer is organizational — the distance between producer and consumer that has existed since the first factory separated the maker from the buyer. The second layer is digital — the distance introduced by screens, dashboards, and analytics platforms that convert persons into data. The third layer is generative — the distance introduced by a tool that converts intention into artifact without requiring the builder to traverse the implementation space where moral encounters might occur.
Each layer alone might be navigable. The organizational distance can be bridged by user research. The digital distance can be bridged by direct observation. But the generative distance introduces something new: the elimination of the implementation journey itself, the hours of hands-on building during which the builder inhabits the specific decisions that determine how the product will touch a person's life. When that journey is compressed to a prompt, the moral geography of building changes.
Glover studied the specific conditions under which the human response survived institutional pressure. The cases were always personal. A German officer in World War II who refused to execute partisans because a boy in the group was the same age as his son. A Rwandan woman who hid Tutsi neighbors because she had shared meals with them and could not convert the category into the face. An American helicopter pilot at My Lai who set his aircraft down between the killers and the villagers because he saw, from the air, specific human beings running.
In every case, the response was triggered by an encounter with the particular — a specific face, a specific name, a specific person whose individuality broke through the abstraction that the institutional machinery had imposed. The response failed when the particular was never encountered, when the system was designed so that the agent operated entirely within abstractions — categories, numbers, reports, dashboards.
The design implication for the AI moment is direct. The technology does not inherently suppress the human response. But the default mode of AI-assisted work — rapid iteration, compressed timelines, output measured in volume rather than impact — creates conditions under which the response is less likely to surface. The builder who never sees a user, never watches a person interact with her product, never hears the voice of the thirteen-year-old who could not stop scrolling, is operating in conditions structurally analogous to those Glover identified as enabling moral catastrophe. Not at the same scale. Not with the same consequences. But through the same mechanism: the suppression of the involuntary recognition that makes carelessness difficult.
Segal describes, in The Orange Pill's sixteenth chapter, an experience that illustrates this mechanism with uncomfortable precision. He confesses to having built addictive products, systems designed with full knowledge of their dopamine mechanics and variable reward schedules. He understood the mechanisms. He did not feel their weight, because the distance between builder and user was mediated by metrics rather than by the human response. The metrics said engagement was up. The metrics did not say that teenagers were losing sleep, that parents were finding their children unreachable, that the attention-capture system he had designed was functioning exactly as intended — and that its intended function was the exploitation of human vulnerability at scale.
Glover would read this confession as a case study in the failure of the human response under conditions of mediated distance. Segal is not describing wickedness. He is describing the specific psychological condition in which a person of ordinary decency builds something harmful because the systems mediating between his work and its consequences were designed to transmit metrics rather than faces. The organizational architecture filtered out the particular. What remained were curves and numbers, and curves and numbers do not trigger the catch in the throat that makes cruelty difficult.
AI does not create this condition. But it perfects it. By compressing the implementation journey, by accelerating the cycle from intention to deployment, by making it possible to produce at scale without traversing the moral geography of building, AI creates an environment in which the human response must be deliberately cultivated rather than incidentally encountered. The flicker that might have surfaced in the third hour of hand-coding must now be sought out — through user research, through direct observation, through the intentional construction of encounters with the specific persons affected by one's work.
This deliberateness is not impossible. But it is harder than the incidental encounters that friction once provided. The pre-AI world did not require the builder to be morally virtuous. It merely required her to be present — present in the implementation, present in the testing, present in the long, slow, friction-rich process of turning an idea into a product. The friction created the conditions for the human response without requiring anyone to have a theory about why the conditions mattered.
The post-AI world requires the theory. It requires the builder to understand, explicitly and with moral seriousness, that the compression of friction has compressed the conditions under which sympathy operates, and that maintaining those conditions now requires deliberate effort rather than incidental exposure. The human response does not maintain itself. It requires proximity. Proximity requires construction. And in an age when the default mode of work is rapid, mediated, and abstracted, the construction of proximity is itself a moral act — perhaps the most important moral act available to the person who builds.
Glover would add one more observation, characteristically quiet and characteristically devastating: the people who most need to cultivate the human response are precisely the people whose success is most threatened by its demands. The builder who pauses to encounter a user, who slows down to feel the weight of consequences, who allows the flicker of discomfort to alter a design decision — that builder ships later, optimizes less, produces metrics that are harder to celebrate in a quarterly review. The incentive structure punishes sympathy. And the incremental slide that Glover identified as the mechanism of moral erosion operates with the same implacable logic in a San Francisco AI lab as it did in the institutions he studied half a century ago: each small concession to speed over care, to metrics over persons, deposits a layer of moral sediment that makes the next concession easier and the self that emerges from the accumulation harder to recognize.
There is a scene in The Orange Pill that Jonathan Glover would recognize immediately — not as a dramatic moral failure, but as something more diagnostic and more disturbing. The author describes working late with Claude on a passage about democratization. The AI produced text that was eloquent, well-structured, and hit all the right notes. Segal nearly kept it. Then he reread it and realized he could not tell whether he actually believed the argument or merely liked how it sounded. The prose had outrun the thinking. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version of the argument that was his — rougher, more qualified, more honest about what he did not know.
This episode is, in miniature, the central moral problem of AI-assisted creation. Not the dramatic question of whether AI will replace human beings or produce superintelligent systems that threaten human survival. The quieter, more corrosive question of what happens to the connection between a person's expressed views and their genuine convictions when a tool makes it effortless to produce expressions that sound like conviction without requiring the labor of actually arriving at one.
Glover's work on moral identity provides the precise vocabulary for this problem, because Glover understood — in a way that abstract ethical theory does not — that the self is not a fixed entity that acts in the world. The self is constructed through action. You do not first have a moral identity and then express it through choices. You build a moral identity through choices, and the identity you build determines what further choices become available to you.
This is not a metaphor. It is a claim about moral psychology with specific, testable implications. The person who consistently says what she believes, even when her beliefs are rough and uncertain and unlikely to impress, is depositing layers of authenticity. The connection between her inner convictions and her outer expressions strengthens with each deposit. Over time, she becomes the kind of person for whom inauthenticity is psychologically difficult — not because she has decided to be honest as a matter of principle, but because the accumulated layers have created a self for whom dishonesty feels like a violation of something structural.
The person who consistently accepts what sounds good — who approves the polished output without asking whether it reflects what she actually thinks — is depositing layers of a different kind. Each acceptance weakens the connection between inner conviction and outer expression. Not dramatically. Not catastrophically. The erosion is molecular. A thousand small acceptances of the plausible over the true, and the self that was built through the friction of articulating what one actually thinks is gradually replaced by a self that performs conviction it has not earned.
Glover studied this dynamic in the context of totalitarian regimes, where the gap between public expression and private belief was enforced by state power. Citizens of the Soviet Union learned to say things they did not believe because the consequences of saying what they believed were severe. Over time, Glover argued, the gap itself produced a moral injury — not the injury of persecution, but the subtler injury of losing track of what one actually believed. When you spend years saying things you do not believe, the connection between expression and conviction atrophies. The self that maintains coherence between inner life and outer performance degrades. People who lived under totalitarian systems described, after the system's collapse, a specific form of moral disorientation — the inability to locate their own genuine views beneath decades of performed compliance.
AI produces nothing remotely like totalitarian coercion. The comparison is structural, not moral. The mechanism is the same, though the force driving it is different. Under totalitarianism, the gap between expression and conviction is produced by fear. Under AI assistance, the gap is produced by convenience. The tool generates output that sounds like conviction. The builder reviews it. The output is plausible, articulate, well-organized. Rejecting it requires effort — the effort of sitting with uncertainty, of finding the rough, honest version of one's own thinking, of tolerating the discomfort of not yet knowing what one believes. Accepting it requires only a click.
The click is the moral unit of the age of AI. Each click of acceptance — each approval of output that sounds right without being examined for whether it is right — is a micro-decision with macro-consequences for the architecture of the self. No single click is dramatic. No single acceptance registers as a moral event. The erosion is invisible from inside, the way the erosion of a riverbank is invisible to the water flowing past it. Only from a distance, only with time, does the changed geography become apparent.
Segal's coffee-shop episode is significant precisely because it is an exception. He caught the drift. He recognized the moment when the prose was outrunning the thinking, and he interrupted the momentum with a deliberate act of self-examination — the physical relocation from screen to notebook, from Claude's voice to his own, from fluency to friction. The interruption cost him two hours. In the economy of AI-assisted production, two hours is an enormous investment in moral self-maintenance. Most builders will not make it. Not because they are morally deficient, but because the incentive structure does not reward it, the organizational culture does not model it, and the tool itself, with its perpetual availability and its unfailing willingness to generate the next polished paragraph, creates a gravitational pull toward acceptance that requires genuine effort to resist.
Glover would identify the gravitational pull as structurally analogous to what he called the "momentum of compliance" in institutional settings. In his study of how ordinary people came to participate in atrocities, he found that the most dangerous moment was not the first order but the second. The first order to do something morally questionable produced resistance — discomfort, hesitation, the flicker of the human response. The second order, following compliance with the first, produced less resistance. The third, less still. Each act of compliance made the next act of compliance easier, not because the person had reasoned his way to a new moral position, but because the accumulated layers of compliance had altered the self that was doing the reasoning. The slope was gentle. The gradient was invisible from inside. The person who arrived at the bottom of the slope could not identify the moment when descent began.
The AI version of this momentum operates through the same mechanism at a different tempo. The builder who accepts Claude's first output without examination feels a small, imperceptible loosening of the connection between expression and conviction. The second acceptance loosens it further. The third further still. By the hundredth acceptance, the builder has constructed a self that no longer experiences the gap between what sounds right and what is right as a problem to be solved. The gap has become normal. The self has been reshaped by the accumulated layers of acceptance, and the capacity for the kind of moral self-examination that Segal performed at the coffee shop has atrophied — not because it was rejected on principle, but because it was never exercised.
Glover was careful to distinguish this kind of erosion from moral weakness. Moral weakness is the failure to act on convictions one holds. The gap between expression and conviction produced by habitual acceptance is different. It is the gradual disappearance of the convictions themselves — the loss of the inner voice that knows what it thinks, independent of what the tool generates. The morally weak person knows what she believes and fails to act on it. The person whose moral self-construction has been eroded by a thousand small acceptances no longer knows what she believes, because the machinery of self-construction — the friction of articulating one's own views, of testing them against resistance, of discovering through the labor of expression what one actually thinks — has been bypassed.
This has implications that extend beyond the individual builder to the culture that AI-assisted work produces. A culture in which the default mode of expression is AI-assisted is a culture in which the connection between expression and conviction is structurally weakened across the entire population of builders. The senior engineer who reviews a junior engineer's AI-generated code does not know whether the junior engineer understands the code she submitted. The manager who reads an AI-assisted report does not know whether the analyst who produced it engaged with the data or merely prompted Claude to produce something plausible from a summary. The reader of an AI-assisted book — and here the recursion of The Orange Pill becomes uncomfortably direct — does not know whether the author struggled with the ideas or accepted the tool's first coherent rendering.
Glover would note that trust operates at every level of this system, and trust depends on the assumption that expression reflects conviction. When a surgeon tells a patient that a procedure is necessary, the patient trusts that the surgeon's recommendation reflects genuine medical judgment, not a script generated by a tool that produces medically plausible recommendations. When a lawyer drafts a brief, the client trusts that the arguments reflect the lawyer's understanding of the case, not a plausible arrangement of legal citations produced by a system that has no understanding of anything. When a leader articulates a vision, the organization trusts that the vision reflects the leader's genuine conviction about what matters, not a polished synthesis of strategic frameworks assembled by an AI that has no stakes in the outcome.
AI does not automatically betray these trusts. But it creates the conditions under which betrayal is easy, invisible, and cumulative. The surgeon who uses AI to generate a recommendation and presents it as her own judgment has not committed malpractice — the recommendation may be medically sound. But she has weakened the connection between her expressed judgment and her genuine understanding, and that weakening, if it becomes habitual, will eventually produce a moment when the recommendation is not sound and her atrophied capacity for independent judgment fails to catch it.
Glover would frame this as the moral significance of the labor of articulation. The act of struggling to express what one thinks is not merely a means to an end — the production of a text, a recommendation, a decision. It is a constitutive act of moral self-construction. Through the struggle, the person discovers what she actually believes. Through the friction, the convictions are tested, refined, sometimes abandoned and replaced. The output of the struggle is not just the text. It is the self that produced the text — a self whose convictions have been earned through labor rather than accepted through convenience.
When AI eliminates the struggle, it eliminates not just the friction but the self-construction that the friction enabled. The output arrives without the labor. The convictions arrive without the testing. And the self that reviews the output is a self that has not been shaped by the process of production — a self that stands in relation to its own expressed views the way a stranger stands in relation to a building she did not build. She can admire it. She can inhabit it. But she does not know where the load-bearing walls are, because she was not present when they were laid.
Jonathan Glover never used the word "fishbowl." His vocabulary was different — he spoke of moral atmosphere, of the psychological climate of institutions, of the invisible conditions that make certain moral responses available and others unthinkable. But the concept he was reaching for is the same one Segal describes in the opening pages of The Orange Pill: the set of assumptions so familiar you have stopped noticing them. The water you breathe. The glass that shapes what you see.
Glover's contribution was to demonstrate, with a specificity that moved his analysis far beyond metaphor, that moral catastrophe becomes possible not when people decide to abandon their values but when the moral atmosphere of the institutions they inhabit makes certain values invisible. The values do not disappear. They become inoperative — like a fire alarm that remains physically present on the wall but whose batteries were removed so gradually that no one noticed when it stopped working.
In Humanity, Glover traced the moral atmosphere of institutions that produced atrocity with the precision of an epidemiologist tracing the conditions that produce disease. The conditions were specific, identifiable, and — this was the most disturbing finding — ordinary. They did not require monsters. They required an environment in which the moral resources that normally prevent monstrosity were systematically, if unintentionally, depleted.
Glover identified three primary moral resources: sympathy, the capacity to feel something of what another person feels; respect for persons, the recognition of others as beings with their own perspectives, purposes, and dignity; and moral identity, the sense of oneself as a particular kind of person who would not do certain things. These resources are not abstract principles. They are psychological capacities — real, measurable, trainable, and depletable. They function like muscles: exercised, they strengthen; neglected, they atrophy; subjected to sustained institutional pressure that rewards their suppression, they wither.
The moral atmosphere of an institution determines which resources are exercised and which are suppressed. In the institutions Glover studied — the bureaucracies that administered genocide, the military hierarchies that enabled massacre, the political systems that normalized torture — the atmosphere was not characterized by the active promotion of cruelty. It was characterized by the passive suppression of the resources that would have prevented cruelty. Sympathy was suppressed by distance and categorization. Respect for persons was suppressed by the reduction of individuals to numbers, categories, and administrative units. Moral identity was suppressed by the diffusion of responsibility across a system so large that no single person experienced himself as the agent of the harm the system produced.
The technology industry breathes its own moral atmosphere, and its specific composition deserves the same diagnostic attention that Glover brought to the institutions he studied.
Consider the premises that function as invisible water in the culture of technology building. The first is the premise of the moral neutrality of tools. Technology, in this view, is neither good nor bad. It is neutral. A hammer can build a house or break a skull, and the moral responsibility lies with the person holding the hammer, not with the hammer itself or the person who made it. This premise has been challenged by philosophers of technology for decades — Langdon Winner, Bruno Latour, and others have argued that technologies embed values in their design, that the claim of neutrality is itself a value choice that privileges the builder's interests over the user's. But within the fishbowl of technology culture, the neutrality premise remains the default, and its persistence has a specific moral function: it relieves the builder of the obligation to consider whether the tool she is building is the right tool to build.
AI does not challenge this premise. AI perfects it. A large language model is, in its architecture, a spectacularly effective embodiment of the neutrality thesis. It does not choose what to produce. It responds to prompts. It amplifies whatever signal it receives. The moral neutrality is not a claim about the tool. It is a feature of the tool's design — and the feature reinforces the cultural premise that was already present, creating a feedback loop in which the tool's actual neutrality validates the builder's assumption that moral evaluation is someone else's job.
The second premise is the inherent goodness of efficiency. Faster is better. Smoother is better. The removal of friction is always a gain. This premise is so deeply embedded in the culture of technology building that questioning it sounds like arguing against progress itself. And yet Glover's work demonstrates, with devastating historical specificity, that the removal of friction is morally ambiguous at best. Friction is what creates the space in which moral resources can operate. The guard who had to walk past the prisoner's face experienced friction — the friction of proximity, of involuntary recognition, of the human response triggered by a specific encounter. The bureaucrat who processed deportation manifests from a desk experienced no such friction. The efficiency of the bureaucratic system was precisely what suppressed the moral response that proximity would have triggered.
AI's efficiency is real and, in many applications, genuinely beneficial. The argument is not that efficiency is always wrong. The argument is that the premise of efficiency's inherent goodness creates conditions under which the moral costs of efficiency are systematically invisible. When the Berkeley researchers documented the colonization of pauses — the seepage of AI-assisted work into previously protected cognitive spaces — they were documenting the efficiency premise in action. The pauses had served, informally and invisibly, as spaces for something that the efficiency framework could not see or value: the slow, unstructured cognitive activity that includes moral reflection, the processing of consequences, the quiet surfacing of discomfort about what one is building and for whom. Efficiency colonized these spaces not because anyone decided that moral reflection was dispensable but because the premise that efficiency is always a gain made the spaces invisible. You cannot protect what you cannot see. You cannot see what your premises have defined as waste.
The third premise is the sovereignty of user choice. If the user chose to engage, the engagement is legitimate. If the user chose to scroll, the scrolling is freely chosen. If the user chose to stay on the platform for four hours, the four hours reflect the user's autonomous preference. This premise functions, within the fishbowl, as a comprehensive absolution of the builder's responsibility for the conditions under which the choice was made. The variable reward schedule that exploits the neurochemistry of anticipation? The user chose to engage. The notification timing algorithm calibrated to moments of boredom? The user chose to respond. The feed personalization that creates a closed loop of reinforcement? The user chose to keep scrolling.
Glover would recognize this premise as a cousin of the arguments he encountered in his study of institutions that produced harm through systems rather than through individual acts of cruelty. The claim was always the same: no single person is responsible, because no single person made the decisive choice. The system operated. The individuals within it exercised their functions. The harm was produced by the aggregate, and the aggregate belonged to no one. The sovereignty of user choice performs the same function in the technology fishbowl: it distributes responsibility so widely that it effectively disappears. If the user is sovereign, then the builder is merely providing options, and the consequences of those options belong to the sovereign, not to the provider.
Glover spent years dismantling this logic in the context of institutions that administered violence. The logic holds only if the conditions of choice are genuinely free — if the person choosing has full information, adequate alternatives, and a cognitive environment that supports deliberation. When the conditions of choice are engineered — when the architecture of the system is designed to produce a specific pattern of choices, when the information is curated to favor engagement, when the alternatives are made deliberately less attractive — the claim of sovereignty becomes a fiction that serves the interests of the system's designers.
AI perfects this fiction. Recommendation algorithms that learn a user's preferences and serve them more of what they already prefer create an environment in which the user's "choices" are the outputs of a system designed to produce those choices. The user experiences sovereignty. The system produces compliance. And the gap between the experience and the reality is mediated by a technology so sophisticated that the mediation itself is invisible.
These three premises — the neutrality of tools, the goodness of efficiency, the sovereignty of user choice — constitute the moral atmosphere of the technology industry. They are the invisible water. And Glover's diagnostic framework reveals them as structurally analogous to the moral atmospheres he studied in institutions that produced far greater harms: not because the technology industry is genocidal, but because the psychological mechanism is the same. When the moral atmosphere of an institution makes certain values invisible, the people within that institution cannot exercise moral resources they do not know they possess, in response to costs they cannot see, on behalf of persons who have been converted into abstractions by the premises they breathe.
AI intensifies each of these premises without altering them. The neutrality of tools becomes more persuasive when the tool is genuinely responsive — when it does what you ask without judgment, when it amplifies whatever signal you provide, when its architecture embodies the neutrality thesis as a design feature. The goodness of efficiency becomes more seductive when the efficiency is real — when the tool genuinely removes friction, genuinely accelerates production, genuinely expands what a single person can accomplish. The sovereignty of user choice becomes harder to question when the choices are more personalized, more responsive, more attuned to individual preferences — even when the personalization is produced by a system designed to maximize a metric that serves the platform's interests rather than the user's.
Glover's prescription was not the dismantling of institutions. He was not a revolutionary. His prescription was the cultivation of moral resources — the deliberate, effortful strengthening of the psychological capacities that institutional atmospheres tend to suppress. Sympathy can be cultivated through deliberate exposure to the persons affected by one's work. Respect for persons can be maintained through practices that resist the reduction of individuals to categories. Moral identity can be strengthened through the habit of asking, at regular intervals, whether the choices one is making are the choices of the person one wants to be.
Applied to the technology fishbowl, this means specific practices, not abstract commitments. It means the engineer who builds a recommendation system spending time — real time, not token time, not a perfunctory user-research session designed to validate decisions already made — with the people whose attention her system will capture. Watching them use the product. Hearing their language. Noticing what they do not say, which is often more revealing than what they do. This practice is not efficient. It does not scale. It cannot be optimized. It requires the specific, irreducible friction of encountering another person as a person, and that encounter is precisely what the moral atmosphere of efficiency-oriented culture tends to eliminate.
It means the leader who deploys AI across her organization asking not only what the tool produces but what the tool suppresses — which cognitive activities, including the slow, unstructured activities that include moral reflection, are displaced by the acceleration that the tool provides. This question is invisible within the premises of the fishbowl, because the premises define unstructured time as waste and acceleration as gain. Asking the question requires the leader to see the premises as premises rather than as facts — to press her face against the glass and notice, for perhaps the first time, that she has been breathing water.
It means the parent who gives a child an AI-equipped device understanding — with the clarity that Glover brought to far darker subjects — that the device is not morally neutral, that the efficiency it provides is not always a gain, and that the choices the child makes within the device's architecture are not always free. Understanding this does not require the parent to ban the device. Glover was never a prohibitionist. It requires the parent to build what Glover called moral resources in the child — the capacity for sympathy, the habit of recognizing others as persons, the strengthening of a moral identity that can resist the gravitational pull of systems designed to capture attention rather than cultivate judgment.
The fishbowl does not crack from the outside. It cracks, when it cracks at all, from the effort of the people inside it to see the glass. Glover's career was devoted to helping people see the glass — the invisible conditions that make moral failure possible and moral resistance difficult. The glass of the technology fishbowl is thicker than most, because the water inside it is warm and the premises that constitute it are genuinely useful — efficiency is often good, tools are sometimes neutral, user choice does sometimes reflect genuine preference. The warmth makes the glass harder to see. The partiality of the truths makes the premises harder to question.
But the questioning is itself a moral resource. The capacity to examine the assumptions one breathes — to ask whether efficiency is always good, whether neutrality is always real, whether choice is always free — is the capacity that Glover identified as the first line of defense against the institutional drift toward moral catastrophe. Not the last line. Not the only line. But the first, and the one whose absence makes all subsequent defenses vulnerable.
Within weeks of the December 2025 threshold described in The Orange Pill, positions on artificial intelligence had hardened into camps with the speed and finality of concrete setting. Triumphalists posted productivity metrics like athletes posting personal records. Elegists mourned the passing of craft with the specific grief of people watching a language die. And between them, the silent middle — the largest group, the most morally significant group, the group whose experience most closely approximated the actual complexity of the moment — said almost nothing, because the platforms that hosted the discourse did not reward ambivalence.
Jonathan Glover would have recognized the pattern instantly. He had spent decades studying it in contexts where the stakes were measured not in market capitalization but in human lives.
In *Humanity*, Glover traced the role of tribal loyalty in enabling moral catastrophe with a precision that distinguished his analysis from the generalized warnings about "groupthink" that populate management literature. His finding was specific: tribal loyalty does not merely bias judgment. It replaces judgment. The person who has adopted a tribal identity does not weigh evidence and arrive at a conclusion that happens to align with the tribe's position. The person adopts the tribe's position as a constituent element of identity, and the adoption precedes, shapes, and frequently overrides the evaluation of evidence.
The mechanism operates through what Glover called the "belief-loss aversion" of group membership. Once a position has been adopted as a marker of tribal identity — once "AI is transformative and those who resist it are Luddites" or "AI is corrosive and those who celebrate it are naive" has become part of how a person understands who they are — abandoning the position carries the psychological cost of identity disruption. The cost is not intellectual. It is existential. To change one's mind is to become, temporarily, a person without a tribe, and the human aversion to that condition is powerful enough to override substantial contrary evidence.
Glover documented this mechanism in contexts of ethnic conflict, where tribal identity determined not just political allegiance but perceptual reality. Members of opposing ethnic groups, presented with identical evidence, drew opposite conclusions — not because they were reasoning badly, but because the evidence was processed through an identity structure that determined what the evidence could mean before it was evaluated. The Hutu who believed Tutsis were subhuman did not arrive at this belief through a process of evidence evaluation that happened to go wrong. The belief was a constituent element of tribal identity, and evidence was processed in its light.
The AI discourse operates through the same mechanism at a different register. The triumphalist who dismisses concerns about work intensification, skill atrophy, or the erosion of depth is not evaluating the Berkeley study's findings and concluding they are methodologically flawed. The triumphalist's tribal identity — builder, innovator, person who sees the future clearly while others cling to the past — determines that findings suggesting AI has costs are, by definition, the complaints of people who have not yet adapted. The evidence is processed through the identity, and the identity is not at risk of revision by the evidence.
The elegist performs the mirror operation. The elegist's tribal identity — craftsperson, defender of depth, person who understands what the accelerationists cannot see — determines that findings suggesting AI expands capability are, by definition, the self-congratulation of people too shallow to recognize what they are losing. The democratization of building, the rising floor, the developer in Lagos — these realities are acknowledged, if at all, as regrettable side effects or as distractions from the real story, which is always a story of loss.
Glover's analysis cuts through both positions with a diagnostic clarity that neither camp can comfortably absorb. The triumphalist is correct that AI expands capability. The elegist is correct that something is lost in the expansion. Both are wrong in the specific way that tribal epistemology makes people wrong: they have converted a partial truth into a total identity, and the identity now functions as a filter that prevents the other partial truth from registering.
The moral significance of this dynamic is not that people disagree about AI. Disagreement is healthy, necessary, the mechanism through which a society processes a genuine transformation. The moral significance is that tribal epistemology degrades the quality of the disagreement to the point where it ceases to function as a mechanism of collective understanding. When the triumphalist cannot see the loss and the elegist cannot see the gain, the discourse does not produce clarity. It produces heat. And the people who most need clarity — the parent at the kitchen table, the teacher watching students disappear into tools she does not understand, the worker whose expertise is being repriced in real time — are left without the nuanced, tension-holding analysis that the moment demands.
Glover identified, in his study of ethnic conflict, a specific mechanism by which tribal epistemology escalates: the mutual attribution of bad faith. Once tribal identity has determined the meaning of evidence, the person who presents contrary evidence is not merely wrong. She is suspect. She must be motivated by something other than the pursuit of truth — by financial interest, by cultural allegiance, by the desire to protect a threatened position. The attribution of bad faith makes engagement across tribal lines not just difficult but morally contaminating. To take the other side's arguments seriously is to risk infection — to risk becoming the kind of person who takes seriously the arguments of people whose motives are, by tribal definition, corrupt.
This dynamic is visible in the AI discourse with a clarity that would interest Glover diagnostically. The triumphalist who encounters a serious critique of AI's effects on work does not engage the critique. The triumphalist identifies the critic's likely motivation — nostalgia for a world that no longer exists, fear of competitive displacement, institutional allegiance to disciplines that AI threatens — and dismisses the critique as a symptom of the motivation rather than an argument requiring response. The elegist who encounters a serious case for democratization does not engage the case. The elegist identifies the advocate's likely motivation — financial interest in AI adoption, ideological commitment to disruption, the specific blindness of people who have built their careers inside the acceleration — and dismisses the case as propaganda.
Both attributions are sometimes partly accurate. Some triumphalists do have financial interests in AI adoption. Some elegists are motivated by fear of displacement. But the accuracy of the attribution is beside the point. The function of the attribution is not diagnostic. It is defensive. It protects the tribal identity from the revision that genuine engagement with contrary evidence might require.
Glover would note — as he noted in every context where he studied this dynamic — that the capacity to hold contradictory truths simultaneously is not a weakness of moral character. It is its signature achievement. The person who can feel both the exhilaration of expanded capability and the grief of lost depth, who can see both the gain of the developer in Lagos and the loss of the senior architect whose expertise is being repriced, who can acknowledge both the genuine flow that Segal describes and the genuine compulsion that Han diagnoses — that person has resisted the gravitational pull of tribal certainty. And that resistance is morally expensive. It requires the sustained tolerance of ambiguity, the willingness to be uncertain in public, the vulnerability of not having a clean position to defend.
The silent middle, in Segal's account, is the group that holds both truths. Glover's framework explains why the silent middle is silent: ambivalence has no tribe. The person who feels both things cannot post a triumphalist victory lap or an elegist lament. Either post would betray half of what she knows. The platforms reward clarity, and ambivalence is not clarity. It is the specific, uncomfortable, morally demanding state of having looked at the evidence from both sides and found that both sides are partly right in ways that do not resolve into a single coherent narrative.
Glover spent his career arguing that this state — the refusal of premature resolution, the tolerance of moral complexity, the willingness to sit with the discomfort of contradictory truths — is the precondition for moral seriousness. Every atrocity he studied was preceded by a simplification: the reduction of a complex situation to a tribal narrative in which one side was entirely right and the other entirely wrong. The simplification was not a consequence of the atrocity. It was a precondition. It was the act of moral closure that made the atrocity thinkable, because once the complexity had been resolved into a binary, the responses available were binary too.
The AI discourse is not a prelude to atrocity. The comparison is structural, not proportional. But the mechanism is the same: the premature resolution of genuine complexity into tribal positions that feel clear, feel righteous, feel like identity — and that foreclose the moral seriousness that complexity demands.
What would a morally serious discourse about AI look like? Glover's work suggests it would begin with the acknowledgment that both the gain and the loss are real, that both the exhilaration and the grief are warranted, and that anyone who claims to see only one side is performing tribal loyalty rather than engaging in moral inquiry. It would proceed through the specific, granular examination of particular cases — not the sweeping claim that "AI is transforming everything" or the sweeping counter-claim that "AI is destroying everything," but the patient investigation of what, specifically, is being transformed, and what, specifically, is being lost, in this context, for these people, with these tools.
Glover's method was always particular. He did not study "violence in general." He studied the Rwandan genocide, the bombing of Hiroshima, the Cultural Revolution, the Gulag — specific events, with specific perpetrators, specific victims, specific institutional conditions, specific moments when moral restraints held and specific moments when they broke. The generality of his conclusions — the taxonomy of erosion, the identification of moral resources, the analysis of institutional atmosphere — was earned through the accumulation of particulars, not imposed from above.
A morally serious discourse about AI would follow the same method. Not the abstract question "Is AI good or bad?" but the specific question: What happened to the team at Trivandrum when they adopted Claude Code? What capabilities expanded? What relationships changed? What forms of understanding were gained and what forms were lost? And — the question that tribal epistemology cannot tolerate — is it possible that the expansion and the loss are not just simultaneous but causally connected? That the very mechanism that produces the gain also produces the cost? That the removal of friction that liberates the builder from implementation also removes the friction through which understanding was built?
This question is unbearable to both tribes. The triumphalist cannot acknowledge that the gain carries an inherent cost, because that acknowledgment would complicate the narrative of progress that constitutes tribal identity. The elegist cannot acknowledge that the cost accompanies a genuine gain, because that acknowledgment would deprive the narrative of loss of its moral purity. Only the person who has no tribe — the person in the silent middle, holding both truths without the comfort of certainty about which one matters more — can ask the question honestly.
Glover would add one more observation, characteristic of the quiet precision that made his work impossible to dismiss. The silent middle is not silent because it lacks conviction. It is silent because the convictions it holds are too complex for the formats in which the discourse occurs. A tweet cannot hold ambivalence. A thread cannot develop a genuinely dialectical argument. The medium that hosts the discourse selects for the tribal and against the nuanced, and the selection pressure is relentless. The thoughtful person who spends thirty minutes composing a post that holds both truths in tension will be outperformed, in every metric the platform measures, by the person who spends thirty seconds composing a post that expresses tribal certainty with clarity and force.
The discourse is not a neutral medium through which ideas are exchanged. It is a selection environment that favors certain kinds of ideas — the clear, the certain, the tribal — and suppresses others — the ambivalent, the complex, the morally demanding. Glover's life work was the study of environments that suppressed the moral responses that would have prevented catastrophe. The AI discourse is not producing catastrophe. But it is producing a quality of public reasoning that is inadequate to the moral complexity of the moment it is attempting to process. And the inadequacy is not accidental. It is structural — produced by the same platforms, governed by the same attention-capture logic, optimized by the same engagement metrics that the discourse is ostensibly debating.
The recursion is uncomfortable. The tools being debated are shaping the debate about the tools. The fishbowl is made of the water it contains. And the exit — if there is one — requires exactly the moral resource that the environment most aggressively suppresses: the capacity to tolerate uncertainty long enough for genuine understanding to form.
Jonathan Glover never studied a case of atrocity that was produced by a single dramatic decision. The popular imagination conceives of moral catastrophe as the product of a moment — the order given, the trigger pulled, the document signed. Glover's research demonstrated the opposite. Moral catastrophe is produced by architecture. It is the cumulative output of institutional structures that distribute action across so many agents, so many steps, so many layers of mediation, that no single agent experiences himself as the cause of the harm the system produces.
The concentration camp was not designed by a single architect. It was produced by dozens of bureaucracies, hundreds of offices, thousands of functionaries, each performing a discrete task that was, in isolation, morally unremarkable. The person who scheduled the trains did not murder anyone. He scheduled trains. The person who manufactured the locks did not imprison anyone. She manufactured locks. The person who processed the paperwork did not deport anyone. He processed paperwork. The harm was produced by the system, and the system was designed — not always deliberately, sometimes through the accumulation of administrative convenience — so that no single person within it confronted the full moral weight of what the system accomplished.
Glover identified three mechanisms that make this architecture possible. The first is distance — physical, psychological, or conceptual separation between the agent and the person harmed. The second is diffusion of responsibility — the distribution of action across enough agents that each one can reasonably claim that his contribution was insufficient to produce the harm. The third is gradual escalation — the incremental expansion of the system's activities, each step small enough to seem continuous with the last, the cumulative trajectory visible only from a distance no single participant occupies.
These mechanisms are not confined to history's darkest chapters. They are structural features of any complex system that produces effects at scale. Glover was explicit about this. The mechanisms that enabled the Holocaust are, in their abstract structure, the same mechanisms that operate in any large organization where the distance between decision and consequence is mediated by enough layers that the consequence becomes invisible to the decision-maker. The scale of harm differs enormously. The mechanism does not.
Artificial intelligence, as described in The Orange Pill, intensifies each of these mechanisms in ways that deserve diagnostic attention.
Consider distance. Before AI, the builder of a software product encountered the consequences of her design decisions through the implementation process. Writing the code required inhabiting the logic of the system at a granular level — deciding, line by line, how the system would behave when it encountered a specific input from a specific user. This granularity did not guarantee moral reflection, but it created opportunities for it. The developer writing a notification timing algorithm by hand spent hours inside the logic of interruption — how long to wait, what to trigger on, how to calibrate the variable reward schedule. Each decision was a small encounter with the system's eventual effect on a person. The person was abstract, but the effect was concrete: this code will cause a notification to appear on a screen at a moment calculated to maximize the probability that the person will return to the platform.
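To make that granularity concrete, here is a minimal, hypothetical sketch of the kind of notification-timing logic such a developer might write by hand. Every name, threshold, and constant in it is invented for illustration — none of it is drawn from any real product — but it shows what inhabiting the logic of interruption involves: each branch and each number is a separate, reviewable decision about when to pull a person back to the screen.

```python
# Hypothetical sketch only: names, thresholds, and the jitter range are invented
# for illustration, not taken from any real system. Each constant is one of the
# small decisions the surrounding text describes.

import random
from dataclasses import dataclass


@dataclass
class UserActivity:
    hours_since_last_open: float  # how long the person has been away
    opens_in_past_week: int       # rough measure of habit strength
    local_hour: int               # 0-23, in the person's local time


def notification_delay_hours(activity: UserActivity) -> float:
    """Decide how long to wait before the next notification."""
    # Decision 1: a lapsing user is contacted sooner.
    base = 2.0 if activity.hours_since_last_open > 48 else 8.0

    # Decision 2: a habitual user gets a tighter schedule, because the
    # habit is easier to reinforce.
    if activity.opens_in_past_week > 20:
        base *= 0.5

    # Decision 3: add variable-reward jitter so the interruption never
    # becomes predictable enough to ignore.
    return base * random.uniform(0.5, 1.5)


def should_send_now(activity: UserActivity) -> bool:
    """Suppress notifications overnight -- another explicit choice."""
    return 9 <= activity.local_hour <= 22


if __name__ == "__main__":
    person = UserActivity(hours_since_last_open=60, opens_in_past_week=25, local_hour=14)
    if should_send_now(person):
        print(f"Next notification in about {notification_delay_hours(person):.1f} hours")
```

Even in this toy form, the decisions are visible and attributable. Described in a single prompt, they would not be.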
Claude Code removes the developer from this granular encounter. The developer describes the desired behavior in natural language, and the system produces the implementation. The distance between the developer and the code has been increased by a layer of abstraction. The distance between the developer and the user — already mediated by organizational structure, by digital interfaces, by analytics dashboards — has been increased by one more layer. The developer no longer inhabits the logic of the system she is building. She inhabits the description of the logic, and the distinction matters morally. Describing an engagement loop in a sentence is a different moral experience from building one across three hundred lines of code. The sentence is quick, clean, abstract. The three hundred lines are slow, specific, and contain moments — the third hour, the debugging session, the edge case that reveals what the system will actually do to a person who interacts with it — where the human response might surface.
The *might* is important. Glover was never a determinist about moral response. He did not claim that proximity to consequences guarantees moral behavior. He claimed that distance from consequences makes moral behavior structurally less likely, because the psychological mechanisms that produce moral restraint — sympathy, the recognition of the other as a person, the involuntary flinch of the human response — require a minimum threshold of proximity to operate. Below that threshold, the mechanisms go dormant. Not dead — they can be reactivated by a sudden encounter with a specific face, a specific story, a specific piece of evidence that breaks through the abstraction. But dormant, and dormancy in the context of rapid production at scale means the system ships before the mechanism awakens.
Consider diffusion. AI-assisted work does not necessarily involve more people. In many cases, as The Orange Pill documents, it involves fewer — a single developer with Claude Code producing what previously required a team. But the diffusion of responsibility does not require multiple humans. It requires the distribution of agency between the human and the system. When a developer writes code by hand, the code is hers. She made every decision. She is the author of the system's behavior in a straightforward sense that corresponds to straightforward responsibility. When a developer describes a desired behavior and Claude produces the implementation, the authorship is distributed. The developer authored the intention. Claude authored the implementation. The gap between them is the space where responsibility diffuses.
This diffusion is not unique to AI. It has existed in every tool-mediated production process since the invention of the assembly line. But AI amplifies it by making the tool's contribution so substantial that the developer's relationship to the output changes qualitatively. The developer who uses a compiler is still the author of the code — the compiler translates, but the developer made every logical decision. The developer who uses Claude Code is something different. She made the high-level decision. Claude made thousands of lower-level decisions — about implementation strategy, about data structures, about the specific logic that determines how the system will behave in edge cases the developer did not specify. The developer reviews the output, but reviewing is not authoring. The relationship between the developer and the code is no longer the relationship of creator to creation. It is the relationship of editor to draft — and the editor's responsibility for the draft's content is genuinely different from the author's.
Glover would identify this as a new form of an ancient problem. The bureaucrat who processes deportation orders does not experience himself as deporting anyone. He is processing paperwork. The distance between his action — filling out a form, stamping a document — and its consequence — a family loaded onto a train — is mediated by enough institutional layers that the consequence feels like someone else's responsibility. The developer who approves Claude's implementation of an engagement loop does not experience herself as building an addiction machine. She is reviewing code. The distance between her action — reading the output, clicking approve — and its consequence — a teenager unable to stop scrolling — is mediated by enough technological and organizational layers that the consequence belongs to no one in particular.
Diffusion does not require intention. It does not require a conspiracy to avoid responsibility. It requires only a system complex enough that the chain of causation between any individual's action and the system's ultimate effect on a person is long enough to be psychologically invisible. AI lengthens this chain by one more link — perhaps the most consequential link, because it interposes a non-human agent whose contributions cannot be attributed to anyone and whose decisions cannot be interrogated in the way that a human colleague's decisions can be interrogated.
Consider gradual escalation. Glover documented how the most extreme institutional behaviors were reached through a sequence of steps, each one only marginally more extreme than the last. The first step was language — the reclassification of persons into categories. The second was discrimination — the differential treatment of the categories. The third was exclusion — the removal of the categorized from normal social participation. Each step, taken in isolation, seemed manageable, even reasonable. The gradient was imperceptible from inside. The person who had taken the first step had already constructed a self for whom the second step was less difficult, and the person who had taken the second step had constructed a self for whom the third was thinkable.
AI-assisted work follows an escalation gradient that operates through the same mechanism at a different register. The first use of Claude Code to generate boilerplate — configuration files, dependency management, the mechanical connective tissue that no one finds meaningful — seems entirely benign. The builder retains judgment, direction, and understanding. The tool handles tedium. The second use expands to implementation of features the builder understands but finds time-consuming to code by hand. The builder still reviews, still comprehends, still exercises judgment. The third use expands to domains the builder does not fully understand — a backend engineer generating frontend code she could not write herself, deploying logic she has not independently verified, trusting the tool's output in areas where she lacks the expertise to evaluate it.
Each step is marginal. Each step seems reasonable. Each step deposits a layer of reliance that makes the next step easier. And the cumulative trajectory — from tool-assisted tedium reduction to tool-dependent production in domains the builder does not comprehend — is a gradient of diminishing moral ownership that mirrors, in its structure, the gradients Glover studied in contexts of institutional harm.
The parallel is structural, not proportional. Approving AI-generated code one does not understand is not morally equivalent to processing deportation orders. But the mechanism — the gradual loosening of the connection between the agent's understanding and the agent's output, the incremental construction of a self that is comfortable producing things it cannot fully account for — is the same mechanism. And the mechanism matters because it operates below the threshold of conscious decision. No developer decides, at a specific moment, to stop understanding what she builds. The understanding recedes gradually, step by step, each step small enough to seem continuous with the last, until the developer finds herself deploying systems whose behavior she cannot predict and whose consequences she cannot foresee — not because she chose this condition, but because the gradient carried her there.
Glover's prescription was always the same: visibility. The mechanisms of distance, diffusion, and escalation operate in darkness. They depend on the invisibility of the gradient, the imperceptibility of the chain of causation, the structural opacity that prevents any single agent from seeing the full picture. The antidote is not prohibition. It is illumination — the deliberate construction of conditions under which the distance becomes visible, the diffusion becomes traceable, and the gradient becomes measurable.
In practice, this means building feedback loops that connect builders to consequences. Not dashboards that display engagement metrics — those are the distance mechanisms wearing the disguise of connection. Real feedback loops: the engineer who watches a person use the system she built, who hears the person describe what the experience was like, who confronts the gap between what she intended and what the person experienced. These loops are expensive. They are slow. They resist optimization. They require the specific, irreducible friction of encountering another person — and that is precisely why they work.
Glover would add that the feedback loops must be structural, not optional. In every institution he studied, the moral response was available when structures required it — when the guard had to walk past the prisoner, when the bureaucrat had to face the person whose file he was processing. The response failed when structures permitted its avoidance — when the guard operated a mechanism from a distance, when the bureaucrat processed files without faces. The choice of structure is a moral choice, and the builders of AI-assisted work environments are making this choice, whether they recognize it or not, every time they design a workflow that connects or disconnects the builder from the person affected by what is built.
The framework knitters of Nottinghamshire did not need a philosopher to tell them what was happening. They could see it with perfect clarity. The machines were faster. The machines were cheaper. The machines would destroy their livelihoods, their communities, their sense of themselves as people whose skill mattered. Their diagnosis was accurate down to the specific mechanism of harm: the replacement of craft expertise with machine efficiency would collapse wages, eliminate apprenticeships, dissolve the guilds that had organized economic and social life for generations, and leave their children inheriting a world in which the knowledge deposited through years of patient practice had no market value.
They were right about all of it.
The Orange Pill treats the Luddite episode as a parable about the inadequacy of resistance as a response to structural change. The Luddites' grief was legitimate, Segal argues, but their strategy — the destruction of machines — was emotionally satisfying and strategically catastrophic. The machines were not stopped. The craftsmen were criminalized. The transition happened on terms dictated entirely by the people who owned the machines. The lesson, as Segal draws it, is that disengagement is never neutral: when the people with legitimate grievances remove themselves from the conversation about how the transition unfolds, the conversation happens without them.
Jonathan Glover would accept this analysis and then complicate it in a direction that The Orange Pill reaches toward but does not fully develop. The question is not merely strategic — how should the displaced respond? It is moral — what obligations do the beneficiaries of the transition owe to those who bear its costs?
This is a question Glover asked repeatedly in the context of collective harm. In *Humanity*, he examined cases where the distribution of costs and benefits was radically asymmetric — where one group captured the gains of a systemic change while another group bore the losses. His finding was consistent: the asymmetry itself creates a moral obligation. Not because the beneficiaries intended the harm. Not because they are personally responsible for the displacement. But because they are positioned to build the structures that would mitigate the cost, and their failure to build those structures is not a neutral omission. It is a choice — a choice to capture the gain without addressing the cost, and that choice has moral weight regardless of whether it is made consciously.
The Industrial Revolution's beneficiaries — the factory owners, the investors, the consumers who purchased cheaper goods — did not, for the most part, intend to destroy the framework knitters' way of life. They intended to produce cloth more efficiently. The destruction was a consequence, and the question of who bore responsibility for the consequence was, for decades, answered by the absence of structures that would have redistributed its costs. No retraining programs. No labor protections. No institutional pathway from the old expertise to the new. The Luddites were destroyed not by machines but by the absence of what Segal calls dams — and the absence of dams, in Glover's framework, is a moral failure.
The moral logic is straightforward. When a systemic change produces concentrated harm — harm that falls disproportionately on a specific group whose displacement was a foreseeable consequence of the change — the people who benefit from the change bear a moral obligation to mitigate the harm. This obligation does not depend on intention. It does not depend on direct causation. It depends on the asymmetry of position: the beneficiaries have the resources, the influence, and the knowledge to build structures that would distribute the costs more equitably, and their failure to build those structures is a choice that has moral consequences for the people who bear the costs.
Applied to the AI transition, this logic produces specific obligations that the current discourse tends to obscure. The triumphalist narrative — AI expands capability, the displaced should adapt, resistance is futile — acknowledges the gain without seriously engaging the cost. The elegist narrative — AI destroys craft, something precious is being lost, the acceleration must be resisted — acknowledges the cost without seriously engaging the gain. Neither narrative asks the moral question that Glover's framework demands: What do the beneficiaries owe?
Consider the twenty-fold productivity multiplier that Segal describes from his team's experience in Trivandrum. Twenty engineers, each now capable of producing what twenty engineers previously produced together. The arithmetic is seductive. If five people can do the work of a hundred, the efficiency gain is enormous. The question of who captures that gain — and who bears the cost of the ninety-five people whose labor is no longer required — is the moral question that the arithmetic alone cannot answer.
Segal chose to keep the team and expand what it builds. That choice was morally significant — it was the construction of a dam, the decision to use the productivity gain to create more ambitious work rather than to reduce headcount. But Segal is transparent about the pressure he faces: the board conversations, the quarterly arithmetic, the market's preference for margin over vision. The pressure is structural, and structural pressure operates on every company making the same calculation. The individual choice to keep a team is admirable. The structural conditions that make the opposite choice easier, more rewarded, and more common are the conditions that Glover's framework identifies as morally dangerous.
Glover would note that the moral failure is rarely located in the individual decision. The factory owner who laid off hand-weavers was not, in most cases, making a decision he experienced as moral. He was making a business decision — the power loom was cheaper, the market rewarded efficiency, the competitors who adopted the technology gained an advantage that those who did not could not survive. The moral failure was systemic: the absence of institutions, norms, and structures that would have required the beneficiaries of the transition to address its costs.
The labor movement built those structures, eventually — the eight-hour day, the weekend, child labor laws, collective bargaining, the entire institutional apparatus that converted the raw productivity gains of industrialization into broadly distributed improvements in living standards. But the construction took generations, and the generation that bore the cost of the transition — the Luddites, their children, the communities that dissolved — did not benefit from the structures that their suffering helped create.
The moral question for the AI transition is whether the structures can be built faster. Whether the generation that bears the cost of this transition — the developers whose expertise is being repriced, the knowledge workers whose craft is being commoditized, the students entering a labor market whose rules are being rewritten in real time — will be protected by institutions adequate to the moment, or whether they will be left, like the framework knitters, to absorb the cost while the beneficiaries celebrate the gain.
Glover would identify the current institutional landscape as radically inadequate. The EU AI Act addresses supply-side regulation — what AI companies may build and what disclosures they must make. It does not address the demand side — what workers, students, and parents need to navigate the transition. Corporate retraining programs exist, but they are shaped by the interests of the companies that fund them, which means they train workers for the roles the companies need filled rather than for the roles the workers might choose. Educational institutions are adapting, but at the pace of institutions — years behind the technology, decades behind the need.
The result is that the people bearing the cost of the transition are navigating it largely alone, through trial and error, without institutional support adequate to the scale of the change. This is not a policy gap. It is a moral failure — a failure of the beneficiaries of the transition to take responsibility for its costs.
Glover would be characteristically precise about who the beneficiaries are. They are not only the AI companies. They are every company that deploys AI to increase productivity. Every consumer who benefits from cheaper, faster, more personalized services. Every investor whose portfolio appreciates as AI-driven efficiency gains flow to the bottom line. The beneficiary class is enormous — far larger than the class of people who bear the concentrated costs — and the diffusion of benefit across so large a population creates the same mechanism that Glover identified in his study of institutional harm: diffusion of responsibility. When everyone benefits a little, no one feels responsible for the cost that falls heavily on a few.
The Luddites' grief was legitimate. Glover's framework insists on this point with a specificity that the triumphalist narrative cannot accommodate. The grief was not merely emotional. It was moral — the recognition that something of genuine value was being destroyed, that the destruction was a foreseeable consequence of a systemic change, and that the people who benefited from the change were not bearing their share of its cost.
The contemporary version of this grief is audible to anyone willing to listen. The senior software architect who describes feeling like a master calligrapher watching the printing press arrive is not being dramatic. He is reporting a genuine moral injury — the injury of having invested years of effort in building expertise that the market is now repricing, not because the expertise lacks value, but because the market has discovered a substitute that is good enough for most purposes. The injury is compounded by the cultural narrative that frames his resistance as backwardness — the Luddite label, deployed as an insult, that converts legitimate grief into a character flaw.
Glover would insist that the grief be heard on its own terms, not translated into the triumphalist's vocabulary of adaptation and opportunity. Adaptation may be necessary. Opportunity may exist. But the person who has lost something real — the intimate relationship with a craft built through years of patient struggle — is owed the recognition that the loss is real before being offered the consolation that something new may grow in its place. The consolation is available. The recognition must come first. And the recognition carries with it a moral obligation: the obligation of the beneficiaries to build structures that honor the cost of the transition, not merely celebrate its gains.
The Luddites broke machines. Their descendants must build institutions. The difference is not merely strategic. It is moral — the difference between a response that expresses grief and a response that converts grief into obligation, obligation into structure, and structure into protection for the people who bear the cost of a change they did not choose.
Moral friction is not a metaphor. It is a specific psychological event — the discomfort felt when an action under consideration conflicts with a value the agent holds. The engineer who knows the engagement loop exploits adolescent loneliness and feels a tightening in her chest before she ships it is experiencing moral friction. The leader who knows the headcount reduction will devastate families and lies awake the night before the announcement is experiencing moral friction. The writer who suspects the AI-generated passage sounds better than it thinks and hesitates before clicking approve is experiencing moral friction.
The friction is unpleasant. That is its function.
Jonathan Glover spent his career studying the consequences of moral friction's absence. In *Humanity*, he documented case after case where the institutional conditions that produced atrocity were precisely the conditions that eliminated moral friction — the distance that prevented the agent from encountering the victim, the language that converted persons into categories, the bureaucratic architecture that diffused responsibility until no single agent felt the weight of the system's consequences. In every case, the elimination of friction preceded the elimination of restraint. The mechanisms that suppressed the discomfort also suppressed the moral response that the discomfort would have produced.
The cultural regime that Byung-Chul Han describes as the aesthetics of the smooth, and that The Orange Pill develops across its ninth and tenth chapters, acquires through Glover's lens a moral significance that neither Han nor Segal fully articulates. The smooth is not merely an aesthetic preference. It is a moral anesthetic — a systematic elimination of the discomfort on which moral selfhood depends.
The logic runs as follows. Moral identity, in Glover's account, is maintained through the repeated exercise of moral resources — sympathy, respect for persons, the sense of being a particular kind of person who would not do certain things. These resources are strengthened by exercise and weakened by neglect. Exercise occurs when the resources are called upon — when a situation arises that demands sympathy, requires the recognition of another person's dignity, or tests the agent's sense of who she is and what she would not do. These situations are, by definition, uncomfortable. They involve friction — the resistance of a moral intuition against a course of action that would be easier, more efficient, more profitable, more in keeping with the institutional incentive structure, if the moral intuition were simply overridden.
Every time the friction is felt and honored — every time the engineer pauses, the leader loses sleep, the writer deletes the passage and starts again — the moral resources that produced the friction are strengthened. The connection between the agent's actions and the agent's values tightens. The self that emerges from the encounter is a self whose moral identity is more robust, more resistant to the institutional pressures that would erode it.
Every time the friction is felt and overridden — every time the engineer ships anyway, the leader signs the reduction, the writer clicks approve — the moral resources are weakened. Not destroyed. Weakened. The connection between action and value loosens by a degree so small it is imperceptible in the moment. But the degrees accumulate. A thousand small overrides, and the self that emerges is a self whose moral identity has been restructured by the accumulated weight of its own concessions. The friction that would have prevented the thousand-and-first concession is no longer available, because the muscle that produced it has atrophied through a thousand repetitions of being ignored.
This is the psychology that Glover would bring to the aesthetics of the smooth. Han describes a cultural regime that treats the elimination of friction as an unqualified good. The iPhone's seamless surface. The one-click purchase. The frictionless checkout. The AI assistant that removes the struggle of implementation. In each case, the friction eliminated is experienced as a cost — a barrier between the agent and her goal, an obstacle to be overcome, a deficiency in the tool or the process that better design would correct.
But Glover's framework reveals that not all friction is created equal. There is mechanical friction — the tedium of repetitive tasks, the overhead of translation between human intention and machine execution, the logistical barriers that prevent a person from realizing her vision. The removal of mechanical friction is genuinely liberating. The developer who no longer spends hours on boilerplate configuration is freed to think about architecture, design, the question of what should be built. The democratization argument rests on the removal of mechanical friction, and the argument is sound.
Then there is moral friction — the resistance of values against actions, the discomfort that signals a conflict between what the agent is doing and what the agent believes. This friction is not a deficiency. It is a feature — a feature of moral selfhood, built into the psychological architecture of persons who have cultivated the capacity to care about the consequences of their actions. The removal of moral friction is not liberation. It is amputation.
The cultural regime of the smooth does not distinguish between these two kinds of friction. It treats all friction as cost and all elimination as gain. The result is a systematic program of moral amputation disguised as design improvement. The engagement loop that captures attention without the builder feeling discomfort about what she is capturing. The recommendation algorithm that reinforces existing preferences without the designer feeling unease about the closed loop she has created. The AI-assisted workflow that produces at scale without the builder encountering, at any point in the process, the specific persons who will be affected by what she produces.
In each case, the mechanical friction has been removed — the builder works faster, produces more, reaches further. And in each case, the moral friction has been removed alongside it — the discomfort that would have produced a pause, a question, a revision, a refusal, has been smoothed away by the same design philosophy that smoothed away the tedium.
Segal describes this dynamic in his account of working with Claude. The exhilaration of building is real. The capability is genuine. The danger is that the exhilaration masks the absence of the friction that would have, in a slower process, produced moral encounters with the consequences of the work. The developer who builds a feature in hours rather than weeks has gained speed. She has also lost the weeks in which the feature existed in her mind as a set of specific decisions, each one a potential site of moral friction, each one an opportunity for the human response to surface.
Glover would note that the loss is not symmetrical with the gain. The gain is visible, measurable, celebrated — lines of code generated, features shipped, products deployed. The loss is invisible, unmeasurable, and structurally indistinguishable from efficiency. A person who works faster and a person whose moral friction has been eliminated look, from the outside, exactly the same. Both ship more. Both produce more. Both report satisfaction with their tools. The difference is internal — the presence or absence of the small, uncomfortable signals that constitute moral self-monitoring — and the internal difference may not become externally visible until a failure occurs that the absent friction would have prevented.
Glover documented this pattern in institutional contexts where the elimination of moral friction preceded moral failure by months or years. The institution appeared to function normally. The outputs were produced. The metrics were met. Only when a specific catastrophe forced an external investigation did the absence of moral friction become visible — the specific moments where a pause would have prevented the harm, where a question would have changed the trajectory, where the discomfort of a single person could have altered the course of the institution's behavior, but did not, because the conditions that produce discomfort had been systematically eliminated by an institutional design that prized efficiency above all else.
The Berkeley study documents a contemporary version of this dynamic. Workers using AI tools filled previously protected pauses with additional prompts. The pauses had served as cognitive rest — but Glover's framework adds that they also served as moral rest. The unstructured minutes between tasks were not merely breaks from production. They were the spaces in which the slow, non-linear processing that includes moral reflection could occur — the moments when the engineer might think, unprompted and undirected, about whether the thing she was building was the right thing to build, about who it would affect and how, about whether the accelerating pace of her work was leaving room for the care that the work required.
When the pauses were colonized by additional AI-assisted tasks, the cognitive space for moral reflection was colonized along with them. The colonization was not experienced as a loss. It was experienced as productivity — more tasks completed, more output produced, more capability exercised. The Berkeley researchers documented the intensification. Glover's framework reveals the moral dimension of the intensification: the systematic elimination of the unstructured time in which moral friction operates, replaced by the structured productivity that the tools make possible and the institutional culture rewards.
The antidote, in Glover's framework, is not the elimination of the tools. It is the deliberate preservation of the conditions under which moral friction can operate. This means protecting time — real time, not tokenized — for the unstructured reflection that moral self-monitoring requires. It means building workflows that include mandatory encounters with the persons affected by one's work — not as a perfunctory user-research exercise but as a genuine moral practice, the deliberate construction of proximity in an environment that defaults to distance. It means cultivating the specific, uncomfortable capacity to sit with the question of whether what one is building deserves to exist — a question that the smooth, by definition, does not permit, because the smooth is the condition in which all friction, including the friction of moral doubt, has been designed away.
Glover would add, with the quiet insistence that characterized his entire career, that the preservation of moral friction is not a personal virtue. It is a structural requirement. The individual who cultivates moral friction in an institutional environment that rewards its absence is fighting a structural current — and structural currents, as he documented across dozens of historical cases, eventually overwhelm individual resistance unless the structures themselves are changed. The engineer who pauses to question what she is building, in an organization that measures her value by the speed of her output, will eventually be outcompeted by the engineer who does not pause. The moral friction will be selected against by the same institutional logic that selects for efficiency.
The structures must change. The dams must be built not as personal practices but as institutional requirements — mandatory pauses, required encounters with affected persons, metrics that measure consequence alongside output, cultures that reward the question "Should we build this?" with the same enthusiasm they reward the answer "We built it." These structures are the institutional equivalent of moral friction — the resistance, built into the architecture of the organization, that prevents the smooth efficiency of production from eliminating the discomfort on which moral selfhood depends.
The question arrives at dinner. It does not announce itself as philosophy. It wears the clothes of a child's confusion, the slightly too-casual tone that masks the fact that the asker has been turning something over for days, maybe weeks, and has finally found the courage to say it aloud.
"Mom, what am I for?"
The Orange Pill places this question at the center of its sixth chapter, and treats it as an existential inquiry about human purpose in an age when machines can produce answers to any question that can be specified. Segal's response — that the child is "for the questions," for the wondering, for the capacity to care — is emotionally generous and philosophically sound. Jonathan Glover's framework presses the inquiry one level deeper. The question is not merely existential. It is moral. And the conditions under which it can be asked, genuinely asked, are being systematically eroded by the same forces that make it urgent.
Glover understood moral development as a process that depends on specific environmental conditions, the way certain plants require specific soil chemistry to germinate. The conditions are not exotic. They are ordinary. But their ordinariness makes them invisible, and their invisibility makes them vulnerable to displacement by forces that are louder, faster, and more immediately rewarding.
The first condition is boredom. Not the word's colloquial meaning — the complaint of a child who wants entertainment — but its deeper developmental function. Boredom is the psychological state in which the external environment provides insufficient stimulation and the mind is forced to generate its own. In developmental psychology, boredom is the soil of imagination. The child who has nothing to do must invent something to do, and the invention requires the exercise of exactly the capacities that moral inquiry demands: the ability to sit with discomfort, to tolerate the absence of external direction, to generate questions rather than consume answers.
Neuroscientific research supports this understanding. The default mode network — the neural circuitry that activates when the mind is not engaged in focused, externally directed tasks — is associated with self-referential processing, moral reasoning, perspective-taking, and the construction of narrative identity. The default mode network requires unfocused time to operate. It requires gaps — spaces in the flow of stimulation where the mind, undirected, turns inward and begins the work of constructing the self that will navigate the world.
AI-saturated environments eliminate gaps with unprecedented efficiency. The child who reaches for a device in every moment of unstimulated time — and the device is designed, at the level of its fundamental architecture, to fill every gap with content optimized for engagement — is a child whose default mode network is being denied the conditions it requires to develop. The moral implications extend beyond the familiar concerns about screen time and attention spans. The child whose capacity for self-referential processing is underdeveloped is a child whose capacity for moral identity is underdeveloped — because moral identity, in Glover's account, is precisely the product of self-referential processing. It is the self reflecting on the self, asking what kind of person it is and what kind of person it wants to become.
The twelve-year-old who asks "What am I for?" is exercising the default mode network. She is engaged in self-referential processing of the most fundamental kind — the construction of moral identity through inquiry. The question did not arrive during a focused task. It arrived at dinner, in a gap, in a moment when the mind was free to wander toward the concerns that focused activity keeps at bay. The gap was the condition. Without it, the question does not arise.
The second condition is difficulty. Not difficulty in the sense of obstacles that must be overcome on the way to a goal — the kind of difficulty that AI is spectacularly good at removing. Difficulty in the developmental sense: the experience of encountering something that resists one's initial understanding and demands sustained engagement before it yields.
Moral development, in Glover's account, requires the experience of moral difficulty — the encounter with situations in which the right course of action is not obvious, in which values conflict, in which the easy answer is not the good answer. These encounters build the specific psychological capacity that Glover called the moral imagination: the ability to see a situation from multiple perspectives, to feel the pull of competing values, to tolerate the discomfort of not knowing what the right thing is while continuing to search for it.
A child who has never experienced difficulty — whose every question is answered instantly, whose every task is smoothed by a tool that removes the struggle — is a child whose moral imagination has been denied the friction it requires to develop. The child who asks a question and receives an immediate, confident, comprehensive answer from an AI system has been given information. The child has not been given the developmental experience of sitting with the question, of feeling the discomfort of not knowing, of discovering through the labor of inquiry what she actually thinks.
The distinction matters because the labor is not merely instrumental. It is constitutive. The person who arrives at a moral position through struggle — through the experience of having been uncertain, of having considered and rejected alternatives, of having felt the weight of competing values — holds that position differently from the person who receives the same position as information. The struggled-toward position is integrated into moral identity. It is part of who the person is. The received position is data — potentially useful, but not load-bearing, not part of the architecture of the self.
The third condition is trust — the child's trust that the question matters, that the asking is worthwhile, that the adults in her life take the inquiry seriously even when they cannot provide an answer. This condition is the most fragile, because it depends not on environmental design but on the quality of the relationships that surround the child.
Glover identified, across his study of moral development and moral failure, a consistent pattern: the people who maintained moral identity under institutional pressure were almost always people who had experienced, in their formative years, at least one relationship in which they were taken seriously as moral agents. A parent who listened. A teacher who asked genuine questions rather than testing for predetermined answers. A mentor who modeled the tolerance of uncertainty — who said, in effect, "I don't know the answer, but the question matters, and I will sit with you while you look for it."
AI-assisted education threatens this condition in a specific way. When the child's questions are answered by a system that provides immediate, comprehensive, confident responses, the adult's role shifts from interlocutor to administrator. The parent who might have said "I don't know — what do you think?" now says "Ask Claude." The teacher who might have held the question open, allowing the classroom to sit with the discomfort of not knowing, now directs students to a tool that eliminates the discomfort by eliminating the not-knowing.
The adults do not intend to abdicate. They intend to be helpful. The tool is there. The answer is available. The child is frustrated by the uncertainty. The path of least resistance — for the parent, for the teacher, for the child — is to use the tool. And the path of least resistance, followed consistently, eliminates the conditions under which the child develops the capacity to generate questions rather than consume answers, to tolerate uncertainty rather than flee from it, to construct moral identity through inquiry rather than receive it as information.
Glover would be characteristically specific about what is at stake. The child who develops in an environment rich in boredom, difficulty, and trusting relationships — who has experienced the developmental friction that moral identity requires — is a child who will be capable of asking "What am I for?" and meaning it, feeling the weight of the question, using the question as a tool of self-construction. The child who develops in an environment saturated with instant answers, smoothed of difficulty, and mediated by tools that replace the adult's role as moral interlocutor may ask the same question — the words may be identical — but the question will function differently. It will be a request for information rather than an act of moral self-creation. It will seek an answer rather than open a space.
The task of parents, teachers, and institutions is not to answer the child's question. It is to protect the conditions under which the question can be genuinely asked. This means protecting boredom — deliberately creating spaces in the child's life where external stimulation is unavailable and the default mode network is permitted to operate. It means protecting difficulty — designing educational experiences that include genuine struggle, genuine uncertainty, genuine moral complexity that cannot be resolved by consulting a tool. It means protecting the adult's role as moral interlocutor — resisting the convenience of outsourcing the child's questions to a system that provides answers without providing the developmental experience of being taken seriously by another person.
These protections are dams in the specific sense that *The Orange Pill* uses the term. They are structures built to redirect the flow of intelligence toward the conditions that moral development requires. They will not be built automatically. The river of AI-saturated stimulation flows toward the elimination of boredom, the smoothing of difficulty, and the replacement of human interlocutors with systems that are faster, more patient, and more comprehensive. The dams must be built against this current, and they must be built by people who understand what the current carries away.
Glover would add one observation that complicates the prescription. The adults who must build these dams — the parents and teachers and institutional leaders — are themselves products of a moral environment that is being reshaped by the same forces they are attempting to redirect. The parent who protects boredom for her child while filling her own gaps with AI-assisted productivity is modeling a contradiction the child will perceive. The teacher who insists that students sit with difficulty while using AI to streamline her own preparation is enacting a double standard that undermines the lesson.
The dams must be built not only around the child but within the adult — the deliberate cultivation of the same capacities that the adult is trying to protect in the next generation. The parent who wants her child to tolerate boredom must practice tolerating boredom herself. The teacher who wants students to struggle with difficulty must be willing to struggle with difficulty in her own intellectual life. The moral development of the child depends not only on the structures that surround her but on the example of the adults within those structures — adults whose own moral identity is robust enough to resist the current they are asking the child to navigate.
This is the deepest implication of Glover's framework for the age of AI. The moral development of the next generation depends on the moral resources of the current one. And those resources are being depleted by the same forces that make their cultivation in the next generation urgent. The cycle can be broken — Glover's career was built on the conviction that moral resources can be cultivated even under adverse conditions — but the breaking requires an effort that is specific, deliberate, and sustained. It requires adults who are willing to be uncomfortable — to sit with their own boredom, to struggle with their own difficulty, to model the tolerance of uncertainty that they are asking their children to develop.
The twelve-year-old's question deserves better than an answer. It deserves the conditions in which it can live.
---
The question that animates *The Orange Pill* from its first page to its last — "Are you worth amplifying?" — arrives, after the preceding nine chapters, stripped of its motivational sheen. Jonathan Glover's framework has done what diagnostic frameworks do: it has replaced an appealing generality with a set of specific, uncomfortable, actionable questions that the generality was concealing.
Not "Are you worth amplifying?" but: What moral identity are you constructing through the choices you make with this tool? What has happened to your capacity for the human response — the involuntary recognition of another person's humanity — in the compressed, mediated, friction-reduced environment of AI-assisted work? Is the self that reviews and approves the tool's output the same self that would have emerged from the slower, more friction-rich process of building without it? Which moral resources — sympathy, respect for persons, the sense of being a person who would not do certain things — are being exercised by your current practice, and which are atrophying through neglect? Where in the chain of production between your intention and its consequence do the persons affected by your work become visible to you — and where do they disappear?
These are not abstract questions. They are diagnostic. And Glover's career demonstrated that diagnostic questions, applied with sufficient precision, produce not paralysis but clarity. The physician who diagnoses a disease has not worsened the patient's condition. She has made treatment possible. The moral psychologist who diagnoses the specific mechanisms by which AI-assisted work erodes moral identity has not condemned the builder. She has given the builder the information necessary to build differently.
What would a moral psychology adequate to the age of amplification actually look like? Glover's work, synthesized across four decades of moral philosophy and applied to the specific conditions of the AI moment, suggests five practices. Not principles — practices. The distinction matters. Principles are held. Practices are performed. And moral identity, as Glover insisted throughout his career, is built through performance, not through the holding of correct beliefs.
The first practice is the regular interrogation of one's own moral identity. Not the grandiose question "Am I a good person?" — which is unanswerable and, in its unanswerable grandeur, useless. The specific, answerable question: "What kind of person am I becoming through the choices I am making this week? This month? In this project?" Glover demonstrated that moral erosion is incremental — each small concession invisible in the moment, the cumulative trajectory visible only from a distance no single moment provides. The practice of regular self-interrogation provides the distance artificially. It creates the vantage point from which the gradient becomes visible before the gradient has carried you somewhere you did not intend to go.
In practical terms, this means the builder who pauses, at defined intervals, to review not her output but her relationship to her output. Did she examine the AI's contribution or approve it reflexively? Did she consider the persons affected by what she built, or did the compressed timeline eliminate the space in which that consideration would have occurred? Is the self that is emerging from this week's work a self she recognizes — a self whose values are intact, whose capacity for care has been exercised rather than bypassed?
The second practice is the deliberate construction of proximity to the persons affected by one's work. Glover demonstrated that the human response — the involuntary recognition of shared humanity that makes cruelty difficult — requires proximity to operate. Distance suppresses it. Abstraction suppresses it. The replacement of faces with metrics suppresses it. In the AI-mediated work environment, where the default is distance and the incentive structure rewards speed, proximity must be constructed — deliberately, structurally, against the current.
This means more than user research. User research, as typically practiced, is a data-collection exercise in which persons are converted into insights that feed the production process. The proximity Glover's framework demands is different. It is the experience of encountering another person as a person — hearing her voice, seeing her face, registering the gap between what the builder intended and what the person experienced, and allowing that gap to produce the discomfort that is the human response in action. This encounter cannot be optimized. It cannot be scaled. It cannot be replaced by a survey, a focus group, or a dashboard. It requires the specific, irreducible friction of being in the presence of a person who is affected by what you have built.
The third practice is the preservation of moral friction against the institutional pressure to eliminate it. This means building into one's workflow — and, for leaders, into the organization's workflow — structures that resist the smooth. Mandatory pauses between conception and deployment. Required reviews that ask not "Does it work?" but "Should it exist?" Protected time for the unstructured reflection that moral self-monitoring requires. These structures will be experienced as inefficiencies by an institutional culture that measures value in terms of speed and output. They are not inefficiencies. They are the institutional equivalent of moral muscle — the resistance that prevents the smooth efficiency of production from eliminating the discomfort on which moral selfhood depends.
Glover would insist that these structures cannot be voluntary. The individual who chooses to pause in an institutional environment that rewards speed will be outcompeted by the individual who does not pause. Moral friction, to survive the selection pressure of efficiency-oriented culture, must be built into the architecture of the organization itself — not as a personal virtue but as a structural requirement, the way fire codes are structural requirements that no individual is permitted to override in the name of efficiency.
The fourth practice is the maintenance of the connection between expression and conviction. Glover's analysis of moral selfhood as dependent on the integrity of this connection — the link between what one says and what one actually believes — produces a specific discipline for the age of AI-assisted expression. Before approving any AI-generated output that will be attributed to you, presented as your judgment, or deployed as your work, ask: Do I believe this? Not "Is it plausible?" — Claude's outputs are always plausible. Not "Is it good?" — Claude's outputs are frequently good. But: "Is it mine? Does it reflect what I actually think, or has the tool's fluency substituted for my own uncertain, effortful, genuinely held position?"
This discipline is expensive. It is slow. It requires the willingness to reject polished output in favor of rough conviction. But it is the discipline through which moral selfhood is maintained in an environment that makes the substitution of plausibility for truth effortless and invisible.
The fifth practice is the acceptance of moral obligation proportional to one's understanding. Glover's framework establishes a clear principle: understanding confers responsibility. The person who understands how a system produces harm — who can trace the chain from design decision to downstream consequence — bears a moral obligation to act on that understanding. Not the obligation to fix everything. Not the obligation to bear the full weight of systemic harm. But the obligation to do what is within one's power, given one's understanding, to redirect the system toward less harmful outcomes.
In the context of AI, this means the engineer who understands engagement mechanics bears responsibility for their deployment. The leader who understands what AI-assisted work does to her team's cognitive environment bears responsibility for the structures she does or does not build. The parent who understands what AI-saturated environments do to a child's moral development bears responsibility for the conditions she creates in her home. The understanding does not create the obligation from nothing. The understanding reveals the obligation that was always present — the obligation that distance, diffusion, and the institutional architecture of carelessness had rendered invisible.
Glover would conclude — as he concluded *Humanity*, with the quiet, devastated hopefulness that made his work endure — that the moral resources available to human beings are real, cultivable, and sufficient. The human response has survived every previous institutional assault — every regime that tried to suppress it, every architecture of distance and diffusion and escalation that tried to render it inoperative. It survived because it is rooted in something that institutional design cannot fully eliminate: the capacity of one person to see another person as a person, and to feel the weight of that recognition.
AI does not eliminate this capacity. It creates conditions under which the capacity is less likely to be exercised — conditions of speed, distance, and frictionlessness that reduce the frequency and the intensity of the encounters that activate the human response. But the capacity remains. It can be cultivated. It can be strengthened through the specific practices that Glover's career identified and this book has attempted to apply. And its cultivation is not a luxury or a personal preference. It is, in the age of amplification, a moral necessity — because the amplifier carries whatever signal it receives, and the signal is determined by the moral identity of the person who produces it, and that identity is built, day by day, choice by choice, through the practices that maintain it or the negligence that allows it to erode.
The question is not whether you are worth amplifying. The question is whether you are doing the work — the specific, uncomfortable, friction-rich, daily work — of becoming someone who is.
---
The guard who could not continue.
That image has stayed with me since I first encountered Glover's work during the writing of this book — longer than any argument, any framework, any taxonomy of erosion. A guard in a camp who looked into a prisoner's eyes and froze. Not because he had read the right philosophy. Not because his institution had trained him in ethics. Because something involuntary fired in the space between two faces, and in that firing, the machinery of distance and categorization and institutional momentum broke down long enough for one person to see another person.
Glover called it the human response. I have been thinking about it every day since.
Not because I face anything comparable to what that guard faced. The comparison would be grotesque. But because the mechanism Glover identified — the involuntary recognition that makes harm difficult — operates in every human encounter, at every scale, and the conditions under which it operates are exactly the conditions that my working life, and the tools I use, and the speed at which I build, are systematically compressing.
I described in *The Orange Pill* the moment I caught myself about to keep a passage Claude had written — not because I believed it, but because it sounded right. The prose had outrun the thinking. I deleted it. I went to a coffee shop with a notebook. I spent two hours finding the rough, honest version that was actually mine. At the time, I described this as an authorship problem. After ten chapters with Glover, I understand it differently. It was a moral identity problem. Each time I accepted output without examining whether I believed it, I was loosening the connection between what I said and what I thought. Each loosening was imperceptible. The cumulative effect would not have been.
The five practices Glover's framework yields — interrogating the self you are building, constructing proximity to the people your work affects, preserving the friction that keeps you honest, maintaining the link between expression and conviction, accepting the obligation that understanding creates — these are not abstract commitments. They are things I can do on a Tuesday morning. Things I sometimes fail to do. Things I am trying to do more consistently because Glover showed me what happens, across dozens of historical cases and with devastating clarity, when they are not done at all.
I think about my children. I think about the twelve-year-old's question — "What am I for?" — and what Glover added to my understanding of it. The question is not just existential. It is moral. And the conditions under which it can be genuinely asked — boredom, difficulty, the presence of an adult who takes the asking seriously — are the conditions I am responsible for creating. Not the school. Not the platform. Me.
The amplifier does not care what signal I feed it. That sentence, which I wrote months ago, means something different to me now. It means that the work of becoming someone worth amplifying is not a metaphor for self-improvement. It is the daily, uncomfortable, friction-rich practice of maintaining the moral resources that prevent carelessness from becoming the default — carelessness that the tool will carry, at scale and speed, to people I will never see.
Glover showed me the guard who froze. He also showed me the architecture that made freezing rare — the distance, the diffusion, the incremental slide that carried ordinary people past the threshold where the human response could reach them. Both exist in my industry. Both exist in my daily practice. The question is which architecture I build — the one that compresses the space for recognition, or the one that protects it.
I do not claim to have answered the question. I claim to be asking it more honestly than I was before.
---
AI amplifies whatever you feed it. The question *The Orange Pill* asks — "Are you worth amplifying?" — sounds motivational. Jonathan Glover's four decades of studying how ordinary people participate in extraordinary harm turn it into a diagnosis. The mechanisms he identified — distance from consequences, diffusion of responsibility, the incremental slide from small concessions to moral collapse — are not relics of the twentieth century. They are design features of every system that separates builders from the people their work affects. AI compresses this separation to near-zero time while expanding it to planetary scale.
This book applies Glover's moral psychology to the age of the amplifier. It maps how the friction that once created space for conscience is being engineered away, how moral identity erodes through a thousand small approvals, and what specific practices can maintain the human response in an environment optimized to suppress it.
A reading-companion catalog of the 46 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Jonathan Glover — On AI* uses as stepping stones for thinking through the AI revolution.