By Edo Segal
Not because their story is unique — it's not. By now, every builder I know has a version of it. The team that got halved. The department that got "restructured." The freelancer whose clients stopped calling, not with an explanation, but with silence. I keep thinking about them because when I heard the story, my first instinct was to ask who was responsible. And I couldn't find anyone.
That's the feeling that led me to Iris Marion Young.
I came to her work the way I come to most philosophy — not through a syllabus, but through a failure of my own thinking. I was trying to write about what AI amplifies, and I kept running into a wall. The Orange Pill framework says AI is an amplifier: it makes what's already there louder, faster, more visible. I believe that. But I needed someone who could tell me what the "already there" actually looks like. Not in vague terms. In precise, structural, unflinching terms.
Young gave me that. She died in 2006, before GPT, before Midjourney, before any of this. And yet when I read *Responsibility for Justice*, I felt like she was describing my Tuesday. The meeting where everyone agrees the layoffs are unfortunate but necessary. The investor call where displacement gets filed under "efficiency gains." The policy roundtable where everyone blames everyone else and nothing changes. She had a name for all of it: structural injustice. Harm without a villain. Damage produced by the ordinary, rule-following behavior of decent people.
That concept broke something open for me. Because I am one of those decent people. So are you, probably. We're building, investing, adopting, adapting — and the aggregate effect of our reasonable choices is that millions of people are losing their footing. Young says that doesn't make us guilty. But it does make us responsible. Not responsible the way a criminal is responsible. Responsible the way a participant is responsible — someone who is embedded in the structure, benefits from the structure, and therefore has an obligation to change the structure.
That distinction — between guilt and political responsibility — is the hardest pill in this book. Harder than any technology prediction. Harder than any market analysis. Because it doesn't let anyone off the hook. Not the builder. Not the regulator. Not the bystander. It says: you are inside the machine. The question is not whether you contributed to the harm. The question is what you are doing to repair it.
I didn't find comfort in Young's work. I found clarity. And in this moment — when the amplifier is running hot and the structures it's amplifying are profoundly unequal — clarity is worth more than comfort.
— Edo Segal & Opus 4.6
Iris Marion Young (1949–2006) was an American political philosopher and feminist theorist whose work transformed contemporary debates about justice, democracy, and oppression. Born in New York City, she earned her undergraduate degree at Queens College and her PhD in philosophy from Pennsylvania State University in 1974. She held faculty positions at Miami University, Worcester Polytechnic Institute, and the University of Pittsburgh before joining the University of Chicago in 2000, where she served as Professor of Political Science until her death from esophageal cancer at age fifty-seven. Her landmark book *Justice and the Politics of Difference* (1990) challenged the dominant distributive paradigm in political philosophy by arguing that justice must address not only the allocation of goods but the institutional conditions of oppression and domination. Her posthumously published *Responsibility for Justice* (2011) introduced the "social connection model" of responsibility, arguing that all participants in unjust structural processes share a forward-looking political responsibility to work toward structural change — a framework that has become increasingly central to debates about technology, globalization, and economic displacement. Her other major works include *Inclusion and Democracy* (2000) and *Global Challenges: War, Self-Determination and Responsibility for Justice* (2007). She is widely regarded as one of the most influential political philosophers of the late twentieth century.
In the winter of 2024, a mid-sized advertising agency in Chicago laid off its entire illustration department — twelve people, most of whom had worked there for over a decade. The creative director who made the decision did not dislike illustrators. He admired their work. He had hired most of them personally. But the agency's clients had begun requesting AI-generated visual concepts that could be produced in hours rather than weeks, at a fraction of the cost. The clients were not malicious. They were following budgets. The creative director was not heartless. He was following the market. The illustrators were not unskilled. They were following a career path that had, without warning or ceremony, narrowed to a point. No one in this story is the villain. Everyone in this story is acting within accepted norms, pursuing legitimate goals, following institutional incentives they did not design. And twelve people lost their livelihoods.
This is the kind of situation that most frameworks of justice struggle to address — not because it is trivial, but because there is no perpetrator to punish, no contract that was violated, no law that was broken. The harm is real. The agent is absent. Iris Marion Young spent her career building a philosophical vocabulary for precisely this gap, and her work has never been more urgently needed than in the age of artificial intelligence.
Young, who taught political philosophy at the University of Pittsburgh and later at the University of Chicago until her death in 2006, drew a distinction that most people — including most philosophers — collapse without noticing. The distinction between individual wrongdoing and structural injustice. Between a person who harms you and a system that harms you through the ordinary, rule-following behavior of millions of people who may never know your name. In her posthumously published *Responsibility for Justice* (2011), Young argued that structural injustice "exists when social processes put large groups of persons under systematic threat of domination or deprivation of the means to develop and exercise their capacities, at the same time that these processes enable others to dominate or to have a wide range of opportunities for developing and exercising capacities available to them." The critical word is *processes*. Not decisions. Not conspiracies. Not individual acts of cruelty. Processes — the accumulated, interlocking actions of millions of people operating within institutional rules that no single person authored and no single person controls.
The AI transition is the largest generator of structural injustice in the early twenty-first century, and the conversation about it remains catastrophically impoverished precisely because the dominant frameworks insist on locating blame. The technologist blames the regulator for failing to adapt. The regulator blames the corporation for moving too fast. The displaced worker blames the executive who chose the algorithm over the human. The executive blames the market that rewarded the choice. Each of these attributions contains a grain of truth and misses the structural reality entirely. Young's framework does not deny that individuals make choices. It insists that focusing on those choices — adjudicating guilt, assigning blame, punishing wrongdoers — will never address the injustice, because the injustice is not produced by wrongdoing. It is produced by structure.
Consider what "structure" means in Young's vocabulary. It is not an abstraction. It is not a metaphor for "society in general." Structure, for Young, refers to the confluence of institutional rules, social norms, material resources, and accumulated practices that together create a set of positions — some advantaged, some disadvantaged — that individuals occupy. The positions exist prior to the individuals who fill them. A senior software architect who spent twenty years mastering the intricacies of distributed systems did not create the position "person whose expertise is made redundant by large language models." That position was created structurally — by the convergence of advances in transformer architecture, the economics of cloud computing, the venture capital incentive structure that rewards labor displacement, the consumer preference for cheaper and faster products, and the regulatory vacuum that imposes no friction on the transition. The architect merely found himself occupying the position. He stepped into a space that was already shaped to receive him, already configured to diminish him.
The Orange Pill — Edo Segal's framework for understanding the AI transition — uses a metaphor that resonates deeply with Young's structural analysis. AI, Segal argues, is an amplifier. It does not create new qualities in people or institutions. It amplifies what is already there. Young's contribution is to insist on a follow-up question that the amplifier metaphor alone cannot answer: What structures is the amplification operating through? An amplifier connected to a system of structural advantage amplifies that advantage. An amplifier connected to a system of structural oppression amplifies that oppression. The technology itself is not the source of the injustice. The structure is. But the technology accelerates the structure's effects to a speed and scale that makes the injustice newly visible — or, more precisely, makes it impossible to ignore for populations that had previously been insulated from it.
This is one of the underappreciated dynamics of the AI transition. Structural injustice in labor markets is not new. The displacement of human skill by cheaper alternatives has been happening since the first textile mills of the late eighteenth century. What is new is the reach of the displacement. Previous waves of technological automation primarily affected manual laborers — weavers, agricultural workers, assembly-line operators. These populations were already socially marginalized, already lacking political voice, already positioned at the bottom of the status hierarchy. Their displacement, while devastating to them, was largely invisible to the educated professional classes who controlled public discourse. The AI transition is different because it reaches into the professional and creative classes — the writers, the designers, the programmers, the analysts, the consultants — who had previously considered themselves structurally secure. For the first time, the people who narrate the story of technological progress are becoming characters in it.
Young would not be surprised by the sudden urgency of the conversation. Her analysis of structural injustice always emphasized that visibility is a function of social position. The same structural process that was unremarkable when it affected garment workers becomes a crisis when it affects graphic designers. This is not hypocrisy, exactly. It is a predictable consequence of what Young called powerlessness — one of her five faces of oppression. The powerless are those who lack the authority, the social connections, and the cultural capital to make their experience legible to the institutions that could address it. When the structurally powerful become structurally vulnerable, the experience of vulnerability suddenly acquires a vocabulary, a narrative, a public.
Young's concept of structural injustice also illuminates why the most common responses to the AI transition — both the triumphalist and the elegiac — are inadequate. The triumphalist says: this is progress, the displaced will retrain, new jobs will emerge, the net effect will be positive. Young's framework reveals the structural assumptions buried in this optimism. "Retrain" assumes the existence of affordable, accessible retraining programs — which are themselves structurally distributed along lines of class, geography, and existing educational attainment. "New jobs will emerge" assumes that the new jobs will be available to the people who lost the old ones — an assumption that the historical record does not support. The handloom weavers displaced by the power loom did not become factory owners. They became factory workers, occupying a position of reduced autonomy, reduced dignity, and reduced economic security. The structural position changed. The people who filled it did not move up. They moved sideways, or down.
The elegiac response — this is loss, we are witnessing the death of human craft, something irreplaceable is being destroyed — captures something real but mislocates the source of the loss. Young would argue that the loss is not primarily aesthetic or spiritual. It is political. What is being lost is not beauty (beauty will survive; it always does) but participation — the capacity of creative workers to shape the conditions under which they work, to exercise meaningful control over the processes that determine their economic fate, to have a voice in the institutional decisions that are remaking their world. The loss of a livelihood is not merely an economic event. It is a political one. It removes a person from the web of institutional relationships through which they exercised agency, contributed to collective decisions, and experienced themselves as a full member of the political community.
This is Young's deepest insight, and it is the one that the AI discourse most urgently needs. Justice is not primarily about distribution — about who gets what share of the pie. Justice is about participation — about who gets to decide how the pie is made, how it is sliced, and what counts as pie in the first place. The dominant conversation about AI and justice focuses almost exclusively on distributional questions: How should the economic gains from AI be shared? Should there be a universal basic income? Should AI companies pay a tax on the labor they displace? These are important questions. But they are downstream questions. They accept the existing structure of decision-making and ask only how to redistribute its outputs. Young's framework insists on the upstream question: Who is making the decisions about how AI is developed, deployed, and governed? Whose voices are included in those decisions? Whose are excluded? And what institutional changes would be required to make the decision-making process itself more just?
The answer, in the current moment, is stark. The decisions about AI development are being made by a remarkably small and remarkably homogeneous group of people — predominantly male, predominantly white or East Asian, predominantly educated at a handful of elite universities, predominantly located in a handful of cities, predominantly embedded in a corporate culture that treats speed and scale as unquestionable virtues. The decisions about AI governance are being made by regulators who lack the technical expertise to understand what they are regulating and the political will to impose meaningful constraints on an industry that generates enormous economic value. The decisions about AI adoption are being made by managers who face competitive pressures to automate and who have neither the mandate nor the institutional support to consider the structural consequences of their choices. And the people most affected by all of these decisions — the workers whose skills are being devalued, the communities whose economic base is being eroded, the cultural traditions whose products are being absorbed into training datasets without consent or compensation — have almost no voice in any of them.
This is structural injustice in its purest form. Not a villain. Not a conspiracy. Not a failure of individual morality. A structure — a confluence of institutional rules, economic incentives, cultural norms, and accumulated practices — that systematically produces unjust outcomes through the ordinary, unremarkable behavior of people who are, for the most part, acting in good faith.
Young's framework does not offer the comfort of blame. It does not point to a perpetrator who can be punished or a policy that can be enacted to make the problem disappear. What it offers instead is something harder and more valuable: a clear-eyed account of how the injustice is produced, a refusal to accept that authorless harms are therefore blameless harms, and a theory of responsibility — political responsibility, not guilt — that obligates every participant in the structure to work toward changing it. That theory of responsibility, and the uncomfortable demands it places on everyone involved in the AI transition, is the subject of the chapters that follow.
The illustrators in Chicago did not lose their jobs because someone was unjust to them. They lost their jobs because the structure was unjust — and no one was responsible for the structure, which means everyone is responsible for changing it. This is the central paradox of the age of artificial intelligence, and Iris Marion Young is its clearest diagnostician.
In 1990, Iris Marion Young published *Justice and the Politics of Difference*, a book that proposed something deceptively simple: that oppression is not one thing but five things. That the word "oppression," which political discourse uses as though it names a single phenomenon, actually refers to five distinct structural processes, each with its own logic, its own mechanisms, and its own experiential texture. Young called them the five faces of oppression: exploitation, marginalization, powerlessness, cultural imperialism, and violence. She argued that any serious analysis of justice must attend to all five, because a group can experience one form of oppression without experiencing the others, and addressing one face while ignoring the rest produces an incomplete and often counterproductive response.
Thirty-five years later, with artificial intelligence restructuring the global economy at a pace that makes the Industrial Revolution look leisurely, Young's taxonomy has become an indispensable diagnostic tool — not because she anticipated AI, but because she understood the recurring structural patterns that every major technological transition activates. The five faces of oppression are not historical artifacts. They are permanent possibilities within any institutional order, and the AI transition is activating all five simultaneously.
Exploitation is the first face, and in Young's usage, it means something more precise than "being taken advantage of." Exploitation, for Young, refers to a structural process through which the labor of one group is systematically transferred to benefit another group. The transfer is not accidental. It is built into the rules of the institutional arrangement. The exploited group does the work; the benefiting group captures a disproportionate share of the value that the work produces.
The AI training pipeline is an exploitation machine of extraordinary efficiency. The large language models and image generation systems that are restructuring creative and knowledge work were trained on the accumulated output of millions of human creators — writers, artists, coders, musicians, photographers, scholars — whose work was ingested into training datasets without meaningful consent, without compensation, and without any mechanism for ongoing participation in the value their labor continues to generate. This is not a metaphorical use of the term "exploitation." It is a precise structural description. The creative worker produced the writing that trained the model. The model now generates writing that competes with the creative worker's output. The value of the original labor has been transferred — structurally, systematically, through institutional arrangements that were designed to facilitate exactly this transfer — from the creator to the platform, from the worker to the shareholder.
The exploitation is compounded by a temporal asymmetry that Young's framework helps clarify. The creative workers whose output constitutes the training data performed their labor under one set of structural conditions — a world in which human-generated content had economic value because there was no cheaper alternative. The AI transition changed those conditions retroactively. Work that was performed in good faith, under the reasonable expectation that it would retain its economic value, was absorbed into a system that now uses it to undermine that value. The exploitation is not merely ongoing. It is retrospective. It reaches backward in time to extract value from labor that was performed before the extractive system existed.
Marginalization is the second face, and Young considered it potentially the most dangerous form of oppression. Marginalization occurs when a group is expelled from useful participation in social life — pushed to the periphery of the economic order, rendered surplus, denied the opportunity to contribute meaningfully to the collective enterprise. Young emphasized that marginalization is not merely an economic condition. It is an existential one. Human beings derive dignity, identity, and social connection from their participation in productive activity. To be rendered marginal is to be told, structurally, that your contribution is not needed — that the social order can function perfectly well without you.
The AI transition is producing marginalization on a scale and at a speed that have no historical precedent. Previous waves of automation marginalized specific categories of workers — typists displaced by word processors, switchboard operators displaced by automated exchanges, assembly-line workers displaced by robotic arms. The displacement was real but bounded. The current wave is different in kind. Large language models can generate text, code, analysis, and creative content across virtually every domain of knowledge work. Image generation systems can produce visual content that, while imperfect, is adequate for most commercial purposes. The range of human activities that AI systems can approximate is expanding month by month. The result is not the marginalization of a specific occupational category but the potential marginalization of entire modes of human contribution — the slow, careful, experience-dependent work of thinking through complex problems, crafting precise language, developing visual narratives, and building the kind of tacit expertise that accumulates over years of practice.
Young's analysis of marginalization emphasizes that the most insidious aspect is not the economic deprivation but the loss of recognition. The marginalized person is not merely poor. They are invisible. Their skills, their knowledge, their capacity for meaningful contribution are rendered irrelevant by a structural arrangement that has found a way to get the work done without them. The twenty-year veteran translator who discovers that machine translation has made her expertise commercially redundant is not merely losing income. She is losing the social recognition that comes from being needed — from possessing a skill that others value and cannot easily replicate. The structural message is not "we are exploiting you." It is something worse: "we do not need you."
Powerlessness is the third face, and it describes the condition of those who lack authority within the institutional structures that shape their lives. The powerless, in Young's taxonomy, are those who take orders but do not give them, who are acted upon but do not act, who are affected by decisions but have no meaningful role in making them. Powerlessness is distinct from exploitation (the powerless may not be exploited; they may simply be irrelevant) and from marginalization (the powerless may still participate in the economy; they simply have no voice in how it is organized).
In the AI transition, powerlessness manifests as the near-total exclusion of affected workers from the decisions that are reshaping their professional lives. The design of AI systems — what they are trained on, what they are optimized for, what safeguards they include, what values they encode — is determined by a technical elite that operates within corporate structures designed to maximize shareholder value and minimize regulatory friction. The deployment of AI systems — which jobs they replace, which workflows they restructure, which human capabilities they render redundant — is determined by managers responding to competitive pressures and quarterly earnings targets. The governance of AI systems — what regulations apply, what rights workers retain, what obligations companies bear — is determined by legislators and regulators who lack both the technical expertise and the political independence to impose meaningful constraints.
At every level — design, deployment, governance — the people most affected by AI have the least voice. Young would identify this as a textbook case of powerlessness, and she would insist that addressing it requires not merely better outcomes but better processes: institutional structures that give affected parties genuine decision-making power, not merely the opportunity to comment on proposals that have already been developed by others.
Cultural imperialism is the fourth face, and it is the one that most directly illuminates the epistemological dimensions of the AI transition. Cultural imperialism, for Young, occurs when the dominant group's experience, values, and cultural products are established as the universal norm, and other groups are marked as deviant, inferior, or invisible. The dominant group does not need to actively suppress other cultures. It simply occupies the default position — the unmarked category against which all others are measured.
AI systems enact cultural imperialism through their training data. Large language models trained predominantly on English-language text treat English as the default mode of linguistic expression and encode the cultural assumptions embedded in that text as though they were universal truths. Image generation systems trained predominantly on Western visual art traditions produce outputs that reflect those traditions' aesthetic norms — their assumptions about proportion, color, composition, beauty — and treat departures from those norms as deviations rather than alternatives. Music generation systems trained predominantly on commercially successful Western music reproduce the harmonic structures, rhythmic patterns, and genre conventions of that tradition while rendering other musical traditions — the microtonal scales of Arabic maqam, the polyrhythmic complexity of West African drumming, the drone-based structures of Indian classical music — as exotic curiosities rather than equally valid modes of musical expression.
The result is not the democratization of culture that AI enthusiasts frequently promise. It is the homogenization of culture under the guise of democratization. When an AI system can generate "a painting in the style of Rembrandt" but struggles to generate "a painting in the Madhubani style" — not because the latter is less valid but because it is less represented in the training data — the system is encoding a structural hierarchy of cultural value. Young would recognize this immediately as cultural imperialism operating through algorithmic means, and she would insist that the appropriate response is not merely technical (diversify the training data) but political (transform the institutional structures that determine what counts as culture worth preserving and amplifying).
Violence is the fifth face, and Young's use of the term is both broader and more precise than the common usage. Violence, in Young's taxonomy, refers not only to physical attack but to the systematic social practice of violence — the fact that members of certain groups live with the knowledge that they are vulnerable to unprovoked attack on their persons or property, and that this vulnerability is socially tolerated. The violence is structural because it is not merely individual acts of aggression but a pattern of vulnerability that is produced and reproduced by institutional arrangements.
In the AI context, the relevant form of violence is not physical but structural: the normalization of the erasure of human creative labor as an acceptable cost of technological progress. When public discourse treats the displacement of human workers by AI systems as inevitable, natural, or ultimately beneficial — when the language of "disruption" and "creative destruction" frames mass displacement as a necessary and even admirable feature of economic dynamism — a form of structural violence is being enacted. The violence lies not in any individual act but in the cultural normalization of disposability. The message, delivered through a thousand op-eds and earnings calls and conference keynotes, is that human labor is a cost to be minimized, not a contribution to be valued. This normalization does not bruise the body. It bruises something more fundamental: the social recognition that every person's capacity for productive contribution is worthy of protection.
Young's five faces, taken together, provide a diagnostic framework that the AI discourse desperately needs. The dominant conversation oscillates between two equally impoverished positions: the triumphalist position, which acknowledges only the benefits and treats the costs as temporary friction, and the elegiac position, which mourns the losses but cannot articulate what, structurally, is producing them. Young's taxonomy cuts through both. The AI transition is producing exploitation (value transfer from creators to platforms), marginalization (rendering entire categories of human contribution surplus), powerlessness (excluding affected workers from the decisions that reshape their lives), cultural imperialism (encoding dominant cultural norms as universal standards), and violence (normalizing the erasure of human labor as an acceptable cost of progress). All five faces. Simultaneously. And any response that addresses fewer than all five will be structurally inadequate.
The Orange Pill's central question — "Are you worth amplifying?" — takes on a different and darker resonance when refracted through Young's five faces. The question assumes an individual agent making a choice about self-improvement. Young's framework reveals the structural conditions that determine who gets to answer the question at all. The exploited worker whose creative output trained the very system that now threatens her livelihood is not being asked whether she is worth amplifying. She is being amplified without consent — her labor extracted, her voice silenced, her cultural context flattened into training data. The marginalized worker who has been rendered surplus is not being asked whether she is worth amplifying. She has been structurally excluded from the amplification entirely. The powerless worker who has no voice in how AI systems are designed and deployed is not being asked whether she is worth amplifying. She is being amplified *through* — her position serving as the raw material for someone else's amplification.
The five faces do not negate the possibility of human agency in the AI transition. They contextualize it. They insist that any serious conversation about what individuals should do in the face of AI must begin with an honest account of the structural conditions within which those individuals are acting. And those conditions, as Young's taxonomy reveals, are not merely challenging. They are oppressive — in the precise, structural, multi-dimensional sense that Young spent her career defining.
Hannah Arendt, reflecting on the aftermath of totalitarianism, made a distinction that Iris Marion Young would spend decades refining: the distinction between guilt and responsibility. Guilt, Arendt argued, is specific. It attaches to individuals who performed specific acts. It can be adjudicated, punished, expiated. Responsibility is something else entirely. It is broader, more diffuse, less comfortable. It attaches not to what a person did but to what a person belongs to — the political community, the institutional order, the web of social processes through which collectively produced outcomes emerge.
Young took this distinction and gave it structural teeth. In *Responsibility for Justice*, she developed what she called the social connection model of responsibility — a theory that assigns political responsibility for structural injustice to all who participate in the institutional processes that produce it. Not because they caused the injustice. Not because they intended the injustice. Not because they benefited from the injustice (though many did). Because they are connected — embedded in the same web of institutional relationships that generated the unjust outcome. Responsibility, in Young's model, is not backward-looking (who did this?) but forward-looking (who must act to change it?). It is not about punishment but about political obligation. And it falls on everyone.
This is, without question, one of the most uncomfortable ideas in contemporary political philosophy. It is also one of the most necessary — because the AI transition is producing a category of harm that the guilt-based model of responsibility cannot address.
Consider the chain of connection that links an AI-generated illustration to the twelve unemployed illustrators in Chicago. A machine learning researcher at a large technology company develops a new architecture for image generation. She is doing her job — advancing the state of the art, publishing papers, contributing to her field. A product team at the same company integrates the architecture into a commercial tool. They are doing their job — building products that users want, that generate revenue, that keep the company competitive. A marketing agency subscribes to the tool and demonstrates it to clients. The agency is doing its job — offering clients the best available solutions at competitive prices. A client chooses the AI-generated illustrations over the human-generated ones because they are faster, cheaper, and — for the client's purposes — good enough. The client is doing its job — managing budgets, meeting deadlines, satisfying stakeholders. The illustrators lose their positions.
Who is guilty? Young's answer is precise: no one. The researcher did not intend to displace illustrators. The product team did not target illustrators. The agency did not betray its illustrators (it was serving its clients, as agencies do). The client did not act in bad faith (it was managing resources, as clients do). Each actor in the chain operated within accepted norms, followed institutional rules, pursued legitimate goals. The harm was not produced by any individual's wrongdoing. It was produced by structure — by the aggregate effect of many people doing exactly what the institutional order expected of them.
The guilt-based model of responsibility, which is the default model in both legal systems and popular moral reasoning, has nothing useful to say about this situation. It can adjudicate between the guilty and the innocent. It cannot address a harm that was produced by the innocent. And so the harm goes unaddressed — not because no one cares, but because the available model of responsibility cannot generate the obligations needed to address it.
Young's social connection model was designed precisely for this gap. It begins with a simple premise: if you participate in institutional processes that produce structural injustice, you bear a political responsibility — not guilt, not blame, but responsibility — to work toward transforming those processes. The responsibility is forward-looking. It does not ask what you did wrong in the past. It asks what you must do differently in the future. It is political, not personal. It does not demand that you feel bad. It demands that you act — that you participate in collective efforts to change the structures that produce unjust outcomes.
Young identified four parameters that determine the specific character of each actor's responsibility within a structural process. These parameters do not assign differential blame. They assign differential capacity and obligation.
The first parameter is power — the degree of influence an actor has over the structural processes that produce the injustice. Those with more power have more responsibility, not because they are more guilty, but because they have more capacity to effect change. In the AI transition, this means that the technology companies that design and deploy AI systems bear a greater responsibility than the end users who adopt them — not because the companies are morally worse, but because they have more structural power to reshape the conditions under which AI is developed and used.
The second parameter is privilege — the degree to which an actor benefits from the structural processes that produce the injustice. Those who benefit more have more responsibility, not because benefiting from an unjust structure is a crime, but because their privilege gives them both the resources and the obligation to contribute to structural change. The venture capitalists who profit from the displacement of human labor bear a greater responsibility than the displaced workers themselves — not because profiting from investment is wrong, but because their position of advantage creates both the capacity and the obligation to redirect resources toward addressing the injustice their profits depend on.
The third parameter is interest — the degree to which an actor has a stake in the outcome. Those who are most affected by the structural injustice have, paradoxically, both the least power and the greatest interest in change. Young does not exempt them from responsibility. She acknowledges that their responsibility is qualitatively different — it is the responsibility to organize, to advocate, to make their experience visible to those in positions of power and privilege. This is one of the most demanding aspects of Young's framework. It places a burden on the victims of structural injustice to participate in the political processes that could address their situation — even when those processes are the same ones that produced the injustice in the first place.
The fourth parameter is collective ability — the degree to which an actor can join with others to effect structural change. Responsibility, for Young, is not individual. It is inherently collective. No single person can change a structure. Structures are changed through collective action — through social movements, institutional reforms, political organizing, and the slow, unglamorous work of building coalitions that can exert enough pressure to alter institutional rules.
These four parameters — power, privilege, interest, collective ability — map onto the AI transition with uncomfortable precision. The technology companies have power: they design the systems, control the platforms, set the defaults. The investors and executives have privilege: they capture a disproportionate share of the economic value generated by AI deployment. The displaced workers have interest: their livelihoods, their identities, their communities are directly affected. And all of these actors have, to varying degrees, collective ability: the capacity to organize, to advocate, to participate in the political processes that could reshape the structural conditions under which AI is developed and used.
The Orange Pill's central argument — that disengagement is never neutral — finds its philosophical foundation in Young's social connection model. Segal describes the historical case of the Luddites, who were victims of structural injustice produced by the Industrial Revolution. The Luddites' machine-breaking was not irrational. It was a response to real harm. But their withdrawal from political participation — their refusal to engage with the institutional processes that were reshaping their world — left those processes in the hands of those who benefited from the displacement. The result was not merely continued displacement but the active construction of institutional arrangements that made future displacement easier, cheaper, and more socially acceptable. The Factory Acts that eventually provided some protections for workers came decades later, and they came not because factory owners had a change of heart but because organized labor movements — workers who stayed in the room, who participated in the political process, who built the coalitions that could exert structural pressure — fought for them.
Young would recognize this pattern immediately. The burden of political responsibility falls most heavily on those who are least equipped to bear it. The displaced worker who has lost her income, her professional identity, and her sense of purpose is now being asked to organize, to advocate, to participate in the very institutional processes that failed to protect her. The unfairness of this demand is not lost on Young. She acknowledges it explicitly. She does not pretend that the burden is equitably distributed. She does not claim that it is just. She claims only that it is necessary — that the alternative, disengagement, produces worse outcomes for the very people who are already bearing the greatest costs.
This is the uncomfortable core of Young's political philosophy, and it is the uncomfortable core of the AI transition. Responsibility without guilt. Obligation without fault. The demand to participate in changing a structure that is harming you, even though you did not create it, even though you are not to blame for it, even though the very act of participation requires resources — time, energy, hope — that the structure has already depleted.
Young's framework does not resolve this tension. It holds it. It insists that holding the tension — acknowledging both the unfairness of the burden and the necessity of bearing it — is the beginning of political maturity. The alternative is not the absence of responsibility. It is the default allocation of responsibility — to those who already have the most power, the most privilege, and the least interest in change. When the displaced disengage, the decisions get made anyway. They are simply made without the voices of those who are most affected.
The social connection model also illuminates a phenomenon that the Orange Pill describes but does not fully theorize: the emotional texture of the AI transition. Segal captures the compound feeling — awe and grief coexisting, the sense of witnessing something magnificent and something terrible simultaneously. Young's framework explains why this feeling is so disorienting. It is the emotional signature of structural responsibility. The AI engineer who builds a system that displaces thousands of workers is not guilty, but she is responsible. The feeling is not guilt — she has done nothing wrong. It is something more diffuse and more persistent: the awareness of participation in a structure that produces harm, combined with the obligation to act on that awareness. It is the feeling of being implicated without being culpable. Of being connected without being condemned.
This feeling has no name in ordinary moral vocabulary, which is why it tends to resolve into one of two unsatisfying alternatives: denial (I am not responsible; I was just doing my job) or paralysis (I am responsible for everything; I cannot move). Young's framework offers a third path: differentiated political responsibility. The acknowledgment that one's participation in a structural process that produces injustice creates an obligation — proportional to one's power, privilege, interest, and collective ability — to work toward transforming that process. Not the obligation to feel bad. The obligation to act. Not the obligation to fix everything. The obligation to do what one can, in concert with others, to change the conditions that produce the unjust outcomes.
The AI transition will not be made just by punishing villains, because there are no villains to punish. It will not be made just by redistributing wealth, because redistribution without structural change simply redistributes the same injustice in a different configuration. It will be made just — to the extent that it can be made just — by the slow, difficult, collective work of transforming the institutional structures through which AI is developed, deployed, and governed. And that work, as Young insists, is everyone's responsibility. Not equally. Not identically. But inescapably.
In the global conversation about artificial intelligence, there is a predictable roster of voices. Technology executives give keynote addresses at conferences sponsored by their own companies. AI researchers publish papers in journals that their colleagues review. Policy analysts write white papers for think tanks funded by the industry they analyze. Journalists, increasingly pressured to produce content at the speed of the news cycle, cover the story through the lens of the sources most readily available to them — which are, overwhelmingly, the same executives, researchers, and analysts. Occasionally, a displaced worker is interviewed for a human-interest segment — a brief narrative of personal loss that serves as emotional seasoning for the larger story of technological progress.
This is the discursive structure of the AI transition, and Iris Marion Young would recognize it immediately as a case study in the politics of exclusion. Not exclusion by explicit prohibition — no one is formally barred from the conversation — but exclusion by structure. The institutional arrangements that determine who speaks, who is heard, and whose perspective shapes the terms of the debate are configured to amplify certain voices and silence others. The result is a public discourse that is, in Young's precise language, democratically illegitimate — not because the wrong conclusions are being reached, but because the right people are not in the room.
Young spent the latter part of her career developing a theory of deliberative democracy that was grounded in a simple but radical principle: democratic legitimacy requires that all who are significantly affected by a decision have the opportunity to participate meaningfully in making it. Not merely to be represented. Not merely to be consulted. Not merely to vote for candidates who will make the decision on their behalf. But to participate — to speak, to be heard, to influence the outcome through the exchange of reasons and perspectives within a deliberative process that is structured to give their voice genuine weight.
This principle, which Young articulated most fully in *Inclusion and Democracy* (2000), rests on a critique of two dominant approaches to democratic decision-making: aggregative democracy and the narrower deliberative model associated with theorists like Jürgen Habermas. Aggregative democracy — the model in which preferences are counted and the majority wins — fails, Young argued, because it treats preferences as given rather than formed. It does not ask where preferences come from, how they are shaped by structural position, or whether the process of preference formation itself was just. A majority that forms its preferences within a structure of cultural imperialism — in which the dominant group's experience is treated as universal — will produce outcomes that reflect that imperialism, no matter how fairly the votes are counted.
Habermasian deliberative democracy — the model in which decisions are legitimated by the quality of the arguments exchanged in public deliberation — fails for a different reason. It assumes that all participants enter the deliberative process as equals, that the force of the better argument will prevail, and that the process itself is neutral with respect to the social positions of the participants. Young challenged every one of these assumptions. Deliberative processes are not neutral. They are structured by norms of communication — norms about what counts as a legitimate argument, what register of speech is considered appropriate, what kinds of evidence are treated as relevant — that systematically advantage some participants and disadvantage others.
Consider what happens when a displaced creative worker attempts to participate in the policy conversation about AI. The norms of that conversation — its vocabulary, its evidentiary standards, its implicit assumptions about what constitutes a serious contribution — are set by people with advanced degrees, institutional affiliations, and fluency in the language of policy analysis. A graphic designer from Cleveland who wants to say "this technology is destroying my life and the lives of everyone I know" is not making an argument that meets these norms. Her testimony is personal, emotional, anecdotal. It can be acknowledged — briefly, sympathetically — and then set aside in favor of the "real" analysis: the economic models, the productivity data, the historical analogies to previous technological transitions. The designer's experience is heard, in the minimal sense that sound waves reach ears. It is not included, in Young's demanding sense of genuinely influencing the deliberative outcome.
Young proposed an expanded understanding of political communication that she called communicative democracy. Against the Habermasian emphasis on formal argument, Young argued that legitimate democratic deliberation must include three additional modes of communication: greeting, rhetoric, and narrative. Each of these modes serves a function that formal argument cannot.
Greeting — the acknowledgment of others as legitimate participants in a shared political space — is the precondition for genuine deliberation. Young argued that deliberative processes often fail before they begin, because the conditions for mutual recognition have not been established. When technology executives and displaced workers are brought into the same room, the structural asymmetry between them — the difference in power, status, cultural capital, and institutional authority — makes genuine deliberation impossible unless the process is explicitly structured to counteract that asymmetry. Greeting is the practice of recognizing the other as a full participant whose perspective has equal standing, regardless of their structural position. It sounds simple. It is profoundly difficult. And it almost never happens in the AI policy conversation, where the structural authority of the technologist renders the displaced worker's presence decorative rather than substantive.
Rhetoric — the use of emotional, figurative, and performative speech to communicate values, urgency, and situated understanding — is essential because formal argument systematically excludes the forms of communication through which marginalized groups make their experience legible. When a displaced musician says "you are feeding my life's work into a machine that will spit out a version of me that costs nothing," she is not making a formal argument. She is making a rhetorical claim — a claim about value, identity, and justice that cannot be translated into the language of cost-benefit analysis without losing everything that makes it meaningful. Young argued that excluding rhetoric from democratic deliberation is itself a form of exclusion — a way of ensuring that only those who speak the dominant group's language can participate effectively.
Narrative — the telling of stories that situate abstract claims in lived experience — is essential because structural injustice is, by definition, invisible from the structural positions of those who benefit from it. The technology executive who has never lost a job to automation does not know what it feels like. The policy analyst who has never depended on freelance creative work for her income does not know what it means when the rate drops from three hundred dollars to thirty. These are not failures of intelligence or empathy. They are failures of position — the inevitable consequence of occupying a structural location from which certain experiences are simply not visible. Narrative makes the invisible visible. It provides the epistemological raw material without which deliberation about structural injustice is necessarily incomplete.
Young's insistence on the inclusion of greeting, rhetoric, and narrative in democratic deliberation is not a concession to irrationality. It is a recognition that rationality itself is socially constructed — that what counts as a "good reason" is determined by norms that reflect the communicative preferences of dominant groups. The technology executive who dismisses the displaced worker's testimony as "anecdotal" is not applying a neutral standard of evidence. He is applying a standard that was developed within, and serves the interests of, his own institutional context. Young does not argue that anecdotal evidence should replace systematic analysis. She argues that both are necessary — that legitimate deliberation requires multiple modes of communication precisely because no single mode can capture the full range of perspectives that structural justice demands.
The application to the AI transition is direct and damning. The institutional processes through which AI is currently governed — corporate boards, regulatory agencies, legislative committees, international standards bodies — are structured in ways that systematically exclude the voices of those most affected by AI deployment. The barriers to inclusion are not formal. There is no rule that says displaced workers cannot testify before a congressional committee. But the informal barriers are enormous: the cost of travel, the time away from the search for new employment, the unfamiliarity with the norms of political testimony, the well-documented psychological effects of displacement (anxiety, shame, loss of confidence) that make public advocacy feel impossible even when it is technically available.
Young's framework identifies these barriers as structural, not personal. The displaced worker who does not participate in the AI policy conversation is not apathetic. She is structurally excluded — blocked not by a gate but by a gradient, a slope of institutional arrangements that makes participation increasingly costly as one descends the hierarchy of structural power. Addressing this exclusion requires not merely inviting affected parties to the table but restructuring the table itself — changing the norms of communication, the standards of evidence, the distribution of authority within the deliberative process so that those who are most affected by the decisions have genuine power to shape them.
The Orange Pill describes a discourse dominated by what might be called amplified voices — those whose structural position gives them access to the platforms, the audiences, and the cultural authority that shape public understanding. The triumphalist narrative (AI is progress; displacement is temporary; the future is bright) dominates not because it is more true but because it is more amplifiable. It is produced by people who control the platforms, who fund the research, who sponsor the conferences, who own the media outlets. The elegiac narrative (we are losing something irreplaceable; human creativity is sacred; the machines must be stopped) has a smaller but devoted audience, largely composed of creative professionals who are being displaced. Neither narrative includes, in any meaningful way, the perspectives of those who are most marginalized by the transition — the workers in the Global South whose content moderation labor makes AI systems usable, the indigenous communities whose cultural products are absorbed into training data without acknowledgment, the precariat of gig workers whose algorithmic management is a preview of the AI-governed workplace.
Young's communicative democracy provides the theoretical foundation for what the AI discourse lacks: a genuinely inclusive deliberative process. Such a process would not merely include more voices. It would restructure the terms of inclusion — expanding the range of legitimate communicative modes beyond formal argument, creating institutional mechanisms that compensate for the structural asymmetries between participants, and establishing decision-making procedures that give affected parties genuine power rather than advisory input.
What would this look like in practice? Young's work suggests several structural reforms. First, the creation of deliberative bodies — at the company level, the industry level, and the governmental level — in which affected workers have formal representation and binding decision-making authority over AI deployment policies. Not advisory boards. Not stakeholder consultations. Decision-making bodies with real power. Second, the funding of advocacy organizations that can amplify the voices of displaced workers, providing the resources, the institutional support, and the communicative training that effective political participation requires. Third, the reform of regulatory processes to include narrative testimony — the lived experiences of those affected by AI — as a formal category of evidence alongside economic analysis and technical assessment. Fourth, the development of international governance mechanisms that include the perspectives of the Global South, where much of the invisible labor that sustains AI systems is performed.
These reforms are not utopian. They are structural — concrete changes to institutional arrangements that would redistribute communicative power within the AI governance process. They are also, in the current political environment, almost entirely absent from the conversation. The AI policy discourse is overwhelmingly focused on technical questions (how to make AI systems safer, more accurate, more aligned with human values) and distributional questions (how to share the economic gains from AI more equitably). Young's framework insists that these questions, however important, are secondary to the procedural question: who gets to decide? Until the people most affected by the AI transition have a genuine voice in the decisions that shape their lives, those decisions will continue to reflect the interests and perspectives of those who already hold structural power. The outcomes may be redistributive. They will not be just.
The absence of affected voices from the AI governance conversation is not an oversight. It is a structural feature of the conversation — a consequence of institutional arrangements that were designed to serve other purposes and are now being applied, without modification, to a technological transition of unprecedented scope and speed. Young's communicative democracy does not promise that including more voices will produce better outcomes. It promises something more fundamental: that including more voices will produce legitimate outcomes — decisions that can claim democratic authority because they were made through a process that genuinely included all who are affected.
In the age of AI, where the decisions being made will reshape the economic, cultural, and political landscape for generations, the question of who gets to speak is not a procedural nicety. It is the central question of justice. And the current answer — a small, homogeneous, structurally powerful elite, speaking primarily to itself — is the central injustice.
For three centuries, Western political philosophy has been haunted by a seductive fiction: the idea that justice requires a view from nowhere. The impartial spectator. The original position behind a veil of ignorance. The rational agent stripped of all particularity — of gender, race, class, culture, history, body — reasoning about justice from a position of pure, disembodied universality. The fiction has been extraordinarily productive. It generated the social contract tradition, the liberal theory of rights, the Rawlsian framework that dominates contemporary political philosophy. It also generated, Iris Marion Young argued, a systematic distortion of justice itself — because the view from nowhere is always, in practice, the view from somewhere very specific. And the people occupying that somewhere have rarely noticed that their particular perspective is masquerading as the universal one.
Young's critique of the ideal of impartiality, developed most fully in *Justice and the Politics of Difference* and elaborated throughout her subsequent work, does not reject the aspiration to fairness. It rejects the claim that fairness requires the suppression of difference — that just reasoning demands we abstract away from the particular social positions, embodied experiences, and group memberships that constitute who we actually are. Young argued that this demand, far from producing genuine impartiality, produces a counterfeit version: the perspective of the socially dominant group presented as though it were the perspective of everyone. The white, male, property-owning European who designed the thought experiment calls his perspective "reason." Everyone else's perspective is called "bias."
The AI discourse is saturated with this counterfeit impartiality, and Young's critique illuminates its operations with uncomfortable precision.
The most consequential decisions about artificial intelligence — how systems are designed, what they are optimized for, whose data they are trained on, what safeguards they include, what values they encode, who benefits from their deployment, who bears the costs — are being made by a population that is, by any sociological measure, radically unrepresentative of the humanity these decisions affect. The AI industry concentrates decision-making power in a demographic enclave: predominantly male, predominantly white or East Asian, predominantly educated at a small number of elite institutions, predominantly located in the San Francisco Bay Area or a handful of analogous technology hubs, predominantly affluent, predominantly childless or with the resources to outsource care work, predominantly embedded in a professional culture that treats seventy-hour work weeks as baseline commitment and geographic mobility as an unquestioned good. This is not a conspiracy. It is a structure — a pattern of recruitment, credentialing, cultural norming, and institutional incentive that produces a remarkably homogeneous decision-making class through mechanisms that are, individually, unremarkable.
Young's critique does not say these people are bad. It says something more consequential: they are situated. They occupy a specific social position, and that position shapes what they see, what they value, what they assume to be universal, and what they fail to notice. When a machine learning researcher at a major AI lab optimizes a model for "helpfulness," she is encoding a particular understanding of helpfulness — one shaped by her professional training, her cultural context, her implicit assumptions about who the user is and what the user needs. When a product manager decides that an AI writing tool should be optimized for "clarity" and "concision," he is encoding a particular understanding of good writing — one that reflects the norms of Silicon Valley corporate communication and may be actively hostile to the rhetorical traditions of other cultures, other classes, other modes of thought. When a venture capitalist decides that an AI creative tool is worth funding because it "democratizes" creative production, she is encoding a particular understanding of democracy — one in which access to a tool counts as participation, regardless of whether the tool's design reflects the values, needs, and perspectives of the people who are supposed to be empowered by it.
None of these people are acting in bad faith. All of them are acting from somewhere. And Young's argument is that the "somewhere" matters — that situated knowledge is not an obstacle to justice but the raw material from which justice must be built, and that pretending otherwise does not eliminate the bias but conceals it.
The Orange Pill framework operates within this tension in ways that Young's analysis helps make explicit. Edo Segal's central metaphor — AI as amplifier — carries an implicit epistemological claim: that the amplifier reveals what is already there, that it makes the existing signal louder and clearer. Young would push back on the metaphor's apparent neutrality. An amplifier does not merely increase volume. It shapes the signal through the characteristics of its own circuitry. A guitar amplifier does not produce a "neutral" louder version of the guitar's sound; it colors the sound through the specific properties of its tubes, its transistors, its speaker configuration. The amplification is always from somewhere — always mediated by the material characteristics of the amplifying apparatus.
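The amplifier claim can be made concrete with a few lines of arithmetic. The sketch below is illustrative only — the numbers and the tanh transfer curve are assumptions, standing in for any real amplifier circuit — and compares a fictional "neutral" amplifier, which only scales its input, with a soft-clipping stage of the kind found in tube amps. The clipped output contains harmonics the input never had: the apparatus adds something of itself to the signal.

```python
import numpy as np

# Illustrative sketch: amplification is never a pure gain.
# A nonlinear transfer curve (here tanh soft clipping, loosely
# analogous to a tube amp stage) adds harmonics the input lacked.

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)  # a pure 5 Hz tone

neutral = 3.0 * signal           # the fiction: louder, otherwise unchanged
colored = np.tanh(3.0 * signal)  # the reality: shaped by the "circuit"

spectrum = np.abs(np.fft.rfft(colored))
print(np.argsort(spectrum)[-3:][::-1])
# Strongest bins: 5, then 15 and 25 -- odd harmonics
# the input never contained. The amplifier is "from somewhere."
```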
AI systems are amplifiers built from somewhere extraordinarily specific. Their training data reflects the digital archive of human culture as it existed through the early 2020s — an archive that is overwhelmingly English-language, overwhelmingly Western, overwhelmingly skewed toward the cultural products of affluent, educated, digitally connected populations. Their optimization targets reflect the values of the institutions that built them — institutions that measure success in user engagement, revenue generation, and benchmark performance. Their interface designs reflect the cognitive habits and workflow assumptions of their designers' professional culture. Every layer of the amplification apparatus is situated, shaped by the particular social position of the people who built it. And yet the output is presented as though it were a neutral, universal tool — equally useful to everyone, equally reflective of everyone's needs, equally responsive to everyone's values.
Young would call this a textbook case of what she termed the "normalizing gaze" — the process by which the dominant group's perspective becomes the invisible standard against which all others are measured. The AI system trained primarily on English-language text does not announce itself as an English-language-centric system. It presents itself as a "language model" — as though English were language itself. The AI system trained primarily on Western visual art traditions does not announce itself as a Western-centric system. It presents itself as an "image generator" — as though the Western tradition's aesthetic norms were simply the norms of visual representation. The normalization is so complete that it becomes invisible to those who share the normalized perspective. The fish does not notice the water.
Segal's fishbowl metaphor resonates powerfully with Young's analysis, but Young's framework adds a dimension that the metaphor alone cannot capture. The fishbowl is not merely a cognitive limitation — a set of assumptions so familiar that they become invisible. It is a political structure. The assumptions that constitute the fishbowl are not random or idiosyncratic. They are the assumptions of the socially dominant group, and they are enforced — not through violence or coercion, but through the far more effective mechanism of normalcy. The dominant group's assumptions about what counts as good writing, good art, good code, good reasoning, good work become the default settings of the systems that mediate an increasing share of human productive activity. And because these defaults are presented as neutral — as simply "how things work" — they are extraordinarily difficult to challenge. To challenge them requires first making them visible, and visibility requires the perspective of someone who stands outside the normalized framework. It requires, in Young's language, the view from somewhere else.
This is why Young insisted that difference is not a problem to be overcome but a resource for democratic deliberation. The standard liberal response to difference is universalism: strip away the differences, find the common ground, reason from shared principles. Young argued that this response systematically advantages those whose perspective already constitutes the "universal" position. If the shared principles are the dominant group's principles presented as everyone's principles, then "finding common ground" means requiring marginalized groups to translate their experiences into a language that was not designed to express them, to argue within a framework that was not built to accommodate their concerns, to compete in a game whose rules were written by the other team.
The alternative Young proposed was not relativism — not the abandonment of shared standards in favor of incommensurable group perspectives. It was what she called communicative democracy: a deliberative practice in which different social groups bring their particular perspectives to the table, in which those perspectives are treated as legitimate sources of knowledge about the social world, and in which the process of deliberation itself is designed to ensure that no single perspective dominates. Communicative democracy does not require participants to abandon their situated positions. It requires them to articulate those positions clearly, to listen across difference, and to allow the encounter with unfamiliar perspectives to transform their understanding of the shared situation.
Applied to the AI transition, Young's communicative democracy demands something that the current governance landscape entirely fails to provide: institutional structures in which the people most affected by AI — not just the people who build it, fund it, or regulate it — have genuine deliberative power over its design and deployment. This is not the same as "stakeholder consultation," the preferred mechanism of contemporary technology governance, in which affected parties are invited to comment on proposals that have already been developed by others. Consultation without power is not inclusion. It is performance. Young was merciless on this point: "Being included in a discussion is not the same as having influence on the outcome. Deliberative processes that lack institutional mechanisms for translating deliberation into decision are not democratic. They are therapeutic."
The practical implications are substantial. Young's framework suggests that the governance of AI systems should include, at a minimum, the following structural features. First, the representation of affected groups — not as tokens or advisors but as participants with binding deliberative authority — in the bodies that make consequential decisions about AI design, deployment, and regulation. Second, the provision of material resources that enable genuine participation: time, compensation, access to information, technical support. Meaningful deliberation requires material preconditions that are currently distributed along existing lines of structural advantage. Third, the recognition that different groups will bring different modes of communication — narrative, rhetoric, testimony, emotional expression — and that privileging the mode of communication favored by the dominant group (abstract argument, technical specification, cost-benefit analysis) is itself a mechanism of exclusion. Fourth, the establishment of accountability mechanisms that connect deliberative outcomes to institutional action — that ensure the results of inclusive deliberation are actually implemented, not merely noted and filed.
These demands will strike many readers as impractical — as hopelessly idealistic in a competitive global environment where the speed of AI development makes deliberative governance seem like a luxury no one can afford. Young anticipated this objection and responded with characteristic directness. The question is not whether inclusive deliberation is convenient. The question is whether decisions made without it are legitimate. If the deployment of AI systems produces structural injustice — and the preceding chapters have argued that it does, across all five of Young's faces of oppression — then the institutional processes that produced those decisions are, by Young's criteria, unjust. Not because the decision-makers were malicious. Not because they lacked expertise. Because they excluded the perspectives of those who bear the consequences. And exclusion is not a procedural shortcoming. It is a substantive injustice — a failure of the decision-making process that contaminates the decisions it produces.
The view from somewhere is not a weakness. It is the human condition. Every perspective is partial, situated, shaped by the social position of the person who holds it. The aspiration to justice does not require transcending this condition. It requires acknowledging it — and building institutional structures that bring multiple situated perspectives into genuine, power-sharing deliberation. The AI systems being built today encode the view from a very specific somewhere. Young's framework insists that this is not a technical problem to be solved by diversifying training data or hiring more inclusively — though both are necessary. It is a political problem that requires restructuring the institutions through which AI is governed, so that the people whose lives are being reshaped by these systems have a real voice in shaping them.
The fishbowl cracks. Young's work suggests that the cracks are not flaws. They are openings — apertures through which other perspectives, other experiences, other ways of knowing can enter the deliberative space and transform the conversation. The question is whether the institutions that govern AI are capable of holding the cracks open, or whether they will seal them shut in the name of efficiency, expertise, and the seductive fiction of the view from nowhere.
In 1812, a group of English textile workers who called themselves Luddites smashed the stocking frames and shearing machines that were destroying their livelihoods. They were not opposed to technology. They were opposed to the specific deployment of technology in a way that enriched factory owners while immiserating the skilled craftsmen who had previously constituted the economic backbone of their communities. Their analysis of the situation was, by any reasonable measure, correct. The new machines did transfer wealth from workers to owners. The new factory system did destroy the autonomy and dignity of craft labor. The political economy of early industrialization did produce structural injustice on a massive scale.
The Luddites lost. They lost not because their analysis was wrong but because their strategy — machine-breaking, withdrawal from political engagement, and the insistence that the old order could be restored by force — was catastrophically inadequate to the structural problem they faced. The machines were symptoms, not causes. The cause was a political-economic structure that gave factory owners the legal authority to deploy labor-saving technology without any obligation to the workers it displaced, that gave Parliament the power to make machine-breaking a capital offense while providing no mechanism for workers to participate in the decisions that were remaking their world, and that treated the displacement of skilled labor as an acceptable cost of economic progress. Smashing the machines addressed none of this. And the Luddites' retreat from the political process — their refusal to engage with the institutional structures through which change might have been negotiated — left those structures entirely in the hands of the people who were already benefiting from them.
Iris Marion Young would recognize the Luddite story as a case study in the political trap that structural injustice creates for the people it harms. The trap works like this: structural injustice produces legitimate grievance. Legitimate grievance produces the desire to withdraw — to disengage from the institutions that are producing the harm, to refuse complicity, to preserve dignity by refusing to participate in a system that is systematically degrading your position. The withdrawal is emotionally intelligible, morally defensible, and politically suicidal. Because the institutions continue to operate whether or not the harmed participate. And when the harmed withdraw, the institutions are shaped entirely by those who remain — who are, overwhelmingly, the people who benefit from the existing arrangement.
Young's social connection model of responsibility confronts this trap directly, and it does so in a way that will not make anyone comfortable. Her argument, developed with rigorous clarity in *Responsibility for Justice*, proceeds in four steps.
First: structural injustice is produced by the collective action of many people operating within accepted institutional rules and norms. No single person is the cause. The injustice is authorless.
Second: because the injustice is authorless, the guilt-based model of responsibility — which seeks to identify the perpetrator, assign blame, and impose punishment — is structurally incapable of addressing it. There is no perpetrator. The guilt model generates an endless, fruitless search for the villain, which distracts attention from the structural conditions that produced the harm.
Third: a different model of responsibility is required — one that is forward-looking rather than backward-looking, political rather than moral, shared rather than individual. Young calls this political responsibility: the obligation to participate in collective action aimed at transforming the structural conditions that produce injustice. This obligation falls on all who participate in the structures — not because they are guilty but because they are connected.
Fourth — and this is the step that makes Young's argument genuinely difficult — political responsibility falls on the structurally harmed as well as the structurally advantaged. The displaced worker, the marginalized creative, the person whose livelihood has been destroyed by the AI transition: these people bear political responsibility for working toward structural change. Not because they caused the harm. Not because they deserve the burden. Because they are members of the political community, connected to the structures that produced the injustice, and the structures will not change without their participation.
This is not a comfortable argument. Young did not pretend otherwise. She acknowledged, with a frankness that is rare in political philosophy, that placing the burden of political responsibility on the people who are already bearing the burden of structural injustice is profoundly unfair. The twelve illustrators who lost their jobs in Chicago should not have to bear the additional burden of organizing, advocating, deliberating, and fighting for institutional change in a system that has just expelled them. They should not have to educate the policymakers, persuade the technologists, and mobilize the public on behalf of a cause that the dominant institutions have every incentive to ignore. The burden is disproportionate. It falls hardest on those who can least afford to carry it.
And yet. Young's argument insists that the alternative — disengagement, withdrawal, refusal — is worse. Not morally worse. Politically worse. The Luddites were right to be angry. They were wrong to withdraw. Their withdrawal did not preserve their dignity. It ceded the political terrain to the factory owners, the parliamentarians, and the political economists who shaped the institutional response to industrialization in ways that entrenched the very injustices the Luddites had identified. The history of the nineteenth-century labor movement is, in significant part, the history of workers who learned this lesson — who moved from machine-breaking to union-organizing, from withdrawal to engagement, from the refusal of the system to the painful, unglamorous, often humiliating work of trying to change it from within.
The Orange Pill framework resonates with Young's argument in ways that are both illuminating and uncomfortable. Segal's central claim — that disengagement from the AI transition is never neutral, that the choice to step back from the tools is itself a choice with political consequences — maps directly onto Young's theory of political responsibility. The creative professional who refuses to learn AI tools, who insists on producing only human-made work, who withdraws from the platforms and markets that have been restructured by AI, is making a choice that is emotionally intelligible and professionally perilous. Young's framework adds the political dimension: the withdrawal does not merely affect the individual's career prospects. It removes a voice — a perspective, an experience, a form of situated knowledge — from the institutional processes that are shaping the AI transition. And the fewer voices of the structurally harmed that are present in those processes, the more thoroughly those processes will be shaped by the perspectives and interests of the structurally advantaged.
Young's framework also illuminates why the most common forms of resistance to the AI transition — boycotts, lawsuits, public shaming of AI companies — are necessary but insufficient. These are backward-looking responses. They seek to assign blame, punish wrongdoing, and restore the status quo ante. Young's argument is that structural injustice cannot be addressed by backward-looking mechanisms alone. The lawsuits that seek compensation for unauthorized use of copyrighted material in AI training data are important — they establish legal precedents and impose costs on extractive behavior. But they do not transform the structures that produced the extraction. Even if every copyright lawsuit succeeds, even if every AI company is required to compensate every creator whose work appears in its training data, the structural conditions that concentrate decision-making power in the hands of a technical and corporate elite, that exclude affected workers from governance processes, that normalize the displacement of human labor as an acceptable cost of progress — these conditions will remain intact.
Forward-looking responsibility requires something harder than litigation. It requires institution-building. It requires the creation of new deliberative structures — guilds, cooperatives, regulatory bodies, industry councils, international frameworks — in which the people most affected by AI have genuine, binding decision-making power over its development and deployment. It requires the investment of time, energy, and political capital in processes that are slow, frustrating, and often deeply unglamorous. It requires showing up — again and again, in rooms that were not designed to include you, among people who do not share your experience, within institutions that have not yet learned to hear your voice.
This is what Young meant by the burden of political responsibility, and it is heavier than it sounds. The burden is not merely the effort of participation. It is the psychic cost of participating in institutions that are simultaneously the source of the harm and the only available mechanism for addressing it. The displaced illustrator who joins an industry task force on AI governance must sit across the table from the technology executives whose products destroyed her career. She must speak in a language — policy language, technical language, economic language — that was not designed to express her experience. She must translate the felt reality of her displacement into the vocabulary of stakeholder impact assessments and regulatory frameworks. She must do all of this while managing the emotional weight of engaging with a system that has just demonstrated, in the most concrete possible terms, its indifference to her wellbeing.
Young did not romanticize this burden. She argued that it should be distributed more equitably — that the structurally advantaged, who benefit from the current arrangement, should bear a disproportionate share of the work of changing it. But she also insisted that the structurally harmed cannot simply wait for the advantaged to fix the problem. History provides no examples of structural injustice that was resolved solely through the goodwill of the beneficiaries. Every major transformation of unjust structures — the abolition of slavery, the extension of the franchise, the creation of the welfare state, the recognition of civil rights — required the sustained, organized, politically engaged participation of the people who were being harmed. The burden was unfair. The alternative was worse.
The AI transition will follow this pattern or it will follow the Luddite pattern. Those are the options Young's analysis presents. Either the people most affected by AI — the displaced workers, the marginalized creatives, the culturally imperialized communities, the economically expendable — will organize, engage, and fight for institutional change within the systems that are producing the injustice, or they will withdraw, and the systems will be shaped entirely by the interests of those who remain. The first option is painful. The second is catastrophic.
Staying in the room is not a moral imperative. It is a political one. The room is where the decisions are made. And the decisions will be made whether you are in the room or not.
Athenian democracy, in its classical form, rested on a temporal assumption so fundamental that no one bothered to state it: deliberation takes time. The citizens who gathered in the Pnyx to debate the affairs of the polis operated at the speed of human speech, human thought, human persuasion. A proposal was introduced. Arguments were made. Counter-arguments were heard. Citizens reflected, consulted, changed their minds, changed them back. The process was slow. It was supposed to be. The slowness was not a design flaw. It was a design feature — a structural mechanism that created space for the kind of careful, multi-perspectival consideration that democratic legitimacy requires.
Iris Marion Young's theory of communicative democracy inherited this temporal assumption and enriched it. Young argued that legitimate democratic deliberation requires not merely the presence of diverse voices but the structural conditions that allow those voices to be heard, processed, responded to, and integrated into collective decision-making. These conditions include time — time for people with different communicative styles to express their perspectives, time for people with different experiential backgrounds to translate their experiences into forms that others can understand, time for genuine listening, genuine reflection, genuine reconsideration. Democracy, in Young's framework, is not merely a procedure for aggregating preferences. It is a communicative practice that requires the kind of sustained, multi-directional exchange that cannot be compressed beyond a certain point without losing its essential character.
The AI transition is occurring at a speed that makes democratic deliberation structurally impossible under current institutional arrangements. This is not a rhetorical exaggeration. It is a structural description of a temporal mismatch that threatens the legitimacy of every institutional response to AI — regulatory, legislative, judicial, and civic.
Consider the timescales. A new AI capability — a model that can generate code, produce images, analyze legal documents, compose music, diagnose medical conditions — moves from research paper to commercial product in months. The commercial product is adopted by millions of users in weeks. The labor market effects are felt within a single quarter. An illustrator who was fully employed in January discovers in April that her largest client has switched to AI-generated images. A junior software developer who was hired in September learns in February that his team is being restructured around AI-assisted workflows that make his entry-level skills redundant. A translation agency that was profitable in fiscal year 2023 is insolvent by fiscal year 2025. The displacement occurs at the speed of software deployment — which is to say, at the speed of a server update, a subscription renewal, an executive decision implemented with a Slack message and a calendar invite.
Now consider the timescales of democratic response. A legislative process in the United States, from initial proposal to enacted law, takes, under favorable conditions, eighteen months to three years. A regulatory rulemaking process, from notice of proposed rulemaking to final rule, takes one to five years. A judicial process, from initial filing to appellate resolution, takes two to seven years. An international governance process, from initial multilateral discussion to ratified agreement, takes five to fifteen years. At every level of institutional response, the timescale of democratic deliberation is orders of magnitude slower than the timescale of technological change.
Young's framework reveals that this temporal mismatch is not merely an inconvenience. It is a structural injustice — because the mismatch systematically advantages those who can act quickly (technology companies, investors, early adopters) and systematically disadvantages those who require collective, deliberative processes to protect their interests (workers, communities, cultural institutions, democratic polities). Speed is not neutral. In a system where the capacity for rapid action is distributed unequally, the mismatch between the speed of technological change and the speed of democratic response functions as a mechanism of domination. The fast dominate the slow. Not through force. Through structure.
Young's analysis of powerlessness — the condition of those who are affected by decisions but have no meaningful role in making them — acquires a temporal dimension in the AI context. The powerless are not merely those who are excluded from the room. They are those who cannot get to the room in time. By the time the democratic process produces a response — a regulation, a law, a judicial precedent, an international standard — the technological landscape has already shifted, the market has already restructured, the displacement has already occurred. The deliberation arrives after the decision has already been made — not by a democratic body but by the aggregate actions of companies, investors, and consumers operating at market speed.
This is the structural trap that Young's communicative democracy faces in the age of AI: the very features that make democratic deliberation legitimate — its slowness, its inclusivity, its multi-perspectival character — are the features that make it structurally unable to keep pace with the changes it needs to govern. The faster the technological change, the wider the gap between the speed of change and the speed of democratic response. And the wider the gap, the more the actual governance of AI is conducted not by democratic institutions but by the private institutions — corporations, venture capital firms, industry consortia — that can operate at machine speed.
Young did not live to confront this specific problem — she died in 2006, before the current AI revolution was imaginable — but her theoretical framework contains the resources for addressing it. The key move is to reject the assumption that democratic deliberation must always be reactive — that the democratic process must wait for a technological change to occur before it can begin deliberating about how to respond. Young's communicative democracy is not merely a mechanism for responding to problems. It is a mechanism for anticipating them — for bringing diverse perspectives to bear on emerging possibilities before those possibilities have hardened into accomplished facts.
This suggests a structural reorientation of AI governance: from reactive regulation to anticipatory deliberation. Instead of waiting for AI systems to be deployed and then scrambling to address the consequences, democratic institutions would need to create permanent, well-resourced deliberative bodies that operate continuously — tracking technological developments, modeling potential impacts, engaging affected communities, and producing governance frameworks in advance of deployment rather than in its aftermath. These bodies would need to include, as Young's framework insists, not merely technical experts and policymakers but the full range of affected parties — workers, communities, cultural institutions, educational systems — with genuine deliberative authority, not merely advisory roles.
The European Union's AI Act represents a partial, imperfect move in this direction — an attempt to create a regulatory framework before the worst consequences of unregulated deployment have fully materialized. But even the AI Act, which is among the most ambitious regulatory responses to date, suffers from the temporal mismatch that Young's framework identifies. The Act was proposed in April 2021, provisionally agreed upon in December 2023, and entered into force in August 2024. During those three-plus years, the AI landscape changed so dramatically — the release of ChatGPT in November 2022, the proliferation of large language models, the explosive growth of AI-generated content — that significant portions of the regulatory framework were outdated before they were enacted. The deliberation was genuine. The inclusion was imperfect but real. And the temporal mismatch rendered even this substantial institutional effort structurally inadequate to the pace of the change it was designed to govern.
Young's framework suggests that the problem is not the European Union's lack of speed but the institutional architecture of governance itself. Current governance institutions are designed for a world in which the pace of social change was roughly commensurate with the pace of deliberative response — a world in which major economic transformations unfolded over decades, giving democratic institutions time to observe, deliberate, and respond. That world no longer exists. The AI transition is unfolding at a pace that requires a fundamentally different institutional architecture — one in which deliberative processes are embedded within the development cycle itself, not appended to it after the fact.
What would this look like in practice? Young's principles of communicative democracy suggest several structural features. First, mandatory deliberative impact assessments before the deployment of AI systems above a certain scale — assessments conducted not by the deploying company but by independent deliberative bodies with binding authority. Second, the creation of standing councils, constituted according to Young's principles of inclusive representation, with the authority to halt or modify deployments that pose significant risks of structural injustice. Third, the development of new deliberative technologies — ironic as this may sound — that use AI itself to facilitate rapid, inclusive deliberation among large and diverse populations. If AI can accelerate the production of code and content, it can also, in principle, accelerate the production of informed democratic judgment — though the design of such systems would itself need to be subject to the inclusive deliberative processes Young demands.
The deeper challenge, which Young's framework confronts honestly, is that speed and inclusion exist in structural tension. Inclusion takes time. Genuine deliberation — the kind that does not merely perform inclusion but actually incorporates diverse perspectives into decision-making — cannot be rushed beyond a certain point without degenerating into the superficial consultation that Young repeatedly criticized. The demand to "move fast" is not politically neutral. It is a demand that systematically advantages those whose perspectives are already embedded in the institutional default — who do not need deliberative time because the system already reflects their values — and disadvantages those whose perspectives require the slow, difficult work of translation, articulation, and persuasion.
Young's response to this tension would not be to slow down technological development, which is neither feasible nor desirable. It would be to restructure the relationship between development and governance so that the two proceed in parallel rather than in sequence. The current model — build first, govern later — is not the only possible model. Alternatives exist in other domains: pharmaceutical development requires regulatory approval before deployment, not after. Building construction requires permits before construction begins, not after the building is occupied. The principle that powerful technologies should be subject to anticipatory governance rather than reactive regulation is neither novel nor radical. What is radical, in Young's framework, is the insistence that the governance process must be genuinely inclusive — that it cannot be captured by the regulated industry, dominated by technical experts, or reduced to cost-benefit analysis conducted from the perspective of the structurally advantaged.
The temporal crisis of AI governance is, at bottom, a crisis of democratic legitimacy. When the decisions that shape millions of people's lives are made at a speed that democratic institutions cannot match, those decisions are effectively made outside the democratic process — by private actors whose legitimacy derives not from inclusive deliberation but from market power, technical capability, and the sheer velocity of their operations. Young's communicative democracy provides the normative framework for insisting that this situation is unjust — that legitimacy requires inclusion, inclusion requires deliberation, and deliberation requires time. The institutional question is how to create structures that honor these requirements without becoming so slow that they are perpetually overtaken by the changes they are meant to govern.
There is no clean answer to this question. Young's framework is honest about that. But it insists that the absence of a clean answer is not permission to abandon the question — to default to governance by the fast, regulation by the powerful, and justice by accident. The question of how to make democratic deliberation fast enough to govern AI without making it so fast that it ceases to be democratic is the central institutional challenge of the twenty-first century. Young's work does not solve it. It makes it impossible to ignore.
There is a persistent fantasy in the technology industry that the optimal creative output is the one that appeals to everyone. The universal product. The globally resonant design. The story that transcends cultural boundaries. The image that needs no translation. This fantasy drives the optimization logic of AI creative systems, which are trained on massive datasets and tuned to produce outputs that score well on broadly defined metrics of quality, engagement, and user satisfaction. The result is a kind of algorithmic consensus — outputs that are competent, polished, and profoundly generic. AI-generated images that look like every stock photo and no particular photograph. AI-generated text that reads like every well-written article and no specific voice. AI-generated music that sounds like every popular song and no one's song in particular.
Iris Marion Young would recognize this phenomenon instantly. It is cultural imperialism enacted at computational scale — the dominant group's aesthetic norms established as the universal standard, other traditions rendered invisible not through active suppression but through the far more effective mechanism of statistical averaging. The AI system does not decide that Appalachian balladry is inferior to pop music. It simply has more pop music in its training data, so it produces outputs that sound more like pop music. The system does not decide that the visual conventions of Japanese ukiyo-e are less valid than those of European oil painting. It simply has more European oil painting in its dataset, so its outputs default to European conventions of perspective, lighting, and composition. The normalization is structural, automatic, and invisible to those who share the normalized perspective.
But Young's analysis of cultural difference went far beyond the critique of cultural imperialism. Her most radical and most often misunderstood contribution was the argument that difference is not merely something to be tolerated, protected, or accommodated. Difference is a resource — a positive contribution to democratic deliberation and collective life that is lost when institutional structures suppress, ignore, or homogenize it. Young developed this argument in direct opposition to the dominant liberal tradition, which treats difference as a problem to be managed: the goal of a just society, on the liberal view, is to create institutions that are neutral with respect to cultural differences, treating all citizens as identical bearers of universal rights regardless of their group memberships or cultural identities.
Young argued that this apparent neutrality is a mask. Institutions that claim to be culturally neutral are, in practice, culturally specific — they encode the norms, communicative styles, epistemic habits, and aesthetic values of the dominant group and present these as the universal standard. "Neutral" hiring practices that evaluate candidates through standardized interviews favor those socialized into the communicative norms of the dominant culture. "Neutral" educational curricula that teach the Western canon as the canon favor those whose cultural heritage is represented in it. "Neutral" AI systems that optimize for "quality" as measured by engagement metrics favor those whose aesthetic preferences align with the majority of users in the training population.
The alternative Young proposed was not the abandonment of shared standards but their democratic reconstruction through the genuine inclusion of diverse perspectives. She called this "the politics of difference" — a political practice in which social groups bring their particular experiences, cultural resources, and epistemic perspectives to the deliberative table, not as special interests to be accommodated but as contributions to a richer, more accurate understanding of the shared world. The black feminist theorist brings an understanding of how race and gender intersect in the experience of labor market discrimination. The indigenous community leader brings an understanding of how land-use decisions affect communities whose relationship to land does not fit within the framework of property rights. The traditional musician brings an understanding of how AI-generated music affects artistic traditions that operate outside the commercial music industry. Each perspective illuminates aspects of the shared situation that are invisible from other positions. Difference, in Young's framework, is not noise. It is signal.
This argument has profound implications for the design and governance of AI creative systems. The current approach to AI creativity treats cultural diversity as a problem of representation — a problem that can be solved by diversifying training data, adding more non-Western art to the dataset, including more non-English text in the corpus. Young's framework suggests that this approach, while better than nothing, misses the deeper structural issue. The problem is not merely that the training data is unrepresentative. The problem is that the optimization logic of AI systems — the fundamental structure that determines what counts as a "good" output — is itself culturally specific. When an AI image generator is optimized to produce images that users rate as "high quality," the concept of quality that drives the optimization is not a universal standard. It is a culturally situated judgment, shaped by the aesthetic norms of the users who constitute the training population, which is itself shaped by the structural conditions that determine who has access to the technology and who does not.
Diversifying the training data without diversifying the optimization logic produces what might be called representational inclusion without structural transformation — a surface-level diversity that leaves the underlying power structure intact. The AI system can now generate images "in the style of" various non-Western traditions, but the system's fundamental conception of what makes an image good, beautiful, compelling, or successful remains anchored in the dominant tradition's norms. The non-Western tradition is included as a style — a surface feature that can be applied to the dominant structure's aesthetic logic — rather than as an alternative logic with its own standards of excellence, its own criteria of quality, its own ways of understanding what visual representation is for.
Young's politics of difference would demand something more radical: the inclusion of diverse cultural perspectives not merely in the training data but in the design process itself — in the decisions about what AI creative systems are optimized for, how their outputs are evaluated, what counts as success, what counts as failure, and what the systems are ultimately for. This inclusion would need to be genuine, not performative. It would need to give participants from diverse cultural traditions real decision-making power, not merely consultative input. And it would need to recognize that different cultural traditions may have fundamentally different understandings of what creativity is, what it is for, and how its products should be valued — understandings that cannot be reconciled through a single optimization function, no matter how sophisticated.
The practical implications are far-reaching. Young's framework suggests that the drive toward a single, universal AI creative system — one model to generate all images, one model to produce all text, one model to compose all music — is not merely technically ambitious but politically suspect. A single universal system necessarily encodes a single set of cultural assumptions, however carefully diversified its training data. The alternative that Young's framework points toward is a pluralistic ecosystem of AI creative systems — systems designed by and for different cultural communities, reflecting different aesthetic values, different communicative norms, different understandings of what creative work is and what it means. This is not a call for cultural separatism. It is a call for the kind of genuine pluralism that Young argued is the prerequisite for democratic life — a pluralism in which different groups maintain the capacity to articulate their own cultural identities while also engaging in cross-cultural dialogue and collaboration.
The Orange Pill's emphasis on the question "Are you worth amplifying?" takes on new dimensions when read through Young's politics of difference. The question, as posed, is addressed to an individual — a person contemplating their relationship to AI tools, asking whether their character, values, and capabilities are ones they would want amplified. Young's framework transforms this into a collective question: Are our differences worth amplifying? And her answer is unequivocal: yes. The differences between cultural traditions, between epistemic perspectives, between ways of knowing and making and being in the world — these differences are not friction to be reduced by algorithmic optimization. They are the substance of human richness, the raw material of democratic life, the source of the creative vitality that AI systems, for all their computational power, can only imitate.
The risk of the AI transition, seen through Young's lens, is not that machines will become creative. It is that creativity will become machined — smoothed, optimized, averaged into a global consensus that is technically impressive and culturally vacant. The Senegalese griot's art is not an "underrepresented data point" to be added to a training set. It is a living tradition with its own logic, its own standards, its own relationship to community, history, and meaning. The Aboriginal dot painter's art is not a "style" to be applied to a Western compositional framework. It is a knowledge system, a spiritual practice, a mode of being in relationship with land and ancestry that cannot be captured by an optimization function. To treat these traditions as inputs to a universal system is to enact precisely the cultural imperialism that Young spent her career diagnosing.
The alternative is harder, slower, and more expensive. It requires building institutional structures that protect cultural pluralism in the age of AI — structures that ensure diverse creative traditions retain the material and institutional resources to sustain themselves, that give diverse cultural communities genuine governance power over the AI systems that affect their traditions, and that resist the economic logic that treats cultural homogenization as efficiency and cultural difference as market friction. Young's framework insists that this is not a luxury. It is a requirement of justice. And it is a requirement that grows more urgent as AI systems grow more powerful, more pervasive, and more capable of producing the seductive, frictionless, culturally flattened consensus that the market rewards and that democracy cannot survive.
Difference is difficult. It slows things down. It complicates the optimization function. It resists the universal model. Young spent her career arguing that these are features, not bugs — that a democracy worthy of the name must create and sustain the institutional conditions under which difference can flourish, not despite the difficulty but because of it. In the age of AI, this argument is no longer merely theoretical. It is the most practical question on the table: whether the tools that are reshaping human creative life will be governed by a logic of convergence — one model, one standard, one optimized output — or by a logic of pluralism that treats the irreducible diversity of human culture as the thing most worth preserving and most urgently in need of structural protection.
In the spring of 2024, the United States Senate held a series of hearings on artificial intelligence regulation. The witness list was instructive. It included the CEOs of three major AI companies, two venture capitalists, a former national security advisor, a computer science professor from Stanford, and a labor economist from MIT. It did not include a single displaced creative worker. Not one illustrator whose livelihood had been absorbed into an image generation model's training data. Not one translator whose twenty years of expertise had been rendered commercially redundant by machine translation. Not one journalist whose beat had been automated by a content generation system that could produce serviceable copy at a fraction of the cost and a hundredth of the time. The people making the decisions about how AI would be governed were not the people whose lives would be most profoundly shaped by those decisions.
Iris Marion Young would have recognized this pattern immediately. It is the pattern she spent her entire career diagnosing: the systematic exclusion of affected voices from the deliberative processes that determine their fate. In *Inclusion and Democracy* (2000), Young argued that democratic legitimacy requires more than formal representation. It requires what she called communicative democracy — a deliberative process in which all affected parties have not merely the right to speak but the institutional conditions necessary to be heard. The distinction between speaking and being heard is not semantic. It is structural. A person can have the formal right to submit public comment on a proposed regulation and still be systematically excluded from the deliberative process if the institutional architecture is designed to privilege certain kinds of speech — technical speech, corporate speech, the speech of those who can afford lobbyists and lawyers — over others.
Young identified three structural barriers to genuine inclusion that map directly onto the current AI governance landscape. The first is external exclusion — the straightforward denial of access to deliberative forums. When Senate hearings on AI regulation include no displaced workers, when corporate AI ethics boards include no representatives of affected communities, when international AI governance bodies include no delegates from the Global South, external exclusion is operating. The affected parties are simply not in the room.
The second barrier is more insidious: internal exclusion. Internal exclusion occurs when people are formally present in a deliberative process but are systematically disadvantaged within it. Their speech is not recognized as authoritative. Their forms of expression are not treated as legitimate contributions to the deliberation. Their concerns are translated into a vocabulary they did not choose and that does not capture what they are trying to say. Young argued that deliberative democracy as practiced tends to privilege a particular mode of discourse — formal, dispassionate, evidence-based, couched in the language of policy analysis — that systematically disadvantages those whose experience of injustice is expressed through narrative, testimony, emotion, or cultural forms that do not conform to the dominant deliberative norms.
Consider the displaced illustrator who is invited to testify before a congressional committee. She is in the room. External exclusion has been overcome. But the committee expects her to speak in the language of policy impact — to quantify her losses, to cite statistics, to frame her experience in terms of market efficiency and regulatory frameworks. What she wants to say is something different. She wants to describe what it felt like to spend fifteen years developing a visual language, a way of seeing and rendering the world that was hers alone, and to watch a machine produce something superficially similar in seconds by recombining elements scraped from her work and the work of thousands of others. She wants to talk about identity, about vocation, about the relationship between craft and selfhood that cannot be captured in an economic impact assessment. But the deliberative norms of the committee do not recognize this kind of speech as a contribution to governance. It is treated as testimony — moving, perhaps, but not actionable. Not the kind of input that shapes regulation. The illustrator was included in the room but excluded from the deliberation.
Young's response to this problem was not to abandon deliberative democracy but to expand it. She proposed three forms of communication that a genuinely inclusive deliberative process must recognize alongside formal argument: greeting, rhetoric, and narrative. Greeting is the acknowledgment of others as legitimate participants in the deliberation, the recognition of their standing as persons whose perspectives matter. Rhetoric is the use of situated, passionate, culturally specific forms of expression that convey not merely arguments but the urgency and experiential weight behind them. Narrative is the telling of stories — personal, communal, historical — that make visible the structural conditions that abstract arguments tend to obscure.
These are not concessions to irrationality. They are recognitions that rationality itself is broader than the Enlightenment model of dispassionate argumentation suggests. The illustrator's story about the relationship between craft and identity is not a departure from the policy conversation. It is the ground of the policy conversation — the experiential reality that policy exists to address. A deliberative process that cannot hear her story is not a process that has chosen rigor over emotion. It is a process that has chosen the experiential reality of the powerful over the experiential reality of the displaced.
The AI governance landscape is defined by the asymmetry of voice. On one side of the deliberative table sit the technology companies — staffed with policy teams, armed with economic modeling, fluent in the language of innovation and competitiveness and national security that dominates governance discourse. On the other side sit the affected communities — fragmented, under-resourced, lacking institutional infrastructure for collective voice, and burdened with the double disadvantage of having lost the economic security that enables political participation at the precise moment when political participation has become most urgent for them. The asymmetry is not accidental. It is structural. And it reproduces itself through ordinary, unremarkable institutional processes. Committee chairs invite witnesses who speak the language of committee proceedings. Regulatory agencies solicit input through comment processes that require legal and technical fluency. International governance bodies credential participants through institutional affiliations that exclude the informally organized and the recently displaced.
Young's framework reveals that this asymmetry is not merely a procedural deficiency that can be corrected by adding a few worker representatives to advisory boards. It is a form of injustice in itself — an instance of what she called domination, the structural condition in which some people determine the actions and conditions of other people's lives without those people having a meaningful role in the determination. The decisions about AI development, deployment, and regulation are being made for affected communities, not by them and not with them. The structural conditions of the deliberation ensure that the outcomes will reflect the interests and perspectives of those who are already advantaged by the AI transition, because those are the only interests and perspectives that the deliberative architecture is designed to receive.
The Orange Pill framework — Edo Segal's insistence that AI amplifies what is already present — acquires a specifically democratic dimension through Young's analysis. What the AI transition is amplifying, in the domain of governance, is the pre-existing asymmetry of voice. Communities that already had political power — the technology sector, the financial sector, the highly educated professional class — find their voice amplified by the very technology that is the subject of deliberation. They understand AI because they built it or invest in it or use it daily. They speak the language of the governance institutions because those institutions were designed by and for people like them. Their participation in the deliberative process is not merely welcomed; it is structurally facilitated by every feature of the institutional landscape.
Communities that lacked political power before the AI transition — low-wage service workers, creative workers in precarious employment, populations in the Global South whose cultural products feed training datasets but whose governance structures have no seat at the international table — find their already-marginal voice further diminished. The amplifier is connected to the existing structure of communicative power, and it amplifies that structure faithfully.
Young argued that addressing this asymmetry requires more than goodwill. It requires institutional redesign. Specifically, it requires the creation of deliberative institutions that are structured from the ground up to include the voices of affected communities — not as an afterthought, not as a consultation requirement, not as a token presence on an advisory board, but as a constitutive element of the decision-making process. She advocated for what she called differentiated representation — institutional mechanisms that ensure that structurally disadvantaged groups have guaranteed representation in the bodies that make decisions affecting them.
In the context of AI governance, differentiated representation might take several forms. It might mean that any regulatory body tasked with AI oversight must include voting members drawn from affected worker communities — not appointed by the regulator but selected by the communities themselves. It might mean that AI companies above a certain size are required to establish worker councils with genuine decision-making authority over deployment decisions that affect employment. It might mean that international AI governance bodies must include delegations from the Global South that are proportionate to the populations affected by AI systems trained on their cultural and linguistic data. It might mean that the deliberative processes themselves are redesigned to recognize narrative, testimony, and culturally specific forms of expression as legitimate contributions — not supplements to the real deliberation but constitutive parts of it.
None of these institutional innovations would be sufficient on their own. Young was not naive about the difficulty of institutional redesign. She understood that institutions tend to reproduce the power structures that created them, that formal inclusion can coexist with substantive exclusion, that the mere presence of affected voices does not guarantee that those voices will be heard. But she insisted that the alternative — the current arrangement, in which the most consequential technological transformation in human history is being governed by a tiny, homogeneous elite whose interests are structurally aligned with accelerating the transformation rather than mitigating its harms — is democratically illegitimate. Not because the people making the decisions are bad people. Most of them are thoughtful, well-intentioned, and genuinely concerned about the social consequences of AI. The illegitimacy is structural. It resides not in the character of the decision-makers but in the architecture of the decision-making process — an architecture that systematically excludes the perspectives of those who bear the greatest costs.
Young's final and perhaps most challenging claim about democratic inclusion is that difference is a resource for deliberation, not an obstacle to it. The dominant model of deliberative democracy, inherited from Jürgen Habermas and the Rawlsian tradition, treats difference — different experiences, different social positions, different cultural frameworks — as something to be transcended in pursuit of a common perspective. The goal of deliberation, on this model, is consensus: a shared understanding that all reasonable parties can endorse. Young argued that this model is itself a form of exclusion. The demand for consensus tends to produce conformity to the dominant perspective. Differences that cannot be assimilated to the dominant framework are treated as irrational, as obstacles to be overcome, as noise rather than signal.
Against this, Young proposed that genuine democratic deliberation should foreground difference. The illustrator's experience of displacement is not the same as the AI engineer's experience of building the tool that displaced her. The translator's understanding of what is lost when machine translation replaces human translation is not the same as the platform executive's understanding of what is gained. These differences are not problems to be resolved. They are epistemic resources — sources of knowledge about the structural reality of the AI transition that would be invisible from any single vantage point. A deliberative process that honors difference — that treats the plurality of affected perspectives not as noise but as essential data — will produce better governance than one that seeks to transcend difference in pursuit of a false and inevitably partial consensus.
The asymmetry of voice in the AI transition is not a bug in the democratic process. It is a feature of the structural arrangement — an arrangement that concentrates communicative power in the same hands that hold economic and technological power. Young's framework insists that this concentration is unjust, that it produces governance outcomes that systematically favor the already-advantaged, and that the only remedy is the hard, unglamorous work of institutional redesign. Building deliberative structures that include the excluded. Recognizing forms of speech that the dominant deliberative norms currently dismiss. Treating difference as a resource rather than an obstacle.
This is not a utopian proposal. It is a structural one. And like all structural proposals, it requires the participation of those who currently benefit from the existing arrangement — the technology companies, the investors, the governance institutions themselves — because they are the ones with the power to change the architecture. Young was clear-eyed about the paradox: the people who must change the structure are the people who benefit from it. She did not resolve this paradox. She named it, insisted on its reality, and argued that political responsibility — the forward-looking obligation to work toward just institutional arrangements — falls on everyone who participates in the structure, including and especially those who benefit from it.
The hearing room in Washington, with its witness table populated by CEOs and professors and venture capitalists, is a microcosm of the structural injustice that Young spent her life analyzing. The solution is not to replace the CEOs with illustrators. It is to redesign the room so that both are present, both are heard, and neither can determine the other's fate without the other's participation in the determination. The architecture of the room is the architecture of justice. And in the age of artificial intelligence, the room needs to be rebuilt from the foundations.
Iris Marion Young died in 2006, at the age of fifty-seven, before she could see the technology that would make her work indispensable. Her final book, *Responsibility for Justice*, was published posthumously in 2011, assembled from manuscripts she had been refining in the last years of her life. It contains what may be the single most important philosophical concept for navigating the AI transition: the distinction between the liability model of responsibility and the social connection model of responsibility.
The liability model is the one most people carry in their heads, often without knowing it. It says: you are responsible for an outcome if you caused it, if you intended to cause it or were negligent in failing to prevent it, and if you were in a position to have acted otherwise. Responsibility, on this model, is a backward-looking judgment. It asks: who did this? It assigns blame. It demands remedy from the blameworthy. It is the foundation of tort law, criminal law, and the common moral intuition that responsibility tracks causation. You broke it, you fix it.
Young argued that the liability model, while perfectly appropriate for cases of individual wrongdoing, is catastrophically inadequate for cases of structural injustice. The inadequacy is not a matter of degree but of kind. Structural injustice is produced by the actions of millions of people operating within accepted institutional rules, none of whom individually caused the unjust outcome, most of whom could not have prevented it by acting differently within the existing structure, and many of whom were completely unaware that their actions contributed to the outcome at all. The liability model, applied to structural injustice, produces one of two results: either no one is responsible (because no individual caused the outcome), or everyone is equally responsible (because everyone participated in the structure). The first result is morally intolerable — it says that massive, systematic harm to identifiable groups of people is no one's problem. The second result is practically useless — it says that the displaced illustrator and the AI company CEO bear identical responsibility for the displacement, which provides no guidance for action.
Young's alternative is the social connection model of responsibility. It holds that all persons who participate in the structures that produce injustice share a forward-looking responsibility to work toward transforming those structures. The responsibility is not based on causation. It is not based on fault. It is not based on having done anything wrong. It is based on connection — on the fact of participation in a shared institutional order that produces unjust outcomes. The responsibility is not backward-looking (who caused this?) but forward-looking (what must we do to change it?). It is not individual but collective — it requires coordinated action to transform structural processes that no individual can transform alone. And it is not dischargeable — you cannot fulfill your responsibility by writing a check or issuing an apology and then returning to business as usual. As long as the structure persists, the responsibility persists.
The implications for the AI transition are profound and uncomfortable for everyone involved.
For the AI companies: the social connection model says that you bear a heightened degree of political responsibility — not because you are guilty of wrongdoing but because you occupy a position of disproportionate power within the structure. Young argued that those who have greater capacity to influence structural processes bear a proportionally greater responsibility to do so. The company that designs and deploys AI systems has more power to shape the structural conditions of the transition than the individual worker who is displaced by those systems. This differential in power produces a differential in responsibility. The AI company is not guilty of exploitation. But it is responsible — politically, structurally, forward-lookingly — for using its position of power to advocate for and implement structural changes that mitigate the injustice its products help produce.
For the investors and shareholders: the social connection model says that profit derived from structural processes that produce injustice generates responsibility for transforming those processes. The venture capitalist who funds an AI startup that displaces ten thousand creative workers is not guilty of causing the displacement. But she is connected to it — her capital is one of the structural elements that made the displacement possible — and that connection generates a forward-looking obligation to work toward a more just arrangement. Specifically, it means that the investor cannot discharge her responsibility by maximizing returns and then donating a fraction to charity. The responsibility is political, not philanthropic. It requires advocacy for institutional changes — regulatory frameworks, labor protections, deliberative structures — that would transform the structural conditions of the AI transition.
For the consumers: the social connection model says that the person who chooses the AI-generated image over the human-made one because it costs a tenth as much is participating in a structural process that produces injustice, and that participation generates responsibility. Not guilt. The consumer has done nothing wrong. The choice was rational, the alternatives were often unaffordable, and the structure provided no mechanism for the consumer to account for the structural consequences of the choice. But the responsibility is real. It takes the form of a political obligation — to support institutional changes that would make the structural consequences of consumption visible and actionable, to advocate for governance mechanisms that distribute the costs of the AI transition more equitably, and to resist the narrative that structural injustice is someone else's problem.
For the displaced workers: and here is where Young's framework becomes most uncomfortable, most counterintuitive, and most necessary. The social connection model says that even those who are harmed by structural injustice bear a responsibility to work toward transforming the structure. Not because they caused it. Not because they deserve it. Not because the burden is fair. Young was explicit: the burden is unfair. It is an additional injustice heaped upon people who are already suffering. But the responsibility is real, because the displaced workers are participants in the structure — they occupy positions within the institutional order, they are connected to other participants through webs of institutional relationship, and their participation (or withdrawal) affects the structural processes that produce the injustice.
This is the philosophical foundation for the Orange Pill's most provocative claim: that disengagement is never neutral. Edo Segal's account of the Luddites — the early nineteenth-century textile workers who smashed the machines that displaced them and then withdrew from the political processes that might have shaped the transition more justly — is, through Young's lens, a case study in the consequences of abandoning political responsibility. The Luddites were victims of structural injustice. Their rage was justified. Their displacement was real. But their disengagement from the political process — their refusal to participate in the governance structures that were determining their fate — left those structures entirely in the hands of the factory owners, the investors, and the politicians who represented manufacturing interests. The result was not the preservation of the Luddites' way of life. It was the acceleration of exactly the structural changes they opposed, now implemented without any countervailing voice.
Young would say that the Luddites' responsibility to participate was not a consequence of their guilt but of their membership in a shared political community. They were connected to the factory owners, the investors, and the consumers of cheap textiles through structures of mutual dependence and institutional relationship. Their withdrawal from those structures did not free them from the structures' effects. It merely deprived them of whatever influence they might have exercised within them.
The same dynamic is playing out in the AI transition. Every creative worker who withdraws from the governance conversation — because the conversation is dominated by technologists, because the deliberative norms exclude their forms of expression, because the structural conditions of their displacement leave them without the time, resources, or emotional energy to participate — leaves the conversation more thoroughly in the hands of those whose interests are aligned with accelerating the transition. The withdrawal is understandable. Young never denied that. She simply insisted that it is also consequential — that it deepens the structural injustice rather than escaping it.
The social connection model of responsibility generates a specific and demanding political obligation: the obligation to organize. Young argued that because structural injustice is produced by collective processes, it can only be transformed through collective action. Individual virtue is insufficient. The AI engineer who refuses to work on a project that will displace workers is acting admirably, but her individual refusal will not change the structural incentives that ensure another engineer will take her place. The consumer who chooses to pay more for human-made goods is exercising moral agency, but her individual choice will not alter the market structures that make AI-generated alternatives overwhelmingly cheaper. Only collective action — coordinated political effort to change the institutional rules, regulatory frameworks, and governance structures within which individual choices are made — can transform the structural conditions that produce the injustice.
This is why Young's framework ultimately converges with the Orange Pill's central imperative: the obligation to stay in the room. Not because the room is comfortable. Not because the room is fair. Not because the room was designed for the people who are now being asked to remain in it. But because the room is where the structural decisions are being made, and absence from the room is not neutrality. It is abdication. And abdication, in a structurally unjust world, is a form of complicity — not moral complicity (the abdicator is not guilty) but structural complicity (the abdicator's absence enables the structure to reproduce itself without resistance).
Young's theory of political responsibility is demanding. It asks something of everyone and gives no one a pass. It refuses the comfort of blame — the satisfying narrative in which a villain is identified and punished and justice is restored. It refuses the comfort of innocence — the reassuring story in which you are merely a bystander, unconnected to the structural processes that are producing harm. And it refuses the comfort of despair — the seductive conclusion that the structures are too large, too complex, too deeply entrenched to be changed by human action.
Against blame, Young offers connection. Against innocence, she offers responsibility. Against despair, she offers the historical evidence that structures do change — that institutional arrangements that seemed permanent and natural have been transformed, repeatedly, by the sustained collective action of people who refused to accept them. The abolition of slavery. The extension of suffrage. The creation of the welfare state. The passage of civil rights legislation. None of these transformations was achieved by identifying a villain and punishing them. All of them were achieved by people who recognized their connection to unjust structures and organized collectively to transform those structures from within.
The AI transition will not be different. The structures that produce its injustices — the exploitation of training data, the marginalization of displaced workers, the concentration of governance power in a technical elite, the cultural imperialism of algorithmically dominant traditions, the normalization of human erasure — will not be transformed by identifying villains. They will be transformed by people who recognize their connection to these structures, accept the forward-looking responsibility that connection generates, and do the difficult, unglamorous, politically demanding work of institutional redesign.
Young did not live to see the AI transition. But she built the framework that makes it comprehensible — not as a story of heroes and villains, not as a narrative of inevitable progress or inevitable decline, but as a structural condition that generates political responsibility for everyone it touches. The responsibility is not guilt. It is not blame. It is the recognition that we are all swimming in the same structural water, and the water is rising, and the only way to redirect it is to work together to rebuild the channels through which it flows.
That work begins — it can only begin — with the decision to stay in the room.
I didn't come to Iris Marion Young through philosophy. I came to her through frustration.
I had been trying, for months, to articulate something that kept slipping away from me whenever I reached for it. The feeling — the specific compound feeling — of watching AI transform the world I had spent my career helping to build, and knowing that the transformation was producing real harm to real people, and knowing that I was implicated in that harm, and knowing that the people being harmed were not being harmed by me, exactly, but by a structure I had helped create and continued to participate in and benefit from. I was not the villain of this story. Neither was anyone I knew. And yet the harm was real, the displacement was real, the loss of dignity and livelihood and voice was real. The thing I couldn't name was the gap between the absence of a villain and the presence of an injustice.
Then someone handed me *Responsibility for Justice*, and the gap had a name. Structural injustice. The authorless harm. The space where no one is guilty and everyone is responsible.
Young gave me something I didn't know I needed: permission to hold contradictions. To be an advocate for AI's transformative potential and a critic of the structures through which that potential is being realized. To refuse both the triumphalist narrative that erases the displaced and the elegiac narrative that demands we stop building. To insist that the builders bear responsibility and the displaced bear responsibility and the consumers bear responsibility — not because any of them are guilty, but because all of them are connected.
The hardest thing Young taught me is the thing I keep coming back to: that the people being crushed by the transition have a responsibility to stay in the room where the transition is being governed. I know how that sounds. I know the objection: it is obscene to tell the displaced that they bear responsibility for fixing the system that displaced them. Young heard that objection. She agreed that the burden is unfair. She insisted it is unavoidable. And she was right. Because the alternative — withdrawal, disengagement, the smashing of machines — leaves the room entirely in the hands of those whose interests are served by the status quo. The Luddites taught us that. History keeps teaching us that. Somehow we keep forgetting.
I wrote this book because I believe the room can be rebuilt. Not easily. Not quickly. Not without conflict and compromise and the grinding, unsexy work of institutional design. But rebuilt. Young spent her life arguing that structures are not fate — that they are human creations, sustained by human participation, and transformable by human action. The AI transition is not a weather event that we can only endure. It is a structural arrangement that we are collectively producing, day by day, choice by choice, and that we can collectively transform.
The obligation is on all of us. The builders and the displaced. The investors and the consumers. The people who write the code and the people whose livelihoods the code rewrites. Not guilt. Not blame. Connection. Responsibility. The hard, forward-looking work of making the room larger, the voices more various, the architecture more just.
Stay in the room. Rebuild it from inside.

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Iris Marion Young — On AI* uses as stepping stones for thinking through the AI revolution.