The authorless harm is the diagnostic phenomenon that Young's entire framework was built to address. The Chicago advertising agency that laid off its illustration department is its paradigm case: no participant in the chain of decisions violated a contract, broke a law, acted in bad faith, or intended the harm. The creative director admired his illustrators. The clients followed budgets. The illustrators followed a career path that narrowed without warning. The harm is real — twelve households lost income, a professional community lost standing, a craft tradition lost practitioners — and yet no one is responsible in the liability-model sense. The author is absent; the harm is present; and standard moral vocabularies have nothing to say.
There is a contrarian reading that begins not with the moral puzzle of distributed causation but with the material substrate that makes such distribution possible. The 'authorless harm' is better understood as the predictable output of specific technological and financial architectures designed to obscure agency while concentrating benefits. The Chicago illustrators weren't victims of some emergent property of complex systems — they were casualties of a deliberate investment in automation infrastructure by venture capital firms betting on labor replacement. The 'absent author' is a carefully maintained fiction that serves those who profit from the harm.
The liability model's supposed inadequacy is itself a political achievement. Limited liability corporations, algorithmic decision-making, platform immunity provisions, and cross-jurisdictional supply chains weren't discovered in nature — they were engineered precisely to diffuse responsibility while preserving extraction. When a generative AI company scrapes millions of artists' works without permission, trains models on this stolen corpus, then sells tools that displace those same artists, the 'authorlessness' of the resulting harm required billions in capital investment, armies of lawyers structuring defensive corporate architectures, and systematic lobbying to prevent regulatory oversight. The harm appears authorless only because we've been trained to see infrastructure as neutral rather than as the crystallized intention of those with the resources to build it. Every 'complex institutional process operating at scales beyond individual perception' was designed by specific people with specific interests to be precisely that opaque.
The concept cuts through the AI discourse's defining confusion. Each actor in the chain of AI deployment can truthfully claim to be following institutional norms, pursuing legitimate goals, and responding to incentives they did not design. The machine learning researcher advances the state of the art. The product team builds what users want. The agency offers clients the best available tools. The client manages budgets. The aggregate effect is displacement; the individual actions are unremarkable. Searching for a villain produces either paralysis (no one is responsible) or indiscriminate blame (everyone is equally guilty) — both of which are politically useless.
The authorless harm is not a rare edge case. It is the dominant category of injury in a world organized around complex institutional processes operating at scales beyond individual perception. Climate change is an authorless harm. Sweatshop labor is an authorless harm. Financial crises produce authorless harms. The AI transition is the latest, largest, and fastest-moving instance of a category that modernity has been producing at industrial scale and that moral philosophy has been struggling to theorize since Rousseau. See structural injustice.
The concept's political significance is that it forces a choice. Either we accept that massive, systematic harm to identifiable groups is no one's problem because no one caused it — which is morally intolerable — or we develop a different model of responsibility that can address harm without requiring a perpetrator. Young's social connection model is the most developed attempt to take the second option seriously. Its uncomfortable demands are the price of taking authorless harm seriously at all.
The phrase is not Young's coinage; it names the diagnostic condition her work was built to address. Precursors exist in Marx's analysis of alienation and Durkheim's analysis of anomie, but Young's framework gave the phenomenon its sharpest contemporary articulation. The AI transition has made it a category that ordinary people now recognize in their own lives, which is why Young's previously specialist framework has become urgently public.
Real harm, absent agent. The defining structure of the category.
Rule-following, not rule-breaking. The harm is produced by people doing exactly what the institutional order expects.
Paralysis or promiscuity. The liability model produces either 'no one is responsible' or 'everyone is equally responsible' — both useless.
Modernity's dominant injury type. Climate change, financial crises, supply-chain harms, AI displacement — all share the structure.
The diagnostic gateway. Recognizing authorless harm as its own category is the precondition for developing responsibility frameworks adequate to it.
The divergence between these readings depends entirely on which temporal and spatial scale we examine. At the immediate, transactional level where Young focuses — the creative director choosing AI tools, the client managing budgets — the authorless harm frame is essentially correct (95%). These actors genuinely operate within constraints they didn't create and make reasonable decisions given their local context. The liability model truly fails here because individual rule-following behavior produces collective injury without individual malice.
But zoom out to the infrastructure level, and the contrarian view gains force (70%). The systems that enable authorless harms — from corporate structures to AI training regimes — were indeed designed with specific distributive outcomes in mind. Silicon Valley's investment in generative AI explicitly aimed to reduce labor costs across creative industries. The 'absence' of authors at the transaction level coexists with clear authorship at the system-design level. The Chicago illustrators experienced both realities simultaneously: no one they could identify acted badly, yet identifiable actors built the tools that displaced them.
The synthesis requires holding both scales in view simultaneously — what we might call 'structured authorlessness.' The harm is genuinely authorless at the level where most people experience it (Young is right), and that authorlessness is itself structured by those with system-building power (the contrarian is right). This suggests that effective response requires different interventions at different scales: Young's social connection model for participants in the immediate system, and something closer to traditional liability for those who architect the systems themselves. The key insight is that authorlessness and intention aren't opposites — they're features of different layers of the same process.