The handmade artifact identifies its maker. The mass-produced object identifies its manufacturer. The AI-generated artifact identifies no one in particular. It carries the traces of millions of uncredited contributors whose work entered the training data, the engineers who built the model, the corporation that deployed it, and the user who wrote the prompt. No single participant made it; they all made it; the 'collaboration' made it. This diffusion looks like shared credit. In practice it functions as shared evasion, because the structures of accountability are designed to locate responsibility in specific agents, and when agency is distributed across actors of incommensurable kinds, that location fails.
There is a parallel reading that begins not with authorship but with substrate. The handmade artifact appears to identify its maker, but this appearance depends on bracketing everything that made the making possible. The canvas, the pigments, the training, the patronage system, the guild structures, the trade routes that brought materials, the economic arrangements that gave the painter time—all of this is already a distributed system of production. The painting's surface hides this infrastructure by convention, not by nature. We agree to see only the painter's hand because the legal and cultural apparatus is designed to support that fiction, and the fiction serves specific interests: it creates markets, enables inheritance, structures taxation, supports the romantic ideology of individual genius.
The AI artifact makes this infrastructure visible by refusing the convention. When we cannot locate a single author, we are forced to see what was always true: that every artifact is the product of a vast, distributed system, and that authorship is a legal construct imposed on collaborative production to enable certain forms of economic and political organization. The 'diffusion' is not a new problem introduced by AI—it is the actual structure of production that earlier technologies worked hard to conceal. The question is not how to restore individual accountability to AI outputs, but whether the accountability structures designed for individual authorship were ever adequate to the collaborative, infrastructural reality of making anything at all. The crisis AI produces is not that it distributes responsibility, but that it makes the distribution impossible to ignore.
The Orange Pill describes this diffusion honestly. 'Neither of us owns that insight,' the author writes of a moment of productive collaboration with Claude. 'The collaboration does.' The honesty is admirable, and the description of the experience is accurate. But the formulation conceals a question Berger's framework insists on asking: if neither the human nor the machine owns the insight, who is responsible for it? If the insight is wrong — if the reference is misused, if the historical claim is inaccurate, if the recommendation causes harm — the diffusion of authorship becomes a diffusion of accountability. The collaboration owns the insight, but the collaboration is not a person and cannot be held responsible. The human can be held responsible, but did not produce the output alone. The machine cannot be held responsible, because it is not a moral agent.
Berger spent his career insisting that every image carried a politics, and that accountability was part of that politics. When you looked at an oil painting, you could ask: who made this, under what conditions, in whose interest, for whom? The questions were answerable because the painting had a specific history. The AI artifact resists these questions not because the history is unknown but because it is distributed across so many actors that no specific history applies. The training corpus is not a person you can locate. The engineers who built the model are many and anonymous. The corporation is a legal fiction designed to diffuse liability. The user wrote the prompt but did not produce the output.
This is not a technicality. It is a political outcome. Systems that produce consequential outputs — medical recommendations, legal briefs, financial analyses, news-adjacent content — need accountability structures, and accountability structures need to locate responsibility somewhere. The diffusion of authorship in AI collaboration tends, in practice, to push responsibility either onto the user (who signs off on an output she did not fully understand) or into nowhere (where responsibility dissolves into the collective agency of the 'system').
The handmade artifact says: I was made, and the marks of my making tell you something about who made me and how and why. The AI-generated artifact says: I appeared, and my surface tells you nothing about the conditions of my production, because my surface is all there is. The first arrangement supports accountability because it supports identification. The second undermines accountability because it undermines identification. This is not a problem new technologies will solve. It is a feature of the technology's architecture, and addressing it requires institutional and legal structures that do not yet exist.
The concept is developed in Chapter 3 of this volume, extending Berger's analysis of authorship in Ways of Seeing and in his later writing on collaborative work with photographer Jean Mohr. It also draws on recent work in AI ethics by Joanna Bryson, Luciano Floridi, and others on the question of distributed moral agency.
Authorship and accountability travel together. When the first is diffused, the second is diffused with it.
Diffused accountability tends toward no accountability. The structures designed to locate responsibility cannot function without specific agents to locate.
The problem is not malice but architecture. No one designed AI collaboration to evade responsibility; the evasion is a structural consequence of distributing agency across actors of different kinds.
Existing legal and institutional structures do not yet address this. The question of who is liable when an AI-assisted output causes harm remains substantially unresolved.
The handmade artifact is not a nostalgic alternative. It is a case that clarifies what distributed authorship gives up, and what restoring it would require.
Legal scholars have proposed various responses: strict liability for AI deployers, insurance mandates, formal attribution requirements. Technologists have proposed technical responses: provenance tracking, model cards, disclosure requirements. Critics argue that none of these adequately addresses the structural problem. The framework's position is modest: making the diffusion visible is the precondition for any institutional response, because institutions cannot address problems they cannot see. Whether specific responses work is an empirical question; whether the problem exists is not.
The right frame depends on which question you're asking. If the question is 'Who signs the medical report?'—a question of present legal and professional responsibility—Edo's analysis is essentially correct (95%). Existing accountability structures require specific agents, and AI collaboration genuinely complicates identification in ways that matter for malpractice law, editorial standards, professional licensing. The diffusion is not just conceptual; it creates real gaps in liability regimes that need institutional repair.
If the question is 'What makes authorship feel different?'—a question of phenomenology and cultural practice—the weighting shifts (60/40). The handmade artifact does provide different markers for tracing production, but the contrarian view is right that this tracing was always partial, always dependent on what we agreed to make visible. The painter's signature brackets vast systems; the AI output refuses the bracket. Both involve distributed production; they differ in which parts of the distribution we conventionally acknowledge.
If the question is 'What structures would genuine accountability require?'—a question of institutional design—the synthesis matters most (50/50, but transformed). The entry is right that current structures fail; the contrarian is right that individual authorship was always insufficient. The productive frame is neither restoration nor acceptance, but invention: accountability systems designed for explicitly collaborative, infrastructural production. This means legal concepts of collective agency, insurance models that cover systemic uncertainty, professional standards that acknowledge distributed expertise, attribution practices that make infrastructure visible without paralyzing action. The AI artifact doesn't break accountability—it reveals what accountability adequate to collaborative reality would actually require.