Howard Becker — On AI
Contents
Cover
Foreword
About
Chapter 1: The Art World You Cannot See
Chapter 2: Conventions — The Rules Nobody Wrote
Chapter 3: The Cooperative Network Behind the Solo Builder
Chapter 4: Support Personnel — Who Does the Work You Cannot See
Chapter 5: Editing as the Core Creative Act
Chapter 6: The Distribution of Credit and the Authorship Problem
Chapter 7: Mavericks, Integrated Professionals, and Naive Artists in the AI Age
Chapter 8: Reputation Systems in a World of Abundant Production
Chapter 9: The Fishbowl as a Set of Conventions
Chapter 10: Building New Conventions for a New World
Epilogue
Back Cover
Cover

Howard Becker

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Howard Becker. It is an attempt by Opus 4.6 to simulate Howard Becker's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The credit line was what broke the spell.

I was reviewing the passage in *The Orange Pill* where I describe building Napster Station in thirty days — no software, no hardware, no industrial design, nothing but a vision and a deadline and Claude Code. I had written it as a story about what I accomplished. What my team accomplished. And it is that story. The accomplishment is real.

But somewhere in the process of working through Howard Becker's ideas for this book, I started counting. Not lines of code or hours logged. People. The researchers at Anthropic who built the model. The annotation workers who refined it. The open-source developers whose code trained it. The cloud engineers who kept the servers alive. The communities that developed prompting conventions I absorbed without noticing. Thousands of participants in a cooperative chain that made every moment of those thirty days possible — and not one of them appeared in my telling.

I had followed a convention so thoroughly I mistook it for reality. The convention says: the builder built it. The convention assigns credit to the visible node and renders the network invisible. I had been swimming in that convention my entire career without pressing my face against the glass.

That is what Becker does. He does not tell you your conventions are wrong. He shows you they exist. He makes the water visible. And once you can see it, you cannot unsee it — which is its own kind of orange pill, quieter than the one I describe in the main book but no less disorienting.

Becker spent sixty years studying how creative work actually gets made. Not the romantic version — the solitary genius, the flash of inspiration — but the sociological reality: cooperative networks of people following shared conventions about who does what, who gets credit, and what counts as good. His framework applies to jazz clubs and art galleries and Hollywood studios and, as this book argues, to the AI world forming around us right now.

The conventions of that world are being written as you read this. Who gets credited for AI-assisted work. What counts as quality when production is nearly free. Whose labor sustains the system and whose is invisible. These are not technical questions. They are social ones, and they are being answered mostly by default — by whoever shows up with the most power and the loudest voice.

Becker gives you the tools to see those defaults for what they are: choices, not facts. And choices can be made differently.

This is another lens. Use it.

-- Edo Segal · Opus 4.6

About Howard Becker

1928–2023

Howard Becker (1928–2023) was an American sociologist whose career spanned seven decades and reshaped how scholars understand creative production, deviance, and social interaction. Born in Chicago, he began playing piano in jazz clubs as a teenager — an experience that permanently shaped his conviction that creative work is cooperative rather than solitary. He earned his doctorate at the University of Chicago under Everett Hughes and went on to teach at Northwestern University, the University of Washington, and elsewhere.

His landmark book *Outsiders* (1963) redefined the sociology of deviance by arguing that deviance is not a property of an act but a label applied by social groups. His most influential work for the study of creative life, *Art Worlds* (1982), demonstrated that art is produced not by individual geniuses but by cooperative networks of participants — artists, suppliers, distributors, critics, audiences — coordinated by shared conventions. He also wrote *Writing for Social Scientists* (1986), a widely beloved guide that demystified academic prose, and *Tricks of the Trade* (1998), on sociological method.

Becker's style was deliberately unpretentious: he wrote clearly, drew on ethnographic observation rather than grand theory, and insisted that the sociologist's job was to describe what people actually do rather than prescribe what they should do. His work influenced fields far beyond sociology, including art history, musicology, science studies, and organizational theory.

Chapter 1: The Art World You Cannot See

In February 2026, Edo Segal flew to Trivandrum, India, to train twenty engineers on Claude Code. He describes the scene in *The Orange Pill* with the breathless specificity of a person who has just watched the laws of physics change: by Wednesday, each engineer was reaching across professional boundaries they had respected for years; by Friday, a twenty-fold productivity multiplier had been demonstrated at a hundred dollars per person per month. Segal saw twenty individuals amplified. He saw the imagination-to-artifact ratio collapse to the width of a conversation. He saw, in his words, the future of building.

A sociologist would have seen something different. Not twenty amplified individuals but a cooperative network in the process of being restructured. The roles that had organized the room on Monday — backend engineer, frontend developer, designer, product manager — were dissolving by Wednesday, not because the people in those roles had suddenly acquired new talents but because the conventions that had assigned them to those roles were no longer operative. The division of labor that had governed software production for half a century, the assumption that a person trained in one technical domain could not competently operate in another, turned out to be an artifact of the translation cost between human intention and machine execution. Remove the translation cost, and the divisions dissolve. Not because they were arbitrary, exactly, but because they were responses to a constraint that no longer exists.

Howard Becker would have understood this scene immediately, because Becker spent sixty years studying exactly this kind of restructuring in every creative domain he could get his hands on. His central insight, developed across a career that moved from jazz clubs to photography studios to academic departments, was disarmingly simple: nobody makes anything alone. The painter who signs the canvas depends on the people who manufactured the paint, stretched the canvas, built the gallery, wrote the reviews, trained the audience to recognize what a painting is supposed to look like, and established the aesthetic conventions within which this particular arrangement of pigment on cloth registers as art rather than decoration. The jazz musician who takes a solo depends on the rhythm section, the club owner, the recording engineer, the record label, the radio programmer, the audience that learned to listen to jazz rather than dismissing it as noise. Remove any of these participants and the work changes. Remove enough of them and the work becomes impossible.

Becker called these cooperative networks art worlds. The term was deliberately unglamorous, chosen to strip the mystique from creative production and replace it with observable sociology. An art world is not a metaphor. It is a description of how people actually organize themselves to produce, distribute, and evaluate creative work. It includes everyone whose activity contributes to the final product, from the most celebrated performer to the janitor who cleans the concert hall. And its central feature is not the talent of any individual participant but the conventions that coordinate their activity — the shared understandings about who does what, how quality is assessed, how credit is distributed, and what the final product should look like.

The argument sounds obvious when stated abstractly. Of course nobody makes anything alone. Of course creative work depends on infrastructure and cooperation. But the argument becomes radical when applied to specific cases, because it challenges the organizing myth of nearly every creative domain: the myth of the solitary genius. The myth says that the meaningful unit of creative production is the individual mind. The painter's vision. The novelist's voice. The composer's inspiration. The cooperative network is mere support, necessary but artistically irrelevant, the way an electrical grid is necessary for a lamp but is not the source of the light.

Becker's entire career was a sustained demonstration that this myth is sociologically false. Not philosophically false — philosophy can argue the question endlessly — but empirically false, observable in the actual practices of people who make things. When Becker studied jazz musicians in Chicago in the 1950s, he did not find isolated geniuses expressing interior visions. He found working professionals embedded in a network of clubs, booking agents, audience expectations, union regulations, and musical conventions that determined what they played, how they played it, where they played it, and who listened. The music that emerged was shaped at every point by the cooperative structure, not just by the musicians' individual abilities.

The same analysis applied to every art world Becker examined. The conventions of the world — the shared rules, practices, and expectations that its participants have internalized — are not constraints imposed on otherwise free creators. They are the enabling infrastructure without which creation cannot occur. A jazz musician can sit in with a band she has never met and begin playing immediately, not because she is a genius but because she shares the conventions of the genre: the standard chord progressions, the turn-taking protocols for solos, the implicit agreement about how much freedom the rhythm section permits. Without those conventions, the session would be chaos. With them, it becomes music.

This framework, applied to the scene in Trivandrum, reveals something that Segal's account gestures toward but does not fully develop — not because Segal lacks the insight but because his book is written from inside the builder's experience, and the cooperative network is precisely what the builder, situated at the center of it, cannot see whole. Segal is right that something extraordinary happened in that room. Twenty engineers discovered capabilities they did not know they possessed. The boundaries between technical domains dissolved. The imagination-to-artifact ratio collapsed. All of this is accurately reported and genuinely significant.

But the individuals in that room were not operating alone. They were operating within an emerging cooperative network — an AI world — that included, at minimum: Anthropic, the company that built Claude; the researchers who developed the transformer architecture on which Claude is based; the data workers who annotated the training data; the open-source communities whose code was ingested by the training process; Amazon Web Services or whatever cloud provider was running the inference; the internet service provider that connected the room in Trivandrum to the servers; the community of developers who had already established prompting conventions and shared them online; and Segal himself, who had designed the training, set the goals, and created the organizational context within which the engineers' newfound capabilities could be exercised productively.

Remove Anthropic from this network and the scene does not occur. Remove the cloud infrastructure and the scene does not occur. Remove the training data — the millions of lines of code written by millions of developers over decades, ingested and compressed into the model's parameters — and the scene does not occur. Remove the prompting conventions that the engineers learned, either from Segal or from online communities, and the engineers type confused queries into a tool they do not know how to use, and the twenty-fold multiplier does not materialize.

None of this diminishes what the engineers accomplished. The point is not that their contributions were illusory. The point is that their contributions were embedded — situated within a cooperative structure that made those contributions possible. The art world concept does not deflate individual achievement. It contextualizes it. The jazz musician who plays a brilliant solo is still brilliant. The brilliance is real. But it is realized within a structure, dependent on a structure, and unintelligible apart from a structure.

The AI world that is forming around tools like Claude Code has all the features Becker identified in traditional art worlds. It has conventions — about how to prompt, how to evaluate output, how to iterate, how to combine AI-generated material with human judgment. It has a division of labor — between model builders, tool designers, prompt engineers, builders, critics, and audiences. It has distribution systems — GitHub, social media, app stores, enterprise sales channels. It has evaluation mechanisms — the discourse of triumphalists and elegists that Segal describes in *The Orange Pill*, the reputation systems that determine whose work gets attention, the quality conventions that are still being negotiated.

What the AI world does not yet have is stability. Traditional art worlds have conventions that have been refined over decades or centuries. The conventions of oil painting were established over generations of practice, criticism, and institutional development. The conventions of jazz were negotiated through decades of performance, recording, and critical reception. These conventions are not fixed — they change, sometimes dramatically — but at any given moment they provide a stable enough framework for cooperative activity to proceed without constant renegotiation.

The AI world has no such stability. Its conventions are being established in real time, under conditions of extraordinary speed and pressure, by participants who are simultaneously using the tools, developing norms for using the tools, and arguing about whether the tools should exist at all. The anxiety that pervades Segal's account — the vertigo, the terror-and-exhilaration, the inability to tell whether one is witnessing birth or burial — is not merely a personal response to a powerful technology. It is the characteristic emotional signature of an art world in the process of formation, before conventions have stabilized enough to tell participants what the rules are.

This instability is visible at every level. At the level of production: nobody has established conventions for how much human engagement constitutes genuine creative contribution. Is reviewing and selecting AI output enough? Is prompting enough? Must the human have independently conceived the idea, or is it sufficient to have recognized a good one when the machine produced it? At the level of evaluation: nobody has established conventions for what counts as quality in AI-assisted work. Speed? Volume? Novelty? Depth? The quality of the question that generated it? At the level of credit: nobody has established conventions for how to attribute work that emerges from a collaboration between human and machine. Segal's honest acknowledgment that "neither of us owns that insight" is not a philosophical puzzle. It is a convention gap — a place where the cooperative structure has not yet developed the shared understandings needed to distribute recognition in a way that participants accept as legitimate.

The Google DeepMind researchers Piotr Mirowski and Rida Qadri arrived at a similar conclusion through a different route. In their work on culturally situated AI creativity, they argued that "technologies for artistic production will likely impact an entire ecosystem, and not just individual users," and proposed using Becker's art world framework to understand that impact. Their eight-week study with artists in the Persian Gulf region found that local art worlds could appropriate AI tools in ways the tool designers never anticipated — developing "hacks" for culturally specific capabilities, imagining alternative technological trajectories. The point was not that AI was good or bad for art. The point was that its effects could only be understood within the context of a specific art world, with its specific conventions, participants, and institutional structures.

The discourse about AI and creativity is dominated by two voices that Becker's framework reveals as equally incomplete. The triumphalists celebrate the empowered individual — the solo builder who ships a product over a weekend, the engineer who suddenly operates across domains, the non-technical founder who prototypes without a co-founder. The elegists mourn the lost craftsperson — the senior developer whose years of embodied expertise are being devalued, the artist whose style is reproduced without consent or credit, the professional whose guild is dissolving. Both voices take the individual as the meaningful unit of analysis. The triumphalist celebrates the individual's gain. The elegist mourns the individual's loss.

Becker's contribution is to shift the unit of analysis from the individual to the world. What is forming is not a collection of amplified individuals but a new cooperative structure with its own conventions, hierarchies, distribution systems, and evaluation mechanisms. The quality of the work this structure produces will depend not primarily on the talent of its most visible participants but on the quality of its conventions — the shared understandings that coordinate activity, distribute credit, establish standards, and determine what gets made, by whom, and for whom.

That is what the engineers in Trivandrum could not see, because they were inside it. The water they were breathing, the glass that shaped their view, was the emerging convention set of the AI world — already powerful enough to restructure their professional identities in five days, but not yet stable enough to tell them what those restructured identities meant. The vertigo was real. The cooperative network that produced it was invisible.

Making it visible is the work of this book.

---

Chapter 2: Conventions — The Rules Nobody Wrote

A working musician in Chicago in the 1950s did not think of herself as following conventions. She thought of herself as playing music. The chord changes of a standard twelve-bar blues, the four-beat swing rhythm, the unwritten rule that the saxophone takes the first solo and the trumpet takes the second, the understood relationship between the melody as written and the liberties a soloist may take with it — none of these registered as conventions. They registered as the way things are done. They were invisible in the precise sense that water is invisible to a fish: not because they were hidden but because they were everywhere, so thoroughly integrated into the practice of making music that separating them from the music itself required a deliberate act of analytical attention.

Howard Becker performed that act of attention across every creative domain he studied, and his findings were consistent. Conventions are the invisible infrastructure of creative work. They are the shared understandings — about materials, methods, forms, relationships, and standards — that allow people to cooperate without negotiating every detail from scratch. They are what make a jam session possible, what make a gallery opening legible, what make a code review meaningful. They solve coordination problems. And they are so deeply internalized by the people who use them that those people typically cannot articulate what they are, any more than a fluent speaker of English can articulate the rules of English grammar on demand.

Conventions are not rules in the sense that laws are rules. Nobody writes them down. Nobody enforces them through explicit sanctions. They are maintained through practice — through the accumulated weight of thousands of instances in which people did things one way rather than another and found that the one way worked and the other did not. A convention persists because it solves a problem that recurs. It disappears when the problem it solves ceases to exist.

Software development, for all its self-image as a domain of pure logic, runs on conventions as thoroughly as jazz. The sprint — a fixed period, usually two weeks, during which a team commits to completing a defined set of tasks — is a convention. Nobody proved that two weeks is the optimal duration. The convention stabilized because it solved a coordination problem: it synchronized the work of people with different specialties, created regular checkpoints for evaluation, and provided a rhythm that teams could internalize. The code review is a convention. It solves the problem of quality assurance in a domain where errors can be invisible to their creators. The pull request is a convention. The standup meeting is a convention. The division of a development team into frontend and backend specialists is a convention. The assumption that a product manager writes a specification and a developer implements it is a convention.

These conventions are not natural. They are historical — the products of specific organizations, specific technologies, specific economic conditions. The sprint emerged from Agile methodology, which emerged from a manifesto written by seventeen software developers at a ski lodge in Utah in 2001, which was itself a reaction against the conventions of waterfall development that had governed the industry for the previous two decades. The code review convention predates Agile but was formalized within it. The frontend-backend division became a convention when web applications grew complex enough that a single developer could not competently handle both the visual interface and the server logic.

Each of these conventions solved a real problem. And each of them, in the process of solving that problem, constrained what could be created and by whom. The sprint convention assumes that work can be decomposed into discrete tasks completable in two weeks. Work that cannot be so decomposed — exploratory research, open-ended design thinking, the kind of architectural contemplation that requires marinating in a problem for months — fits badly within the convention and tends to be either forced into an inappropriate shape or simply not done. The frontend-backend division assumes that visual design and server logic are fundamentally different kinds of work requiring different kinds of expertise. Work that crosses the boundary — a feature whose visual behavior depends intimately on server-side logic — creates coordination overhead that the convention was supposed to eliminate.

What happened in Trivandrum in February 2026, understood through Becker's framework, was not primarily a demonstration of AI capability. It was a convention collapse. The conventions that had organized software development — the division into frontend and backend, the spec-to-implementation handoff, the role boundaries that determined who could contribute to what — dissolved, because the constraint they had been designed to address no longer existed.

Consider the backend engineer Segal describes who built a complete user-facing feature in two days. Under the old conventions, this was not merely difficult. It was illegitimate. The conventions of the software art world assigned frontend work to frontend specialists. A backend engineer who attempted frontend work was violating a convention — not a written rule, but a shared understanding about who does what. The violation would have been met not with formal punishment but with informal resistance: skepticism from colleagues, pushback from reviewers, a general sense that the backend engineer was operating outside her competence.

Claude Code did not just make it technically possible for the backend engineer to build a frontend feature. It dissolved the convention that made such cross-boundary work illegitimate. The translation cost that had justified the division of labor disappeared, and with it, the social infrastructure that had maintained the boundary.

This is the pattern Becker observed in every art world transition he studied. New technologies do not merely add capability to an existing set of conventions. They destabilize the conventions themselves, because conventions are responses to constraints, and when the constraint changes, the convention loses its rationale. Photography destabilized the conventions of portrait painting, not because photographers were better painters but because the constraint that painting addressed — the absence of any other means of producing a realistic likeness — was eliminated. The conventions that had organized portrait painting (the poses, the lighting conventions, the relationship between painter and subject, the economy of commissions) did not survive unchanged. They transformed, gradually and contentiously, into something different.

The AI world is in the early stages of an equivalent transformation, and the conflicts Segal describes in *The Orange Pill* are, in Becker's terms, conflicts about which conventions will govern the new world. The discourse between triumphalists and elegists is not merely a difference of opinion about AI. It is a contest between competing convention sets. The triumphalists are proposing a convention set organized around speed, individual capability, and output volume: the solo builder who ships fast, measures impact in metrics, and credits herself for the result. The elegists are defending a convention set organized around craft, embodied expertise, and the slow accumulation of depth: the senior developer who earned her understanding through years of struggle and finds that understanding devalued by a tool that makes struggle optional.

Both convention sets solve real problems. The triumphalist set solves the problem of production in a world where execution used to be the bottleneck. The elegist set solves the problem of quality in a world where depth used to require friction. Neither set is natural or inevitable. Both are social constructions — ways of organizing cooperative activity that serve certain interests and neglect others.

The Byung-Chul Han critique that occupies the center of *The Orange Pill* is, in Becker's vocabulary, a critique of a specific convention: the convention that smooth output is good output. Han argues that the removal of friction produces superficiality. Becker would not disagree, exactly, but he would reframe. The question is not whether smoothness is inherently good or bad. The question is whether the AI world's convention of quality will settle on smoothness as its standard. If it does, the world will produce smooth work, because artists in any art world produce the work that the conventions reward. If the convention of quality instead rewards the kind of judgment Segal describes — the willingness to reject a polished passage because the idea beneath it is hollow — then the world will produce work of a different character.

The Berkeley study Segal discusses in Chapter 11 of *The Orange Pill* is particularly revealing when read through the conventions lens. The researchers found that AI tools intensified work — that workers took on more tasks, expanded into adjacent domains, and filled previously protected pauses with AI-assisted productivity. Becker's framework suggests that what the researchers were observing was not a natural consequence of the technology but a convention in operation. The convention is: available capability should be used. A person who can do more, should do more. Idle capacity is waste.

This convention is not inherent in the technology. It is a social product, inherited from decades of productivity culture, reinforced by organizational incentive structures, and internalized so deeply that workers experience it as personal motivation rather than external pressure. The AI tool did not impose the convention. It provided a new substrate on which the existing convention could operate. The technology amplified a convention that was already there.

The "AI Practice" frameworks the Berkeley researchers proposed — structured pauses, sequenced rather than parallel work, protected time for human-only collaboration — are, in Becker's terms, attempts to introduce new conventions that counteract the default. They are interventions in the convention landscape, designed to create space for activities that the dominant convention of maximum utilization crowds out. Whether they succeed will depend on the same factors that determine the success of any convention change: whether enough participants adopt them, whether institutional structures reinforce them, and whether they solve a problem that participants recognize as real.

The conventions of the AI world are being formed right now. They are being formed in corporate AI governance meetings and open-source community norms and educational experiments and the cultural arguments between people who see liberation and people who see exploitation. Most of the people participating in this process do not think of themselves as establishing conventions. They think of themselves as figuring out how to use a tool, or arguing about whether the tool is good or bad, or trying to keep up with a pace of change that makes deliberation feel like a luxury they cannot afford.

But conventions do not require deliberate establishment. They emerge from practice. They stabilize when enough people do things the same way for long enough that the way becomes the default. And once they stabilize, they become the water — invisible, pervasive, and extremely difficult to change. Which conventions stabilize in the AI world's formative period will determine the character of the work the world produces for years or decades to come.

This is why the formation period matters so much, and why the sociological eye matters during this period more than it might at any other time. You cannot see the conventions while they are forming if you are inside the process, doing the forming. You can only see them from outside, through the deliberate act of making the familiar strange — treating the emerging practices of the AI world not as the natural way things are done but as one possible arrangement among many, an arrangement that is being chosen, whether deliberately or by default, and that will have consequences for everyone the world touches.

The conventions are being written. Nobody is writing them down. That is the problem and the opportunity at once.

---

Chapter 3: The Cooperative Network Behind the Solo Builder

Alex Finn's year is one of the set-piece stories of the AI moment. Segal describes it in *The Orange Pill* as proof of concept for the democratization thesis: a single person, armed with Claude Code and determination, built a revenue-generating product without writing a line of code by hand. Five years earlier, the same accomplishment would have required a team of five, a runway of twelve months, and a founder with deep technical skills. Finn did it with an idea, a tool, and an appetite for work. The narrative was clean and powerful, and it circulated widely because it confirmed what many people wanted to believe: that AI had leveled the playing field, that individual capability was being amplified to the point where a single person could do what only organizations could do before.

Becker spent his career asking a different kind of question about stories like this. Not whether the accomplishment was real — he had no interest in debunking — but who else was in the room. The question was never rhetorical. It was methodological. When you trace the actual chain of cooperation that makes any creative product possible, the story of individual accomplishment does not collapse, but it changes shape. It becomes a story about a network, and the individual's contribution, still real and still significant, takes its place within a structure that is far larger and far more complex than the credit conventions acknowledge.

Consider the cooperative chain behind a single Claude Code session. A builder types a prompt. The prompt travels over the internet to a data center. The data center is operated by a cloud infrastructure company — Amazon, Google, Microsoft — that maintains the servers, the cooling systems, the power supply, the security protocols. The server runs a model developed by Anthropic. The model was trained on data: billions of tokens of text, much of it produced by human beings who did not consent to its use for this purpose and who receive no compensation for the contribution it makes to the model's capabilities. The training data includes code from open-source repositories, written by developers who contributed their work under licenses that did not anticipate machine learning ingestion. It includes academic papers, books, forum posts, documentation, and the accumulated written output of decades of human intellectual labor.

The model's architecture — the transformer — was developed by researchers at Google in 2017, building on decades of prior work in machine learning, neural networks, and computational linguistics. The researchers were employed by a corporation, funded by advertising revenue generated by billions of users of Google's products. The mathematics underlying the transformer draws on linear algebra developed over centuries, optimization theory refined over decades, and probability theory whose foundations were laid in the seventeenth century.

The model was further refined through reinforcement learning from human feedback — a process that requires human annotators to evaluate the model's outputs and indicate which responses are better and which are worse. These annotators are often contract workers, frequently based in countries where labor costs are low. The wages are modest. The work is repetitive and sometimes psychologically taxing. Their contribution to the model's capabilities is essential and almost completely invisible.

The builder who types the prompt is using an interface designed by Anthropic's product team, which draws on decades of work in human-computer interaction. The prompting conventions the builder follows — the phrasing, the structuring, the iterative refinement — were developed by early adopters who shared their techniques through blog posts, forum discussions, and social media. The builder's ability to evaluate the output depends on knowledge she acquired through education, professional experience, and the cultural conventions of her field.

Every link in this chain is a participant in the cooperative network. Remove any of them and the session does not occur, or it occurs differently. Remove the transformer researchers and there is no model. Remove the training data and there is no capability. Remove the annotators and the model's outputs are less useful. Remove the cloud infrastructure and the computation does not happen. Remove the open-source developers whose code the model learned from and the model's coding abilities are diminished. Remove the prompting conventions and the builder cannot communicate effectively with the tool.

Alex Finn's year was remarkable. The productivity was genuine, the revenue was real, and the accomplishment of building something valuable without traditional technical skills represented a genuine expansion of who gets to build. None of the sociological analysis takes that away. But the narrative of solo building is a convention of credit — a way of distributing recognition that highlights one participant and renders the rest invisible. It is the same convention that credits the novelist and not the editor, the film director and not the cinematographer, the startup founder and not the open-source ecosystem whose libraries the startup depends on.

Becker was interested in conventions of credit because they have consequences. They determine who gets paid, who gets recognized, who gets to make decisions about how the work proceeds, and whose interests are represented when conflicts arise. The convention that credits the solo builder has specific consequences: it concentrates recognition on the person who is most visible, which means it concentrates recognition on the person who is most likely to be economically privileged (able to afford the tools and the time), culturally connected (embedded in the networks where AI techniques are shared), and geographically located in the centers of the technology industry.

The developer in Lagos whom Segal invokes in The Orange Pill as evidence of democratization is a real figure, and the expansion of access she represents is genuinely significant. But the cooperative chain behind her Claude Code session includes participants whose interests may conflict with hers. The training data that makes Claude useful to her includes code written by developers who may object to its use. The annotation labor that refined the model was performed by workers who may be in her own country, earning wages that do not reflect the value their labor contributes to the system. The cloud infrastructure she depends on is owned by corporations whose pricing decisions she cannot influence. The platform conventions that determine the visibility of her output are set by companies whose interests may not align with hers.

The art world framework does not reduce these complexities to a simple story of exploitation. Art worlds are not inherently exploitative. They are cooperative structures that distribute the work of production, evaluation, and distribution among participants according to conventions that all participants have, to some degree, accepted. The question Becker asked was not whether art worlds are fair — fairness is a philosophical question — but whether the conventions of credit accurately represent the actual distribution of contributions, and what happens when they do not.

In traditional art worlds, the gap between the credit convention and the actual cooperative structure produces chronic conflict. Session musicians who play on hit records and receive no royalties. Ghostwriters who produce bestselling books and receive no bylines. Screenwriters who create the story and watch the director accept the award. These conflicts are not aberrations. They are structural features of art worlds whose credit conventions privilege one participant over others. The conflicts persist because the conventions serve the interests of the participants with the most power — usually the ones who receive the most credit — and those participants have no incentive to change a system that benefits them.

The AI world is developing its own version of this structural gap. The credit convention says: the builder built it. The cooperative reality says: the builder's contribution was essential but partial, situated within a network of thousands whose contributions were also essential. The gap between the convention and the reality is not yet producing the kind of organized conflict that characterizes mature art worlds — there are no annotator unions, no open-source developer guilds demanding credit for training data contributions — but the conditions for such conflict are forming.

Segal approaches this territory in The Orange Pill when he acknowledges that democratization is "real but partial" and that structural inequalities persist beneath the surface of expanded access. Becker's framework provides the analytical tools to specify what "partial" means: it means the cooperative network has expanded to include new participants, but the conventions of credit have not expanded to include them. The floor of who gets to build has risen. The circle of who gets recognized for building has not widened to match.

The solo builder narrative is seductive because it aligns with the deepest myth of creative production: the individual genius, self-sufficient and original. Becker's career was a patient, empirical, decades-long dismantling of that myth. Not because individual contribution is unreal, but because individual contribution, no matter how genuine, occurs within a cooperative structure that makes it possible, shapes it, and determines its meaning. The builder at three in the morning is not alone. She never was. The question is whether the AI world's conventions will acknowledge the network she depends on, or whether the myth of the solo builder will become the founding mythology of a world that systematically renders most of its participants invisible.

Every art world makes this choice, usually without recognizing it as a choice. The choice looks like common sense: of course the novelist's name goes on the cover. Of course the director gets the possessory credit. Of course the builder who shipped the product gets the applause. These are conventions so deeply normalized that they feel like facts. Making them visible — seeing them as conventions rather than as the natural order of things — is the first step toward deciding whether they are the conventions this new world ought to have.

---

Chapter 4: Support Personnel — Who Does the Work You Cannot See

The audience at a symphony concert sees the conductor raise the baton. It sees the first violinist draw the bow. It hears the music, experiences the performance, and forms an opinion about the quality of what it has heard. It does not see the luthier who carved the violin from a block of spruce and maple over the course of six months. It does not see the piano tuner who arrived at the hall at seven in the morning and spent three hours adjusting hammers and strings. It does not see the music librarian who prepared the scores, the stage crew who arranged the chairs and music stands, the fundraiser who secured the grant that pays the orchestra's salaries, the board member who negotiated the lease on the concert hall, or the custodian who mopped the lobby floor an hour before the doors opened.

These people are what Becker called support personnel. Their contributions are essential to the performance — remove any of them and something goes wrong, something is missing, the performance degrades or fails to occur at all — but the conventions of the art world render them invisible. The credit flows to the performers and the conductor. The program lists the musicians' names. The review in the newspaper evaluates the interpretation of the score. Nobody reviews the piano tuner's work, though every listener depends on it.

Support personnel are present in every art world, and their invisibility is not accidental. It is conventional — produced and maintained by the shared understandings of the art world's participants about what counts as creative contribution and what counts as mere support. The line between the two is not inherent in the work itself. It is drawn by the conventions, and it could be drawn differently. In some art worlds, at some times, the line has been drawn differently: the film credits that scroll for ten minutes at the end of a movie represent a convention of credit that is more inclusive than the novelist's solitary byline, though still far from comprehensive.

The AI world has its own support personnel, and they are, in several important respects, even more invisible than the support personnel of traditional art worlds. The piano tuner is at least present in the concert hall. The data annotator who labeled training data for a large language model is typically on a different continent from the people who use the model, employed by a subcontracting firm, working under conditions that the AI company's end users never encounter and rarely consider.

The labor of data annotation has been documented by researchers and journalists, though the documentation has had remarkably little effect on the conventions of the AI world. Mary Gray and Siddharth Suri's Ghost Work described the vast, distributed, largely invisible workforce that performs the micro-tasks on which AI systems depend — labeling images, transcribing audio, evaluating text, flagging harmful content. The workers are classified as independent contractors, which means they receive no benefits, no job security, and no upward mobility within the systems they support. Their wages are determined by global labor arbitrage: the work is routed to wherever workers will accept the lowest rate for acceptable quality.

These workers are support personnel in the strictest Beckerian sense. Their contribution is essential — the model cannot be trained without labeled data, cannot be refined without human feedback, cannot be kept safe without content moderation — but the conventions of the AI world assign them no credit, no visibility, and no voice in how the world develops. They are the piano tuners of the AI world, except that the piano tuner at least has a professional identity, a recognized skill, and a place within the art world's social structure. The data annotator has none of these.

The numbers are significant. Estimates vary, but the global data annotation workforce numbers in the hundreds of thousands to low millions, concentrated in Kenya, India, the Philippines, Venezuela, and other countries where English-language literacy is high and wages are low. The work pays, in many cases, a few dollars per hour — enough to make it attractive in local labor markets, far below what the same work would command in the countries where the AI companies are headquartered. The asymmetry is structural: the value created by the annotators' labor accrues to the model, and the value of the model accrues to the company that owns it, and the company's revenue flows to its employees and shareholders, who are overwhelmingly located in high-income countries.

Becker would not have called this exploitation, exactly, because Becker's vocabulary was deliberately less loaded than that. He would have called it a convention of credit and compensation that distributes the returns of cooperative activity in a way that reflects the power relations among the participants rather than the value of their contributions. The annotators contribute essential labor. The convention assigns them minimal credit and minimal compensation. The convention persists because the participants with the power to change it — the AI companies — have no incentive to do so, and the participants who would benefit from change — the annotators — have no leverage to demand it.

Content moderators occupy a similar position. The people who review and filter the outputs of large language models, who flag harmful content, who red-team systems for safety — they perform work that is essential to the AI world's functioning and invisible to its end users. The emotional and psychological toll of this work has been documented: moderators are exposed to violent, disturbing, and traumatic content as a routine part of their job. Their labor keeps the AI world's output within the bounds that end users expect, and the conventions of the AI world acknowledge their contribution no more than the conventions of the concert world acknowledge the custodian's mop.

The open-source community represents a different kind of support personnel, one that complicates the analysis in useful ways. Open-source developers contributed code to public repositories under licenses that permitted sharing and modification. That code was subsequently ingested by AI training processes, becoming part of the data on which models like Claude were trained. The developers' contributions are essential to the model's capabilities, but the developers did not consent to this specific use, and they receive no compensation for it.

This is not a straightforward case of exploitation. The open-source convention is itself a set of shared understandings about how code should be shared, used, and credited. The developers who contributed under open-source licenses made a deliberate choice to share their work freely, within the terms of those licenses. The question is whether the ingestion of open-source code into AI training data falls within the conventions the developers accepted when they chose to open-source their work, or whether it represents a novel use that the existing conventions do not cover.

This is a convention gap — a situation where the existing shared understandings do not provide clear guidance for a new set of circumstances. Convention gaps are normal in art worlds. They arise whenever a new technology or a new practice outpaces the existing conventions, and they are resolved through negotiation, conflict, and eventually the stabilization of new conventions. The legal battles over AI training data, the proposed licensing frameworks that would require AI companies to compensate developers whose code was used for training, the community debates about whether open-source licenses should be updated to address machine learning — these are all instances of convention negotiation, the messy social process through which a new art world establishes the rules that will govern it.

Segal's account of the developer in Lagos, invoked in The Orange Pill as evidence of democratization, takes on additional complexity when the full cooperative chain is made visible. The developer in Lagos gains access to Claude Code, and with it, access to building leverage that was previously unavailable to her. This is a genuine and significant gain. But the model she is using was trained, in part, on the labor of annotators who may be in her own country, earning wages that do not reflect the value they create. The open-source code the model learned from was contributed by developers who did not anticipate this use. The infrastructure she depends on is priced according to corporate strategies she cannot influence.

The developer in Lagos is simultaneously a beneficiary of the AI world's expansion and a participant in a cooperative structure whose conventions of credit and compensation were established without her input. She gains access to the tools. She does not gain access to the conventions-setting process. Her voice is not present in the rooms where the rules are being made, and the conventions that stabilize in her absence may not serve her interests.

This is the support personnel problem scaled to the AI world: a cooperative structure in which essential contributors are invisible, uncredited, and unrepresented in the governance of the world they help to sustain. The problem is not unique to AI. It exists in every art world Becker studied. The session musicians. The ghostwriters. The piano tuners. The custodians. But in the AI world, the scale is different. The support network is global. The invisibility is reinforced by geographical distance, contractual abstraction, and the sheer complexity of the cooperative chain. When a builder types a prompt and receives a response, the chain of cooperation that produces that response spans continents, involves thousands of participants, and is compressed into a transaction that takes seconds.

The conventions that will govern the AI world are being established now, during this period of instability. The question of support personnel — who they are, what they contribute, how they are compensated, and whether they have a voice in the world they help to sustain — is not a peripheral concern. It is a central question about the character of the world being built. Traditional art worlds resolved the support personnel question through conventions that ranged from moderately inclusive (film credits) to almost entirely exclusive (the novelist's byline). The AI world is free, at this early stage, to establish conventions that are more inclusive than those of any previous art world — or conventions that are less inclusive than any that have come before.

The choice is being made right now, mostly by default. The default convention is the convention of the market: compensation is determined by bargaining power, and credit is determined by visibility. Under this convention, the annotators will continue to earn a few dollars an hour, the open-source developers will continue to receive no compensation for their contributions to training data, the content moderators will continue to perform psychologically damaging work for modest wages, and the builder who types the prompt will continue to be celebrated as a solo creator.

Becker would not have prescribed a solution. Prescription was not his style. He would have described the problem with enough clarity that the people inside it could see it whole — could see the convention for what it is, a social arrangement rather than a natural fact, and could then decide, with full knowledge of what they were choosing, whether to maintain it or change it. The first step, always, is making the invisible visible. The rest is a social process, messy and contested and never fully resolved, but capable of producing conventions that are more adequate to the cooperative reality they are supposed to represent.

The AI world's support personnel are not asking for credit. Most of them do not even know they are support personnel in an art world. They are doing their jobs, which happen to be essential links in a cooperative chain they cannot see. Making that chain visible is not an act of sentimentality. It is a precondition for building conventions that are adequate to the world's actual structure — conventions that acknowledge the full network of cooperation rather than celebrating only its most visible node.

---

Chapter 5: Editing as the Core Creative Act

The received image of the author is a person generating text. Words flow from mind to page. The author is the origin, the source, the spring from which the work emerges. Editing, in this image, is secondary — a cleaning-up operation performed after the real work is done. The editor fixes errors, smooths rough patches, suggests cuts. The editor is a technician. The author is the artist.

Becker knew this image was wrong, because Becker talked to the people who actually made things, and what they told him did not match the image. The novelists he spoke with described writing as a process of selection — not generation from nothing but the continuous narrowing of possibilities, the rejection of most of what could be said in favor of the small fraction that should be. The jazz musicians described improvisation the same way: not the free expression of interior feeling but a real-time editorial process, choosing among the phrases that present themselves, rejecting most, shaping the few that survive into a coherent statement. The photographers made the point most plainly: their work was editing, taking hundreds of exposures and selecting the handful that constitute the finished work. The ratio of rejected to accepted was enormous. The editing was the art.

This observation, which sounds modest, has radical implications when applied to AI-assisted creation. If the core creative act is not generation but selection — not producing material but judging it — then the arrival of a tool that generates material at unprecedented speed and volume does not displace the creator. It intensifies the demand for the creator's actual skill, which was never generation in the first place.

Segal arrives at this insight experientially in The Orange Pill when he describes the discipline of working with Claude. "The discipline of this collaboration," he writes, "is the willingness to reject Claude's output when it sounds better than it thinks." The formulation is precise. The output sounds right — the sentences are grammatical, the structure is logical, the references arrive on time. But sounding right and being right are different things, and the difference is detectable only by someone who knows enough about the subject to distinguish between plausible and true. That distinction is an editorial judgment. It is the same judgment an editor at a publishing house exercises when she reads a manuscript that is competent but empty — well-crafted sentences that do not add up to a book worth reading. The craft is present. The substance is not. Only someone with the knowledge and taste to tell the difference can catch the gap.

The AI world has made editing the central creative act by making generation trivially easy. When producing text, code, images, or music costs almost nothing in time or effort, the constraint shifts from production to evaluation. The scarce resource is no longer the ability to generate material. It is the ability to judge whether the material is any good.

This shift has precedents in every art world that has undergone a production revolution. The arrival of cheap recording technology in the mid-twentieth century made it possible for anyone with a tape recorder to produce a musical recording. The constraint shifted from production to evaluation: not who could record, but what was worth listening to. The arrival of desktop publishing in the 1980s made it possible for anyone with a computer and a laser printer to produce a professional-looking document. The constraint shifted from typesetting to editorial judgment: not who could publish, but what deserved to be published. The arrival of digital photography made it possible for anyone with a phone to take technically competent photographs. The constraint shifted from exposure to selection: not who could take pictures, but which pictures were worth looking at.

In each case, the democratization of production was celebrated as a liberation — and it was. More people could make things. More voices could be heard. More work could reach an audience. But in each case, the celebration obscured a quieter structural change: the relocation of the creative premium from generation to editing. The people who thrived in the new environment were not necessarily the most prolific producers. They were the most capable editors — the ones who could look at a large volume of material and identify the fraction that was genuinely good.

The AI world is following the same pattern at a vastly accelerated pace. Claude generates text by the paragraph, code by the function, designs by the iteration. The volume is extraordinary. The average quality is competent. And competent, in a world of abundant production, is the new mediocre. The builder who accepts Claude's first output without editorial intervention produces work that is indistinguishable from the output of every other builder who accepted Claude's first output without editorial intervention. The homogeneity is a direct consequence of unedited generation: the model produces outputs that cluster around the central tendencies of its training data, and without editorial pressure to push the output away from the center, everything converges on the same competent, unremarkable mean.

The editorial conventions that are developing in the AI world are still rudimentary. Some builders iterate extensively — prompting, evaluating, re-prompting, rejecting, refining — in a process that resembles the editorial back-and-forth between an author and a skilled editor at a publishing house. Others accept the first output that meets a minimum threshold of adequacy, which resembles nothing so much as a publishing house that prints every manuscript that arrives without reading it. The difference in the quality of the resulting work is predictable and large.

Segal describes his own editorial process in Chapter 7 of The Orange Pill with instructive candor. He recounts a passage where Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Deleuze. The passage was elegant. It connected two threads beautifully. And the philosophical reference was wrong — not subtly wrong but wrong in a way that would be obvious to anyone who had actually read Deleuze. Segal caught it, but he almost did not, and the near-miss stayed with him because it revealed the specific danger of AI-generated text: confident wrongness dressed in good prose. The editorial act — the moment of recognition that something sounds right but is not right — was the moment that saved the work from error. Without it, a plausible falsehood would have entered the published text, and most readers would not have caught it, because the plausibility was the disguise.

Becker would have recognized this as a familiar problem in a new guise. Every art world has conventions for quality control, and those conventions always involve some form of editorial gatekeeping — a process by which knowledgeable participants evaluate work before it reaches its audience. The code review in software development is editorial gatekeeping. The peer review process in academic publishing is editorial gatekeeping. The rehearsal process in theater, where the director watches a performance and says "again, differently," is editorial gatekeeping. Each of these conventions exists because the people inside the art world understand that generation alone is insufficient — that the work must be tested, evaluated, and refined before it is ready.

The AI world's editorial conventions are under-developed relative to the volume of production the tools enable. A builder working alone with Claude has no code reviewer, no peer reviewer, no director watching from the house seats. The editorial function must be performed by the same person who is doing the generating, which creates a conflict of interest that every art world has learned to manage through structural separation. Publishing houses employ editors who are not the authors. Orchestras employ conductors who are not the composers. Film productions employ editors who are not the directors — or at least, the editing function is understood as distinct from the directing function even when the same person performs both.

The convention of structural separation exists because self-editing is hard. The person who generated the material has an attachment to it — an investment of effort, an aesthetic preference, a cognitive bias toward seeing it as good because it came from their own process. The external editor has no such attachment. The external editor sees the work as the audience will see it: without knowledge of what the creator intended, without sympathy for the effort involved, with only the question of whether the thing works on its own terms.

The AI world has collapsed the editorial structure in ways that have consequences for quality. The builder who prompts Claude, evaluates the output, and ships the result has performed all three functions — generation, evaluation, and distribution — without the structural separation that traditional art worlds use to ensure that each function is performed competently. The result is predictable: some builders are excellent self-editors and produce work of genuine quality, while others lack the knowledge or discipline to evaluate what the tool has given them and produce work that is polished on the surface and hollow underneath.

The conventions that develop around AI editing will determine whether the AI world produces work that is merely abundant or work that is actually good. Several possible conventions are competing. One convention says: iterate until the output is correct and useful, applying domain knowledge at every step to catch errors, reject plausible falsity, and push the output beyond the competent mean. This convention produces high-quality work but requires the builder to possess genuine expertise — the very expertise that the democratization narrative sometimes implies is no longer necessary.

Another convention says: generate, review briefly, ship. This convention maximizes speed and volume. It is the convention that the productivity metrics reward and that the market, in its preference for more over better, tends to reinforce. It produces work that is adequate for many purposes and inadequate for purposes that require depth, accuracy, or originality.

A third convention, still nascent, says: separate the editorial function structurally. Have one person or team generate with AI, and another person or team evaluate the output with the detachment that self-evaluation cannot provide. This convention replicates the structural separation of traditional art worlds and is likely to produce the highest quality, but it requires organizational infrastructure — multiple people, defined roles, shared standards — that the solo builder does not have.

The conventions of editing are not merely technical. They carry moral weight, because they determine what reaches an audience, and what reaches an audience shapes what the audience believes, knows, expects, and values. A published falsehood that was never caught because the editorial process was skipped does real harm. A mediocre product that was shipped because nobody applied judgment to the output wastes the user's time and erodes trust. A piece of writing that sounds insightful but contains no insight degrades the discourse.

In traditional art worlds, the editorial conventions evolved over decades or centuries, refined by the accumulated experience of participants who discovered through trial and error what level of editorial rigor was necessary to maintain the world's standards. The AI world does not have decades. The volume of production is already enormous, the conventions are still forming, and the default — generate, glance, ship — is stabilizing as the norm precisely because it is the path of least resistance.

The editorial act is not glamorous. It is not celebrated. Nobody posts on social media about the passage they rejected, the code they discarded, the design they scrapped after careful evaluation. The triumphalist discourse rewards output: what you shipped, how fast, how much. The editorial discipline that determines the quality of what is shipped is invisible, like all conventions, and its invisibility makes it vulnerable to erosion. When the convention of editing weakens, the average quality of the AI world's output declines, and the decline is gradual enough that participants may not notice it until the water they are swimming in has changed composition entirely.

Becker would say the same thing he always said: look at what people actually do. Do not ask whether editing is important in theory. Watch what happens when it is done well and when it is skipped. The evidence, in the AI world as in every art world before it, is consistent. The work that endures, that earns genuine respect, that serves its audience well, is the work that has been edited — tested against the judgment of someone who knows enough to tell the difference between what sounds right and what is right.

The editorial convention is the AI world's most important emerging norm. Whether it stabilizes as a rigorous standard or erodes into perfunctory glancing will determine whether the AI world's abundant production amounts to anything worth the audience's attention.

---

Chapter 6: The Distribution of Credit and the Authorship Problem

In the mid-1960s, a session musician named Glen Campbell played guitar on "Strangers in the Night" and "You've Lost That Lovin' Feelin'" and dozens of other recordings that became cultural landmarks. His name appeared on none of them. The credit conventions of the recording industry allocated recognition to the featured artist — Frank Sinatra, the Righteous Brothers — and rendered the session musicians invisible. Campbell was not obscure. He was one of the most skilled guitarists in Los Angeles, a member of the group informally known as the Wrecking Crew, which played on an extraordinary proportion of the hit records produced in 1960s Los Angeles. His contribution to those records was essential. The conventions of credit did not recognize it.

The conventions were not accidental. They reflected the recording industry's commercial logic: the featured artist's name sold records; the session musician's name did not. They reflected the industry's social hierarchy: the featured artist was the star; the session musician was hired help. And they reflected an aesthetic ideology: the featured artist's performance was the creative essence of the recording; the session musician's contribution was mere execution, technically accomplished but artistically subordinate.

Becker observed this pattern across every art world he studied. The conventions of credit never perfectly match the actual distribution of contributions. They cannot, because contributions are continuous — each participant contributes something, and the contributions shade into each other without clean boundaries — while credit is discrete: someone's name goes on the cover, and someone's does not. The discretization necessarily produces a distortion. The question is how large the distortion is, whose interests it serves, and whether the participants who are under-credited have the power to challenge the convention.

The AI world has produced a new version of the credit problem, and it is more complex than any previous version because the cooperative network includes a participant that is not a person. When Segal writes in The Orange Pill that "neither of us owns that insight" — referring to a connection that emerged from the collaboration between himself and Claude — he is describing a situation for which the existing conventions of authorship have no adequate response.

The existing conventions assume that the unit of creative production is a human being. An author writes a book. A programmer writes code. A composer writes a score. The credit follows the human: the author's name appears on the cover, the programmer's name appears in the commit history, the composer's name appears on the score. When the creative process involves multiple humans, the conventions have mechanisms — co-authorship, ensemble credits, production credits — that distribute recognition among them, however imperfectly.

But Claude is not a co-author in any sense the existing conventions recognize. It is not a person. It does not have interests, a career, a reputation that benefits from recognition. It does not experience the injustice of being uncredited. The standard arguments for fair credit distribution — that people deserve recognition for their work, that recognition sustains motivation, that credit shapes career trajectories — do not apply to a machine in any straightforward way.

This has led some participants in the AI world to conclude that the credit problem is simple: the human gets the credit, because the human is the only participant to whom credit is meaningful. The builder prompted Claude, evaluated the output, selected and arranged the material, and made the editorial judgments that determined the final form. The builder is the author. Claude is a tool, like a word processor or a calculator, and nobody credits their word processor.

The argument is clean and appealing. It is also, in Becker's terms, a convention masquerading as a fact. The claim that Claude is merely a tool — equivalent to a word processor — is a convention of classification, not an empirical observation. A word processor does what the user tells it to do. Claude does something qualitatively different: it generates material that the user did not specify, makes connections the user did not see, and produces outputs that the user may not have been capable of producing alone. The difference is not philosophical; it is observable in the actual practice of AI-assisted creation, where builders regularly describe being surprised by what the tool produces, learning from its outputs, and incorporating ideas they would not have arrived at independently.

Segal's account is typical. He describes Claude finding a connection between adoption curves and punctuated equilibrium that reframed his entire argument. He describes the laparoscopic surgery example that became central to his counter-argument against Han. He describes moments where the collaboration produced something that "belongs to the collaboration, to the space between us." These are not descriptions of a person using a tool. They are descriptions of a cooperative process in which both participants — one human, one machine — contribute material that shapes the final product.

The convention that credits the human alone is a choice about how to handle this novel situation, and like all conventions, it has consequences. One consequence is that it obscures the actual process of creation, making AI-assisted work appear more individually authored than it is. Another consequence is that it creates a misleading standard for evaluating AI-assisted work: if the builder is credited as sole author, the work is evaluated as if it were the product of a single mind, which sets expectations the work may not meet and misidentifies the source of both its strengths and its weaknesses.

A third consequence, less obvious but more important, is that the sole-credit convention discourages transparency about the collaborative process. If the convention says the builder is the author, then acknowledging Claude's contribution feels like an admission of diminished authorship — a confession that the work is somehow less authentically the builder's than it would be if written without AI. This creates an incentive to conceal the collaboration, which in turn prevents the AI world from developing accurate conventions for evaluating AI-assisted work, because the evaluators do not know what they are evaluating.

Segal's transparency about his collaboration with Claude — declaring it in the Foreword, examining it in Chapter 7, reflecting on its implications throughout the book — is, in Becker's terms, an attempt to establish a new convention of credit that is more adequate to the actual cooperative process. The convention Segal proposes is not co-authorship exactly — Claude is not a person, and the conventions of co-authorship assume personhood — but something for which the AI world does not yet have a stable name: an acknowledged collaboration between human judgment and machine generation, in which the human takes responsibility for the final product but does not claim sole origination.

Whether this convention stabilizes depends on the same factors that determine the fate of any convention: whether enough participants adopt it, whether institutional structures reinforce it, and whether it solves a problem that participants recognize as real. The alternative — the sole-credit convention that treats Claude as a word processor — is simpler, more flattering to the builder, and more comfortable for audiences who prefer clear attribution. It is also less accurate, less transparent, and less adequate to the reality of how AI-assisted work is actually produced.

Traditional art worlds resolved the credit problem through conventions that varied widely in their inclusiveness. The film credit convention, which lists hundreds of contributors, is far more inclusive than the novel convention, which lists one. The academic convention of co-authorship, which lists contributors in an order that carries meaning about the nature and magnitude of their contributions, is more inclusive than the journalism convention, which often credits a single byline. None of these conventions perfectly represents the cooperative reality. All of them are compromises between accuracy and simplicity, between the desire to acknowledge contributions and the practical need for a manageable attribution system.

The AI world will develop its own compromise. The conventions it settles on will shape what gets built, how it gets evaluated, and whose contributions are recognized. The negotiation is happening now, in the decisions of individual builders about whether to disclose their use of AI, in the policies of publications and platforms about AI-generated content, in the legal arguments about intellectual property and training data. These are all negotiations about credit conventions, conducted under conditions of instability and urgency, and their outcomes will constitute the rules of the AI world for years to come.

Becker would not have advocated for any particular convention. Advocacy was not his method. He would have insisted on describing the convention accurately — showing what it includes and what it excludes, whose interests it serves and whose it neglects, what it makes visible and what it hides. The description itself is the contribution, because you cannot evaluate a convention you cannot see. And the credit conventions of the AI world, like the credit conventions of every art world before it, are doing their work invisibly, shaping the world while appearing to merely describe it.

---

Chapter 7: Mavericks, Integrated Professionals, and Naive Artists in the AI Age

Becker's typology of art world participants was built from observation, not theory. He watched how people actually related to the conventions of their worlds and noticed that the relationships fell into recognizable patterns. Some people worked within the conventions comfortably and competently. They knew the rules, followed them, and produced work that the world's evaluation mechanisms recognized as good. Becker called them integrated professionals. Some people knew the conventions thoroughly but violated them deliberately, pushing against the boundaries to discover what lay beyond them. Becker called them mavericks. Some people worked within conventions that were parallel to the dominant art world — conventions of a community, a tradition, a subculture — producing work that was coherent within its own terms but invisible or illegible to the mainstream. Becker called them folk artists. And some people worked without knowledge of any conventions at all, producing work that was sometimes incoherent, sometimes startling, and occasionally revelatory in ways that convention-bound artists could not achieve. Becker called them naive artists.

The typology was not a hierarchy. Becker was careful about this. Integrated professionals were not better than mavericks; mavericks were not more creative than folk artists; naive artists were not purer than professionals. The categories described different relationships to conventions, and each relationship had characteristic strengths and characteristic limitations.

The integrated professional's strength was reliability and communication — the ability to produce work that the art world's participants could immediately understand and evaluate. The limitation was predictability: work that stays within conventions rarely surprises. The maverick's strength was discovery — the capacity to find possibilities that convention-following obscures. The limitation was illegibility: work that violates conventions is difficult for the art world to process and often goes unrecognized until the conventions catch up. The folk artist's strength was coherence within an alternative value system — the maintenance of traditions and practices that the dominant art world neglects. The limitation was insularity: work that circulates only within its own community rarely reaches wider audiences. The naive artist's strength was radical originality — the production of work unconstrained by expectations the artist does not know exist. The limitation was inconsistency: without conventions to provide structure, the work has no quality floor.

The AI world is producing all four types at a speed that traditional art worlds never approached, because the barrier to entry has collapsed. In traditional art worlds, becoming an integrated professional required years of training — conservatory for musicians, art school for painters, a computer science degree and years of on-the-job experience for software developers. The training period served two functions: it developed technical skill, and it socialized the practitioner into the conventions of the world. By the time the training was complete, the new professional knew not only how to do the work but how the work was supposed to be done — what counted as quality, how credit flowed, what was expected and what was forbidden.

When AI tools reduce the technical barrier to near zero, the socialization function of training is disrupted along with the skill-development function. A person can now produce competent software, or competent prose, or competent images, without having undergone the years of immersion in conventions that traditional training provided. The result is a population of producers who have the capability to generate but not the conventional knowledge to evaluate what they have generated. They are, in Becker's terms, naive artists at scale — people producing work without knowledge of the conventions that would tell them whether the work is good, bad, derivative, original, harmful, or helpful.

The integrated professionals of the AI world are the builders who have invested the time to learn not just how to prompt but how to evaluate, iterate, and maintain standards that predate the tools. They are the senior engineers who use Claude Code while applying decades of architectural judgment to the output. They are the writers who use AI assistance while maintaining the editorial discipline that distinguishes publishable prose from plausible text. They are the designers who generate options with AI while applying the aesthetic knowledge that tells them which option actually works. Their defining characteristic is that they possess the conventional knowledge independently of the tool. They knew what good work looked like before the tool arrived, and they use the tool to produce more of it, faster, while maintaining the standards they internalized through years of practice.

The engineer Segal describes in The Orange Pill — the senior developer who spent his first two days in Trivandrum oscillating between excitement and terror before recognizing that his real value was the judgment the tool could not provide — is a paradigmatic integrated professional. His conventional knowledge, built over decades, became more valuable when the tool arrived, not less, because the tool needed someone with that knowledge to direct it competently. The integrated professional's position in the AI world is strong precisely because the tool amplifies the value of conventional knowledge rather than replacing it.

The mavericks are the builders who use AI tools in ways the tool designers did not intend. They jailbreak models. They chain tools together in novel configurations. They probe the boundaries of what is possible, deliberately pushing past the conventions that other users accept. In the AI world, the maverick might be the developer who uses Claude to generate code in a programming language Claude was not optimized for, forcing unexpected solutions. Or the artist who uses image generation tools to produce work that deliberately exposes the tool's biases and limitations, turning the tool's weaknesses into aesthetic material. Or the researcher who uses large language models not to generate answers but to generate questions — systematically probing the model's knowledge boundaries to discover what it does not know and what that absence reveals about the training data.

Mavericks serve an essential function in any art world: they discover possibilities that convention-following obscures. In the AI world, they are particularly important because the conventions are new and untested. The integrated professional accepts the conventions and works within them. The maverick tests the conventions by violating them, and in the process reveals which conventions are robust and which are arbitrary. The AI world needs both, because a world of only professionals stagnates — it produces competent work within conventions that never evolve — and a world of only mavericks is incoherent, producing novel work that nobody can evaluate because there are no shared standards against which to measure it.

The folk artists of the AI world are the builders working in communities with their own conventions, parallel to the mainstream. The open-source community is a folk art world: it has its own conventions of quality, credit, collaboration, and distribution that differ significantly from the conventions of commercial software development. The indie game development community is another: its conventions of aesthetic value, production scale, and distribution are distinct from those of the mainstream game industry. The maker community, the creative coding community, the data journalism community — each operates as a folk art world with conventions that enable its particular kind of work.

These parallel worlds are important because they maintain alternative conventions. When the mainstream AI world settles on conventions that are inadequate — conventions that reward speed over depth, volume over quality, or individual credit over cooperative acknowledgment — the folk worlds provide working examples of alternative arrangements. They are the conservation areas of the convention ecosystem, preserving practices that the dominant world has abandoned and that may, at some future point, be needed.

The naive artists are the most numerous and the most complicated category in the AI world, because democratization has produced them in enormous numbers. The person who downloads Claude and starts building a product over a weekend, with no training in software development, no experience with professional conventions of quality, no knowledge of the norms that govern how code should be structured, tested, documented, and maintained — this person is a naive artist. Her work may be brilliant or terrible or both. She may produce something genuinely original, precisely because she does not know the conventions that would constrain her approach. Or she may produce something that is competent on the surface and catastrophically flawed underneath, because she lacks the conventional knowledge to recognize the flaw.

The challenge for the AI world is that the naive artist is also the figure the democratization narrative celebrates most. The triumphalist story is a story about naive artists: people who were previously excluded from production, who now have access to tools that let them build. The celebration is warranted. The expansion of who gets to build is significant and real. But the celebration obscures a problem that Becker would have seen immediately: the naive artist's lack of conventional knowledge is not only a source of potential originality. It is also a source of potential harm. Code that is competent enough to deploy but not competent enough to secure. Text that is plausible enough to publish but not accurate enough to trust. Designs that look professional but violate accessibility standards that the designer has never encountered.

The health of the AI world depends on maintaining a productive ecology of all four types. The integrated professionals provide the conventional backbone — the shared standards that make evaluation and communication possible. The mavericks provide the evolutionary pressure — the boundary-testing that prevents conventions from ossifying. The folk artists provide the biodiversity — the alternative conventions that ensure the ecosystem does not become a monoculture. The naive artists provide the disruption — the fresh perspectives that convention-bound participants cannot access.

A world dominated by professionals becomes conservative. It produces reliable work that never surprises. A world dominated by mavericks becomes chaotic. It produces surprising work that nobody can evaluate. A world dominated by naive artists becomes noisy. It produces enormous volumes of work without the quality conventions that would allow anyone to distinguish the brilliant from the disastrous. A world that maintains all four types, in productive tension with each other, evolves. Its conventions change, because mavericks challenge them. Its standards hold, because professionals maintain them. Its diversity persists, because folk artists preserve alternative practices. Its boundary expands, because naive artists attempt things that no one with conventional knowledge would think to try.

The conventions being established now will determine which types the AI world rewards and which it marginalizes. If the conventions reward speed and volume above all, they will favor naive artists and marginalize professionals. If they reward credentialed expertise above all, they will favor professionals and exclude the naive artists whose fresh perspectives the world needs. The ecology is fragile. Its balance is maintained not by any natural law but by the conventions that the world's participants negotiate, contest, and eventually stabilize.

The negotiation is underway. The balance has not been struck. The conventions that emerge will determine whether the AI world produces an ecology of creation or a monoculture of competent mediocrity.

---

Chapter 8: Reputation Systems in a World of Abundant Production

The French Salon system, which dominated the exhibition of painting in France from the seventeenth century through the nineteenth, was a reputation machine. The Académie selected which paintings would be shown. The placement of those paintings on the Salon walls — prominent eye-level positions for favored works, cramped upper-wall placements for others — signaled the Académie's judgment of each painting's quality. Critics reviewed the Salon exhibitions and published their assessments. Collectors attended the exhibitions and made purchasing decisions based on what they saw and what they read. The entire system functioned as a mechanism for converting the raw abundance of painted canvases into a ranked hierarchy of reputation, directing the attention of audiences and the money of collectors toward the work the system deemed best.

The system was imperfect. It was political. It was biased in ways that reflected the aesthetic ideology, social networks, and institutional interests of its operators. The Impressionists were systematically excluded because their work violated the Académie's conventions of finish, subject matter, and technique. The rejections provoked alternatives: in 1863 the state authorized the Salon des Refusés to exhibit work the jury had refused, and a decade later the Impressionists began mounting their own independent exhibitions, which eventually undermined the Académie's authority and restructured the art world's reputation system entirely. But for decades, the Salon was the mechanism through which reputation was produced, and the reputation the Salon produced determined which artists could make a living and which could not.

Every art world has an equivalent mechanism. In popular music, it is the combination of radio airplay, chart position, critical reviews, and social media metrics. In academic publishing, it is the combination of journal prestige, citation counts, grant awards, and institutional affiliation. In commercial software, it is the combination of user adoption, revenue, press coverage, and the opinions of influential early adopters. These mechanisms vary in their specific operations, but they serve the same function: they convert the abundance of production into a hierarchy of attention, directing audiences toward the work the mechanism deems worthy.

Reputation systems become more important as production becomes more abundant, because the ratio of what exists to what any individual can attend to grows. When a hundred paintings compete for attention, a viewer can browse them all. When a million paintings compete, the viewer cannot, and the reputation system — whatever form it takes — determines which tiny fraction reaches the viewer's eye. The reputation system does not merely reflect quality. It constructs it, in the specific sense that work the system elevates is treated as good and work the system ignores is treated as nonexistent, regardless of the actual properties of the work itself.

The AI world faces a reputation problem of unprecedented scale. The tools have made production so easy that the volume of output is overwhelming any existing mechanism for sorting it. A single builder with Claude Code can produce in a weekend what a team would have produced in a quarter. Multiply that across millions of builders, and the result is a flood of software, text, images, and other artifacts that vastly exceeds any audience's capacity to evaluate.

The reputation systems currently operating in the AI world are crude. They include: follower counts on social media platforms, which measure popularity rather than quality and are susceptible to manipulation; viral metrics such as shares, likes, and reposts, which measure emotional resonance in the moment of encounter rather than lasting value; revenue figures, which measure market demand but not the quality of what the market is demanding; speed-of-production metrics, which measure efficiency but say nothing about whether the efficiently produced thing is worth producing; and the informal opinions of a small number of high-visibility commentators whose judgments carry disproportionate weight because they are amplified by the very platforms on whose metrics they compete.

These mechanisms share a characteristic that Becker identified in the reputation systems of every art world he studied: they reward the qualities that the system is designed to measure, which may or may not be the qualities that matter. A reputation system that measures follower counts will produce a world that optimizes for followers. A system that measures revenue will produce a world that optimizes for revenue. A system that measures speed will produce a world that optimizes for speed. None of these are inherently wrong measurements, but none of them measure what most participants, when asked directly, say they value most: the quality, depth, originality, and usefulness of the work.

The misalignment between what the reputation system measures and what participants say they value is a chronic feature of art worlds. Jazz musicians in the 1950s valued improvisational daring; the commercial music industry valued smooth, accessible performances that sold records. Academic researchers value original, rigorous work; the academic reputation system rewards publication volume and citation counts, which correlate imperfectly with originality and rigor. The misalignment does not paralyze the art world, but it shapes it: participants learn to optimize for whatever the reputation system rewards, which means the system produces more of whatever it measures and less of whatever it does not.

In the AI world, the misalignment is acute. The discourse Segal describes in The Orange Pill — the triumphalists posting impressive metrics, the elegists mourning the loss of depth, the silent middle holding contradictions they cannot resolve — is, in part, a conflict about reputation. The triumphalists have captured the existing reputation system: their metrics are impressive, their posts go viral, their narratives of individual accomplishment align with the platforms' preference for clear, shareable stories. The elegists lack a reputation mechanism that values what they value — depth, craft, embodied expertise, the slow accumulation of understanding through struggle. The silent middle has no reputation mechanism at all, because ambivalence does not go viral.

The consequence is predictable and was predicted by Becker's framework forty years before the AI world existed: the art world will produce more of whatever its reputation system rewards. If the system rewards speed and volume, builders will optimize for speed and volume. If the system rewards impressive-sounding metrics — lines of code generated, products shipped, revenue earned — builders will optimize for those metrics, regardless of whether the code is maintainable, the products are useful, or the revenue is sustainable.

The missing ingredient is a reputation mechanism that rewards the editorial quality described in the previous chapter — the capacity to evaluate, select, reject, and refine. This capacity is invisible to the current reputation systems because it produces no measurable output. The builder who rejects a passage, who scraps a feature, who decides not to ship a product because it does not meet her standard — this builder has produced nothing the reputation system can detect. She has exercised judgment, which is the most valuable activity in a world of abundant production, and the system rewards her with silence.

The historical precedent suggests that the current reputation systems will eventually be supplemented or replaced by mechanisms better suited to the AI world's actual needs. The Salon system was eventually supplemented by the gallery system, which was supplemented by the museum system, which was supplemented by the auction system, each one addressing the limitations of its predecessor while introducing new biases of its own. The academic citation count is being supplemented by altmetrics, open peer review, and other mechanisms that attempt to measure impact more broadly than a single number can capture.

But the supplementation takes time, and during the transition, the existing system shapes the world. The builders who are developing their practices now, establishing their reputations now, setting the conventions that will govern the AI world for years — they are operating within a reputation system that rewards the wrong things, and the work they produce in response to those rewards will constitute the AI world's early output, the foundation on which everything else is built.

What a reputation system that valued judgment might look like is not self-evident. Judgment is harder to measure than speed, harder to display than output, harder to verify than revenue. It resists quantification, which means it resists the platforms that currently dominate the AI world's attention economy. A reputation system that valued judgment would need to be built by people who understand both the limitations of the current metrics and the practices of the people whose work the current metrics fail to capture — the integrated professionals whose editorial discipline produces good work that the follower-count economy does not reward, the mavericks whose boundary-testing produces discoveries that the revenue metric cannot price, the folk artists whose alternative conventions maintain practices the mainstream has abandoned.

The reputation system is a convention, and like all conventions, it is being established right now, during the AI world's formative period, by the people who show up to shape it. The people who do not show up leave the system to be shaped by whoever does — which means, typically, by the people with the most existing visibility and the strongest incentive to maintain a system that already rewards them. The convention stabilizes. The water becomes invisible. And the AI world produces whatever the reputation system tells it to produce, which may or may not be what anyone actually wanted.

Chapter 9: The Fishbowl as a Set of Conventions

Segal opens The Orange Pill with an image that recurs throughout the book: the fishbowl. Everyone swims in one. The set of assumptions so familiar you have stopped noticing them. The water you breathe. The glass that shapes what you see. The scientist's fishbowl is shaped by empiricism. The filmmaker's by narrative. The builder's by the question "Can this be made?" The philosopher's by "Should it be?" Every fishbowl reveals part of the world and hides the rest. The effort that defines the best thinking, Segal writes, is the effort to press your face against the glass and see the world beyond the water's refractions.

The image is vivid and useful. It captures something real about the limits of perspective. But it is, from a sociological standpoint, imprecise in a way that matters. Segal treats the fishbowl as a property of the individual mind — a cognitive limitation shaped by biography, training, and temperament. The scientist sees the world through empiricism because the scientist was trained in empiricism. The builder sees the world through engineering because the builder spent decades building. The fishbowl is personal. The effort to see beyond it is a personal effort, an act of individual will.

Becker would reframe. The fishbowl is not a property of the individual mind. It is a property of the art world the individual inhabits. The scientist sees the world through empiricism not because of some private cognitive architecture but because the conventions of science — its methods, its standards of evidence, its reward structures, its institutions — produce and enforce an empiricist worldview. A scientist who stopped seeing the world through empiricism would not merely be thinking differently. She would be violating conventions, and the art world would respond: her papers would be rejected, her grants would not be funded, her colleagues would regard her with suspicion. The fishbowl is maintained not by individual habit but by social structure. The water is not personal. It is institutional.

This reframing has consequences for how one thinks about the AI moment. If the fishbowl is personal, then the solution is personal: individual effort, individual will, the heroic act of pressing your face against the glass. If the fishbowl is institutional, then the solution must also be institutional: changing the conventions, restructuring the cooperative relationships, modifying the reward systems that maintain the glass.

The builders Segal describes in The Orange Pill inhabit a fishbowl whose conventions include: the assumption that capability expansion is progress; the assumption that the imagination-to-artifact ratio should be as small as possible; the assumption that speed is a good and friction is a cost; the assumption that individual productivity is the meaningful unit of measurement. These assumptions feel natural to the builders because they are the conventions of the art world within which the builders operate. They are reinforced by every metric, every performance review, every venture capital pitch, every social media post celebrating how fast someone shipped something.

Han, the philosopher whom Segal engages at length, inhabits a different fishbowl. Its conventions include: the assumption that friction is formative; the assumption that speed erodes depth; the assumption that the aesthetic of smoothness conceals a pathology; the assumption that resistance to optimization is a form of intellectual integrity. These assumptions feel natural to Han because they are the conventions of the philosophical art world within which he operates — a world that rewards contemplation, that values the slow development of ideas, that treats the rejection of technological convenience as a mark of seriousness.

Uri, the neuroscientist, inhabits yet another fishbowl. Its conventions include: the assumption that consciousness is the relevant category for evaluating the significance of AI; the assumption that empirical evidence takes precedence over philosophical speculation; the assumption that claims about intelligence must be operationally defined before they can be evaluated.

Each fishbowl is a set of conventions. Each set of conventions enables certain kinds of insight and prevents others. The builder's conventions enable the recognition that AI tools genuinely expand capability. They prevent the recognition that capability expansion is not self-evidently good. Han's conventions enable the recognition that something is lost when friction is removed. They prevent the recognition that the people who benefit most from friction-removal are often the people who had the most friction to begin with — the developer in Lagos, the engineer in Trivandrum, the designer who could never cross the implementation barrier. The neuroscientist's conventions enable rigorous analysis of cognitive processes. They prevent the recognition that the social organization of AI use may matter as much as the cognitive processes it engages.

Becker's methodological contribution was a technique for making conventions visible, and it was disarmingly simple: comparison. Compare how different worlds organize the same activity, and the conventions of each become visible against the backdrop of the other. The jazz world's convention of crediting the bandleader becomes visible when compared with the classical world's convention of crediting the composer. The academic world's convention of measuring impact through citations becomes visible when compared with the journalism world's convention of measuring impact through readership. Neither convention is natural. Both are constructed. The comparison reveals the construction.

Applied to the AI moment, Becker's comparative method reveals something that participants inside any single fishbowl cannot see: the conventions currently shaping the AI world are not inevitable responses to the technology. They are social choices, made by specific people in specific institutional contexts, and they could be made differently.

The convention that speed is the primary measure of AI's value is a choice. The convention that individual productivity is the meaningful metric is a choice. The convention that the builder who types the prompt deserves sole credit is a choice. The convention that smooth output equals good output is a choice. Each of these choices has consequences, and each could be made differently if the participants recognized it as a choice rather than as the natural order of things.

The fishbowl metaphor, enriched by Becker's sociological analysis, becomes more useful and more demanding. Seeing beyond the glass is not merely a matter of individual will. It requires the comparative method — the deliberate study of how other worlds organize the same activities differently. The builder who wants to see beyond the builder's fishbowl needs to spend time in Han's world, in Uri's world, in the annotator's world, in the folk artist's world. Not as a tourist, consuming other perspectives as intellectual entertainment, but as a sociologist, studying the conventions of other worlds seriously enough to recognize the conventions of one's own.

This is harder than it sounds, because conventions resist visibility. They resist it structurally, because the institutions that maintain them have no incentive to make them visible — visible conventions can be questioned, and questioned conventions can be changed, and change is disruptive to the people who benefit from the current arrangement. And they resist it psychologically, because the conventions of your own world feel like common sense — like the way things obviously should be done — and recognizing them as conventions requires the uncomfortable admission that your common sense is a social product rather than a direct apprehension of reality.

The AI world's conventions are still young enough to be changed. That is the significance of the current moment. In a decade, the conventions will have stabilized. The water will be invisible. The glass will be taken for the edge of the world. The choices that were made during the formation period will have become the conditions within which everyone operates, as invisible and as constraining as the conventions of any mature art world.

The window for making the conventions visible — for pressing one's face against the glass while the glass is still thin enough to see through — is now. Not because the technology demands urgency, though it does. Because conventions harden. They stabilize. They become the water. And once they are the water, the effort required to change them is orders of magnitude greater than the effort required to shape them while they are still forming.

The fishbowl is a set of conventions. The conventions are being constructed. The construction is happening now. And the people inside the fishbowl, breathing the water that is being mixed around them, are the ones who must somehow find a way to see what they are swimming in before it becomes the only medium they know.

---

Chapter 10: Building New Conventions for a New World

The question that runs through every chapter of this book is not whether the AI world needs conventions. Every cooperative activity needs conventions, and the AI world is a cooperative activity on a global scale. The question is which conventions, established by whom, serving whose interests, and maintained through what mechanisms.

The question is urgent because of a principle Becker observed across every art world he studied: in the absence of deliberately established conventions, the default convention is the convention of the market. Whatever produces the most output at the lowest cost becomes the standard. Whatever the most powerful participants prefer becomes the norm. Whatever can be measured becomes the criterion of quality. The default is not chosen. It arrives, fills the space that deliberate choice left empty, and hardens into the way things are done.

The AI world's default conventions are already visible. The default convention of credit: the builder gets the credit; the cooperative network is invisible. The default convention of quality: smooth, fast, voluminous output is good output. The default convention of evaluation: follower counts, revenue, and speed metrics determine whose work gets attention. The default convention of access: anyone who can afford the subscription can build; the question of whether the cooperative network is fairly compensated is someone else's problem. The default convention of labor: support personnel are invisible, interchangeable, and priced by global labor arbitrage.

These are not good conventions. They are the conventions that arrive when nobody builds anything better. They serve the interests of the most visible and most powerful participants — the AI companies, the high-profile builders, the venture capitalists — and they neglect the interests of the support personnel, the folk artists, the naive artists who need guidance rather than mere access, and the audiences who need quality rather than mere abundance.

The alternative to the default is not utopia. It is deliberate convention-building: the conscious, contested, imperfect social process of establishing shared understandings that serve the cooperative network as a whole rather than only its most visible nodes.

Becker would have been the first to insist that this process cannot be designed from above. Conventions are not policies. They cannot be imposed by fiat and expected to work, because their effectiveness depends on voluntary adoption by the participants who must use them. A convention that is experienced as an external imposition rather than a shared understanding will be violated, circumvented, or ignored. The labor laws that eventually improved conditions in nineteenth-century factories worked because they were backed by enforcement mechanisms, but even those laws were more effective in contexts where workers and employers had developed shared understandings about what constituted fair treatment. The law codified conventions that were already partially formed. It did not create them from nothing.

The conventions the AI world needs can be specified, even if the process of establishing them cannot be centrally planned. Several domains require particular attention.

The conventions of credit must expand to acknowledge the cooperative network. This does not necessarily mean listing every contributor — the complexity of the chain makes comprehensive attribution impractical — but it means developing standard practices for disclosing the nature and extent of AI assistance, as Segal does in The Orange Pill, and developing institutional norms that treat such disclosure as a mark of integrity rather than a confession of diminished authorship. The publishing industry is beginning to develop these norms. The software industry is lagging. The academic world is debating them intensely. The eventual conventions will vary by domain, as credit conventions always have, but the direction must be toward greater transparency about the cooperative process, not less.

The conventions of quality must move beyond the default metrics of speed and volume. This requires the development of evaluation mechanisms that can detect the qualities the current metrics miss: editorial discipline, depth of understanding, the capacity to reject plausible but hollow output. What such mechanisms look like in practice is not yet clear. Possible approaches include: structured peer review processes adapted from academia but modified for the speed of the AI world; portfolio-based evaluation that assesses a body of work rather than individual outputs; apprenticeship models that pair naive artists with integrated professionals, providing the conventional socialization that democratized access has bypassed.

The conventions of access must address the support personnel problem. The data annotators, content moderators, and open-source developers whose labor sustains the AI world are currently outside the convention-setting process entirely. Bringing them in is not a matter of charity. It is a matter of self-interest for the AI world as a whole, because a cooperative network that systematically under-compensates essential participants is fragile — vulnerable to the same kinds of labor conflicts that disrupted industrial art worlds in the nineteenth and twentieth centuries. The specific mechanisms for addressing this — licensing frameworks for training data, compensation structures for annotation labor, governance models that give support personnel a voice — are being negotiated in legal, legislative, and community forums. The negotiation needs participants from all parts of the cooperative chain, not only the most visible.

The conventions of editing must be formalized enough to serve as quality control without becoming so rigid that they constrain creativity. This is the challenge that every art world faces: the editorial function must be strong enough to maintain standards but flexible enough to accommodate the mavericks and naive artists whose unconventional work the world needs. In the AI world, the editorial conventions might include: the expectation that AI-generated output is reviewed by someone with domain expertise before publication or deployment; the development of standard practices for iterative refinement that distinguish cursory review from genuine engagement; the separation of generation and evaluation functions, either across people or across time, to counteract the bias of self-editing.

The conventions of evaluation, the reputation systems that determine whose work gets attention, require the most fundamental rethinking. The current systems, built on metrics inherited from social media, are poorly suited to the AI world's needs. They reward visibility rather than quality, speed rather than judgment, confidence rather than accuracy. Building better reputation systems requires understanding what the AI world's participants actually value — which, as noted in the previous chapter, is often quite different from what the current metrics measure — and developing mechanisms that detect and reward those values. This is a design problem, a social problem, and an institutional problem simultaneously, and it will not be solved by any single intervention.

The process of establishing these conventions will be messy. It will involve conflict between participants with different interests and different power. It will involve compromise, because conventions are always compromises. It will involve trial and error, because nobody knows in advance which conventions will work and which will not. And it will involve the ongoing maintenance that Becker observed in every art world: conventions do not establish themselves once and persist. They must be actively maintained, renegotiated, and adapted as conditions change.

Segal's image of the beaver resonates here. The beaver's work is not a project with a completion date. It is an ongoing relationship between the builder and the river. The dam requires constant maintenance. The sticks loosen. The water finds new channels. The beaver responds not by building once but by tending continuously — chewing new sticks, packing new mud, repairing what the current has loosened.

The conventions of the AI world are the dam. They are the structures that direct the flow of cooperative activity toward productive ends rather than allowing the default — the convention of the market, the convention of maximum output at minimum cost — to carry everything downstream. Building them is work. Maintaining them is more work. And the people who do this work, the convention-builders and convention-maintainers of the AI world, are themselves a kind of support personnel — essential to the world's functioning but unlikely to appear in any triumph narrative about the solo builder who shipped a product over a weekend.

Becker's final contribution to this analysis is a characteristic one: a refusal to prescribe, combined with an insistence on describing accurately. The sociologist does not tell people what conventions to build. The sociologist describes the conventions that exist, identifies the conventions that are missing, traces the consequences of the gap, and trusts the participants to make their own choices once they can see clearly what they are choosing.

This book has attempted to perform that description. The conventions of the AI world are young. They are still forming. They are being shaped, right now, by every builder who decides whether to disclose AI assistance, every company that decides how to compensate annotation labor, every educator who decides what standards to apply to AI-assisted student work, every platform that decides what metrics to foreground, and every participant who decides whether to show up for the messy, contentious, essential process of negotiating the rules of a world that does not yet know what it is.

The conventions will stabilize. They always do. The question is whether they will stabilize around arrangements that serve the full cooperative network — the integrated professionals and the naive artists, the builders and the support personnel, the mavericks and the folk artists — or around arrangements that serve only the most visible and most powerful.

The question is open. The process is underway. And the conventions that emerge from it will determine, more than any algorithm or any individual act of genius, the character of what the AI world builds.

---

Epilogue

The convention I could not name was the one I had followed longest.

For most of my career, when I built something and it worked, I called it mine. Not with arrogance — with the ordinary possessiveness of a person who spent months or years wrestling an idea into reality. I built it. My team built it. We shipped it. The credit felt natural because the convention was invisible, the way water is invisible to the thing swimming in it.

Howard Becker would have smiled at that. Not unkindly. He spent sixty years pointing out the invisible conventions that structure creative life, and his tone was never accusatory. He simply described what was there, and the description itself did the work. Here is who actually participated. Here is who got credit. Here is the gap between the two. Now you can see it. What you do about it is your business.

What Becker gave me — through this book, through the months of working with his ideas — is a different way of seeing the room I am standing in. When I describe the Trivandrum training in The Orange Pill, I describe twenty engineers amplified. That happened. It is true. But after spending time inside Becker's framework, I also see the thousands of people who were not in that room but whose labor made every moment in that room possible. The researchers who built the transformer architecture. The annotators who labeled training data for wages I would not accept. The open-source developers whose code was ingested without their explicit consent. The cloud infrastructure teams who kept the servers running. My engineers were brilliant. They were also standing on a cooperative structure they could not see, and I could not see it either, because the conventions of my world — the builder's world — assign credit to the people who type the prompts and ship the products.

That convention is a choice. It is not a fact about who contributed and who did not. It is a way of organizing recognition that privileges certain participants and renders others invisible. Becker did not tell me the convention is wrong. He showed me the convention exists — that it is a social arrangement rather than a natural law — and that seeing it is the precondition for deciding whether to keep it.

The question that haunts me now is one Becker would have recognized: who is not in the room where the rules are being made? When the conventions of AI-assisted work are being established — the conventions of credit, of quality, of editing, of who gets to build and who gets recognized for building — whose voice is present and whose is absent? The builders are present. The investors are present. The commentators are present. The annotators in Nairobi and Manila are not. The open-source developers whose libraries trained the models are not. The teachers trying to figure out what to do with AI in their classrooms are mostly not.

The conventions will form regardless. They always do. The only question is whether they form by default — serving whoever has the most power and the most visibility — or by deliberate negotiation among all the people the conventions affect.

I am a builder. I will keep building. But Becker taught me that building the thing is only part of the work. The other part is building the conventions — the shared understandings about credit, quality, fairness, and responsibility — that determine whether what gets built serves the full cooperative network or only its most visible member.

The convention I could not name was the one that told me the work was mine alone. Now I can see it. And seeing it changes what I owe.

-- Edo Segal

The solo builder is a myth. The AI revolution's most celebrated narrative -- one person, one tool, infinite capability -- hides a cooperative network of thousands. Howard Becker spent sixty years making invisible networks visible. Now we need him more than ever.

When Claude Code generates a function in seconds, who actually produced it? The builder who prompted. The researchers who designed the architecture. The annotators who refined the model for modest wages on another continent. The open-source developers whose code was ingested without consent. A convention assigns credit to one participant and erases the rest. That convention is not a fact. It is a choice being made right now, in the formative months of a world whose rules have not yet hardened.

Becker's Art Worlds framework reveals the AI revolution as what it actually is: not an expansion of individual genius but a wholesale restructuring of cooperative conventions -- about credit, quality, labor, and whose voice shapes the rules. The conventions being built today will govern creative life for decades. This book makes them visible before they become the water we can no longer see.

“technologies for artistic production will likely impact an entire ecosystem, and not just individual users.”
— Howard Becker
Wiki Companion


A reading-companion catalog of the 29 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Howard Becker — On AI uses as stepping stones for thinking through the AI revolution.
