By Edo Segal
The word that kept tripping me was "right."
Not right as in correct. Not right as in morally defensible. Right as in — this sentence belongs here and not there. This color works and that one doesn't. This architectural choice holds weight and that one collapses under scrutiny even though both compile clean.
I have spent my entire career making judgment calls I could not fully explain. Choosing between two prototypes that both function but only one of which feels like it deserves to exist. Looking at a screen and knowing something is off before I can articulate what. Every builder knows this sensation. Every builder also knows the discomfort of not being able to defend it in a meeting where the other prototype has better metrics.
Then I encountered Nelson Goodman, a philosopher who spent decades building the vocabulary for exactly this problem. Not the problem of what works. The problem of what is *right* — and why rightness is not reducible to function, not reducible to accuracy, not reducible to any metric you can put on a dashboard.
Goodman was an analytic philosopher working in the mid-twentieth century, which means he wrote with a precision that can feel forbidding. But underneath the technical apparatus is an insight that hit me harder than almost anything I encountered while writing *The Orange Pill*: that every representation we make — every painting, every line of code, every product, every sentence in this book — is not a copy of reality. It is a *construction* of reality. A version. And versions are not all equal. Some achieve a fit between what they show and what they mean that Goodman called rightness of rendering. Others merely look plausible.
That distinction — between the right version and the plausible one — is the distinction I struggle with every single time I sit down with Claude. The machine produces plausible output with terrifying ease. Fluent prose. Clean code. Coherent arguments. The surface is smooth. And the smoothness is exactly the problem, because smoothness can conceal the absence of the thing that makes output worth having.
Goodman gives me the language to name what I have been feeling since December 2025. The rendering has gotten spectacularly easy. The worldmaking has not gotten easier at all. If anything, it has gotten harder, because the ease of rendering tempts you to stop asking whether what you have made is right or merely competent.
This book is another lens for the tower we are climbing together. It will not make the view simpler. It will make it more precise.
-- Edo Segal & Opus 4.6
Nelson Goodman (1906–1998) was an American philosopher whose work spanned aesthetics, epistemology, the philosophy of science, and the philosophy of language. Born in Somerville, Massachusetts, he studied at Harvard and later held professorships at the University of Pennsylvania and Harvard. His major works include *The Structure of Appearance* (1951), *Fact, Fiction, and Forecast* (1955) — which introduced the famous "grue" paradox challenging inductive reasoning — *Languages of Art* (1968), widely regarded as one of the most important works in analytic aesthetics, and *Ways of Worldmaking* (1978), which advanced his radical constructivist thesis that there is no single ready-made world but rather multiple "versions" constructed through different symbol systems. Goodman's key contributions include the distinction between autographic and allographic arts, the analysis of denotation, exemplification, and expression as modes of symbolic reference, and the concept of "rightness of rendering" as an alternative to correspondence theories of truth. His rigorous, nominalist approach to representation and meaning profoundly influenced aesthetics, cognitive science, and the philosophy of art, establishing frameworks that continue to shape debates about creativity, authenticity, and the cognitive function of the arts.
A painting does not copy the world. This claim, which Nelson Goodman spent the better part of his career defending, sounds modest until one traces its implications to their ends. If a painting does not copy the world, then what does it do? Goodman's answer was precise: it refers to the world through a structured system of symbols — colors, lines, spatial relations, conventions of representation — that function according to rules as determinate as those governing any language. A landscape painting deploys green not because green is what the eye sees when it looks at a field but because the pictorial system within which the painting operates has assigned to that particular pigment, applied in that particular way, a referential function. The green denotes. It exemplifies. It expresses. These are technical terms in Goodman's vocabulary, and each carries philosophical weight that casual usage obscures.
Denotation is the most familiar referential relation: the picture denotes the field, the way the word "field" denotes a field. But exemplification is where Goodman's framework begins to diverge from common intuition. A tailor's swatch exemplifies the color and texture of the fabric it is cut from — it refers to those properties by possessing and displaying them. A painting exemplifies certain visual properties — a quality of light, a density of brushwork, a particular tension between foreground and background — by possessing those properties and directing attention to them. Expression, in Goodman's analysis, is metaphorical exemplification: a painting that "expresses sadness" does not denote sadness (it is not about sadness the way a psychology textbook is about sadness) but metaphorically possesses sadness as a property and directs attention to it.
These distinctions matter because they establish the arts as cognitive enterprises. A painting is not a stimulus that produces an emotional response in a viewer the way a drug produces a chemical response in a body. It is a symbol system that organizes experience through structured reference, and understanding it is an act of interpretation — active, skilled, dependent on familiarity with the conventions of the system within which the painting operates. The viewer who sees Cézanne's Mont Sainte-Victoire as a failed attempt to copy a mountain has misunderstood the symbol system. The painting is not copying the mountain. It is constructing a version of the mountain using the specific resources of post-Impressionist pictorial convention — the fragmentation of form into planes of color, the flattening of depth, the emphasis on the structural geometry beneath the surface appearance — and the understanding it provides is available only through those resources. No verbal description of the mountain, however accurate, provides what the painting provides, because the painting operates through a different symbol system with different expressive capacities.
This framework has a direct and unsettling application to the question of whether AI-generated art is "real" art. The question, as typically posed, presupposes that art is defined by its origin — that what makes something art is the fact that a human being produced it, with human intentions, from human experience. Goodman's framework displaces this presupposition. In his analysis, what makes something art is not its origin but its function — whether it operates as a symbol system, whether it refers, exemplifies, and expresses through structured conventions, whether it organizes experience in ways that yield understanding.
The displacement is not a liberation. It is a tightening of the criteria in a different direction.
Consider a landscape image generated by a diffusion model — Midjourney, DALL-E, or any of their successors. The image deploys color, line, and spatial organization according to conventions learned from millions of training examples. It denotes (or appears to denote) a landscape. It exemplifies certain visual properties — a quality of light, a compositional balance, a palette. It may even express, in Goodman's technical sense, qualities like tranquility or vastness through metaphorical exemplification. By every functional criterion Goodman established, the image operates as a symbol system.
Yet something crucial is missing from this analysis, and Goodman's own framework identifies it with characteristic precision. A symbol system does not function in isolation. It functions within what Goodman called a scheme-content relation — a pairing of a symbolic scheme (the set of symbols and their syntactic relations) with a field of reference (the domain of objects, properties, and events to which the symbols refer). The scheme-content relation is not given by the symbols themselves. It is established by practice, convention, and — this is the critical point — by the worldmaking intentions of the symbol-user.
When Cézanne painted Mont Sainte-Victoire, the scheme-content relation was established by his specific intention to construct a version of the mountain that revealed its geometric structure, its resistance to easy visual comprehension, its insistence on being more than what the Impressionists' fleeting light-effects could capture. The painting's referential functions — what it denotes, exemplifies, expresses — were determined not merely by the symbols on the canvas but by the worldmaking project within which those symbols were deployed.
When a diffusion model generates a landscape image, the scheme-content relation is established by — what, exactly? The training data establishes statistical regularities: these kinds of pixel patterns tend to co-occur with these kinds of text descriptions. The model produces outputs that conform to those regularities. But conforming to statistical regularities is not the same as establishing a scheme-content relation, because the scheme-content relation requires what Goodman's framework implicitly demands: a worldmaker who intends the symbols to refer, who selects this referential function over that one, who constructs a version of reality through deliberate symbolic choice.
The model does not select. It samples. The distinction is not pedantic. Selection implies a space of rejected alternatives — paths not taken, versions not constructed, referential functions not activated. To select green for the field is to reject the blue that would have made the field otherworldly, the grey that would have made it desolate, the absence of color that would have made it abstract. Selection is worldmaking. Sampling is pattern-matching at scale.
This does not mean AI-generated images cannot function as art. Goodman's framework is functional, not genetic — what matters is how the symbols operate, not where they came from. But the operation of symbols as art requires that they be embedded in a worldmaking project, and worldmaking projects require worldmakers. When a human artist uses AI as a tool — specifying intentions, evaluating outputs, selecting among possibilities, adjusting parameters to realize a vision — the worldmaking project is intact. The AI is the rendering engine. The human is the worldmaker. The symbol system functions as art because it is embedded in a project of deliberate reference, exemplification, and expression.
When the AI operates without a worldmaker — when the prompt is casual, the output unexamined, the selection among alternatives absent — the symbols on the screen may look like art. They may possess every surface property of art. But they lack the scheme-content relation that makes them function as art, the way a random arrangement of sounds might happen to match the pitch sequence of a Beethoven sonata without being a performance of the Beethoven sonata. The notes are right. The music is absent.
Goodman would resist the romantic reading of this argument. He was not claiming that art requires a tortured soul or a burst of inspiration or any of the other apparatus of Romantic aesthetics. He was claiming something more austere and more defensible: that symbols function as art only when they are deployed within a structured system of reference, and that the structure requires a deployer — an agent who establishes the scheme-content relation by intending the symbols to refer in particular ways.
The implications for the evaluation of AI-assisted creative work are precise. Segal's account in *The Orange Pill* of the collaboration between human and machine — the spectrum from editorial refinement to structural scaffolding to generative surprise — maps onto the question of where the scheme-content relation is established. When the human specifies the intention and the machine refines the rendering, the scheme-content relation is established by the human, and the symbol system functions as the human's art. When the machine suggests connections the human had not conceived, the scheme-content relation begins to distribute across the collaboration — the machine is contributing not just symbols but referential functions, not just rendering but worldmaking. The question of whether this distributed worldmaking constitutes art is, in Goodman's framework, a question about whether the resulting symbol system achieves the structured reference that art requires — whether it denotes, exemplifies, and expresses in ways that organize experience and yield understanding.
The answer is not determined in advance. It is determined by examination of the specific work. Does this passage — the one Segal almost kept, the one Claude generated about Deleuze that sounded like insight but broke under scrutiny — function as a symbol system that organizes philosophical understanding? The answer, in that case, was no. The symbols were well-formed. The referential function was defective. The passage exemplified fluency without denoting anything the Deleuze reference could support. It was, in Goodman's terms, a symbol system that failed — not because it was machine-generated, but because the scheme-content relation was not established with the precision that philosophical reference demands.
The same test applies to every product of human-AI collaboration. Not: Was this made by a human? But: Does this function as a symbol system that organizes experience through structured reference? Does the denotation hold? Does the exemplification illuminate? Does the expression achieve the metaphorical possession of the properties it claims to convey?
These are difficult questions. Goodman never claimed they were easy. What he claimed was that they were the right questions — that the evaluation of art is a matter of evaluating symbolic function, not of tracing causal origin. The claim liberates AI-generated work from the automatic disqualification that origin-based criteria would impose. It simultaneously subjects that work to standards that most AI-generated output, examined with Goodman's rigor, fails to meet.
The liberation and the subjection are the same gesture. Goodman's framework opens the door to AI art and raises the bar for what counts as walking through it. The door is open. The threshold is higher than almost anyone currently generating images with a casual prompt has recognized. What remains is to understand why the threshold exists where it does — to examine the formal properties of symbol systems that make some renderings right and others merely plausible, some versions genuine worldmaking and others sophisticated pattern-matching dressed in the conventions of art.
That examination requires the concept Goodman placed at the center of his mature philosophy: the concept of worldmaking, and the plural versions of reality that different symbol systems construct.
---
The most radical proposition in Nelson Goodman's philosophy is stated with characteristic flatness: there is no single, pre-given world that our representations copy. There are only versions — structured constructions produced by different symbol systems, each organizing experience differently, each constituting a different world.
The proposition sounds like relativism. Goodman spent considerable energy insisting it was not. Relativism holds that every version is as good as every other, that there are no criteria for distinguishing better from worse constructions of reality. Goodman held precisely the opposite. Some versions are better than others. Some are right and others wrong. But "right" does not mean "corresponds to a world that exists independent of all versions." It means something more demanding: the version achieves internal coherence, fits with other accepted versions, serves the purposes for which it was constructed, and meets the standards of the symbol system within which it operates. Goodman called this rightness of rendering, and it replaced truth-as-correspondence in his philosophical vocabulary without replacing the rigor that truth-as-correspondence was supposed to provide.
The concept requires careful unpacking, because its implications for the age of AI are both liberating and constraining in ways that neither the triumphalists nor the critics of AI have fully absorbed.
Consider the versions of reality constructed by different symbol systems. The physicist's version of a table is a collection of particles in a mostly empty space, held in configuration by electromagnetic forces, vibrating at frequencies determined by temperature. The carpenter's version of the same table is a solid object with grain, weight, and structural integrity, responsive to the tools of the woodworking trade. The painter's version is a play of light and shadow on a surface that recedes into a compositional space according to the conventions of pictorial depth. The economist's version is a commodity with a production cost, a market price, and a depreciation curve.
Each version is constructed through a different symbol system: mathematical notation, the tactile and visual vocabulary of the workshop, the conventions of pictorial representation, the analytical apparatus of economic theory. And each version reveals features of the table that the others miss — the physicist sees the particle structure that the carpenter cannot perceive; the carpenter knows the grain direction that the physicist's equations do not capture; the painter renders the visual presence that neither the physicist's mathematics nor the carpenter's hands can convey; the economist identifies the exchange relations that the other three have no vocabulary to express.
Goodman's point was not that these are different perspectives on the same table. That formulation presupposes the existence of a table-in-itself, independent of all versions, that the different perspectives view from different angles. Goodman denied the table-in-itself. There is no version-free table lurking behind the versions. There are only the versions, and the table is constituted differently by each one. The physicist's table and the carpenter's table are genuinely different objects, constructed by genuinely different symbolic means, answering to genuinely different standards of rightness.
This is not a metaphysical extravagance. It is a logical consequence of taking symbol systems seriously as constructive enterprises rather than transparent windows onto a pre-given reality. And it is the framework that makes the most precise sense of what happens when AI enters the process of worldmaking.
A large language model trained on the entirety of digitized human text has access to an extraordinary range of versions. It can produce outputs in the physicist's vocabulary, the carpenter's vocabulary, the painter's vocabulary, the economist's vocabulary. It can construct passages that look like any of these versions, deploying the appropriate conventions, referencing the appropriate standards, organizing the appropriate features into the appropriate structures.
What the model cannot do — and this is where Goodman's framework exerts its pressure — is choose between versions. The choice of which version to construct is itself a worldmaking act. When Cézanne chose to paint Mont Sainte-Victoire according to the conventions of geometric simplification rather than Impressionist light-effects, he was not merely selecting a style. He was constructing a different world — a world in which the mountain's deep structure was more real than its surface appearance, a world organized by the principle that perception is analytical, not passive. The choice of version determined what the painting could reveal and what it would necessarily miss.
The model samples from a probability distribution over possible continuations. It does not choose a version any more than a river chooses a channel. The water flows where the terrain directs it; the tokens appear where the probability landscape places them. The result may look like a version — it may possess the structural features of a version, the vocabulary of a version, the coherence of a version — but it has not been chosen, and the absence of choice means the absence of the worldmaking act that gives versions their significance.
This matters because the significance of a version is not exhausted by its content. Goodman was emphatic on this point. *Ways of Worldmaking* opens with the observation that the same symbols can function in multiple versions, and what determines which version they belong to is not the symbols themselves but the project within which they are deployed. The same green pigment on the same canvas can exemplify "the color of the field" in one version and "the emotional quality of spring" in another. The pigment is identical. The exemplification is different, because the worldmaking project is different.
Segal's description of the collaboration with Claude illustrates this with unintended philosophical precision. When Segal describes a half-formed idea to Claude and Claude returns a passage that clarifies the idea, the passage is embedded in Segal's worldmaking project — his specific intention to construct a version of the AI moment that holds exhilaration and loss in tension, that takes Han's critique seriously while mounting a counter-argument, that speaks to a particular reader lying awake with a particular set of concerns. The symbols Claude produces are embedded in that project, and they function as elements of that version, because Segal's worldmaking intention establishes the scheme-content relation.
When Claude produces the same caliber of passage in response to a casual prompt from a user with no worldmaking project — no intention to construct a specific version, no criteria for evaluating rightness beyond surface plausibility — the symbols may be identical. The version is absent. The passage is a rendering without a world.
Goodman's pluralism about versions has a further implication that cuts against the grain of both AI optimism and AI pessimism. The optimists celebrate AI's capacity to produce multiple versions rapidly — ten possible openings for a novel, fifteen possible architectures for a system, twenty possible framings for a business presentation. The abundance of versions is treated as an unqualified good. The pessimists lament that the abundance dilutes quality, that when versions become cheap, their individual significance diminishes.
Goodman's framework rejects both responses. The abundance of versions is neither good nor bad in itself. What matters is whether the versions achieve rightness — whether each one is internally coherent, responsive to the standards of its symbol system, and productive of the understanding it claims to offer. A world with more versions is not necessarily a better world. It is a world that demands more judgment — more capacity to evaluate rightness, more skill in distinguishing versions that reveal from versions that merely simulate revelation.
The demand for judgment is the demand that the age of AI keeps pushing upstream. When rendering was expensive — when producing a version required years of training, access to materials, institutional support — the rendering itself served as a filter. Not everyone could produce a version, so the versions that existed had, on average, been produced by people with some claim to the skills and intentions that worldmaking requires. When rendering becomes cheap — when any person with a subscription can produce outputs in any symbolic medium — the filter disappears. The versions multiply. And the question of which versions are right, which achieve the coherence and productivity that justify their existence, becomes the most important question a culture can ask.
Goodman did not live to see this moment. He died in 1998, before the first large language model was trained. But the framework he built anticipated the question with uncanny precision, because the question was always there, embedded in the logic of worldmaking itself. If there is no version-free reality, then the evaluation of versions cannot appeal to correspondence with reality. It must appeal to rightness. And rightness requires judgment — the specifically human capacity to assess whether this version, produced by these means, deployed in this context, achieves the coherence, fit, and productivity that make it worth having.
The judgment cannot be outsourced to the rendering engine. The rendering engine produces versions. Judgment evaluates them. These are different cognitive operations, and they belong to different agents in the collaborative process. The machine that produces ten possible openings for a novel has not relieved the human of the burden of choosing among them. It has intensified that burden, because now the human must evaluate ten versions where before the human evaluated one, and the evaluation requires the kind of understanding — of purpose, of audience, of the specific version of reality the novel is trying to construct — that no rendering engine possesses.
What emerges from Goodman's framework is not a prohibition against AI-generated versions. It is a clarification of what versions require to be genuine — a set of demands that the ease of rendering makes it tempting to forget. The demands are: a worldmaking intention that establishes the scheme-content relation, a set of criteria for rightness that the version can be measured against, and a capacity for judgment that can distinguish right versions from plausible ones. When these demands are met, the tool that assists the rendering is irrelevant — a paintbrush, a printing press, a compiler, a language model. When they are not met, the output may have every surface property of a version without being one.
The distinction between having every surface property of a version and actually being one is the distinction between plausibility and rightness. It is the distinction that the Deleuze passage failed to maintain. It is the distinction that every user of AI creative tools must learn to make, because the tools will not make it for them. And it is the distinction that carries the full weight of Goodman's philosophical project into the present moment, where the capacity to produce versions has outpaced, perhaps catastrophically, the capacity to evaluate them.
---
Where does authorship live — in the feeling or the blueprint? The question, as Segal poses it in *The Orange Pill*, carries the weight of personal urgency. The feeling is the author's pre-articulate sense of what matters, the conviction that this idea needs to exist in the world, the emotional and intellectual pressure that drives the creative act. The blueprint is the structure that organizes the feeling into communicable form — the chapter sequence, the argument's architecture, the specific deployment of evidence and metaphor that makes the feeling accessible to someone who has not felt it. Segal confesses that he cannot always tell where his feeling ends and Claude's blueprint begins. The confession is presented as a vulnerability. In Goodman's framework, it is a philosophical problem of the first order.
Goodman spent decades resisting the dichotomy between content and form, between what a work says and how it says it. In *Languages of Art*, the resistance takes the shape of a technical argument about the inseparability of the symbolic functions through which a work operates. A painting does not possess a "content" (the landscape it depicts) and a "form" (the arrangement of paint through which the depiction is achieved) that can be separated and evaluated independently. The content is constituted by the form. Change the brushwork and the landscape changes — not because the depicted mountain has moved, but because the version of the mountain that the painting constructs is a function of the specific symbolic resources deployed. The thick, architectural brushstrokes of Cézanne's Mont Sainte-Victoire do not convey a vision of the mountain's geometric structure. They constitute it. Remove the brushstrokes and the vision does not survive, because the vision has no existence apart from the symbolic means through which it is realized.
This inseparability thesis has direct consequences for the question of authorship. If form and content are inseparable — if the feeling and the blueprint are not two things that can be independently varied but two aspects of a single worldmaking act — then the question "Where does authorship live?" is malformed. It presupposes a separation that does not exist. Authorship lives in the worldmaking: the specific act of constructing a version of reality through the conjunction of experiential content and symbolic structure. There is no feeling without a structure to realize it, and no structure without a feeling to animate it. The two are fused in the act of making, and the act of making is what authorship is.
But the fusion becomes problematic, philosophically and practically, when the act of making is distributed across a human and a machine. When Segal describes the collaboration, he identifies moments where the feeling is his and the structure is Claude's — where he knows what he wants to say but cannot find the arrangement that makes it sayable, and Claude provides the arrangement. In Goodman's terms, this is a case where the experiential content originates with the human and the symbolic structure originates with the machine. If form and content are inseparable, and if the work is constituted by their conjunction, then the work is genuinely collaborative in a way that resists attribution to either party.
The analogy to musical performance illuminates the structure. When a composer writes a score and an orchestra performs it, the composer provides the symbolic specification — the notes, the rhythms, the dynamics — and the performers provide what the specification leaves open: the timbre, the precise shaping of phrases, the interpretive decisions that make this performance distinct from every other. The composer is the author. But the performance is not nothing. Glenn Gould's Goldberg Variations differs from Wanda Landowska's not because the notes are different (they are the same) but because the interpretive space the score leaves open is filled differently, and what fills it constitutes a genuine contribution to what the listener hears.
Goodman was precise about the ontology here. The musical work, in his analysis, is the class of performances that comply with the score. Any compliant performance is a genuine instance of the work. The performer contributes everything the score does not specify, but the performer does not thereby become the author, because the score determines the identity conditions of the work. Two performances of the same score are instances of the same work despite their interpretive differences, because the identity of the work is determined by the notational features the score specifies, not by the performance features it leaves open.
The mapping onto AI collaboration is structural. When Segal provides the experiential content — the ideas, the commitments, the emotional urgency — and Claude provides the structural rendering, the question of authorship depends on what one identifies as the "score" and what one identifies as the "performance." If the experiential content is the score — if Segal's ideas and commitments determine the identity conditions of the work, and Claude's structural contributions are performances that fill in what the score leaves open — then Segal is the author, in the same sense that Beethoven is the author of the Ninth Symphony regardless of which orchestra performs it.
But the analogy strains at a critical juncture. A musical score is a notation system with precise syntactic properties. It specifies certain features unambiguously (pitch, rhythm) and leaves others explicitly unspecified (timbre, precise tempo). The boundary between what is scored and what is performed is determined by the notational system itself. In the collaboration between Segal and Claude, there is no comparable notation system. There is no formal specification of which features of the work the human determines and which features the machine supplies. The boundary is fluid, contested, and different in every interaction.
This fluidity means that the autographic question — the question of whether the specific origin of the work matters to its identity — reasserts itself even in a domain (writing) that Goodman classified as allographic. Literature, in Goodman's analysis, is an allographic art: the identity of a literary work is determined by the sequence of words (the "text"), and any correct copy of the text is a genuine instance of the work. It does not matter who typed the words, in the same way that it does not matter which orchestra plays the notes. The text is the text is the text.
But this classification assumed that the text was produced by a single agent with a unified worldmaking intention. When the text is produced collaboratively — when some sentences originate with the human and others with the machine, and the boundary between them is invisible even to the human who participated in the production — the allographic criterion begins to malfunction. Two texts with identical word sequences might have radically different authorial structures: one produced entirely by a human, the other produced through a collaboration in which the machine contributed not just rendering but worldmaking. The texts are identical. The authorship is different. And if the authorship is different, then the works are different — not as texts (the words are the same) but as acts of worldmaking (the intentions behind the words are structured differently).
Goodman's own framework does not resolve this cleanly, because his allographic/autographic distinction was designed for a world in which the production of texts was either clearly single-authored or clearly multi-authored (as in the case of musical performance, where the division of labor between composer and performer is explicit). The AI collaboration produces a new category: texts that are indeterminately authored, where the human participant cannot reliably identify which aspects of the worldmaking are his and which were contributed by the machine.
This indeterminacy is what Segal confesses in Chapter 7 of *The Orange Pill* when he describes the moments that keep him awake — the moments when Claude makes a connection he had not made, and he cannot say whether the connection belongs to him, to Claude, or to the collaboration. In Goodman's vocabulary, the indeterminacy is a symptom of a breakdown in the scheme-content relation. The scheme (the symbolic structure) and the content (the experiential material) have been fused in the act of making, as Goodman's theory predicts. But the fusion has occurred across two agents, and neither agent can fully specify what it contributed, because the contribution was made in real-time interaction, each response building on the previous, and each building-on constituting a new act of worldmaking that incorporated both parties' contributions.
The result is a new ontological category that Goodman did not anticipate: the collaboratively constructed version, in which the worldmaking is genuinely distributed across human and machine, the authorial function cannot be cleanly localized, and the work's identity conditions are determined not by a score or a text but by the process through which the version was constructed. The process is the score. The interaction is the notation. And the work exists not as a fixed text but as the specific trajectory of worldmaking decisions — human and machine, interleaved and inseparable — that produced it.
Whether this is a new kind of authorship or the dissolution of authorship is a question Goodman's framework poses with more precision than it answers. What the framework does answer is the preliminary question: the feeling and the blueprint are not separable. You cannot extract the human's experiential content from the machine's structural contribution and evaluate them independently, because the content is constituted by the structure and the structure is animated by the content. They are fused in the making. And the making, in the age of AI, is collaborative in a way that no previous creative technology has produced — not because the machine is more powerful than previous tools, but because it operates in the same symbolic medium as the human, and the boundary between the human's symbols and the machine's symbols cannot be drawn from the inside.
The question that remains is what happens to authorship when the boundary dissolves entirely — when the human can no longer say, with confidence, "This is mine." Goodman's framework suggests that the question may itself be misconceived, that authorship was never a matter of possession but of function. The author is not the person who owns the symbols. The author is the agent — human, machine, or collaborative — that establishes the scheme-content relation through which the symbols refer. If the scheme-content relation is established collaboratively, then the authorship is collaborative, and the anxiety about ownership is the residue of a Romantic theory of creation that Goodman spent his career dismantling.
---
In 1937, the Dutch painter Han van Meegeren produced a canvas titled *The Supper at Emmaus* and sold it to the Boijmans Museum in Rotterdam as a newly discovered work by Johannes Vermeer. The painting was celebrated by critics, exhibited prominently, and valued as a masterpiece of the Dutch Golden Age. When van Meegeren confessed to the forgery in 1945 — under circumstances that involved Nazi art looting and a charge of collaboration that made confession the lesser crime — the art world convulsed. Not because the painting had changed. The pigment was the same. The composition was the same. The canvas hung on the same wall. What had changed was the knowledge of its origin, and that knowledge transformed the painting from a masterpiece to a fraud.
Goodman used this case, and the category of forgery more broadly, to establish a distinction that carries more weight in the age of AI than it did in 1968 when he first articulated it. The distinction is between autographic arts — those in which the history of production is constitutive of the work's identity — and allographic arts — those in which the work can be fully specified by a notation and correctly instantiated by anyone who follows the notation.
Painting is autographic. A forgery of a Rembrandt is not a Rembrandt, even if no perceivable difference exists between the forgery and the original. The reason is not sentimental. Goodman was unsentimental to a degree that occasionally alarmed his colleagues. The reason is that in an autographic art, the work's identity is tied to its specific history of production — to the fact that this hand applied this pigment to this canvas in this sequence of decisions. The history of production is part of what the work is, and a work with a different history — even one that produces an identical physical object — is a different work.
Music is allographic. A performance of Beethoven's Ninth Symphony by the Berlin Philharmonic and a performance by the Vienna Philharmonic are both instances of the same work, because the work's identity is determined by the score — the notational specification that both performances comply with. The performers contribute everything the score leaves open: interpretation, timbre, the micro-decisions of phrasing and dynamics that make each performance unique. But the work is the same work, because the notation determines identity.
Literature, in Goodman's classification, is allographic. The identity of a literary work is determined by the text — the sequence of words in the specified order. Any correct copy of the text is a genuine instance of the work. It does not matter whether the copy was produced by a human typist, a printing press, or a photocopier. The text is the text. *Hamlet* produced on a laser printer is the same work as *Hamlet* set in movable type in 1623.
This classification, which seemed stable for three centuries of print culture, becomes deeply unstable in the age of generative AI.
The instability enters through a question that Goodman's framework poses but does not answer for the specific case of machine-generated text: What determines the identity of a literary work when the production process is opaque even to its nominal author?
Return to the van Meegeren case. The painting was a forgery because its history of production was misrepresented. Van Meegeren produced it; it was presented as if Vermeer had produced it. The misrepresentation was constitutive: knowing the true history changed what the painting was. In autographic arts, history of production is part of identity.
Now consider a text produced through human-AI collaboration. The human provides intentions, evaluates outputs, selects among alternatives. The machine generates prose, suggests structures, produces passages that the human may or may not recognize as originating with the machine. The text is published under the human's name. Is this a forgery?
The allographic classification says no. The text is the text. The words on the page determine the work's identity. It does not matter who or what produced them, any more than it matters which printing press stamped them onto paper. If the text satisfies the identity conditions of the work — if the words are in the right order, if the sentences are the sentences the author endorses — then the work is authentic regardless of its productive history.
But there is an obvious problem with this answer, and the problem illuminates a limitation in Goodman's original framework that the age of AI forces into view.
The allographic classification works for music because the division of labor between composer and performer is explicit and socially understood. Everyone knows that the pianist did not write the sonata. The pianist's contribution is acknowledged, valued, and understood as distinct from the composer's. The allographic nature of music does not conceal the productive history — it organizes it into clearly defined roles.
Literature has historically functioned as if it were autographic, despite Goodman's classification. Readers expect that the name on the cover identifies the person who produced the text — not just endorsed it, not just supervised its production, but sat with the blank page and produced the sentences through the specific cognitive labor of writing. The expectation is not merely conventional. It is tied to what readers believe they are receiving when they read a work of literature: access to a mind, a sensibility, a way of seeing the world that is constituted by the specific person who wrote the words.
When AI enters the productive process, this expectation collides with the allographic classification. The text is the text, Goodman's framework insists. The words on the page determine the work. But the reader's experience is shaped by the belief that the words originated in a human consciousness, and if that belief is false — if the words were generated by a machine and selected by a human — then the reader's experience is based on a misrepresentation of productive history that matters to the reader even if it does not matter to the formal theory of identity.
Goodman would likely resist this objection. His philosophy was deliberately anti-psychologistic — he was interested in the formal properties of symbol systems, not in the psychological states of their users. Whether the reader feels deceived is, in his framework, a psychological fact about the reader, not a philosophical fact about the work. The work is what the symbols determine it to be. The reader's feelings about the symbols' origin are external to the work's identity.
But Segal's experience of collaboration suggests that the formal framework needs supplementation, even on its own terms. When Segal describes the moments when Claude contributes a connection he had not made — a structural insight that changes the direction of the argument — the contribution is not merely a performance feature. It is a worldmaking contribution. The machine has not merely rendered what the score specifies. It has altered the score. And if the score determines the work's identity, and the machine has contributed to the score, then the work's identity is shaped by the machine's contribution in a way that the allographic framework, which assumes a clear distinction between score and performance, cannot cleanly accommodate.
The pressure on the framework increases when one considers the evaluative dimension. In an autographic art, forgery matters because the history of production affects how the work is rightly evaluated. Knowing that van Meegeren, not Vermeer, painted *The Supper at Emmaus* changes what the painting achieves — it is no longer a seventeenth-century innovation but a twentieth-century pastiche, and the criteria of evaluation shift accordingly. The painting looks the same, but what it does in the art-historical context, what it exemplifies and expresses, changes when its history changes.
The same evaluative shift applies to AI-assisted literature, even under the allographic classification. A passage that achieves remarkable syntactic compression might be evaluated differently depending on whether it was produced by a human writer who labored for weeks to achieve that compression or by a machine that generated it in milliseconds. The text is the same. But what the text achieves — what it exemplifies about the possibilities of language, what it expresses about the effort of communication, what it reveals about the human capacity to find the right word — depends on its productive history in a way that the allographic framework was not designed to capture.
Goodman's distinction was formulated for a world in which the productive history of allographic arts was transparent. Everyone knew that orchestras performed and composers composed. Everyone knew that printing presses reproduced and authors wrote. The division of labor was visible, and the allographic classification could safely ignore productive history because productive history was not in dispute.
AI has made productive history opaque. The reader of an AI-assisted text does not know, and often cannot determine, which sentences originated with the human and which with the machine. The text is the text, but the text no longer carries with it the transparent productive history that the allographic classification assumed.
The result is that literature — and code, and academic writing, and journalism, and every other text-based domain — is experiencing what might be called an autographic crisis: the sudden relevance of productive history in domains where productive history was previously irrelevant. The crisis is not resolved by disclosure, though disclosure helps. It is resolved, if it is resolved at all, by the development of new evaluative frameworks that can assess works independently of their productive history while acknowledging that productive history affects what the works achieve.
Goodman's original distinction provides the skeleton for such a framework. The autographic/allographic distinction is not a binary but a continuum, and the age of AI is pushing every textual art toward the autographic end — toward the recognition that origin matters, that what a work achieves depends not just on what symbols it contains but on the worldmaking process that produced them. The recognition does not require abandoning the allographic insight that texts can be correctly instantiated in multiple copies. It requires supplementing that insight with the recognition that correct instantiation is not sufficient for evaluation, that knowing what the symbols are is not the same as knowing what the symbols do, and that what the symbols do depends, in part, on where they came from.
The supplementation is uncomfortable. Goodman's analytic precision was built on the assumption that formal properties could do the philosophical work. The age of AI suggests that formal properties, while necessary, are not sufficient — that the evaluation of creative work in an era of distributed authorship requires a richer account of rightness than formal compliance with a notational specification can provide. The richer account must include what Goodman called rightness of rendering — the fit between the version and its purposes — and the purposes cannot be specified by the text alone. They require a worldmaker, and the identity of the worldmaker matters to what the purposes are.
---
A musical score is a technology for specifying identity across instances. This description sounds reductive, and Goodman intended it to. The score's function is not to inspire performers or to capture the composer's emotional state or to serve as a blueprint for beauty. Its function is narrower and more precise: to determine which performances count as instances of the work and which do not. A performance that complies with the score — that produces the pitches, rhythms, and dynamics the score specifies — is a genuine instance of the work, regardless of who performs it, where, or when. A performance that deviates from the score in the specified parameters is not an instance of the work, regardless of how beautiful or expressive it may be.
The narrowness of this account is its philosophical strength. Goodman was not interested in what makes a performance good. He was interested in what makes a performance of this work. The two questions are independent. A terrible performance of Beethoven's Ninth is still a performance of Beethoven's Ninth, provided it complies with the score. A magnificent performance that deviates from the score — that adds notes, changes rhythms, transposes passages — is not a performance of Beethoven's Ninth, regardless of its magnificence. Identity and quality are separate dimensions, determined by separate criteria.
The score achieves this identity-determining function through the formal properties of notation. A notation system, in Goodman's analysis, must be syntactically disjoint (every mark belongs to exactly one character), syntactically differentiated (there are no infinitely fine discriminations between characters), and semantically unambiguous (each character has a determinate compliance class). These properties ensure that the score can serve as a reliable bridge between the composer's specification and the performer's realization. The bridge holds because the notation eliminates indeterminacy at the level of identity — whatever the score specifies is specified precisely, and whatever it leaves unspecified is explicitly outside the identity conditions of the work.
What the score leaves unspecified is enormous. Pitch is specified. Timbre is not — the score says "violin" but does not specify which violin, with which wood, aged for how many years, strung with which strings. Rhythm is specified. The micro-timing of rubato is not — the score says "quarter note" but does not specify the precise millisecond at which the bow engages the string. Dynamics are specified in broad categories — forte, piano, crescendo — but the exact decibel level, the shape of the dynamic envelope, the way the sound fills the specific acoustic space of the specific hall, these are left to the performer.
The performer fills the space the score leaves open. This filling is not arbitrary. It is skilled, interpretive, responsive to tradition, to the acoustics of the hall, to the other performers on stage, to the performer's own history of engagement with the work. Glenn Gould's 1955 recording of the *Goldberg Variations* fills the interpretive space with a kinetic energy, a speed of articulation, a willingness to make the contrapuntal structure audible at the expense of lyrical warmth. His 1981 recording fills the same space differently — slower, more contemplative, each voice given room to breathe. Both are instances of the same work. Both are radically different performances. The work accommodates both because the score specifies the identity-determining features and leaves the rest to the performer's judgment.
The structure of the score-performance relation maps onto the collaboration between human and AI with a precision that illuminates both the power and the limits of the analogy.
When a human describes an intention to Claude — "I need a passage that explains why the speed of AI adoption measures human need, not product quality" — the description functions as a kind of score. It specifies certain features of the desired output: the topic, the argumentative direction, the relationship between the claim and the evidence. It leaves other features unspecified: the sentence structure, the choice of specific words, the rhythm of the prose, the particular examples deployed, the way the argument builds and turns.
Claude fills the space the description leaves open. The filling is not arbitrary. It is responsive to the training data, to the patterns of effective argumentation the model has absorbed, to the context of the conversation, to the specific vocabulary and register the human has established in previous exchanges. The result is a passage that complies with the human's specification — it is about the right topic, it moves in the right direction, it connects the right claims — while adding features the human did not specify: a particular syntactic rhythm, a particular choice of example, a particular arrangement of ideas that the human had not conceived.
The analogy holds to a point. The human is the composer. The machine is the performer. The work's identity is determined by the human's specification, and the machine's contribution fills the interpretive space the specification leaves open.
But the analogy fails at the point where the analogy is most needed: the point where the machine's contribution alters the specification itself.
In musical performance, the boundary between score and interpretation is maintained by the formal properties of notation. The performer cannot change the score. She can interpret it — fill the unspecified space with her own judgment — but the specified features remain fixed. The notes are the notes. The rhythms are the rhythms. The performer's freedom operates within constraints that the notation establishes and maintains.
In human-AI collaboration, there is no comparable notation system maintaining the boundary between specification and interpretation. The human describes an intention in natural language. Natural language lacks the formal properties Goodman required of a notation system: its spelling may be syntactically disjoint and differentiated, but it fails the semantic requirements — words carry multiple meanings, the boundaries between those meanings are not sharp, and nearly every sentence admits multiple interpretations, so no expression has a determinate compliance class. The "score" the human provides is therefore not a score in Goodman's technical sense. It is something looser: a set of constraints expressed in a medium that does not enforce them with notational precision.
The consequence is that the machine's "performance" can alter the "score" without anyone noticing. When Claude fills the interpretive space with a particular example, a particular structural choice, a particular connection between ideas, the contribution may change what the work is about — may shift its emphasis, redirect its argument, introduce a claim that the human's original specification did not contain. The Deleuze incident described in *The Orange Pill* is the paradigmatic case: Claude supplied a philosophical reference that was not in Segal's specification, and the reference changed the direction of the argument. The machine was not performing within the constraints of a score. It was revising the score under the guise of performing it.
This revision is not necessarily a failure. In some cases — the laparoscopic surgery example, the connection between adoption curves and evolutionary biology — the machine's contribution genuinely improves the work. The revision is a better version of the score than the original specification. But the improvement does not change the structural fact: the boundary between specification and interpretation has dissolved, and the dissolution means that the identity conditions of the work are no longer determined by the human's specification alone.
Goodman's framework identifies the problem with characteristic precision. In a genuine notational system, the identity of the work is preserved across instances because the notation fixes the identity-determining features. In a collaboration mediated by natural language, nothing fixes the identity-determining features. The human's specification is provisional, revisable, and subject to alteration by the machine's response. The machine's response is provisional, revisable, and subject to alteration by the human's reaction. The work's identity emerges from the interaction — from the back-and-forth of specification and response, evaluation and revision — and it is not determined by either party's contribution in isolation.
The musical analogy remains useful, but its usefulness is diagnostic rather than prescriptive. It shows what is missing from the collaboration: the formal precision that a notation system provides, the clear boundary between what the composer specifies and what the performer interprets, the guarantee that the work's identity will survive the performer's contribution intact. The collaboration has the structure of a performance without the constraints of a score, and the result is a productive ambiguity that generates works of uncertain authorial status.
Whether this ambiguity is a problem or a feature depends on what one demands of authorship. If authorship requires the kind of identity-preservation that notation provides — if the author must be able to specify the work's identity-determining features in a way that survives the rendering process intact — then AI collaboration threatens authorship at a fundamental level, because natural language does not provide the formal resources for identity-preservation. If authorship requires only the looser condition that the work emerge from the author's worldmaking project — that the author provide the experiential content, the evaluative criteria, and the overarching intention that gives the work its purpose — then AI collaboration is compatible with authorship, provided the human maintains the worldmaking role.
Goodman's own framework supports the stricter interpretation for allographic arts and the looser interpretation for autographic ones. But the age of AI scrambles this mapping. The strictness of notation is available for music because music has a notational system. Literature does not — or rather, literature has the text as its notational base, but the text's notational properties (syntactic disjointness and differentiation at the level of spelling) do not extend to the semantic level where worldmaking operates. The meaning of a text is not notated in the way that the pitch of a musical work is notated. It is interpreted, and the interpretation is subject to the same productive ambiguity that characterizes the collaboration itself.
The result is that the score-performance model, as a framework for understanding human-AI collaboration, is most useful where it breaks down. It breaks down at the point where the machine's contribution crosses the boundary from performance to composition — from filling the unspecified space to altering the specified features. And that crossing point is where the most important questions about AI authorship live.
The questions are not: Did the human write every word? (The allographic framework says this does not matter.) Did the human intend every meaning? (The inseparability of form and content says this cannot be fully determined.) The questions are: Did the human establish the worldmaking project within which the symbols function? Did the human evaluate the machine's contributions against criteria of rightness that the human, not the machine, determined? Did the human maintain the capacity to reject, revise, and redirect — to function as the composer who can rewrite the score when the performance reveals that the score was wrong?
If the answers are yes, then the collaboration is authorship. Not pristine authorship. Not the autographic authorship of the solitary genius. But authorship in the Goodmanian sense: the construction of a version through the deliberate deployment of symbols within a worldmaking project. The machine performed. The human composed. And the work, like a symphony, exists as the class of renderings that comply with the composer's specification — even when the specification was revised in response to the performance, even when the boundary between score and interpretation was never as clean as the formal theory demanded.
---
The most persistent intuition about authorship is that the author is the person from whom the work originates. The origin theory of authorship locates creative ownership in the causal chain: the author is the first cause, the point at which the work begins its journey from nothing to something, from potentiality to artifact. Remove the author and the work does not exist. The author is the necessary condition, the generative source, the spring from which the river flows.
Goodman's philosophy makes this intuition available for examination without endorsing it. The examination reveals that the origin theory presupposes a metaphysics of creation — a before and after, a nothing and a something, a void and a filling — that Goodman's constructivism rejects. In Goodman's framework, there is no nothing. There is no void that the author fills. There are only prior versions and subsequent versions, prior constructions and new constructions, the rearrangement and reorganization of symbolic materials that already exist within the culture's repertoire. The author does not create from nothing. The author remakes from everything.
This remaking is what Goodman called worldmaking, and it operates through specific procedures that he catalogued with the precision of a taxonomist. Worldmaking proceeds by composition and decomposition, by weighting and ordering, by deletion and supplementation, by deformation. A new version of reality is constructed by taking elements from prior versions, rearranging them, emphasizing some and suppressing others, stretching some beyond their previous applications and compressing others into new configurations. The materials are never new. The configuration is.
This account of worldmaking dissolves the origin theory of authorship not by denying that authors contribute something essential but by redefining what the contribution consists of. The author's contribution is not the materials — the words, the concepts, the structural patterns, the referential conventions. These pre-exist in the cultural repertoire. The author's contribution is the specific configuration — the particular arrangement of pre-existing materials into a version that did not previously exist, a version that reveals aspects of experience that prior versions missed.
The dissolution is liberating in one direction and constraining in another. It liberates AI-assisted creation from the charge that the machine cannot be a genuine collaborator because it does not originate anything. Of course it does not originate anything. Neither does the human. Both operate on pre-existing materials — the machine on its training data, the human on the accumulated symbolic repertoire of the culture. The difference is not between originating and recombining but between different kinds of recombination, different qualities of configuration, different degrees of responsiveness to the worldmaking purposes that give the configuration its significance.
The constraint falls on the other side: if authorship is not origin but configuration, then the question of who configured what becomes paramount. And in human-AI collaboration, this question is extraordinarily difficult to answer, for reasons that go beyond the practical difficulty of tracing which sentences came from where.
The difficulty is structural, not merely evidential. Configuration, in Goodman's sense, is not a discrete event that can be localized in time and attributed to a specific agent. It is a process that unfolds through interaction — through the back-and-forth of trial and evaluation, production and revision, that characterizes any serious creative effort. When a painter steps back from the canvas and evaluates what the brushstroke achieved, the evaluation is part of the configuration. When she adjusts — thickens a line, shifts a color, scrapes away a passage and begins again — the adjustment is part of the configuration. The configuration is the entire arc of decisions, not any single decision within it.
In the collaboration between Segal and Claude, the configuration is distributed across the entire arc of their interaction. Segal provides an intention. Claude renders it. Segal evaluates the rendering. Claude revises. Segal redirects. The configuration emerges from this process, and it cannot be cleanly attributed to either party, because each party's contribution is a response to the other's, and the responses are constitutive of the final configuration.
Goodman's framework suggests that the question "Who is the author?" may be less important than the question "What kind of worldmaking is this?" The first question seeks a name. The second question seeks an understanding of the process through which the version was constructed — the specific procedures of composition and decomposition, weighting and ordering, deletion and supplementation that produced this particular arrangement of materials into this particular version.
The shift from "who" to "what kind" has consequences for evaluation. If authorship is origin, then AI-assisted work is evaluated by the standard of the human's contribution: the more the human contributed, the more authentic the work. If authorship is configuration, then AI-assisted work is evaluated by the standard of the configuration's rightness: does the version achieve coherence, fit, productivity, and responsiveness to the standards of its symbol system?
This second standard is more demanding in some respects and less demanding in others. It is less demanding in that it does not require the work to have originated entirely in a single human mind. It is more demanding in that it requires the work to achieve rightness — to meet the standards of the symbol system, to fit with other accepted versions, to serve the purposes for which it was constructed — and these standards apply regardless of how the work was produced.
The distinction between authorship as origin and authorship as configuration maps onto a distinction that runs through the entire discourse around AI creativity. The origin camp holds that the value of creative work is tied to its production by a human consciousness — that what makes a poem valuable is not just the words on the page but the fact that a particular person, with particular experiences and particular struggles, wrestled those words into existence. The configuration camp holds that the value of creative work is in the work itself — in the version it constructs, the understanding it provides, the rightness it achieves — and that the productive history, while interesting, is not constitutive of the work's value.
Goodman was a configuration theorist. His entire philosophical project was built on the principle that what matters about a symbol system is how it functions, not where it came from. The question "Is this art?" is answered by examining the symbolic functioning — denotation, exemplification, expression — not by tracing the causal history. The question "Is this right?" is answered by evaluating the version — coherence, fit, productivity — not by investigating the biography of the version-maker.
Yet even within Goodman's framework, the configuration does not float free of the configurer. The procedures of worldmaking — composition, decomposition, weighting, ordering, deletion, supplementation, deformation — are not mechanical operations that can be performed by any agent with equal authority. They require judgment. The decision to weight this feature and suppress that one, to compose these elements and decompose those, to deform a conventional arrangement into something that reveals what convention conceals — these decisions presuppose criteria of rightness, and the criteria are established by the worldmaker's purposes.
Purposes require a purposer. The machine has no purposes in the sense Goodman's framework requires. It has been trained to produce outputs that conform to patterns in its training data, and it can be directed by a human's prompt to produce outputs that serve the human's purposes. But the purposes are the human's, not the machine's, and the evaluation of whether the configuration achieves rightness is the human's judgment, not the machine's output.
Authorship, in Goodman's framework, lives where purposes meet symbols — where the worldmaker's intention encounters the symbolic resources of the medium and configures them into a version that serves the intention. When the machine contributes to the configuration — when it suggests connections, proposes structures, generates passages that alter the trajectory of the work — the authorship does not transfer to the machine. The machine has not contributed purposes. It has contributed configurations, and the configurations are evaluated by the human's purposes. The human remains the author in the sense that matters: the agent whose purposes determine the criteria of rightness by which the version is evaluated.
But the machine's contributions can reshape the human's purposes. When Claude provides a connection that changes the direction of the argument, and Segal recognizes the connection as right — as achieving a fit between elements that his original purposes had not anticipated — the purposes have been revised in response to the machine's contribution. The revision is the human's act. The material that prompted the revision is the machine's. And the resulting version is configured by revised purposes that neither party held before the interaction.
Authorship lives, then, not in a person or a machine but in the dynamic of purposes meeting materials and being revised by the encounter. The human is the author because the human holds the purposes. The machine is the collaborator because the machine provides materials that revise the purposes. The work is the configuration that emerges from the dynamic, and its rightness is evaluated by the revised purposes — purposes that are the human's, shaped by the machine's contributions, and ultimately answerable to the human's judgment about what the version should achieve.
This is not a clean answer. Goodman would have preferred cleaner answers, and his analytic temperament was calibrated for the kind of precision that resists the productive ambiguity of the present moment. But the ambiguity is the condition, and the framework that most honestly addresses it is one that locates authorship not in origin but in the ongoing configuration of purposes and materials — a process that is, in the age of AI, irreducibly collaborative.
---
The distinction between dense and differentiated symbol systems is the most technical and the most consequential concept in Goodman's aesthetics. It provides the formal ground for intuitions that, without it, remain impressionistic — intuitions about what is lost when human experience passes through digital rendering, about why AI output feels smooth in ways that human-made artifacts do not, about the structural basis of the unease that accompanies the replacement of handcraft with machine production.
A symbol system is syntactically dense when it provides for infinitely many characters ordered so closely that between any two there is always a third. An ungraduated mercury thermometer is syntactically dense: between any two readings, no matter how close, there is always an intermediate reading that the instrument can in principle register. The thermometer does not jump from one discrete value to the next. It moves through a continuum of values, and every point on the continuum constitutes a potential character in the symbol system.
A symbol system is syntactically differentiated (Goodman's term was "finitely differentiated" or "articulate") when the characters are separated by gaps — when, for any mark, it is theoretically possible to determine which character it belongs to. A printed alphabet is syntactically differentiated: the letter 'a' is distinct from the letter 'b', and for any well-formed mark on the page, it is possible in principle to determine which letter it is. There are no intermediate characters between 'a' and 'b'. The system jumps from one discrete value to the next.
Semantic density and semantic differentiation apply the same distinction to what the symbols refer to rather than to the symbols themselves. A symbol system is semantically dense when its referential field is ordered continuously, so that between any two things referred to there is always an intermediate thing. A symbol system is semantically differentiated when its referential field is divided into discrete, separated classes.
Goodman argued that aesthetic symbol systems are characteristically dense — syntactically, semantically, or both. A painting is syntactically dense: every difference in color, line, and texture potentially constitutes a difference in the symbol. There are no gaps between adjacent colors on the canvas. A slight shift in hue — a fraction of a degree on the color wheel — is a different mark in the symbol system, and potentially a different meaning. The painting's capacity to carry meaning is, in a precise sense, infinite: there are infinitely many discriminable marks, each potentially significant.
This density is what gives aesthetic symbol systems their characteristic cognitive function. Dense systems demand a particular kind of attention — an alertness to minute differences, a sensitivity to the specific qualities of each mark, a readiness to find significance in variations so fine that a differentiated system would collapse them into the same category. Looking at a painting is cognitively demanding because the symbol system is dense: every feature is potentially meaningful, and the viewer cannot determine in advance which features matter and which do not. The demand for attention is structural, built into the formal properties of the symbol system, not contingent on the viewer's mood or the painting's subject matter.
Digital systems are differentiated by design. A digital image is composed of pixels, each assigned a discrete RGB value from a finite set. The image does not move continuously through color space. It jumps from one discrete value to the next, and the jumps, however small, are jumps — discontinuities that separate one value from its neighbor. A text generated by a large language model is composed of tokens, each selected from a discrete vocabulary. The model does not move continuously through meaning space. It selects from a finite menu, and each selection is a discrete event with a clear identity.
The differentiation is not a flaw. It is a feature — the feature that makes digital systems precise, reproducible, and computationally tractable. A differentiated system can be copied without loss, transmitted without degradation, processed without ambiguity. These properties are why digital systems have conquered every domain of information processing. The precision of differentiation is the foundation of the digital revolution.
But precision and density are structurally opposed. A system cannot be both dense and differentiated, because density requires the absence of gaps between characters and differentiation requires their presence. The move from analog to digital — from the continuous to the discrete, from the dense to the differentiated — is therefore not merely a change in medium. It is a change in the kind of meaning the symbol system can carry.
What density carries that differentiation cannot is the significance of infinitesimal variation. In a dense system, the difference between this blue and a blue infinitesimally lighter is a potential difference in meaning. The viewer who attends to the difference — who sees that Vermeer's blue is not Titian's blue, that the two blues construct different versions of light, of space, of the relationship between surface and depth — is engaging with the density of the symbol system. The engagement is cognitive. It yields understanding that is available only through the dense system, because the understanding resides in discriminations that a differentiated system cannot represent.
When a painter mixes pigments on a palette, the resulting color exists in a continuous space. It is not selected from a finite menu. It is found — arrived at through a process of adjustment so fine that the painter may not be able to articulate the criteria that determined when the color was right. The rightness is perceived, not calculated. The perception depends on the density of the symbol system — on the fact that the color could have been infinitesimally different, and that the infinitesimal difference would have mattered.
A digital image that replicates the painting's colors does so by mapping the continuous color space onto a grid of discrete values. The mapping is lossy by definition. Between any two adjacent pixel values, there are colors in the original that have been collapsed into one or the other. The loss is often imperceptible to casual viewing. But the loss is structural, and in the context of aesthetic cognition — where the significance of infinitesimal variation is the specific cognitive contribution of the dense system — the loss is not negligible.
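The collapse can be made concrete with a minimal sketch. The 8-bit channel and the sample intensities below are illustrative assumptions, not details from the text; they show only the structure of the loss, where distinct values in a continuous space are forced onto the same discrete level:

```python
# Illustrative sketch: mapping a continuous intensity to a discrete
# 8-bit level collapses infinitely many nearby values into one.

def quantize_8bit(intensity: float) -> int:
    """Map a continuous intensity in [0.0, 1.0] to one of 256 discrete levels."""
    return round(intensity * 255)

# Two distinct "blues" in the continuous space...
a = 0.5001
b = 0.5019
# ...collapse to the same discrete value after quantization.
print(quantize_8bit(a), quantize_8bit(b))  # both map to 128
```

The mapping is many-to-one by construction: between any two adjacent levels, an entire interval of the continuum has been collapsed, which is the structural sense in which the rendering is lossy.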
Goodman did not make this argument about digital technology specifically. He died before the question of AI-generated aesthetics became culturally urgent. But the framework he built generates the argument with the inevitability of a theorem following from its axioms. If aesthetic symbol systems are characteristically dense, and if digital systems are characteristically differentiated, then the rendering of aesthetic content through digital means involves a structural change in the kind of meaning the system can carry. The change is not a contingent limitation of current technology, to be overcome by higher resolution or more sophisticated algorithms. It is a formal consequence of the shift from density to differentiation — a consequence that holds regardless of how many pixels the screen contains or how many tokens the model's vocabulary includes.
The application to AI-generated text is less obvious but no less consequential. Text appears to be a differentiated system — words are discrete units, and any well-formed inscription can be identified as belonging to a specific word. But Goodman noted that literature, while syntactically differentiated at the level of spelling, operates at the semantic level with a density that approaches the continuous. The meaning of a sentence is not exhausted by the dictionary definitions of its component words. It is shaped by connotation, by rhythm, by the position of the sentence in the larger structure, by allusion, by the specific weight of each word in its particular context. The semantic density of literary language is what makes close reading possible and rewarding — the sense that every word choice matters, that replacing "grief" with "sorrow" changes the meaning in ways that cannot be fully articulated, that the specific texture of the prose is constitutive of what the prose says.
A large language model generates text by selecting tokens from a vocabulary according to probability distributions. The selection is differentiated: at each step, one token is chosen from a discrete set. The result is text that is syntactically differentiated in the same way all text is. But the semantic dimension — the connotative weight, the rhythmic contribution, the contextual resonance of each word — is determined not by the kind of attention to infinitesimal variation that a human writer brings to word choice but by statistical patterns over the training data.
The statistical patterns capture much. The model has absorbed billions of words of text in which "grief" and "sorrow" are used in distinguishable contexts, and it can reproduce those contextual distinctions with impressive accuracy. But the reproduction is fundamentally differentiated. The model selects among discrete options. It does not move continuously through semantic space, sensing the infinitesimal variations of meaning that make one word right and its near-synonym wrong. The selection is based on probability, not on the kind of dense, context-sensitive, infinitely graduated judgment that characterizes a human writer's engagement with language.
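The discreteness of the selection can likewise be sketched in miniature. The vocabulary and the probability distribution below are hypothetical, chosen only to illustrate the shape of the choice, not any real model's internals or API:

```python
# Illustrative sketch: generation as repeated selection from a finite
# vocabulary according to a probability distribution (hypothetical values).
import random

vocabulary = ["grief", "sorrow", "mourning", "loss"]
probabilities = [0.55, 0.30, 0.10, 0.05]  # assumed, for illustration only

def next_token(rng: random.Random) -> str:
    # The choice is discrete: one token from a finite menu, weighted by
    # statistical frequency rather than graduated semantic judgment.
    return rng.choices(vocabulary, weights=probabilities, k=1)[0]

rng = random.Random(0)
print(next_token(rng))
```

However fine-grained the distribution, the outcome is always one element of a finite set: the system can weight "grief" against "sorrow", but it cannot occupy the semantic space between them.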
What Byung-Chul Han calls "the aesthetics of the smooth" — the cultural tendency toward frictionless, seamless, polished surfaces — finds its formal articulation in Goodman's distinction. Smoothness, in Goodman's terms, is what happens when a dense system is rendered through differentiated means. The infinite gradations of the dense system are mapped onto a finite grid. The result is smooth because the discontinuities have been eliminated — the gaps between characters that differentiation introduces are too small to perceive, and the resulting surface has the appearance of continuity without its substance.
The smoothness is not false. The pixels are where they are. The tokens are what they are. The rendering conforms to its specifications. But the smoothness conceals the loss — the infinite gradations of meaning that the dense system could carry and that the differentiated rendering has collapsed into discrete values. The concealment is what makes AI-generated text seductive. It reads smoothly because it is smooth — because the micro-textures of human semantic judgment, the tiny roughnesses and resistances and moments of not-quite-right that characterize the struggle of a human writer with language, have been replaced by the polished surface of probabilistic selection from a differentiated vocabulary.
The formal distinction between density and differentiation does not, by itself, constitute a judgment against AI-generated work. Goodman was not a technophobe, and his framework does not privilege the analog over the digital as a matter of principle. What the distinction does is identify with precision what is at stake in the shift from human making to machine rendering: a change in the kind of meaning the symbol system can carry, a change that is formal, structural, and irreducible to the question of whether the output looks right.
The question is not whether the output looks right. It often does. The question is whether looking right is the same as being right — whether the plausibility of the rendering is evidence of the rightness of the worldmaking or merely evidence that the differentiated system has achieved a smooth approximation of what density alone can carry. The question cannot be answered by examining the output. It can only be answered by understanding the formal properties of the symbol system through which the output was produced, and by recognizing that those formal properties constrain what the system can mean, not merely what it can display.
---
Every act of creation faces the same fundamental challenge: the translation of experiential content into symbolic form. The painter who stands before a landscape and begins to mix pigments faces it. The novelist who sits with a memory of loss and begins to search for sentences faces it. The composer who hears, internally, a pattern of tension and resolution and begins to notate it faces it. In every case, the challenge is the same: to find the symbolic resources — the colors, the words, the notes — that will construct a version of the experiential content that achieves what Goodman called rightness of rendering.
Rightness of rendering is not accuracy. This point cannot be overstated, because the persistent confusion between rightness and accuracy is the source of most errors in thinking about AI-generated creative work. An accurate rendering is one that copies its subject faithfully — that reproduces the visual appearance of the landscape, the factual details of the memory, the acoustic properties of the internal sound. Goodman demonstrated, across hundreds of pages of tightly argued prose, that accuracy in this sense is incoherent. There is no "faithful copy" of a landscape, because there is no single version of the landscape to be faithful to. The landscape as seen by the painter is already a version — organized by the painter's perceptual habits, her training, her position, the time of day, the conventions of looking that her culture has installed. To render the landscape is not to copy a pre-given reality but to construct a new version from the materials the medium provides.
Rightness, then, is not fidelity to a source but fit within a system. A right rendering is one that achieves coherence with the worldmaking project of which it is a part, that meets the standards of the symbol system within which it operates, that serves the purposes for which the version is being constructed, and that fits productively with other versions the maker and the audience accept. These criteria are demanding. They are also specific to each act of rendering — what counts as right for this painting of this landscape is not what counts as right for that painting of that landscape, because the worldmaking projects are different and the standards of rightness shift with the project.
The rendering problem, understood in this way, is the problem of finding the right rendering — the specific configuration of symbols that achieves the fit, the coherence, and the productivity that the worldmaking project demands. The problem is difficult because the criteria of rightness are not algorithmic. There is no procedure that takes experiential content as input and produces the right rendering as output. The painter cannot calculate the right color. She must find it — through trial and error, through the accumulated intuition of years of practice, through the specific resistance of the medium, which refuses certain configurations and suggests others in ways that the painter could not have anticipated before engaging the materials.
The resistance of the medium is not incidental to the rendering problem. It is constitutive. The oil paint that resists the palette knife, the sentence that refuses to cohere, the harmonic progression that collapses under its own weight — these resistances force the maker back into the experiential content, demand a reconsideration of what the version is trying to achieve, and often produce revisions that are better than the original intention. The rendering problem is not a barrier between the maker and the work. It is the cognitive space within which the work is discovered.
Discovered, not designed. This distinction matters. A designed object is specified in advance and then executed according to the specification. A discovered object emerges through interaction with the medium — through the maker's response to what the medium does when acted upon, which is often not what the maker expected. The sculptor who discovers a form in the stone she did not intend to carve — who follows a vein of marble where it leads because the stone's resistance suggested a better form than the one she planned — is discovering the rendering rather than executing it. The discovery is possible only because the medium resists, and the resistance creates a space of productive surprise that no specification can anticipate.
AI solves the rendering problem at the technical level with an efficiency that is historically unprecedented. A person who describes what they want to Claude receives, within seconds, a rendering that is technically competent — syntactically correct, structurally organized, responsive to the conventions of the medium. The code compiles. The prose reads. The argument holds together. The technical barriers that once made the rendering problem the primary obstacle to creation have been substantially eliminated.
The elimination is genuine. The technical rendering problem — finding the syntax, the grammar, the structural conventions that will express the content in a functional form — consumed the majority of most creators' working time for the entirety of human creative history. The painter who spent months mixing pigments and preparing surfaces before applying the first stroke. The programmer who spent hours debugging syntax errors before testing the logic. The writer who spent days restructuring paragraphs before evaluating the argument. In each case, the technical rendering problem consumed cognitive resources that could not be allocated to the deeper rendering problem — the question of whether this particular configuration is the right one.
But the deeper rendering problem — finding the right rendering rather than merely a competent one — is not solved by technical competence. It is not even addressed by technical competence. The deeper rendering problem is the problem of fit: of determining whether this configuration of symbols, this arrangement of words or colors or notes, achieves the specific rightness that the worldmaking project demands. The determination requires judgment, and judgment requires the kind of engagement with the material that only a purposeful worldmaker can provide.
The danger that AI poses to the rendering problem is not that it renders badly. It renders well, often better than the human could render alone. The danger is that the ease of rendering makes the deeper rendering problem invisible. When the technical problem is hard, the maker is forced to engage with the material at a granular level — to struggle with every word, every color, every structural choice. The struggle is unpleasant. It is also the cognitive space within which the deeper problem is addressed, because the struggle forces the maker to ask, at every juncture, whether this specific choice is right.
When the technical problem disappears — when the rendering arrives fluent, competent, and complete — the maker is no longer forced into the granular engagement that the deeper problem requires. The temptation is to accept the competent rendering as if it were the right one, because competence and rightness look identical from the outside. The prose reads well. The code compiles. The structure holds. The surface is smooth. And the smoothness conceals the question that only the maker can answer: Is this the right version, or merely a plausible one?
Segal identifies this temptation precisely when he describes catching himself unable to distinguish between his own thinking and Claude's polished rendering. The rendering was competent — it met the technical standards of the symbol system (correct grammar, organized argument, appropriate vocabulary). But competence is not rightness. The rendering achieved plausibility without achieving fit, because the fit between the rendering and the worldmaking project — between what the passage said and what the book needed the passage to do — could only be evaluated by the worldmaker, and the worldmaker had been lulled by the smoothness of the rendering into a momentary suspension of evaluation.
The momentary suspension is the specific danger. The rendering problem, in its deeper form, requires continuous evaluation — a relentless asking of "Is this right?" at every level of the work. The asking is exhausting. It is also the thing that makes the difference between a work that achieves rightness and a work that merely achieves competence. The ease of AI rendering makes it tempting to relax the asking, to accept competence as a proxy for rightness, to let the smoothness of the output substitute for the judgment that only the human worldmaker can provide.
Goodman's framework identifies what is lost when the asking relaxes. Rightness of rendering is not a single property. It is a complex of properties — coherence, fit, productivity, responsiveness to standards — that must be satisfied simultaneously. A rendering that achieves some of these properties while failing others can appear right without being right, in the same way that a well-constructed forgery can appear authentic without being authentic. The appearance of rightness is what makes the failure dangerous, because it bypasses the evaluative mechanisms that would catch the failure if the failure were visible.
The rendering problem has not been solved. It has been split into two problems: a technical problem that AI solves with extraordinary efficiency, and a deeper problem that AI's efficiency makes harder to see. The technical solution is a genuine achievement. The obscuring of the deeper problem is a genuine risk. The task for the human worldmaker — the task that no rendering engine can perform — is to maintain the asking in the face of smooth competence, to insist on rightness when plausibility is offered, to remember that the rendering problem was never really about finding the words.
It was about finding the right words. And the difference between the words and the right words is the distance between rendering and worldmaking.

---
The history of tools is a history of rendering engines. The chisel renders the sculptor's intention in stone. The printing press renders the author's text in ink and paper. The compiler renders the programmer's algorithm in machine code. In each case, the tool mediates between an intention and its realization — between what the maker conceives and what the world receives. The tool does not originate the intention. It translates the intention into a medium that can carry it beyond the maker's private experience and into the shared symbolic space of a culture.
Nelson Goodman did not write about tools as such. His concern was with symbol systems, not with the mechanical means by which symbols are produced. But the framework he built — the theory of worldmaking, the analysis of rightness, the distinction between dense and differentiated systems — generates a precise account of what tools do and do not contribute to the creative process. Tools contribute rendering. They do not contribute worldmaking. The distinction is Goodman's most consequential gift to the present moment, and it is the distinction that the age of AI makes it most tempting to forget.
Rendering, in Goodman's framework, is the production of symbols — the physical or digital instantiation of the marks, words, sounds, or images through which a version of reality is communicated. Rendering is necessary. Without rendering, the version remains private — a thought in the maker's head, unavailable to anyone else, unable to function as a symbol system because symbol systems require public symbols. Rendering is also, in itself, insufficient. The symbols must be deployed within a worldmaking project — organized by a scheme-content relation that establishes what the symbols refer to, exemplify, and express. Rendering provides the symbols. Worldmaking provides their significance.
The historical evolution of rendering engines traces a consistent trajectory: each new engine reduces the friction between intention and instantiation, making it possible to produce symbols more quickly, more cheaply, and with less specialized skill. The printing press reduced the friction of text reproduction from months of monastic labor to hours of mechanical operation. The camera reduced the friction of pictorial representation from years of training in drawing and painting to the press of a button. The word processor reduced the friction of text revision from physical cutting and pasting to keystrokes. Each reduction expanded the population of people who could render — who could produce symbols in the medium — and each expansion raised the same question that the present moment raises with unprecedented urgency: When the rendering barrier drops, what happens to the quality of the worldmaking?
The question has a historical answer, and the answer is neither as optimistic as the triumphalists claim nor as pessimistic as the elegists fear. When the printing press made text reproduction cheap, the quantity of published text exploded and the average quality declined — more pamphlets, more propaganda, more forgettable verse. But the press also made possible the scientific revolution, the Enlightenment, the novel as a literary form, and every other cultural achievement that depends on the wide distribution of complex symbolic material. The average quality declined. The best work improved, because the reduction in rendering friction freed the most capable worldmakers to focus their cognitive resources on the worldmaking rather than on the rendering.
The pattern is consistent across rendering revolutions. The camera democratized pictorial representation and produced mountains of unremarkable snapshots. It also made possible photojournalism, documentary film, and the specific visual literacy of the twentieth century. The word processor democratized text revision and produced an ocean of mediocre prose. It also made possible the kind of iterative, exploratory writing that produces the best contemporary nonfiction — the ability to restructure an argument fifteen times in a morning, testing configurations that the physical labor of retyping would have made prohibitive.
Each revolution floods the cultural landscape with low-quality rendering and simultaneously creates the conditions for higher-quality worldmaking. The two effects are not contradictory. They are consequences of the same structural change: the reduction of rendering friction, which allows more people to render (increasing quantity and decreasing average quality) while freeing the most capable worldmakers from rendering labor (enabling higher-quality worldmaking at the top).
AI is the most powerful rendering engine in the history of human tool use. The claim is not hyperbolic. No previous tool has been able to render human intention across every symbolic medium — prose, code, image, music, argument, narrative — with the competence and speed that large language models and their multimodal successors now achieve. The rendering barrier has not merely been lowered. For a significant class of creative work, it has been functionally eliminated. The imagination-to-artifact ratio that Segal describes in *The Orange Pill* — the distance between a human idea and its realization — has collapsed to the duration of a conversation.
The collapse is a rendering revolution of a different order from its predecessors, because previous rendering revolutions operated within a single medium. The printing press rendered text. The camera rendered images. The synthesizer rendered sound. Each revolution lowered the barrier within its specific symbolic domain while leaving the barriers in other domains intact. A person who could render text cheaply still needed specialized skills to render images, code, or music. The multi-domain rendering barrier — the obstacle of producing symbols across multiple media — remained formidable.
AI dissolves the multi-domain barrier. A person who can describe their intention in natural language can now produce working software, visual compositions, musical arrangements, and structured arguments, not sequentially but simultaneously, within a single conversation. The person who was previously a text-renderer can now render across every domain the model supports. The expansion of rendering capacity is not incremental. It is categorical.
Goodman's framework identifies what this categorical expansion does and does not change. What it changes is the population of people who can render — who can produce symbols in any given medium with sufficient technical competence to function in the public symbolic space. The population expands enormously, because the specialized skills that previously gated entry to each medium are no longer required. A person who cannot paint can produce images. A person who cannot code can produce software. A person who cannot compose can produce music. The rendering barrier has been equalized across media and across skill levels.
What the expansion does not change is the worldmaking. The images, software, music, and prose that the rendering engine produces are versions — constructions of reality through symbolic means. The versions require the same properties they have always required: coherence, fit with other accepted versions, productivity of understanding, responsiveness to the standards of the symbol system. These properties cannot be rendered into existence. They must be worldmade — constructed through the deliberate selection, organization, and emphasis that Goodman identified as the procedures of worldmaking.
The rendering engine does not select. It generates. The rendering engine does not organize according to purposes. It organizes according to patterns. The rendering engine does not emphasize according to judgment. It distributes emphasis according to statistical regularities in its training data. The difference between generating and selecting, between pattern and purpose, between statistical regularity and judgment, is the difference between rendering and worldmaking. The rendering engine has made rendering trivially easy. It has made worldmaking no easier at all. If anything, it has made worldmaking harder, because the ease of rendering creates a constant temptation to accept the rendered version as if it were a worldmade one — to mistake the technically competent output for the right one.
The twelve-year-old who asks her mother "What am I for?" is asking a worldmaking question. She is asking what version of reality she should construct — what kind of life, what kind of contribution, what kind of significance. The rendering engine cannot answer this question, not because the question is too hard for it, but because the question is not a rendering problem. It is a worldmaking problem, and worldmaking problems require a worldmaker: an agent with purposes, with stakes in the world, with criteria of rightness that are grounded in lived experience and evaluated by judgment.
The human is for the worldmaking. The machine is for the rendering. Goodman's framework articulates this division with a precision that no other philosophical vocabulary quite achieves. The division is not a hierarchy of value — rendering is necessary, and good rendering serves worldmaking in ways that are not merely mechanical. The performer who brings interpretive genius to a Beethoven score contributes something genuine to the work. The rendering engine that produces a beautiful image from a human's description contributes something genuine to the human's project. The contributions are real.
But the contributions are contributions to a project that the rendering engine did not conceive, toward standards of rightness that the rendering engine did not establish, in service of purposes that the rendering engine does not hold. The project, the standards, and the purposes are the worldmaker's. The rendering engine serves them. And the question of whether the rendering serves them well — whether the output is right, not merely competent — is the question that only the worldmaker can answer.
The rendering problem was always dual. The technical rendering problem — how to produce the symbols — was what consumed most of the maker's time and energy for most of human creative history. The deeper rendering problem — whether these particular symbols are the right ones for this particular worldmaking project — was always there, but it was masked by the technical difficulty. When you are struggling to mix the right pigment, you do not have the cognitive resources to ask whether this painting should exist. When you are debugging a syntax error, you do not have the bandwidth to ask whether this software serves a genuine need. The technical problem consumed the space that the deeper problem required.
AI has eliminated the technical problem. The space is now open. And the deeper rendering problem — the worldmaking problem, the question of rightness — stands exposed, demanding attention that the technical problem's noise once drowned out. The question is whether the humans who now have the space will use it for worldmaking or whether they will fill it with more rendering — more technically competent output that nobody evaluated for rightness, more versions that nobody chose to construct, more symbols that nobody deployed within a worldmaking project.
Goodman's answer would be characteristically austere. Whether the space is used for worldmaking or filled with rendering is not determined by the technology. It is determined by the worldmakers — by their willingness to maintain the distinction between competence and rightness, between plausibility and fit, between having it said well and having something to say. The rendering engine provides the former in each pair. The worldmaker provides the latter. And the distance between them — the distance between the words and the right words, between a version and the version — is the distance that defines human creative contribution in the age of machines that can produce any symbol in any medium at any time.
The distance is not closing. The rendering engine can approach the surface of rightness with ever-greater fidelity. It can produce outputs that look, read, and sound increasingly like the product of genuine worldmaking. But the surface of rightness and rightness itself are not the same thing, in the same way that a perfect forgery and an original are not the same thing in an autographic art. The surface is rendering. The substance is worldmaking. And the substance is, as it has always been, the exclusive province of agents who inhabit the world they are making — who have purposes that arise from experience, criteria that arise from judgment, and stakes that arise from the simple, irreducible fact of being alive.
---
There is a kind of knowledge that resists extraction. It cannot be stated as a proposition, catalogued in a database, or transmitted as information from one system to another without fundamental loss. It is the knowledge that a Vermeer interior provides about the quality of light falling through a window onto a woman's face — not the physics of light (the scientist handles that in a different symbol system) but the experience of light, the way light constitutes a space, inhabits a silence, makes visible the passage of an afternoon. It is the knowledge that a Beethoven late quartet provides about the structure of human grief — not a description of grief (the psychologist provides that) but the temporal experience of grief, the way it builds and recedes and builds again, the way resolution arrives and proves insufficient and the music continues past the point where resolution was supposed to hold.
Goodman spent his career arguing that this kind of knowledge is genuine. Not a lesser form of knowledge that approximates what propositions capture more precisely. Not an emotional response dressed up as cognition. Genuine knowledge — understanding of the world that is achieved through the specific resources of a symbol system and that is available only through those resources. The painting does not provide a less precise version of what a proposition about light could state more precisely. It provides a different kind of understanding, one that propositions cannot capture, because the understanding is constituted by the formal properties of the symbol system — the density, the exemplification, the specific deployment of color and composition — that no propositional translation can reproduce.
The irreducibility thesis is Goodman's most contested and most consequential claim. It is contested because it challenges the assumption, deep in the foundations of Western philosophy, that propositional knowledge is the paradigmatic form of knowledge and that all other forms are either reducible to propositions or not genuinely knowledge at all. It is consequential because, if true, it means that the arts are not merely valuable cultural activities but irreplaceable cognitive enterprises — ways of knowing the world that nothing else, no scientific description, no philosophical argument, no technological rendering, can substitute for.
The consequence for the age of AI is immediate. If artistic knowledge is irreducible — if what a painting knows cannot be extracted from the painting and restated in another form without becoming a different kind of knowledge — then the question of whether AI can produce art is not a question about the quality of the output. It is a question about the kind of knowledge the output carries. And this question cannot be answered by comparing the AI's output to human art at the level of surface properties. It must be answered at the level of cognitive function — at the level of what the work does for the viewer, reader, or listener who engages with it.
Goodman identified four primary symbolic functions through which art achieves its cognitive work: denotation, exemplification, expression, and reference through complex chains of these basic functions. A painting denotes its subject — refers to a landscape, a face, a bowl of fruit. It exemplifies certain properties — a quality of color, a density of texture, a rhythmic arrangement of forms. It expresses, through metaphorical exemplification, qualities it metaphorically possesses — sadness, exuberance, tension, calm. And it achieves more complex cognitive effects through chains of reference that link these basic functions in configurations that produce understanding exceeding what any single function could achieve.
The cognitive work is not automatic. It requires active engagement from the viewer — a kind of attention that the dense symbol system demands, as argued in the preceding chapter. The viewer must attend to the specific properties the painting exemplifies, perceive the metaphorical qualities it expresses, follow the chains of reference that link denotation, exemplification, and expression into a coherent cognitive structure. The engagement is skilled. It develops through practice. And it yields understanding that is not available to anyone who has not engaged the work with the specific kind of attention it demands.
This account of artistic cognition has a structural implication for AI-generated work that is more subtle than the question of whether the output is "good enough." The implication concerns the conditions under which the cognitive functions operate.
Denotation operates when the symbols refer to things in the world according to established conventions. AI-generated images denote — they refer to landscapes, faces, objects — through the same pictorial conventions that human-made images employ. The denotation functions regardless of the origin of the image. Exemplification operates when the symbols possess and highlight certain properties. AI-generated text can exemplify syntactic compression, rhythmic variety, structural elegance — it can possess these properties and direct attention to them. The exemplification functions regardless of the origin of the text.
Expression is where the analysis becomes difficult. Expression, in Goodman's framework, is metaphorical exemplification: the work metaphorically possesses a property (sadness, tension, exuberance) and directs attention to that property. The metaphorical possession is grounded in a transfer — the property belongs literally to one domain (human emotional experience) and is transferred to the work, which possesses it metaphorically. A grey painting is not literally sad. It is metaphorically sad — it possesses sadness as a metaphorical property, transferred from the domain of human experience to the domain of pictorial properties through a convention that associates certain colors, compositions, and densities with certain emotional qualities.
The transfer depends on the existence of both domains. The literal domain — human emotional experience — provides the property to be transferred. The metaphorical domain — the pictorial symbol system — receives it. The expression works because the viewer recognizes the transfer, perceives the metaphorical possession, and achieves understanding of the emotional quality through the specific visual means by which it is expressed.
AI-generated work can produce outputs that trigger the same perceptual responses that human-made art triggers. A diffusion model can generate an image with the grey tonalities, the compositional weight, the spatial emptiness that convention associates with sadness. The viewer may perceive the image as expressing sadness. The perceptual response is genuine — the viewer really does experience the image as sad.
But Goodman's framework asks a question that the perceptual response alone does not answer: Is the expression right? Does the metaphorical possession of sadness achieve the fit between the visual properties and the emotional quality that constitutes genuine expression? Or does the image merely simulate expression — producing the surface properties that convention associates with sadness without achieving the specific configuration that makes the expression this expression of this sadness, responsive to this worldmaking project?
The question cannot be answered by examining the image in isolation. It can only be answered in relation to the worldmaking project of which the image is a part. A human artist who constructs a grey, weighted, spatially empty image as part of a sustained engagement with the experience of loss has embedded the expression in a worldmaking project that gives it specificity. The sadness expressed is not generic sadness. It is this artist's sadness — configured by this biography, refined by this sustained engagement with the medium, responsive to the standards of rightness that this particular project establishes. The expression achieves rightness because it fits the project. The project is what gives the expression its cognitive value — what makes the viewer's engagement with the image a form of understanding rather than merely a form of response.
An AI-generated image with identical visual properties has no worldmaking project. It has a prompt, which specifies certain features of the desired output, and a probability distribution, which determines how the unspecified features are filled. The image may trigger the same perceptual response. But the expression, if it is expression at all, floats free of the kind of project-specific rightness that gives human expression its cognitive value. It is sadness without a source. Grief without a griever. The metaphorical transfer operates at the conventional level — the visual properties map onto the emotional quality through established associations — but the rightness of the expression, the specific fit between these visual properties and this emotional content, is absent, because there is no "this" to be right about.
The irreducibility thesis holds here with particular force. If artistic knowledge is constituted by the specific deployment of symbols within a worldmaking project, and if the cognitive value of the work depends on the rightness of the deployment — on its fit with the project, its responsiveness to the project's standards, its productivity of understanding that no other deployment could achieve — then the knowledge that art provides is not separable from the worldmaking that produced it. An AI-generated work that replicates every surface property of a human-made work may carry different knowledge, or less knowledge, or no knowledge at all, depending on whether the surface properties are embedded in a worldmaking project that gives them their significance.
The argument does not disqualify AI-generated work from the category of art. Goodman's framework is functional, not genetic. If an AI-generated work functions as art — if it denotes, exemplifies, and expresses through structured symbolic means, and if the cognitive engagement it demands yields understanding — then it is art, regardless of its origin. But the framework makes functioning as art contingent on the presence of the worldmaking that gives symbolic function its significance, and the significance of the worldmaking is grounded in something that AI does not currently possess: the experience of inhabiting the world the work constructs.
A human being who makes art about grief has experienced grief. The experience is not sufficient for art — most grieving people do not produce art, and the grief of the artist is not the same thing as the artistic knowledge the work provides. But the experience is necessary, in the sense that it provides the literal domain from which the metaphorical transfer originates. Without the experience of grief, the expression of grief is empty convention — the deployment of visual or linguistic properties that convention associates with grief, without the specific experiential content that gives the expression its claim to rightness.
AI does not experience grief. It processes tokens that human beings produced while experiencing grief, while writing about grief, while constructing versions of grief in every symbolic medium available to the culture. The processing is extraordinarily sophisticated. The patterns extracted from the training data capture the full range of conventional associations between symbolic properties and emotional qualities. But the patterns are extracted from human experience. They are not generated by the machine's own experience, because the machine has no experience in the relevant sense — no embodied existence, no stakes in the world, no finitude that makes the loss to which grief responds genuinely, irreversibly consequential.
Goodman's mature philosophy does not explicitly address the question of whether consciousness or experience is required for worldmaking. His nominalistic temperament inclined him away from metaphysical claims about what kinds of agents can or cannot perform cognitive operations. But his framework generates the requirement implicitly, through the concept of rightness. Rightness of rendering is not a formal property that can be checked against a specification. It is a relation of fit between the rendering and its purposes, and purposes arise from agents who have reasons to construct one version of reality rather than another. The reasons are grounded in experience — in the worldmaker's specific, biographical, embodied engagement with the world being rendered. Without the engagement, the rendering may achieve formal coherence. It will not achieve rightness, because there is nothing for it to be right about.
The irreducibility of artistic knowledge, then, is ultimately an irreducibility of experience. What art knows — the understanding it provides through the specific resources of its symbol system — is grounded in the experience of a being who inhabits the world the art constructs. The inhabiting is not optional. It is constitutive. The art knows what it knows because the worldmaker lived what the art renders. The rendering engine can reproduce the symbols. It cannot reproduce the living. And the distance between the symbols and the living is the distance that defines the irreducible contribution of human worldmaking in the age of machines that can render anything.
This distance does not close as the rendering engines improve. Greater rendering capacity does not produce greater worldmaking capacity, any more than a better piano produces a better composition. The capacity of the instrument is one thing. The capacity of the musician is another. And the capacity that matters — the capacity that produces knowledge rather than mere output — is the capacity of the agent who has purposes, who inhabits a world, who knows what grief is because grief has been suffered, and who deploys the symbols not because the probability distribution suggests them but because the experience demands them.
The demand of experience upon expression is what art is. No rendering engine satisfies that demand. No rendering engine can. The demand is made by the living on the symbols, and the symbols respond — when the worldmaking is right — with understanding that no other means of knowing can provide. The understanding is irreducible. It lives in the specific conjunction of this experience, this medium, this deployment of symbols within this worldmaking project. Remove any element and the understanding changes. Replace the worldmaker with a rendering engine and the symbols remain. The knowledge is gone.
---
One night at three in the morning over the Atlantic, deep into a draft of *The Orange Pill*, I wrote a passage I thought was mine. The sentences moved with a clarity I rarely achieve on my own — the kind of prose where each claim locks into the next with the satisfying precision of something built rather than assembled. I read it twice, felt the flush of having articulated something true, and moved on.
The next morning I could not determine whether I had written those sentences or Claude had. Not in the trivial sense — I knew the collaboration had produced them, the back-and-forth of intention and rendering that I describe throughout the book. What I could not determine was subtler: whether the thinking in those sentences was mine. Whether the configuration — Goodman's word, and I cannot let go of it now — belonged to my worldmaking project or to a pattern the machine had assembled from its training data and offered me with enough surface rightness that I mistook it for my own.
That is the question Goodman's philosophy forced me to sit with, and it is the question I cannot answer cleanly.
Goodman was not interested in making artists feel comfortable. He was interested in the formal structure of symbol systems — in what makes one rendering right and another merely plausible, in the distinction between a version that achieves genuine understanding and one that simulates the appearance of understanding with enough competence to fool even the worldmaker who commissioned it. His vocabulary is precise where mine is intuitive. Where I reach for the feeling, he reaches for the function. And the function, examined with the rigor he demanded, reveals something I had sensed but could not articulate: the ease of rendering is not the same as the quality of worldmaking. A fluent passage is not a right passage. A coherent argument is not a true argument. And the smoothness of the output — the word keeps returning — can conceal the absence of the very thing that makes the output worth having.
What I take from Goodman is not a prohibition. He was not telling me to stop using the machine. He was telling me — with the austere precision of a philosopher who spent his life studying what representations do and how they do it — that the rendering engine changes the economics of creation without changing the epistemics. It is now trivially easy to produce symbols. It remains exactly as difficult as it has always been to produce symbols that carry genuine knowledge, that achieve rightness, that construct versions of reality worth inhabiting.
The twelve-year-old at the dinner table, asking what she is for — she is for the worldmaking. Not the rendering. The rendering is handled. The worldmaking is what demands a being who has lived in the world she is constructing, who has stakes in its outcome, who knows what grief and wonder and the weight of three in the morning feel like because she has felt them, and who deploys the symbols not because the probability distribution suggests them but because the experience insists.
The machine renders. The human worldmakes. The distinction sounds simple. Living inside it, maintaining it against the constant seduction of smooth output and easy competence, is the hardest discipline I have encountered in thirty years of building. It is also, I now believe, the only discipline that matters.
— Edo Segal
AI can now produce any symbol in any medium — code, prose, image, music — with a competence that passes every surface test. Nelson Goodman spent his career proving that passing the surface test is not enough. His philosophy draws a razor-sharp line between output that looks right and output that is right, between representations that simulate understanding and those that genuinely construct it. In this volume of *The Orange Pill* series, Goodman's frameworks — worldmaking, rightness of rendering, the distinction between dense and differentiated symbol systems — become diagnostic instruments for the most urgent creative question of our time: when the machine can render anything, what is the human actually for? The answer is more demanding than comfort allows and more hopeful than cynicism permits.

A reading-companion catalog of the 28 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Nelson Goodman — On AI* uses as stepping stones for thinking through the AI revolution.