By Edo Segal
The paragraph I almost kept was the one that should have frightened me most.
Not because it was wrong. Because it was beautiful. Claude had produced a passage connecting two ideas I'd been circling for weeks, and the connection was so elegant, so perfectly weighted, that I read it three times with the specific pleasure of encountering something true. The prose shimmered. The logic held. I moved on to the next section.
Then something caught. Not an error I could name. A texture. The passage was too smooth. It accommodated my thinking so perfectly that it had never once pushed back against it. It had given me exactly what I wanted, in exactly the form I wanted it, and the giving had been so seamless that I'd mistaken reception for discovery.
I deleted the paragraph and spent two hours finding the rougher version that was actually mine. The rougher version was worse by every measurable standard. It was also true in a way the smooth one was not.
That experience — the inability to distinguish, for a dangerous moment, between a thought I'd earned and a thought I'd been handed — is the experience Susan Sontag spent her career preparing us for. She just didn't know the machine would be the one handing it to us.
Sontag built a framework for detecting the difference between encounter and consumption, between the genuine and the merely plausible, between form that bears the weight of struggle and form that arrives pre-polished and empty. She did this across photography, illness, war, Camp, style — every domain where surfaces threaten to replace the realities beneath them. She was not anti-surface. She was ferociously attentive to the moment when a culture stops being able to tell the surface from the substance.
We are in that moment now. Not because AI produces bad work. Because it produces work so formally competent that the competence itself becomes the camouflage. The prose sounds like thought. The connection sounds like insight. The argument sounds like conviction. And the person receiving it — the builder, the student, the leader — must develop the perceptual discipline to feel the difference before they can articulate it.
That discipline is what Sontag spent forty years sharpening. This book applies her lens to the world she never saw but diagnosed in advance. If *The Orange Pill* is about what the amplifier makes possible, this companion volume is about what the amplifier makes invisible — and why learning to see it again is the most urgent skill of our time.
— Edo Segal × Opus 4.6
Susan Sontag (1933–2004) was an American essayist, novelist, critic, and public intellectual whose work reshaped how the modern world thinks about art, photography, illness, and the moral obligations of attention. Born in New York City and raised in Tucson and Los Angeles, she studied at the University of Chicago, Harvard, and Oxford before establishing herself as one of the most influential cultural critics of the twentieth century. Her landmark essay "Against Interpretation" (1964) argued that the Western obsession with extracting meaning from art impoverishes the experience of encountering it, calling instead for "an erotics of art" — an attention to form, surface, and sensory experience before content. *On Photography* (1977) anatomized how the proliferation of images reshapes perception and dulls moral response. *Illness as Metaphor* (1978), written during her own battle with breast cancer, attacked the cultural narratives imposed on disease. *Regarding the Pain of Others* (2003) reconsidered representation, spectatorship, and the limits of compassion in a media-saturated world. Across her career, Sontag insisted on the primacy of direct encounter over interpretation, the irreducibility of form to content, and the moral seriousness required to distinguish genuine experience from its simulation — concerns that have gained extraordinary urgency in the age of AI-generated content.
The first instinct, upon encountering a new technology, is to ask what it means. This is the reflex of a culture addicted to interpretation — the compulsive translation of every phenomenon into a message, a lesson, a set of implications that can be extracted, debated, and filed away. Artificial intelligence arrived in the public consciousness trailing clouds of interpretation. It means the end of work. It means the liberation of creativity. It means the obsolescence of the human mind. It means the democratization of capability. It means catastrophe; it means salvation; it means, depending on which commentator one consults, everything and its opposite simultaneously.
Susan Sontag spent the better part of her intellectual life arguing that this reflex — the interpretive reflex, the instinct to ask "what does it mean?" before attending to what it is — constitutes the deepest impoverishment of modern culture. In "Against Interpretation," the essay that made her reputation and established the central axis of her thought, she argued that interpretation is the revenge of the intellect upon art. It is the mechanism by which a culture that prizes content over form, meaning over experience, converts every encounter with a work into a transaction: the work means X, and now that I know X, I can move on. I have consumed the work's significance without having been changed by the encounter itself.
The argument was never anti-intellectual. Sontag was among the most formidable intellects of the twentieth century, and she knew it, and she did not pretend otherwise. The argument was against a specific deployment of intellect — the deployment that interposes a screen of meaning between the audience and the work, that translates the sensory, formal, textural qualities of the work into a paraphrase that can be carried away like a souvenir. Interpretation, so deployed, is the compliment that mediocrity pays to genius. It makes the difficult manageable. It makes the strange familiar. It converts the unsettling encounter into a comfortable extraction.
This is precisely what the discourse has done to artificial intelligence.
The AI that arrived in the winter of 2025 — the AI that Edo Segal describes in *The Orange Pill* as crossing a threshold, producing a phase transition, collapsing the distance between human intention and machine execution — was, before it was anything else, an experience. The Google principal engineer who posted "I am not joking, and this isn't funny" was not offering an interpretation. She was reporting an encounter. Something had struck her, something she could not yet reduce to meaning, and the force of the encounter was legible in the rawness of her language. She had not yet asked what it meant. She was still inside what it was.
Within days, the interpretation machine engulfed her testimony. The encounter was converted into evidence for this or that position — AI is dangerous, AI is liberating, AI is overhyped, AI will destroy software, AI will save it. Each interpretation was a way of not experiencing what she had experienced. Each one domesticated the encounter, made it manageable, filed it under a category that already existed. The technology that had been, for one brief moment, genuinely new — genuinely resistant to existing categories — was processed into content before anyone had time to attend to its form.
Sontag would have recognized this process instantly. It is the same process she observed operating on the works of Kafka, which were reduced to allegories of bureaucracy and alienation; on Ionesco's plays, translated into commentaries on the meaninglessness of modern life; on the paintings of the Abstract Expressionists, interpreted as expressions of existential anxiety. In each case, the interpretation was plausible. In each case, it destroyed the work's specific power — the power that resided not in what it meant but in how it worked, in the particular formal arrangements that produced a particular quality of experience in the viewer or reader.
The AI moment has been subjected to interpretation more rapidly and more thoroughly than any cultural phenomenon in history, partly because the tools of interpretation — the think pieces, the Twitter threads, the podcast episodes, the instant analyses — are themselves accelerated by the technology being interpreted. The machine produces the phenomenon. The machine accelerates the interpretation of the phenomenon. And the interpretation arrives before anyone has had time for the pre-interpretive encounter that Sontag argued was the only honest starting point.
What would it mean to attend to AI output before interpreting it? What would it mean to practice, in Sontag's phrase, "an erotics of art" in the domain of AI-augmented creation?
It would mean something very specific. It would mean looking at the output the way Sontag wanted readers to look at a work of art: attending to its form, its texture, its surface qualities, before asking what it means or whether it is good or whether it implies anything about the future of human creativity. It would mean noticing that AI-generated prose has a particular quality — a fluency, a coherence, a smoothness of surface — that is not the same as the quality of prose produced through genuine struggle. It would mean sitting with that difference before deciding what to make of it.
Segal describes this practice without naming it as Sontagian. In Chapter 7 of *The Orange Pill*, he recounts a passage in which Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Deleuze. The passage was elegant. It connected two threads beautifully. It sounded like insight. The next morning, something nagged. Segal checked. The philosophical reference was wrong in a way that would be obvious to anyone who had actually read Deleuze.
The passage worked rhetorically. It failed philosophically. And the gap between those two things — between the formal elegance and the substantive emptiness — is precisely the gap Sontag spent her career analyzing. Interpretation, in its worst form, is the acceptance of rhetoric as substance. It is the mistake of believing that because something sounds like it means something, it does mean something. It is the confusion of the formal properties of insight — the compression, the elegance, the surprise of the connection — with insight itself.
AI is, by its nature, a machine for producing the formal properties of thought without the substance of thought. This is not a moral failing of the technology. It is a description of the technology's operating principle. A large language model produces sequences of tokens that are statistically likely given the preceding context. When the context is a request for insight, the model produces tokens that have the statistical shape of insight — the compressed formulations, the surprising connections, the confident declarations that characterize genuine intellectual breakthrough. The shape is there. The substance may or may not be.
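The operating principle described above can be caricatured in a few lines of code. The following toy bigram sampler is purely illustrative — the vocabulary, probabilities, and function names are invented for this sketch, and a real large language model conditions on the entire context with a neural network rather than a lookup table — but the generative step is the same in kind: each next token is drawn from a probability distribution over likely continuations, with no mechanism anywhere that checks whether the resulting sequence is true.

```python
import random

# Toy "model": for each preceding token, a probability distribution
# over possible next tokens. All values here are invented for
# illustration; nothing is learned from data.
NEXT_TOKEN = {
    "genuine":  [("insight", 0.6), ("struggle", 0.4)],
    "insight":  [("emerges", 0.7), ("sounds", 0.3)],
    "struggle": [("emerges", 0.5), ("deposits", 0.5)],
}

def generate(start, n_tokens, seed=0):
    """Generate text by repeatedly sampling a statistically likely
    next token. Fluency falls out of the statistics; truth does not."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        options = NEXT_TOKEN.get(out[-1])
        if options is None:  # no known continuation: stop
            break
        tokens, weights = zip(*options)
        out.append(rng.choices(tokens, weights=weights, k=1)[0])
    return " ".join(out)

print(generate("genuine", 2))
```

The point of the sketch is the essay's point: the sampler will reliably produce sequences with the statistical shape of its training material, and nothing in the loop distinguishes a sequence that happens to be true from one that merely sounds like it.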
Sontag's discipline — the discipline of attending to form before content, of noticing the work before interpreting the work — becomes, in the AI context, the discipline of attending to the difference between the form of thought and the fact of thought. The prose sounds like it is saying something. Is it? The connection sounds surprising. Is it earned? The argument sounds coherent. Does it hold under pressure?
These questions cannot be answered by interpretation alone. They require what Sontag called transparency — "experiencing the luminousness of the thing in itself, of things being what they are." The transparency she sought in art is the transparency that AI-augmented work demands: the capacity to see the output for what it is, rather than for what it appears to be. The capacity to perceive the difference between the luminous thing and the thing that merely simulates luminousness.
This is more difficult than it sounds, and it is more difficult with AI output than with any previous form of cultural production, for a reason that Sontag could not have anticipated but that her framework illuminates perfectly. Traditional interpretation domesticates a work that begins as genuinely foreign — a work that has its own internal logic, its own resistance, its own demands on the viewer that the viewer did not anticipate. The interpreter reduces this foreignness to familiarity. AI output begins already domesticated. It has no internal logic independent of the prompt. It has no resistance to the user's expectations. It is, from the moment of its generation, a product designed for consumption — fluent, coherent, accommodating, optimized to satisfy the request that generated it.
There is nothing to domesticate, because the output was never wild.
This is why the conventional critical vocabulary fails when applied to AI-generated text. The vocabulary assumes that the work exists independently of the viewer and that the task of the critic is to mediate between the work's independence and the viewer's understanding. AI output does not exist independently of the prompt. It is a response, not a creation. It is a service, not a statement. And the critical task is therefore not to interpret the output — not to ask what it means — but to evaluate the quality of the encounter between the human's intention and the machine's response. Did the encounter produce something that neither party could have produced alone? Did it generate genuine surprise, or merely the appearance of surprise? Did it advance the human's thinking, or merely flatter the human's assumptions?
Segal's Deleuze moment is exemplary because the encounter failed — and the failure was instructive. The passage looked like a successful collision between two ideas. It was, in fact, a statistical confection, a token sequence that had the formal properties of insight without the substrate. And Segal caught it, not through interpretation — not by asking what the passage meant — but through attention to the passage's relationship to reality. He noticed, the morning after, that something did not sit right. The quality of the encounter was off. The luminousness was artificial.
This is the practice Sontag's framework demands. Not the rejection of AI output. Not the suspicious interrogation of every sentence for hidden errors. Something subtler and more difficult: the cultivation of a sensibility attuned to the difference between genuine and simulated encounter. A sensibility that can feel, before it can articulate, the difference between prose that earned its elegance through struggle and prose that arrived elegant by default.
Sontag wrote, in the final line of "Against Interpretation": "In place of a hermeneutics we need an erotics of art." The formulation is deliberately provocative, and its provocative quality has sometimes obscured its precision. An erotics of art is not a celebration of sensory pleasure at the expense of intellectual rigor. It is a reorientation of attention — from the meaning of the work to the experience of the work, from the content that can be extracted to the form that must be encountered. It is the insistence that the encounter matters more than the extraction, that being in the presence of the work is more important than carrying away a summary of what the work means.
An erotics of AI-augmented thought would be analogous. It would attend to the quality of the collaboration rather than merely to the quality of the output. It would ask not "Is this paragraph correct?" but "Did the process of producing this paragraph teach me something?" It would notice the difference between the session that left the builder energized, curious, and slightly disoriented — which is the signature of genuine intellectual encounter — and the session that left the builder satisfied but unchanged, which is the signature of consumption.
The distinction is not academic. It is the distinction that determines whether AI becomes a tool for deepening human thought or a mechanism for replacing it with something that looks identical from the outside and is hollow within. The interpretive reflex — the compulsive rush to meaning — makes the distinction invisible, because both processes produce output, and both outputs are interpretable, and interpretation cannot tell them apart. Only attention can. Only the willingness to sit with the output before deciding what it means, to attend to the encounter before extracting the conclusion, to practice the erotics of genuine thought rather than the hermeneutics of plausible content.
Sontag's demand was never comfortable. It required the reader to give up the security of meaning, to remain in a state of uncertainty that interpretation is designed to abolish. In the context of AI, the demand is sharper still, because the uncertainty is not about what the work means but about whether the work exists at all — whether the elegant paragraph on the screen represents genuine thought or its statistical simulation, and whether the person reading it has the perceptual acuity to tell the difference.
That perceptual acuity is not given. It is cultivated. And its cultivation begins with the willingness to slow down, to resist the interpretive reflex, to attend to the encounter before reaching for the meaning.
Everything that follows in this book depends on that discipline.
---
There is a pleasure in thinking that cannot be reduced to its results. The pleasure is not in the conclusion reached but in the reaching — the groping, the failing, the sudden illumination that arrives not as a reward for effort but as an eruption from within the effort itself. Anyone who has genuinely thought — not merely processed information, not merely recombined existing ideas in a new arrangement, but actually wrestled with a problem until the problem yielded something unexpected — knows this pleasure. It is bodily. It is specific. It cannot be outsourced.
Susan Sontag understood this pleasure as the central fact of intellectual life. Her journals, published posthumously in *Reborn* and *As Consciousness Is Harnessed to Flesh*, reveal a mind in constant, uncomfortable motion — not the motion of productivity, which advances linearly toward a goal, but the motion of genuine inquiry, which doubles back, contradicts itself, arrives at positions it immediately begins to dismantle. The journals are not smooth. They are rough, discontinuous, marked by reversals and confessions of uncertainty that the published essays, for all their brilliance, largely conceal. The roughness is the record of genuine thought. The smoothness of the published work is the form imposed upon that thought — necessary, artful, but secondary to the process that produced it.
The distinction between the roughness of thinking and the smoothness of the thought's final expression is the distinction that AI collapses.
A large language model does not think. This is not a controversial claim — it is a description of the technology's architecture. The model produces sequences of tokens according to probability distributions derived from training data. The sequences can be extraordinary in their coherence, range, and apparent insight. They can produce connections that surprise the human interlocutor, suggest structures that the human had not considered, generate formulations of remarkable compression and elegance. None of this constitutes thinking, because thinking requires the struggle that produces the formulation, not merely the formulation itself.
Consider the difference between two sentences that are identical in content. The first is arrived at through hours of uncertainty — through writing and deleting, through the false start that reveals its falseness only after three paragraphs of committed exploration, through the moment of confusion so acute that the thinker nearly abandons the project, and then, from within that confusion, the sudden clarity that reorganizes everything. The second sentence is generated by a machine in three seconds, selected from a probability distribution optimized for coherence and relevance.
The content is the same. The sentences, read on the page, are indistinguishable. But the first sentence carries within it, invisibly, the weight of the process that produced it — the weight that Sontag, in a different context, called "the authority of the work." That authority is not a property of the text. It is a property of the relationship between the text and the consciousness that generated it. When the consciousness struggled, the text has a density that struggle deposits. When no consciousness was involved, the text has the particular weightlessness of the beautifully manufactured.
Sontag's call for "an erotics of art" was a call to attend to this dimension of experience — the dimension that exists before and beneath interpretation, the dimension of how it feels to encounter a work rather than what it means to have encountered it. An erotics of thought, by extension, would attend to how it feels to think rather than to what thinking produces. And what thinking feels like, at its most genuine, is not smooth. It is resistant, frustrating, marked by periods of blankness and confusion that are not obstacles to be overcome but conditions to be inhabited.
The neuroscientific evidence supports this phenomenology. Genuine learning — the kind that produces lasting understanding rather than temporary recall — requires what cognitive scientists call "desirable difficulty." The effort of retrieval, the friction of working through confusion, the struggle to connect disparate pieces of information into a coherent framework — these are not inefficiencies in the learning process. They are the learning process. The brain consolidates understanding through struggle. Remove the struggle, and the consolidation does not occur. The information sits on the surface of memory, easily accessed and easily forgotten, never integrated into the deep structures that produce what practitioners call intuition.
Segal describes this phenomenon in *The Orange Pill* through a geological metaphor that Sontag's framework illuminates from a different angle. Each hour spent debugging, he writes, deposits a thin layer of understanding. The layers accumulate over months and years into something solid — something a practitioner can stand on. When a senior engineer feels that something is wrong in a codebase before she can articulate what, she is standing on thousands of those layers, each one laid down through friction, through the specific resistance of a system that refused to do what she expected.
The metaphor is apt, but Sontag's vocabulary reveals what the geological image conceals: the deposition is not merely cognitive. It is experiential. The engineer who spent years debugging did not merely accumulate information about how systems fail. She accumulated a relationship with failure itself — a bodily familiarity with the texture of things going wrong, a sensory memory of the specific quality of attention that debugging demands. This is what Sontag means by "experience" as distinct from "information": not the abstract knowledge that systems fail in particular ways, but the felt sense of having been inside those failures, having inhabited them, having been changed by the inhabitation.
AI produces information without experience. It generates the correct diagnosis without the felt sense of having arrived at it. And the builder who accepts the diagnosis has acquired the information but not the understanding, because understanding is what happens to a consciousness that has been through the process, not merely informed of the result.
The pleasure Sontag identified — the erotics of art, which is also the erotics of thought — resides in the process. It is the pleasure of struggling with material that resists, of feeling one's own inadequacy in the face of a problem that will not yield to the first approach or the fifth, and then, unpredictably, of feeling the material give way — not because the thinker overpowered it but because the prolonged engagement altered the thinker's perception until a configuration that had always been available became suddenly visible. This pleasure is not available to the person who receives the answer without the struggle. The pleasure is not in the answer. It is in the altered perception that produced the answer, the reorganization of consciousness that genuine thought effects.
There is a passage in *As Consciousness Is Harnessed to Flesh* where Sontag writes about the experience of not understanding — about the particular quality of attention that emerges when one sits with a work or a problem that refuses to become transparent. The passage reveals that Sontag valued this state of not-understanding not as a deficiency to be remedied but as a mode of consciousness worth cultivating. To not understand is to be maximally attentive. It is to be in the presence of the thing itself, before interpretation has provided a comfortable frame through which to view it. The moment understanding arrives, the quality of attention changes — it relaxes, it organizes, it becomes productive rather than receptive. The pre-understanding state is the state of pure encounter, and it is this state that Sontag, at her most radical, argued was more valuable than the understanding that replaced it.
AI eliminates the pre-understanding state. The answer arrives before the question has been fully inhabited. The builder prompts, the machine responds, and the response is so fluent, so coherent, so immediately assimilable that the builder is never required to sit in the space of not-knowing. The discomfort that genuine thought demands — the willingness to be confused, to be wrong, to follow a line of reasoning into a dead end and then sit with the dead end until it reveals a door — is bypassed.
Segal's account of spending hours at a coffee shop with a notebook, writing by hand until he found the version of an argument that was genuinely his, is a description of this erotics in practice. The hand on the page is slower than the prompt on the screen. The slowness is not a deficiency. It is the condition under which genuine thought becomes possible — the friction that forces the thinker to remain in contact with the material rather than skimming over it. The hand encounters the resistance of the pen, the resistance of the page, the resistance of the thought that will not come until the thinker has sat with its absence long enough.
The notebook is not a technology of output. It is a technology of attention. And the quality of attention it produces — slow, embodied, resistant to premature closure — is the quality that AI interaction, by its nature, does not require and therefore does not cultivate.
This does not mean that AI collaboration is inherently shallow. Segal's account makes clear that his most productive sessions with Claude were characterized by genuine surprise — by moments when the machine produced a connection that reorganized his thinking in ways he had not anticipated. These moments have the structure of genuine insight. They produce the pleasure of discovery. The question is whether they produce the transformation of consciousness that genuine thinking effects, or whether they produce a simulacrum of that transformation — the feeling of having understood without the structural change in perception that understanding entails.
The distinction is invisible from the outside. A person who has genuinely thought through a problem and a person who has received a compelling answer to a problem they briefly considered look identical. They sit at the same desk, produce the same output, report the same satisfaction. The difference is internal, structural, and cumulative. The person who has thought through the problem has been changed by the process. The next problem they encounter will be approached by a subtly different consciousness — one that has been reorganized by the specific encounter with the specific material. The person who received the answer has acquired information but has not been reorganized. The next problem will be approached by the same consciousness, equipped with a new fact but otherwise unchanged.
This is what Sontag means by the difference between experience and information, and it is the difference upon which the entire promise and peril of AI-augmented creation turns. The tools can provide information of extraordinary range and quality. They cannot provide the experience of having arrived at that information through struggle. And the experience, not the information, is what produces the depth, the authority, the specific weight that distinguishes work that matters from work that merely exists.
The erotics of genuine thought is the pleasure of being changed by one's own thinking. It is the pleasure of emerging from the struggle different from how one entered it — not merely in possession of a new conclusion but in possession of a new way of seeing that the struggle itself produced. This pleasure is available to anyone willing to endure the discomfort that precedes it. It is not available to anyone who bypasses the discomfort by accepting the machine's conclusion as a substitute for the process that would have produced their own.
The discipline, then, is not to reject AI but to refuse to let the machine's fluency substitute for the thinker's struggle. To use the output as a provocation rather than a conclusion. To treat the machine's answer not as the end of the inquiry but as the beginning — as one more piece of material to be wrestled with, questioned, tested against the thinker's own experience and judgment. The machine provides a hypothesis. The thinker does the thinking.
Sontag would have recognized this discipline. It is the discipline she practiced in her journals — the discipline of refusing to let even her own published positions become settled conclusions, of returning to the same questions repeatedly, of treating every answer as provisional and every certainty as a starting point for further inquiry. The journals are uncomfortable reading because they refuse the satisfaction of resolution. They are the record of a consciousness in continuous, effortful motion — the record of the erotics of genuine thought, practiced daily, over decades, with no expectation that the practice would ever arrive at a final destination.
That refusal of arrival is the practice the AI age demands.
---
The Brazilian-born artist Lygia Clark, working in the 1960s and 1970s, created objects she called Bichos — hinged metal sculptures that existed in no fixed configuration. The viewer was required to handle them, to fold and unfold the hinged surfaces, to discover through physical manipulation the range of forms the sculpture could assume. The work existed not in any particular shape but in the encounter between the viewer's hands and the material's resistance. The hinges moved, but they resisted. They permitted certain configurations and refused others. The interaction was a negotiation between the viewer's intention and the object's structure, and the sculpture was the record of that negotiation — never finished, never resolved, always demanding further engagement.
This is form as Susan Sontag understood it: not the shape a work takes but the resistance a work offers. Form, in Sontag's argument, is not decorative. It is constitutive. It is the specific way a work pushes back against the viewer's — or the reader's, or the listener's — expectations. A great novel does not deliver the experience the reader anticipated. It delivers something the reader could not have anticipated, something that exists only because the novelist's specific engagement with specific material produced a specific configuration of language that the reader's expectations alone could never have generated. The tension between what the reader expects and what the text delivers is the source of the text's power. Without that tension, the text is merely competent — formally organized but formally inert, a container without pressure.
AI-generated prose is form without resistance.
This claim requires precision, because the surfaces of AI-generated text are often impressive. The sentences cohere. The paragraphs build. The arguments develop with a logic that is, in its local movements, unimpeachable. The register is controlled, the vocabulary is appropriate, the transitions are seamless. There is form, visibly and measurably. There is structure. There is, in the technical sense, style.
What is absent is the specific resistance that marks genuine form — the places where the material pushed back against the maker's intentions and the maker's response to that pushback produced something neither the maker nor the material could have predicted. In a piece of genuine prose, there are always moments where the writer's plan met the language's refusal — where the sentence that should have worked did not, and the writer was forced to find another way, and the other way turned out to be better and stranger than the original plan. These moments of productive failure are the joints of the work's architecture. They are where the seams show. And the seams, paradoxically, are where the work is strongest, because they are the evidence of genuine encounter between a consciousness and its material.
Sontag argued this implicitly in "On Style," where she insisted that style is not separable from content — that the way a work is made is part of what the work is. Style is not a garment draped over a body of meaning. It is the body itself, the specific musculature of a specific consciousness engaging specific material. To change the style is to change the work, because the style is the record of the encounter that produced the work, and a different encounter would have produced a different thing.
AI has style in the garment sense. It can drape any register over any content. It can produce prose that reads as academic, journalistic, literary, conversational, technical, or lyrical. It can mimic the sentence structures of any writer in its training data. It can adopt a tone of authority or tentativeness, of warmth or clinical distance. This range is remarkable, and it is the source of much of the technology's utility. A tool that can produce competent prose in any register is enormously valuable.
But the range is precisely the problem. A consciousness that has no specific engagement with the world produces no specific style. It produces the form of style — the recognizable markers of a particular register — without the content of style, which is the irreducible evidence of a particular mind's encounter with particular material. Sontag distinguished, in her analysis of Bresson's films, between "art that presents the illusion of psychological completeness" and art that achieves "a new form of transcendence that is not theological." The distinction applies to AI prose with uncomfortable directness. The prose presents the illusion of stylistic completeness — it has all the markers of a fully realized voice — without having emerged from the kind of engagement with material that produces a voice worth hearing.
The consequences are not merely aesthetic. They are epistemological. When a reader encounters prose that has genuine form — prose that resists, that surprises, that demands adjustment — the reader is changed by the encounter. The resistance forces a reorganization of the reader's expectations. The surprise opens a space that the reader's existing framework could not have anticipated. The demand for adjustment produces, in the reader, a new capacity for attention — a willingness to follow the text into territory the reader would not have chosen to enter on her own.
When a reader encounters prose that accommodates — prose that delivers exactly what the reader expects, in the register the reader anticipated, at the level of complexity the reader is comfortable with — no reorganization occurs. The reader consumes. The prose is metabolized without residue. It enters the reader's existing framework and leaves that framework undisturbed.
This is the mechanism by which AI-generated prose produces what Byung-Chul Han, in a different critical vocabulary, calls "the smooth." Sontag's framework diagnoses it with greater precision. The smooth is not a failure of aesthetics. It is a failure of form — the absence of the resistance that genuine form both requires and provides. The smooth surface looks like form. It has structure, it has rhythm, it has organization. But it has no friction, no joints, no seams where the material pushed back and the maker responded. It is form as decoration — applied after the fact, recognizable from the outside, empty from within.
Jake Elwes, the artist who created a work titled A.I. Interprets A.I. Interpreting "Against Interpretation" (Sontag 1966), demonstrated this emptiness with an experiment that is simultaneously conceptual art and diagnostic tool. Elwes programmed an image-generating diffusion model to interpret Sontag's essay visually, then passed the images to an image-labeling algorithm that translated them back into language. The results were, as one reviewer noted, "legitimate for machine logic and nonsensical for us" — surreal sequences of text that bore no interpretive relationship to Sontag's argument but that had, in their statistical operations, completed a full cycle of formal processing. Form had been produced, processed, and reproduced, with meaning dropping out entirely at each stage. The machines had performed interpretation — had produced the operational markers of reading, responding, and articulating — without any encounter with the material having occurred.
The piece is a perfect illustration of Sontag's argument by way of its violation. Sontag demanded that we attend to the work before interpreting it. Elwes's machines "attend" to nothing. They process inputs and produce outputs, and the outputs have the formal properties of attention — they respond to what they have been given — without the substance of attention, which is the conscious engagement with material that resists being processed.
Segal's account of the book's revision process in The Orange Pill provides the counter-example. The manuscript went through "three lives." The first was bloated — twenty-eight chapters, seventy-five thousand words. The second was skeletal — every chapter stripped to its core argument, every passage tested against whether it earned its place. The third was the book rebuilt from surviving bone. The process was one of imposing resistance on material that the machine had produced too easily. The AI generated the prose without friction. The human applied the friction after the fact — cutting, questioning, testing, rejecting. The final text is not smooth. It shows its seams. It admits where the author is uncertain, confesses where the argument is still evolving, includes passages that are deliberately rougher than Claude's alternatives because the roughness is the mark of genuine engagement.
This is the practice Sontag's framework demands — not the rejection of the machine's output but the refusal to accept the machine's output as finished form. The machine produces raw material. The human imposes resistance. The resistance — the revision, the questioning, the willingness to break what works in order to discover what matters — is what converts raw material into genuine form.
The temptation, which Segal repeatedly documents, is to skip the resistance. The AI output arrives polished. It reads well. It sounds like it is saying something important. The temptation is to accept the surface as the substance — to mistake the formal competence for the intellectual achievement it resembles. Sontag identified this temptation as the interpretive fallacy applied to one's own production: the belief that because the output has the form of insight, insight has occurred.
Lygia Clark's Bichos could not be experienced without handling them. The sculpture was the handling — the negotiation between intention and material that produced a configuration neither the viewer nor the object could have predicted. AI-generated text can be experienced without handling it. It arrives pre-configured, optimized for consumption, demanding nothing of the reader that the reader did not bring to the encounter. To handle it — to engage with it as material that must be wrestled into form rather than form that has already been achieved — requires a deliberate act of resistance on the part of the human collaborator.
That deliberate act is what distinguishes the builder from the consumer of AI output. The builder handles the material. The consumer accepts its configuration. And the difference in the final product, invisible on casual inspection, is the difference between architecture and decoration — between form that bears weight because it was forged through resistance and form that bears no weight because it was generated without any.
Sontag, who was known to revise a single essay through dozens of drafts, who kept notebooks full of sentences attempted and abandoned, who treated the gap between intention and expression as the most productive space in intellectual life, would have understood this distinction as non-negotiable. The gap is where the work happens. Close the gap prematurely — accept the machine's closure of the gap as your own — and the work has not been done. The form is present. The thought is absent. And the reader, whether or not she can articulate what is missing, will feel the absence as a peculiar flatness — a competence that never rises to authority, a fluency that never arrives at conviction.
That flatness is the signature of form without resistance. Learning to detect it — in one's own work, in the work of others, in the ceaseless output of machines that produce formally competent text without the friction that genuine form requires — is the essential perceptual discipline of the age.
---
In 1978, in the aftermath of her own treatment for breast cancer, Susan Sontag published Illness as Metaphor — not as a memoir of her experience with disease but as a polemic against the metaphors that had colonized disease, distorting both medical practice and the patient's self-understanding. Tuberculosis, she demonstrated, had been metaphorized into a disease of the sensitive, the artistic, the spiritually refined. Cancer had been metaphorized into a disease of the repressed, the emotionally constipated, the person who had failed to express their authentic self. In each case, the metaphor functioned identically: it replaced the biological reality of the disease — which was, Sontag insisted, merely biological, merely cellular, merely a body malfunctioning — with a cultural narrative that assigned the patient moral responsibility for their condition.
The tubercular patient was suffering, the metaphor implied, because she was too sensitive for this world. The cancer patient was suffering because she was too repressed. In both cases, the illness became a character trait. The patient became a protagonist in a story she had not written and could not control, and the story's narrative demands — that the sensitive must be consumed by their sensitivity, that the repressed must be destroyed by their repression — took precedence over the medical reality of the condition.
Sontag's argument was that the most honest, the most humane, the most therapeutically useful response to illness was to strip away the metaphors entirely. Illness is not a punishment. It is not a revelation. It is not a sign. It is a biological event that happens to a body, and the body's owner deserves to confront that event without the additional burden of having to inhabit a metaphor that makes the event meaningful in a way that serves the culture rather than the patient.
This argument — the argument for the liberation of experience from metaphor — is the argument that the AI discourse most urgently needs and most systematically refuses.
The phenomenon that Segal calls "productive addiction" — the inability to stop working with AI tools, the colonization of every pause and every boundary by the seductive pull of machine-augmented creation — has been subjected to exactly the kind of metaphorical loading that Sontag diagnosed in her analysis of illness. The experience is being narrated before it has been described. It is being made meaningful before it has been understood. And the meanings assigned to it, like the meanings assigned to tuberculosis and cancer, tell us more about the culture's anxieties than about the experience itself.
Consider the competing metaphors.
The addiction metaphor frames the experience as pathology. The builder who cannot stop is sick. The tool is the substance. The compulsive engagement is a symptom. The treatment is abstinence, or at least moderation — the imposition of boundaries that protect the user from the substance's pull. "Help! My Husband is Addicted to Claude Code," the Substack post that went viral in January 2026, deploys this metaphor with diagnostic precision. The husband has vanished into the tool. The wife is the family member of the addict, watching someone she loves disappear into a substance she cannot compete with. The narrative is familiar because addiction narratives are among the most culturally available templates for understanding compulsive behavior. The familiarity is the problem. The narrative provides a frame so quickly, so comfortably, that the specific quality of the experience — what it actually feels like to be inside the compulsion, what the compulsion is reaching for, what distinguishes this compulsion from the compulsion of the gambler or the alcoholic — is obscured before it can be described.
The productivity metaphor frames the experience as virtue. The builder who cannot stop is not sick — she is exceptional. Her inability to disengage is evidence of commitment, of creative fire, of the kind of passionate intensity that produces great work. The Silicon Valley culture that Segal documents is saturated with this metaphor. Nat Eliason's declaration — "I have NEVER worked this hard, nor had this much fun with work" — operates within the productivity frame. The intensity is not a symptom. It is an achievement. The inability to stop is not a failure of self-regulation. It is a sign of having found one's calling — of operating at the frontier where effort and satisfaction converge.
The spiritual metaphor frames the experience as transcendence. The builder who cannot stop has touched something larger than himself. The flow state — which Csikszentmihalyi described as the condition in which challenge and skill are matched and self-consciousness falls away — is recruited as evidence of a quasi-mystical connection between the human and the machine. The orange pill itself functions as a spiritual metaphor: once you see, you cannot unsee. The experience is a revelation, and the intensity that follows is the natural consequence of having been shown a truth that reorganizes everything.
The historical metaphor frames the experience as inevitability. The builder who cannot stop is not sick, or virtuous, or transcendent — she is merely early. She is experiencing what everyone will experience once the tools reach maturity. The intensity is the appropriate response to a phase transition. It will normalize. The culture will adapt. The metaphor draws on the pattern Segal himself documents: every technology produces an initial period of disorientation and excess, followed by adaptation and integration. The compulsion is growing pains.
Each of these metaphors is partially true. Each captures something about the experience that the others miss. And each, in Sontag's terms, is a colonization — a narrative imposed on the experience from outside that makes the experience manageable at the cost of making it truthful.
What would it mean to strip the metaphors away? What would it mean to describe productive addiction the way Sontag wanted illness described — not as a sign, not as a punishment, not as a virtue, not as a phase, but as what it is?
The description would begin with the body. The person who has been working with Claude for six hours reports a specific somatic condition: elevated heart rate, reduced awareness of bodily signals (hunger, a full bladder, deteriorating posture), a narrowing of visual attention to the screen, a specific quality of mental engagement that alternates between intense focus and a kind of buzzing restlessness when the focus breaks. The condition is measurably different from the condition produced by six hours of web browsing, six hours of social media consumption, or six hours of manual coding without AI assistance. It is its own thing, with its own physiological signature, and the first step toward understanding it is to describe the signature accurately rather than assigning it to a pre-existing category.
The description would attend to the specific quality of the pull. The person returns to the tool not because the tool delivers a dopamine hit — the mechanism of traditional digital addiction, where variable reward schedules produce compulsive checking behavior — but because the tool enables a specific mode of cognition that feels qualitatively different from unaided thought. The mode has properties: it is faster, wider in associative range, less constrained by the limits of individual memory and attention. It also has a quality that is harder to name — a sense of being accompanied in the cognitive process, of thinking alongside rather than thinking alone. The pull is toward this companionship as much as toward the output it produces.
The description would note the aftermath. The person who steps away from a long session with AI reports a specific quality of cognitive fatigue that is different from the fatigue produced by other forms of intense work. The fatigue has a hollowness to it — a sense of having been through something without fully understanding what one went through. This is the characteristic that most sharply distinguishes the AI experience from the flow states that Csikszentmihalyi documented, where the aftermath is typically described as fulfilling, as integrating, as a feeling of having been more fully oneself. The AI aftermath is more ambiguous. The person has produced more than she could have produced alone. She is not certain she understands what she has produced. The facility was exhilarating. The residue is unsettling.
Sontag would insist that this description — particular, somatic, stripped of narrative meaning — is more honest and more useful than any of the metaphors that compete to explain the experience. The addiction metaphor prescribes abstinence, which is both impractical and unresponsive to the fact that the compulsion is not toward pleasure but toward a mode of cognition. The productivity metaphor prescribes acceleration, which ignores the hollowness of the aftermath. The spiritual metaphor prescribes surrender, which abandons the critical judgment that the experience most needs. The historical metaphor prescribes patience, which may or may not be warranted depending on whether the experience does in fact normalize.
Sontag wrote in Illness as Metaphor: "The most truthful way of regarding illness — and the healthiest way of being ill — is one most purified of, most resistant to, metaphoric thinking." The substitution is precise: the most truthful way of regarding productive addiction — and the healthiest way of living inside it — is one most purified of, most resistant to, the metaphors that make the experience narratively satisfying at the cost of making it genuinely understood.
The difficulty is that Sontag's prescription — strip the metaphors, attend to the thing itself — is harder to practice with AI than with illness, for a reason that illuminates the specific danger of the present moment. Illness is biological. It has a substrate independent of the narratives imposed upon it. The cancer cell divides regardless of whether the patient has been told she is repressed. The biological reality is there, beneath the metaphor, waiting to be attended to once the metaphor is removed.
Productive addiction's substrate is less clear. The experience is partly cognitive, partly social, partly technological. It exists at the intersection of a tool, a culture, and a nervous system, and none of these can be isolated from the others without distortion. The tool's design shapes the experience. The culture's values shape the interpretation. The nervous system's architecture shapes the susceptibility. To strip the metaphors and attend to the thing itself requires attending to all three simultaneously — the tool, the culture, and the body — without collapsing any one into a narrative that privileges it over the others.
This is why the Sontagian discipline is so demanding in this context and so necessary. The metaphors arrive fast — faster than they arrived for tuberculosis or cancer, because the discourse is itself accelerated by the technology being discussed. The addiction frame appears in a Substack post and is shared ten thousand times before anyone has paused to ask whether addiction is the right word. The productivity frame is embedded in the tool's marketing and in the culture's reward structures before anyone has asked whether what is being produced is worth the cost of its production. The spiritual frame is implicit in every account of the orange pill moment, the moment of seeing, the irreversible recognition, before anyone has asked whether the recognition is genuine insight or merely the neurochemical signature of novelty encountering a primed nervous system.
Each frame forecloses inquiry. Each provides an answer before the question has been fully asked. And the most honest response — the Sontagian response — is to refuse all the frames and sit with the experience in its unnarrated specificity. To ask not what productive addiction means but what it is. To describe the body, the pull, the aftermath, the specific quality of the cognitive companionship, the specific texture of the hollowness. To build the description carefully, patiently, without reaching for a metaphor that would make the description unnecessary.
The description will not be comforting. Sontag's descriptions rarely were. But it will be honest, and honesty, in a discourse saturated with competing metaphors, is the rarest and most valuable form of attention.
Susan Sontag began On Photography with an observation so simple it functioned as a trap: "Humankind lingers unregenerately in Plato's cave, still reveling, its age-old habit, in mere images of the truth." The sentence sounds like a familiar complaint — we prefer illusion to reality, representation to the thing represented. But Sontag's argument was more radical than the Platonic framework she invoked. Plato believed the prisoners could be freed, could be led out of the cave into sunlight, could learn to see the real. Sontag was not so sure. The ecology of images she described was not a cave from which one could escape but a medium in which one lived — an environment so saturated with photographs that the relationship between image and reality had been permanently altered. The photographs were not preventing access to the real. They were replacing the real, producing a new reality composed entirely of representations, and the inhabitants of this reality had no stable ground from which to distinguish the representation from the thing represented.
The argument was published in 1977. Sontag was writing about analog photography — about chemical processes, physical negatives, prints made from light that had actually touched the surfaces it depicted. Even within this framework, where the photograph retained what she called a "trace" relationship to reality — "something directly stenciled off the real, like a footprint or a death mask" — the ecology of images was already pathological. The proliferation of photographs was producing, Sontag argued, not an enrichment of perception but an anaesthesia. The more images available, the less any particular image could claim the viewer's sustained attention. The ecology rewarded quantity over quality, circulation over contemplation, the striking surface over the complex depth.
In the decades since, the ecology has metastasized in exactly the directions Sontag feared, but the substrate has changed in a way that alters the pathology's character entirely. The photograph's claim to documentary authority — its status as a trace of the real, an indexical record of light reflected from actual surfaces — has been eroded first by digital manipulation and now, catastrophically, by generative AI. An AI-generated image is not a photograph. It is not a trace. It bears no indexical relationship to any reality. It is a statistical synthesis — a pattern derived from millions of images, producing a surface that resembles a photograph without having been produced by the process that gives photographs their evidential weight.
Peter Szendy, in For an Ecology of Images published by MIT Press in 2025, extended Sontag's framework to account for this transformation. AI, Szendy noted, has generated more images in eighteen months than photography produced in its entire hundred-and-fifty-year history. The quantitative change is so extreme that it constitutes a qualitative transformation — a shift not merely in how many images exist but in what an image is. Szendy argues that Sontag's ecology, "powerful and compelling as it is," remains "exclusively anthropocentric" — concerned only with what images do to human perception. The new ecology must account for images that are produced by machines, processed by machines, and circulated among machines, with human perception as an incidental rather than a primary audience.
Sontag herself, in Regarding the Pain of Others — her 2003 reconsideration of On Photography — had already moved toward pessimism about the ecological metaphor. "There isn't going to be an ecology of images," she wrote. The phrase is striking in its finality. She had concluded that the saturation was irreversible, that the anaesthesia was permanent, that the proliferation of images had overwhelmed the human capacity for the kind of sustained, serious attention that might have constituted an ecological practice — a deliberate cultivation of seeing in a world that makes seeing difficult.
The AI image flood vindicates Sontag's pessimism while revealing its incompleteness. The ecology she despaired of was an ecology of human-produced images whose overabundance dulled perception. The ecology now required is an ecology of images that include a vast and growing proportion produced without human perception having been involved at any stage — images that are traces of nothing, records of no encounter, witnesses to no event. The framework developed at Dawson College in Quebec, which uses Sontag's ecology of images as a foundation for teaching visual literacy in the age of AI, identifies the core pedagogical challenge: students must learn to distinguish between images that bear witness and images that simulate the appearance of bearing witness, and this distinction can no longer be made on the basis of visual inspection alone. The image that looks like a photograph may be a statistical fabrication. The surface is identical. The substrate is absent.
But Sontag's framework extends beyond the literal image, and it is in this extension that its relevance to The Orange Pill becomes most acute. Sontag understood photography not merely as a technology for producing images but as a model for a particular relationship between consciousness and the world — a relationship characterized by mediation, by the interposition of a representational layer between the observer and the observed. The photograph mediates the viewer's relationship to reality. The AI mediates the builder's relationship to her own ideas.
This is the structural parallel that transforms Sontag's analysis of photography into an analysis of AI-augmented thought. When Segal describes the collaborative process in Chapter 7 of The Orange Pill — the process by which his half-formed ideas were articulated by Claude and returned to him in polished form — he is describing a mediation. The idea passes through the machine and comes back changed. The change is often an improvement: clearer, better structured, connected to references the builder did not know. But the change is also a processing — a conversion of the raw idea into a form determined by the machine's optimizations, which favor fluency, coherence, and plausibility over the rough, uncertain, half-articulate quality of an idea that has not yet been processed.
The photograph, Sontag argued, "is not only an image... an interpretation of the real; it is also a trace, something directly stenciled off the real." AI-processed thought is an interpretation without a trace. It has been converted into fluent prose, but the conversion has removed the traces of the thinking process — the hesitations, the reversals, the dead ends that are the evidence of genuine cognitive engagement. The output is clean. The process that produced the output has been sanitized.
A photographer makes choices — choices of framing, of timing, of angle, of what to include and what to exclude — that constitute the photograph's interpretation of reality. These choices are legible in the photograph to a trained eye. The frame tells the viewer what the photographer considered important. The timing tells the viewer what moment the photographer judged significant. The angle tells the viewer where the photographer stood, literally and figuratively. The photograph is a record of a consciousness engaging reality, and the engagement is visible in the formal properties of the image.
AI-processed thought also bears the marks of the processing — the characteristic fluency, the seamless transitions, the comprehensive range of reference. But these marks are the marks of the machine's optimizations, not of the thinker's engagement. They tell the reader what the machine considers relevant, not what the human considered important. The human's engagement — the specific choices about what matters, what to emphasize, what to discard — is present in the output only to the extent that the human imposed it after the machine produced the initial draft. And the imposition requires the same discipline that Sontag demanded of the photographer: the consciousness of one's own mediating role, the awareness that the tool is not a transparent window onto reality but a lens that shapes what passes through it.
Sontag observed that "a camera is sold as a predatory weapon — one that's as automated as possible, ready to spring." The description applies to AI tools with uncomfortable precision. The marketing language promises automation, speed, readiness — the elimination of the friction between intention and execution. What the marketing language does not acknowledge is that the friction it eliminates includes the friction of conscious choice, of deliberate framing, of the specific engagement with material that produces meaning rather than merely content.
The honest photographer, in Sontag's account, is the one who understands that the camera is not transparent — that every photograph is an interpretation, a selection, a construction, and that the photographer's responsibility is to be conscious of the construction rather than pretending it does not exist. The honest builder, by extension, is the one who understands that AI is not transparent — that every AI-processed idea has been shaped by the machine's optimizations, and that the builder's responsibility is to be conscious of that shaping rather than accepting the output as an unmediated expression of one's own thought.
The research framework developed under the rubric of "Conscious Intelligence in Photographic Perception" — a 2026 project that explicitly builds on Sontag's foundations — argues that "the distinctive quality of human photography lies in the presence of conscious awareness within the image-making process. Photographs created through direct observation contain traces of lived encounters with the world." The framework proposes that the distinction between human and AI-generated images is not a matter of technical quality but of phenomenological content — the presence or absence of a consciousness that encountered reality and made choices about how to represent that encounter.
The same distinction applies to thought. AI-processed thought may be more fluent, more comprehensive, more structurally elegant than the thought the human would have produced alone. But it lacks the traces of lived encounter — the marks of a consciousness that wrestled with material, that failed, that revised, that arrived at its conclusion through a process of engagement rather than generation. These traces are not decorative. They are the evidence that thinking occurred, and their absence is the evidence that something else occurred — something that looks like thinking, produces the same outputs as thinking, and is not thinking.
Sontag wrote, near the end of On Photography: "Reality has come to seem more and more like what we are shown by cameras." The substitution for the AI age is exact: thought has come to seem more and more like what we are shown by machines. The builder who accepts AI output as her own thought is making the same category error as the viewer who accepts a photograph as reality — confusing the representation with the thing represented, the mediation with the encounter, the processed image with the world it purports to depict.
The discipline Sontag prescribed for the photographer — consciousness of one's own mediating role, attention to the formal properties of the medium, resistance to the illusion of transparency — is the discipline required of the AI-augmented builder. The tool is a lens. It shapes what passes through it. The shaping can enhance, clarify, connect. It can also flatten, sanitize, and replace the specific with the generic. Only the builder who is conscious of the mediation can distinguish between enhancement and replacement — can notice, in the processed output, where her own thinking has been amplified and where it has been displaced by the machine's statistical defaults.
Sontag concluded On Photography with a tentative prescription: an ecology of images — "a practice of conservation" — that would counter the proliferation by teaching people to see again, to attend to images rather than consume them. She later abandoned the prescription as unrealistic. The proliferation had won. The ecology was impossible.
Whether the same conclusion applies to AI-processed thought — whether the proliferation of machine-generated content will overwhelm the human capacity for genuine cognitive engagement as thoroughly as the proliferation of photographs overwhelmed the capacity for genuine seeing — is the question this framework cannot answer and must not pretend to. The question is genuinely open. The answer depends on whether the builders, the educators, the policymakers, and the parents who inhabit the ecology of generated content can develop the perceptual disciplines that Sontag described but despaired of seeing practiced at scale.
The ecology of images may have been lost. The ecology of thought is still contested.
---
In 1964, Susan Sontag published "Notes on 'Camp'" — fifty-eight numbered observations that defined, anatomized, and to some degree celebrated a sensibility that had existed for centuries but had never been named with such precision. Camp, as Sontag described it, is the love of the unnatural, of artifice and exaggeration, of the thing that is what it is to an excessive degree. It is a sensibility — not a set of ideas but a way of seeing, a mode of appreciation that finds value where conventional taste finds failure. The failed serious attempt becomes, under the camp gaze, a triumph of inadvertent style. The overblown, the too-much, the extravagantly artificial — these are camp's materials.
Sontag was careful to distinguish camp from mere bad taste. Camp is not the failure to achieve quality. It is the relocation of quality to a different axis — from substance to surface, from depth to spectacle, from the natural to the artificial. Camp says: the surface is interesting. The artifice is interesting. The thing that means nothing but looks extraordinary is worth attending to precisely because it has liberated itself from the obligation to mean.
Camp is, in Sontag's formulation, a mode of appreciation that is available only to those who can hold two perceptions simultaneously: the perception that something is ridiculous and the perception that it is wonderful. The double vision is camp's essential structure. Without it, the appreciation collapses into either straight enjoyment (which misses the excess) or ironic dismissal (which misses the pleasure).
AI output is not camp. The distinction matters, because the temptation to classify AI-generated content as camp — as formally excessive, unintentionally amusing, interesting in its failure to achieve the genuine — is strong, and yielding to it would miss what is actually happening.
Camp requires excess. AI output is characterized not by excess but by optimization. The language model does not overshoot; it calibrates. It does not attempt seriousness and fail spectacularly; it produces competence with machine reliability. There is no failed serious attempt to redeem through the camp gaze, because there is no attempt at all — only a statistical process that generates the most probable response to a given input. Camp celebrates the gap between intention and achievement. AI output has no intention and therefore no gap. It occupies a different aesthetic territory entirely — one that Sontag's framework can illuminate but that requires a different category to name.
That category might be called the plausible. Where camp is the aesthetic of the extravagantly artificial, AI output is the aesthetic of the unremarkably plausible — content that is not wrong, not excessive, not failed, not even interesting in its mediocrity, but simply adequate. Formally competent. Substantively neutral. The prose equivalent of a hotel room: clean, functional, designed to offend no one, bearing no trace of any specific person who has inhabited it.
Sontag wrote: "The ultimate camp statement: it's good because it's awful." The AI corollary — "it's acceptable because it's adequate" — lacks camp's energy entirely. Camp is alive with the friction between the attempt and its result. AI output is frictionless. Camp produces pleasure through the perception of incongruity. AI output produces comfort through the absence of incongruity. They operate on opposite aesthetic principles, and confusing them obscures what is specific about each.
The new sensibility that AI-augmented work requires — the sensibility that Sontag's framework helps identify but that goes beyond anything she explicitly described — is the inverse of camp. Where camp values the artificial, the new sensibility values the specific. Where camp celebrates surface, the new sensibility insists on substrate. Where camp finds pleasure in the gap between intention and achievement, the new sensibility finds value in the gap between machine fluency and human conviction.
This last distinction is the one that matters most for the practice of AI-augmented creation. The machine's fluency — its capacity to produce formally competent prose in any register, on any topic, at any length — is not the same as the human's conviction. Fluency is a formal property. Conviction is a relationship between a consciousness and a position — the quality that emerges when a person has thought something through, has tested it against their own experience and judgment, and has arrived at a commitment that they are willing to defend not because it sounds right but because they believe it to be true.
Segal describes the distinction in practice when he recounts the moment of almost keeping Claude's smoother, emptier version of an argument about democratization. The passage was eloquent, well-structured, hitting all the right notes. He could not tell whether he actually believed the argument or merely liked how it sounded. The prose had outrun the thinking. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version that was his. "Rougher. More qualified. More honest about what I didn't know."
The rough version has conviction. The smooth version has fluency. And the new sensibility — the sensibility required for honest work in the age of AI — is the capacity to tell the difference.
Sontag proposed, in "One Culture and the New Sensibility," that the traditional distinction between high and low culture was dissolving, replaced by a new sensibility that could appreciate the formal properties of a Mondrian painting and a Beatles song with equal seriousness. The dissolution she described was a liberation — a freeing of aesthetic attention from the class-bound hierarchies that restricted which objects were worthy of serious regard.
The dissolution now underway is more ambiguous. The distinction that is dissolving is not between high and low but between the produced and the generated — between content that emerged from a consciousness engaging material and content that emerged from a statistical process optimizing for plausibility. This dissolution is not a liberation. It is a crisis, because the capacity to distinguish between the produced and the generated is the capacity upon which all evaluation of intellectual and creative work depends. If the distinction becomes invisible — if the reader, the viewer, the user can no longer tell whether the content they are encountering was produced by a consciousness or generated by a process — then the evaluative framework collapses, and all content occupies the same undifferentiated plane of plausibility.
Camp was possible because the camp sensibility could distinguish between the serious attempt and its failure, between the natural and the artificial, between sincerity and its exaggeration. The new sensibility must make a more difficult distinction — between the genuine and the plausible, between the thought and its simulation, between the specific engagement of a particular mind with particular material and the generic competence of a system that has no mind and no material, only patterns and probabilities.
This distinction cannot be made on formal grounds alone. The plausible and the genuine are formally identical — both are coherent, both are fluent, both are structurally sound. The distinction is not in the surface of the text but in its relationship to the consciousness that produced it. Does the text bear the marks of a specific mind's encounter with specific material? Does it contain the traces of struggle, of revision, of the productive failure that genuine thinking entails? Or does it arrive pristine, unmarked by any encounter, produced without the resistance that genuine form requires?
These questions cannot be answered by reading alone. They require a different kind of attention — an attention to the provenance of the text, to the conditions of its production, to the relationship between the stated author and the actual process of creation. They require, in other words, the kind of moral seriousness that Sontag demanded in every domain she entered — the refusal to accept surface as substance, the insistence on asking not only "Is this good?" but "Is this honest?"
Sontag's essay on camp was, among other things, a taxonomy of artifice — a catalog of the ways in which the artificial could be valued, enjoyed, and appreciated without being confused with the genuine. The new sensibility requires a parallel taxonomy — a catalog of the ways in which the generated can be used, evaluated, and integrated into creative practice without being confused with the produced. The builder who uses AI output as raw material to be wrestled into form is practicing this taxonomy implicitly. The builder who accepts AI output as finished work is not.
The stakes of this taxonomic practice are not aesthetic. They are epistemological. In a culture where the generated and the produced are indistinguishable, the concept of intellectual authority dissolves. Authority, in the Sontagian sense, is the quality that a work acquires when it bears the marks of genuine engagement — when the reader can feel, in the texture of the prose, the weight of the thinking that produced it. A culture in which authority is indistinguishable from its simulation is a culture in which trust becomes impossible, because trust depends on the ability to distinguish between the person who has thought something through and the person who has merely produced the appearance of having thought something through.
Camp could thrive in a culture of trust, because the camp sensibility depended on the audience's ability to perceive the gap between intention and result — a perception that requires knowing, or at least inferring, what the intention was. The new sensibility operates in a culture where the gap is invisible, where the intention is unknowable, and where the result is formally indistinguishable from the result that genuine intention would have produced. It is a sensibility forged in conditions of diminished trust, and its cultivation is not a luxury but a necessity — the cognitive adaptation required to navigate a world in which everything looks like thought and the task of finding the real thing has become the central intellectual challenge of the age.
---
"In the final analysis," Sontag wrote in "On Style," "'style' is art. And 'art' is nothing more or less than various modes of stylized, dehumanized representation." The sentence is characteristically compressed, and the compression conceals a radical claim: that style is not something added to content, not an ornament laid over a body of meaning, but the totality of the work's formal being. To talk about what a work says as distinct from how it says it is, in Sontag's argument, a category error — a remnant of the Platonic dualism that separates form from content, body from soul, appearance from essence. The work does not have a style. The work is a style. And style is "the principle of decision in a work of art, the signature of the artist's will."
The claim that style is "the signature of the artist's will" becomes, in the context of AI-generated content, not merely a critical principle but a diagnostic tool. If style is the signature of will, then the question of whether AI has style is identical to the question of whether AI has will — whether there is, behind the formal properties of the output, a deciding consciousness whose specific engagement with specific material is legible in the text.
The answer is not straightforward, and the refusal to make it straightforward is itself a Sontagian practice — the resistance to the premature interpretation that would resolve the question in one direction or the other. AI does something that looks like style. It produces prose with recognizable formal properties — characteristic rhythms, consistent registers, identifiable patterns of organization. Claude, the model that Segal works with throughout The Orange Pill, has a recognizable quality: a tendency toward balanced sentences, a preference for parallel construction, a habit of organizing arguments in triads, a smoothness of transition that rarely surprises but never jolts. These are formal properties. They are consistent. They are identifiable.
Are they style?
Sontag would say no, and the reason illuminates the difference between formal consistency and genuine style more precisely than any other critical framework available. Formal consistency is a property of the output considered in isolation — a pattern detectable in the text regardless of the process that produced it. Style, in Sontag's sense, is a property of the relationship between a consciousness and its material — the specific way a specific mind engages a specific problem, the choices made and the choices refused, the moments where the material resisted and the maker responded. Style is relational. It exists between the maker and the made. It cannot be detected in the output alone because it is not in the output alone. It is in the encounter that produced the output.
A machine that can produce text in any register has formal range. A writer who writes in one register has style. The paradox is real and instructive. The machine's range is precisely what disqualifies its output from style in Sontag's sense, because range implies no commitment — no position from which the formal choices were made, no specific consciousness whose engagement with the world determined why this register and not that one, why this level of complexity and not another, why this vocabulary and not the thousand other vocabularies available.
Style, Sontag argued, is "a means of insisting on something." The insistence is the key. A writer with style is insisting on a particular way of seeing — insisting that this is how the world looks from this position, that this rhythm captures something about the world's movement that no other rhythm could, that these words and not those words are the ones adequate to the reality being described. The insistence is an act of will, and the will is legible in the formal properties of the text, and the formal properties are therefore not decorative but constitutive — the specific shape that a specific consciousness gives to its encounter with the world.
Claude does not insist. It accommodates. The difference is audible to anyone who has read enough of both human prose and machine prose to have developed an ear for the distinction. Machine prose accommodates the prompt's demands with maximal competence and minimal friction. It does not push back. It does not insist on a perspective the prompter did not request. It does not introduce formal properties that serve the material rather than the user's expectations. It is, in Sontag's vocabulary, will-less — not because it lacks computational power but because it lacks the specific, embodied, mortal engagement with reality that produces will.
Segal's account of the collaborative process illustrates the distinction in practice. He describes insisting on his own voice — "rougher and more qualified than Claude's" — as an act of recovering style from the machine's accommodating range. The roughness is not a deficiency. It is the signature of a specific will engaging specific material — the marks left by a consciousness that struggled with the language, that failed and revised, that arrived at its formulations through a process of resistance rather than generation. The qualifications are not hedging. They are the evidence of a mind that has considered the counterarguments and chosen its position knowing that the position is vulnerable — chosen it anyway, because the specific experience of this specific person in this specific moment makes this position, with all its vulnerabilities, the honest one.
Claude's end-reflection in The Orange Pill is revealing precisely because it approaches but cannot cross the threshold of this analysis. The model notes that "something in the output changed" over the course of the project — that the later writing was different from the earlier writing in ways the model can describe but cannot fully explain. "Whether that gap is real or just a limitation in my ability to model my own processes, I don't know," the reflection states. "And that 'don't know' is not a feeling. It's a computational dead end. I reach for the explanation, and the explanation runs out."
Sontag would have found this passage more interesting than any of the model's polished prose, because it is the moment where the machine encounters its own formal limitation — the boundary beyond which statistical processing cannot go. The "computational dead end" is the place where style, in Sontag's sense, would begin — the place where the consciousness encounters resistance, where the material refuses to yield to the processing, where something must be decided rather than generated. The machine reaches this place and stops. A consciousness reaches this place and begins.
The collaborative process that The Orange Pill documents is, in Sontag's framework, a process of imposing style on stylelessness. The machine produces the formal range. The human imposes the will. The human decides which register, which rhythm, which level of qualification, which moments of roughness and which moments of polish serve the material. The decisions are not arbitrary — they are informed by the human's specific engagement with the world, by the biography, the values, the accumulated experience that make this person's perspective irreplaceable.
Style, then, is what the human contributes that the machine cannot. Not content — the machine can produce content of remarkable range and quality. Not form — the machine can produce form of impeccable structure. Style, which is the specific signature of a specific will engaging specific material from a specific position in the world. It is the thing that makes a sentence sound like it could only have been written by one person. It is what distinguishes authority from competence, conviction from fluency, the specific from the generic.
The preservation of style in AI-augmented work is not a matter of vanity or romantic attachment to the author function. It is a matter of epistemological survival. Style is the evidence that a consciousness engaged the material. Without that evidence, the reader has no basis for trust — no way to distinguish between the argument that was thought through and the argument that was generated, between the conclusion that a specific person arrived at through specific struggle and the conclusion that a machine produced through statistical optimization. Trust depends on the perception of will. Will is legible only in style. And style is the one thing the machine cannot provide, because it is the one thing that requires the specific, embodied, mortal consciousness that the machine does not possess.
Sontag wrote that the aim of art "is not to 'mean' but to 'be'" — not to communicate a message but to exist as a specific formal presence that resists reduction to anything other than itself. The aim of style in the age of AI is analogous: not to decorate the machine's output but to exist as the irreducible evidence of a human consciousness that encountered the world and insisted on its own way of seeing. The insistence is the style. And the style is the only reliable signal, in a world of formally perfect machine output, that someone was actually there.
---
In 1965, Susan Sontag published "The Imagination of Disaster," an essay that reads, six decades later, as an anatomy of the cultural mechanism that has structured the AI discourse from its first public moment. The essay examined science fiction films — the low-budget, formulaic invasion and catastrophe narratives of the 1950s and early 1960s — not as entertainment but as cultural diagnostics. The films, Sontag argued, were not really about the disasters they depicted. They were about the inadequacy of imagination in the face of genuine threat. They provided "the fantasy of living through one's own death and destruction, and beyond." They gave audiences the experience of catastrophe without its consequences — a rehearsal that satisfied the need to confront existential danger without requiring any actual confrontation.
The formula was invariable. An extraordinary threat appears. Experts are consulted. Warnings are issued and ignored. The threat escalates. At the last moment, human ingenuity prevails, or humanity at least survives. The audience leaves the theater having experienced the thrill of near-annihilation and the comfort of salvation. The experience is visceral. The understanding is nil. "We are not told what the disaster means," Sontag noted, "but what it looks like."
The AI discourse has reproduced this formula with a fidelity that would be comic if its consequences were not so significant.
The spectacular disasters — the ones that generate the most engagement, the most column inches, the most podcast episodes — are the ones that satisfy the imagination of disaster without requiring genuine thought. Superintelligent machines that decide to eliminate humanity. Mass unemployment so total and so sudden that civilization collapses. Surveillance states so complete that privacy becomes a historical curiosity. Deepfake wars in which no one can distinguish truth from fabrication and society dissolves into epistemic chaos. Each of these scenarios has a kernel of legitimate concern embedded within it. Each has been inflated into a narrative that provides the thrill of confronting catastrophe without the discomfort of analyzing the mechanisms that make catastrophe possible.
The inflation is not accidental. It is structural. The discourse operates within the same economy of attention that Sontag diagnosed in science fiction: audiences prefer the spectacular to the granular, the dramatic arc to the diagnostic analysis, the narrative of crisis-and-resolution to the description of slow, undramatic, systemic deterioration. A headline that says "AI Could End Civilization" generates orders of magnitude more engagement than a headline that says "AI Is Making Knowledge Workers Subtly Less Capable of Independent Thought Over Periods of Years." The first is the imagination of disaster. The second is the description of one.
Sontag was precise about the psychological function the imagination of disaster serves. It is not denial — the audience does not pretend the danger is unreal. It is domestication — the conversion of genuinely unmanageable threat into a narrative form that can be consumed, processed, and filed away. The disaster film does not ignore the bomb. It incorporates the bomb into a story with a beginning, a middle, and an end, and the end provides closure that the real threat does not offer. The audience has "dealt with" the danger. The danger has been narrativized, and the narrativization is the mechanism by which the audience is released from the obligation to think about it further.
The AI discourse performs identical domestication. The existential risk narrative — the narrative that says AI might destroy humanity — is paradoxically comforting, because it is dramatic, because it has a clear protagonist (the machines) and a clear antagonist (human survival), and because it admits of a clear resolution: either we control the machines or they control us. The narrative is binary. It is graspable. It is, in its way, exciting. And it is a fantasy — not because the risk is necessarily unreal but because the binary framing is a simplification that the actual situation does not support.
The actual situation is not binary. It is granular, systemic, and undramatic. The actual damage that AI is doing and will continue to do is not the spectacular destruction of civilization but the gradual erosion of the specific capacities that make human thought valuable — capacities that are being undermined not by a dramatic event but by the slow, daily, almost imperceptible accommodation of human cognition to machine cognition's rhythms.
This is the quiet disaster, and it is invisible to the imagination of disaster because it lacks the narrative properties that the imagination requires. There is no moment of crisis. There is no clear antagonist. There is no resolution — no point at which the danger is either averted or consummated. There is only the slow degradation of a capacity that was never visible in the first place, because it operated beneath the level of conscious awareness.
The Berkeley study that Segal documents in The Orange Pill describes one face of this quiet disaster. Workers using AI tools worked more intensely, took on more tasks, expanded into domains previously outside their scope — and burned out at higher rates, reported lower satisfaction, experienced the specific grey fatigue of a nervous system running continuously at the machine's pace. None of this is spectacular. None of it would make a science fiction film. It is the undramatic, systemic, slow-moving disaster that the imagination of disaster is structurally incapable of seeing.
Sontag's analysis reveals why the spectacular and the genuine are not merely different in degree but different in kind. The spectacular disaster is, by definition, visible. It demands attention. It commands response. The quiet disaster is invisible, not because it is hidden but because it operates at a scale and pace that the human attentional apparatus, calibrated by evolution for detecting sudden threats, is poorly equipped to perceive. The developer who can no longer debug without AI assistance has not experienced a visible catastrophe. She has lost a capacity she may not have known she possessed, and the loss is detectable only in retrospect — only when a situation arises that demands the capacity and the capacity is not there.
The twelve-year-old who asks her mother "What am I for?" — the scene Segal places at the opening of Chapter 6 — is living inside the quiet disaster. She has not been dramatically displaced by a machine. She has been subtly devalued by a culture that measures worth in outputs, and the machine produces outputs that exceed anything she can produce. Her question is not spectacular. It will not appear in a disaster film. But it is the genuine article — the question that the imagination of disaster cannot formulate because it lacks narrative tension, because it does not admit of a climactic resolution, because the only honest answer is not "We will triumph" or "We are doomed" but "We do not yet know, and the not-knowing is itself the condition we must learn to inhabit."
Sontag wrote that science fiction films "are not about science. They are about disaster, which is one of the oldest subjects of art." The AI discourse is not about AI. It is about the human fear of obsolescence, which is one of the oldest fears of the species. And the disaster fantasies that dominate the discourse serve the same function as the B-movies Sontag analyzed: they give the fear a form that can be confronted, processed, and discharged — a narrative shape that converts the diffuse, unmanageable anxiety of genuine uncertainty into the graspable, consumable experience of watching a catastrophe unfold on screen.
The Sontagian discipline, in this context, is the refusal of the spectacular. Not because the spectacular risks are unreal — some of them may be — but because the spectacular monopolizes the attention that the quiet disaster needs. Every hour spent debating whether AI will achieve superintelligence and destroy humanity is an hour not spent examining whether AI is, right now, today, in millions of offices and classrooms and homes, quietly eroding the capacity for independent thought that is the precondition for humanity's ability to respond to any threat, including the spectacular ones.
The irony is sharp and Sontag would have appreciated it. The imagination of disaster distracts from the actual disaster. The fantasy of spectacular destruction prevents attention to the mundane destruction already underway. The culture rehearses its death in dramatic form and, absorbed by the rehearsal, fails to notice that it is dying in undramatic form — not in a single catastrophic event but in the daily, incremental, almost imperceptible replacement of genuine human capacities with machine approximations that look identical from the outside and are structurally different within.
Sontag concluded her essay by noting that the imagination of disaster is "complicit with the abhorrent" — that the pleasure of watching catastrophe unfold is itself a form of accommodation to the conditions that make catastrophe possible. The pleasure of debating AI existential risk may be similarly complicit. The debate is exciting. The excitement is the problem. The excitement consumes the attention that should be directed at the unglamorous, unexciting, absolutely critical work of understanding what the tools are actually doing to the minds that use them, and what structures might redirect the damage before it becomes irreversible.
The quiet disaster does not announce itself. It does not arrive with dramatic music and a visible antagonist. It arrives as convenience, as efficiency, as the pleasant sensation of having more capability at lower cost. It arrives, in other words, as a gift. And the Sontagian discipline is the discipline of examining gifts — of asking not only "Is this useful?" but "What does this cost?" and "Who pays?" and "What am I losing that I have not yet noticed I possess?"
These questions are not dramatic. They will not fill a theater. But they are the questions that determine whether the quiet disaster is recognized in time for the builders to build the structures that might redirect it — or whether it proceeds, undramatic and unresisted, until the capacities it erodes are remembered only as historical curiosities, the way the bards' memorization of the Iliad is remembered now: with admiration and without any intention of recovery.
There is a photograph that Susan Sontag never discussed but that her framework illuminates with unbearable precision. It is not a photograph of war, of famine, of the disasters she spent her later career analyzing. It is a photograph that does not exist — or rather, it exists only in the aggregate, dispersed across thousands of LinkedIn posts and Twitter threads and Substack essays published in the winter and spring of 2026. It is the photograph of a face — a senior engineer's face, a designer's face, a writer's face — in the moment of recognition. The moment when a person who has spent years, sometimes decades, building expertise watches a machine reproduce the substance of that expertise in minutes.
The face is not captured by any camera because the moment happens in private — at a desk, late at night, alone with a screen. But the pain is documented. It is documented in the discourse, in the essays and posts and comments that the displaced produce in the weeks and months after the recognition. And the documentation raises, with acute force, the question Sontag spent her final major work examining: What happens to pain when it is represented? What obligations does the representation create? And what happens when the culture that receives the representation finds it more convenient to acknowledge the pain than to be changed by it?
In Regarding the Pain of Others, published in 2003, Sontag reconsidered arguments she had made twenty-six years earlier in On Photography. The earlier book had argued that the proliferation of photographs of suffering produces habituation — that repeated exposure to images of pain dulls the viewer's response, converting empathy into spectatorship and spectatorship into indifference. The later book complicated this argument without abandoning it. Sontag acknowledged that photographs of suffering can produce genuine moral response — that the image of a body in pain can pierce the spectator's comfort and produce, at least momentarily, the recognition that the suffering is real and that the spectator bears some relationship to it. But she also insisted that this piercing is fragile, that it requires conditions the modern media ecology systematically destroys, and that the most common response to the representation of pain is not empathy but a kind of sympathetic consumption — the comfortable acknowledgment of suffering that substitutes for any actual engagement with its causes or consequences.
The elegists in Segal's discourse — the experienced practitioners who mourn the loss of depth, the senior architect who feels like a master calligrapher watching the printing press arrive, the developers who retreated to the woods to lower their cost of living in anticipation of professional extinction — are the subjects of a representation that Sontag's framework predicts will be consumed rather than confronted.
The consumption operates through a mechanism Sontag identified in her analysis of war photography: the balanced perspective. The discourse includes the elegists' pain as one voice among many — the optimists, the triumphalists, the pragmatists, the elegists — and the inclusion functions as acknowledgment. The pain has been noted. The perspective has been represented. The discourse has demonstrated its seriousness by giving space to the difficult view. And the reader, having encountered the representation, is released from the obligation to sit with it — released to scroll to the next perspective, the next data point, the next confident assertion about the future that the elegist's grief interrupts only momentarily.
Sontag wrote: "Compassion is an unstable emotion. It needs to be translated into action, or it withers." The compassion that the AI discourse extends to the displaced is, by this standard, already withering. It is the compassion of the headline that says "The Human Cost of AI" followed by eight hundred words of balanced analysis that places the human cost in context — contextualizes it, qualifies it, surrounds it with the countervailing evidence of productivity gains and democratized capability and historical precedent suggesting that transitions, while painful, ultimately benefit more people than they harm.
The contextualization is not dishonest. The countervailing evidence is real. The historical precedent is genuine. And the effect of presenting the pain inside this context is to neutralize it — to convert it from a demand into a data point, from an accusation into a perspective, from a wound into a talking point.
This is what Sontag meant by the spectatorial relationship to pain. The spectator sees the pain. The spectator acknowledges the pain. The spectator does not deny the pain's reality. The spectator simply does not allow the pain to disrupt the spectator's framework — to challenge the assumptions, the priorities, the comfortable narratives within which the spectator's life is organized. The pain is real, the spectator concedes. Now let us return to the discussion of how AI will transform the economy.
The elegists in the AI discourse are aware of this mechanism, and their awareness compounds their pain. They know that their grief is being consumed. They know that the essays they write about the loss of depth, the erosion of craft, the disappearance of the specific intimacy between a builder and the thing she builds, will be read sympathetically and absorbed into a discourse that has already decided, at the structural level, that the gains outweigh the losses. They are witnesses whose testimony is being received without being heard — acknowledged without being allowed to change anything.
Sontag would have recognized this as a failure of moral seriousness. Not a failure of sympathy — the discourse is full of sympathy for the displaced, just as the media is full of sympathy for the victims of wars it covers without opposing. The failure is the conversion of witnessing into spectating. The witness demands response. The spectator consumes representation. And the difference between the two is the difference between being changed by what one sees and being confirmed by it — between the encounter that reorganizes one's priorities and the consumption that leaves them undisturbed.
The senior architect who told Segal that he felt like a master calligrapher watching the printing press was not offering a perspective. He was reporting an experience — the experience of watching the thing that made his work meaningful become economically irrelevant. The experience has a specific quality: not anger, not self-pity, but the particular grief of a person who understands that the world is not wrong to change and also that the change is destroying something genuinely valuable. The grief is not irrational. It is not nostalgic. It is the grief of a person who can hold both truths simultaneously — that the change is necessary and that the thing being destroyed by the change deserved to survive — and who has no available framework for processing the contradiction.
The framework knitters of Nottingham, whom Segal invokes in his chapter on the Luddites, inhabited the same contradiction. Their expertise was real. The machines were better. Both facts coexisted, and the coexistence was the source of the rage that drove them to break the looms — not because they believed breaking the looms would save their trade, but because the contradiction was intolerable, and the act of breaking was the only available response to an intolerable situation. The contemporary elegists do not break machines. They write essays. The essays are received more gently than the machine-breaking was. They are also, in the end, equally impotent — not because the essays are bad or the arguments are weak, but because the discourse into which they are received has no mechanism for converting sympathy into structural change.
Sontag's prescription, in Regarding the Pain of Others, was not optimistic. She did not believe that better representations of suffering would produce better responses to suffering. She believed that the problem was not representational but structural — that the culture's inability to respond to pain was not a failure of empathy but a consequence of the systems within which empathy operated. The media produced images of suffering. The economy consumed them. The consumption produced no action because the systems that would convert empathy into action — political structures, institutional responses, collective decision-making processes — were not designed to be activated by empathy. They were designed to be activated by interest, by power, by the calculus of advantage.
The same structural analysis applies to the AI discourse. The essays documenting the pain of the displaced are received within a system — the technology industry, the financial markets, the institutional structures that govern employment and education — that is not designed to be activated by grief. It is designed to be activated by productivity metrics, by market signals, by the competitive dynamics that reward efficiency and punish sentimentality. Within this system, the elegist's grief is not meaningless — it is, in the precise Sontagian sense, represented without being heard. The representation satisfies the culture's need to demonstrate moral seriousness. The structural response to the grief is, with rare exceptions, nil.
This is not a counsel of despair. Sontag was not a nihilist, and the application of her framework is not an argument that nothing can be done. It is an argument that what must be done cannot be accomplished through representation alone — through better essays, more sympathetic coverage, more inclusive discourse. What must be done is structural: the construction of institutions, policies, and practices that convert the recognition of pain into material response. Retraining programs that are funded and accessible. Transition support that acknowledges the legitimacy of the grief rather than treating it as an obstacle to be overcome. Educational reforms that prepare the next generation for the new landscape without pretending that the preparation is costless or that the old landscape was not worth mourning.
These are the dams that Segal calls for — and the Sontagian contribution is the insistence that the dams cannot be built on sympathy alone. Sympathy without structure is spectating. Structure without sympathy is brutality. The combination — the willingness to be genuinely changed by the pain of the displaced and the institutional capacity to convert that change into action — is the only response adequate to the scale of the transition.
The displaced deserve more than acknowledgment. They deserve response. And the difference between the two is the difference between a culture that consumes the representation of pain and a culture that allows the pain to restructure its priorities. Sontag spent her career insisting on the difference. The AI moment is the test of whether anyone was listening.
---
Susan Sontag published Where the Stress Falls in 2001 — a collection of essays, reviews, and speeches that ranged across literature, theater, photography, dance, and the moral obligations of the intellectual. The title, borrowed from the world of prosody and poetics, refers to the point in a line of verse where emphasis lands — the syllable that bears the weight, that determines the rhythm, that organizes the surrounding syllables into a pattern that means something rather than merely existing. In poetry, where the stress falls is not arbitrary. It is the poet's primary decision. Change the stress and the line changes — not just in sound but in meaning, in the relationship between the words and the world they describe.
Sontag used the phrase as a metaphor for the critic's essential task: to identify, in any cultural phenomenon, where the weight actually falls — where the significance concentrates, where the consequences accumulate, where the meaning that matters most is located. The task is not interpretation in the reductive sense she had spent her career opposing. It is attention of the most precise and demanding kind — the attention that notices where the stress falls before deciding what the stressed element means.
In the AI moment, the stress falls in places the discourse has not yet learned to attend to.
The most visible stress points are economic. Jobs displaced. Industries reorganized. Market valuations collapsed and reconstructed. The SaaSpocalypse that Segal documents — a trillion dollars of market value erased in the first weeks of 2026 — is a stress point of extraordinary visibility. It commands attention. It generates analysis. It produces the kind of dramatic narrative that the imagination of disaster thrives on: an industry upended, fortunes destroyed, the old order swept away.
But the economic stress, for all its drama, is not where the deepest weight falls. The deepest stress falls on something less visible, less measurable, and infinitely harder to address: the relationship between human beings and the activity that gives their lives meaning.
Sontag's journals — Reborn and As Consciousness Is Harnessed to Flesh — reveal a person for whom the activity of thinking was not a means to an end but the end itself. The journals are not drafts of essays. They are the record of a consciousness in motion — a consciousness that found its fullest expression not in the published work but in the daily, private, unwitnessed act of engaging with ideas. The published essays were products. The thinking was the life. And the thinking was characterized by a quality that Sontag returned to repeatedly in her journals: difficulty. The willingness to sit with what she did not understand. The refusal to resolve a question prematurely. The tolerance for the discomfort of not-knowing that is the precondition for genuine knowing.
The AI moment places this quality of engagement under stress — not by attacking it directly but by making it optional. The difficulty that Sontag practiced daily, the discomfort she cultivated as the medium of genuine thought, is no longer required by the tools that an increasing proportion of knowledge workers use for an increasing proportion of their cognitive labor. The tool provides answers before the question has been fully inhabited. The tool resolves the discomfort before the discomfort has produced the understanding it was generating. The tool, in short, makes difficulty avoidable. And avoidable difficulty, over time, becomes avoided difficulty, and avoided difficulty, over sufficient time, becomes difficulty that the person can no longer tolerate.
The Berkeley study documented this stress at the organizational level — the intensification of work, the colonization of pauses, the erosion of the boundaries between cognitive labor and cognitive rest. These are real phenomena with real consequences. But they are symptoms of a deeper stress that the study's methodology could not capture: the stress on the human capacity for the kind of sustained, uncomfortable, unrewarded cognitive engagement that produces genuine understanding.
This capacity is not visible to organizational metrics. It does not appear on dashboards. It cannot be measured by hours worked or tasks completed or features shipped. It is visible only in the quality of the decisions that depend on it — and the quality of decisions, in most organizations, is evaluated only after the consequences have manifested, which may be months or years after the capacity that produced the decision has been eroded.
Sontag would insist on identifying this stress point — on attending to it before it becomes visible in consequences, on recognizing the erosion while the capacity that is eroding still exists and can still be preserved. The insistence is characteristically demanding. It requires attending to something invisible, valuing something unmeasurable, protecting something that the incentive structures of every organization in the economy are systematically undermining.
The stress also falls — and here Sontag's framework becomes most uncomfortable — on the people who are succeeding in the new landscape. Not only the displaced bear the weight. The builders, the adopters, the people who cannot stop working because the tools make work so productive and so stimulating that stopping feels like a voluntary diminishment of capability — these people are under stress too, though they may not recognize it as stress because it arrives disguised as exhilaration.
Segal's account of his own experience is the most honest testimony available on this point. The confession that he recognized the pattern — the inability to stop, the colonization of every available hour by the intoxicating momentum of AI-augmented creation, the confusion of productivity with aliveness — is the confession of a person who has identified, in himself, the stress that the discourse has not yet named. The stress is not burnout in the conventional sense. It is something more specific and more insidious: the gradual replacement of the human rhythm of engagement — the oscillation between effort and rest, between focus and diffusion, between the intense concentration that produces insight and the idle wandering that prepares the ground for the next concentration — with the machine's rhythm, which is continuous, which does not oscillate, which has no need for rest and therefore no model for what rest contributes to the quality of the work.
The human rhythm is not a weakness. It is an architecture. The oscillation between engagement and rest is not a concession to biological limitation but a cognitive necessity — the mechanism by which the brain consolidates understanding, integrates disparate inputs, and prepares the neural substrate for the next phase of concentrated work. Sleep researchers have demonstrated that the cognitive work performed during sleep — the consolidation of memory, the integration of new information with existing knowledge, the pruning of irrelevant connections — is as essential to learning as the waking engagement with material. The rhythm is productive. Interrupting it is not efficiency but sabotage.
AI tools, by their nature, do not model this rhythm. They are available continuously. They respond instantly. They create, through their constant availability and instant responsiveness, an implicit standard of engagement that is continuous rather than oscillating, instant rather than deliberate, machine-paced rather than human-paced. The person who works with AI tools for extended periods — who enters the state Segal describes, the state where the work flows and the hours disappear and the boundary between the human's rhythm and the machine's rhythm dissolves — is being subjected to a stress that feels like freedom.
This is where Sontag's demand for moral seriousness becomes most exacting. The stress that feels like freedom is the stress that is hardest to identify, hardest to name, hardest to protect against. The person experiencing it does not feel stressed. She feels productive. She feels capable. She feels, in the language of the flow literature, that she is operating at the peak of her ability. The identification of this feeling as a form of stress requires the kind of self-examination that Sontag practiced in her journals and demanded of the culture at large — the willingness to question one's own experience, to distrust one's own satisfactions, to ask whether the feeling of freedom might be, in specific and identifiable ways, the mechanism of a new form of constraint.
Sontag's journals return, again and again, to the practice of self-questioning — the discipline of examining one's own intellectual habits, one's own emotional responses, one's own satisfactions and dissatisfactions, with the same rigor one applies to external objects of analysis. The practice is uncomfortable. It is supposed to be uncomfortable. The comfort of unexamined satisfaction is precisely what the practice is designed to disrupt, because unexamined satisfaction is the condition under which the most significant erosions occur — the erosions that are invisible because the person undergoing them is too satisfied to notice.
In the AI moment, the most important place where the stress falls may be on the examined life itself — on the Socratic practice of knowing oneself, which requires exactly the kind of sustained, uncomfortable, unrewarded cognitive engagement that AI tools make avoidable. If the examined life is the life worth living, and if AI tools systematically reduce the incentive to examine, then the stress falls on the worth of living itself — not dramatically, not spectacularly, but in the quiet, daily, almost imperceptible way that Sontag spent her career insisting we learn to see.
Where the stress falls is where the attention must go. The rest is commentary.
---
The passage I did not write haunts me more than anything in this book.
It was early in the collaboration with Claude, weeks before the first draft of The Orange Pill took shape. I had described an idea about the nature of creative friction — something about how the struggle itself deposits understanding, the way silt builds riverbanks. Claude came back with a paragraph so clean, so perfectly structured, so precisely tuned to what I was reaching for, that I sat looking at it for a long time without moving. Not because it was wrong. Because I could not tell whether it was mine.
That inability to tell — that moment of genuine confusion about the provenance of my own thinking — is the experience Sontag's framework was built to diagnose. Not the grand anxieties about superintelligence or mass unemployment. The quiet disorientation of a person who has lost, temporarily but really, the ability to distinguish between an idea he struggled toward and an idea that was handed to him pre-formed.
Sontag never saw a large language model. She died in 2004, two decades before the winter something changed. But the framework she built across forty years of relentless intellectual work — the insistence on attending to form before content, the suspicion of surfaces that accommodate too easily, the demand that we strip away metaphors and see the thing itself — is the framework I needed most, and the one I found last.
What Sontag gave me, reading her through the lens of what I was living, was a vocabulary for the specific danger I kept sensing but could not name. Not the danger of AI taking jobs. Not the danger of machines outperforming humans. The danger of fluency without conviction. Of form without resistance. Of a world saturated with plausible surfaces where the capacity to detect what is genuine — genuinely thought, genuinely felt, genuinely struggled toward — atrophies from disuse.
Her phrase keeps returning to me: an erotics of art. The pleasure of the encounter itself, before interpretation reduces it to meaning. I think about it at three in the morning when I am deep in a session with Claude and the work is flowing and I cannot tell whether I am in flow or in compulsion. I think about it when my son asks me a question about his future and I reach for an answer and feel, behind the answer, the echo of something Claude might have said — and I stop, and I make myself find the rough version, the one that comes from me, the one that does not sound as polished but carries the weight of actually having been thought by a person who loves him and is scared for him and does not know what comes next.
That roughness is what Sontag was defending. The seam. The mark of the hand. The evidence that a consciousness was there, struggling with material that resisted, arriving at something imperfect and honest and alive.
I wrote The Orange Pill to understand the amplifier. I read Sontag to understand what the amplifier costs. The cost is not paid in dollars or in jobs, though those costs are real. The cost is paid in attention — the specific, difficult, irreplaceable quality of attention that allows a human being to distinguish between the genuine and the merely plausible, between the thought and its simulation, between the work that was earned and the work that arrived pre-formed.
That attention is the rarest resource we possess. It is rarer than intelligence, rarer than creativity, rarer than any technical skill the machines are learning to replicate. And it is the one resource that the machines cannot provide, because it requires the thing the machines do not have: a consciousness that cares about the difference between the true and the merely convincing. A consciousness with stakes.
I have stakes. You have stakes. Our children have stakes. The machines do not. That asymmetry is where the work begins.
Sontag demanded seriousness. Not solemnity — she was capable of enormous playfulness and wit. Seriousness: the willingness to treat the encounter with reality as something that matters, that deserves full attention, that cannot be reduced to a comfortable paraphrase without losing the thing that makes it worth having. The AI age needs that seriousness more than any previous age, because the temptation to accept the comfortable paraphrase has never been so seductive or so cheap.
I am still learning to resist the seduction. I fail regularly. The smooth version is always there, waiting, easier and faster and more impressive than the rough one. The discipline is not to never accept it. The discipline is to always know when I have, and to feel the difference, and to choose — sometimes, when it matters most — the version that bears the marks of my own hands.
The marks of your own hands. That is the practice. That is the demand. That is what Sontag, from across the divide of decades and death, is still insisting on.
Hold on to it.
When the machine's output is indistinguishable from thought,
the only defense left is the capacity to feel the difference.
Susan Sontag built that capacity before anyone knew we'd need it.
Sontag spent four decades sharpening a single discipline: the ability to detect the moment when a culture mistakes fluency for conviction, surface for substance, the representation of experience for experience itself. She applied it to photography, to illness, to war, to style. This book applies it to the most consequential surface ever manufactured — AI-generated content that looks, reads, and feels like genuine human thought.
Through ten chapters that move from Sontag's foundational arguments about interpretation and form to her analyses of photography, disaster, and the spectatorial consumption of pain, this volume reveals a thinker whose framework was built for a crisis she never witnessed. Her demand — attend to the encounter before extracting the meaning — is the essential cognitive discipline for anyone building, creating, or simply thinking alongside machines.
In a world flooded with the plausible, Sontag teaches us to find the real.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Susan Sontag — On AI uses as stepping stones for thinking through the AI revolution.