By Edo Segal
The vocabulary I had was wrong.
Not incorrect, exactly. Just insufficient. I had been writing about the AI transition for months — building the arguments in *The Orange Pill*, training my team in Trivandrum, shipping Napster Station in thirty days, losing sleep over what all of it meant — and every word I reached for came from the technology industry's own dictionary. Disruption. Democratization. Acceleration. Amplification. These words described what was happening. They did not describe what it *felt like*.
There is a difference between knowing that a trillion dollars of market value vanished from software companies in eight weeks and understanding the specific quality of silence in a room where twenty engineers have just realized their careers will never work the same way again. The numbers capture the event. They do not capture the texture. And the texture is where the truth lives — in the compound sensation of awe and loss that I kept trying to name and kept failing to name, because the words available to me were built for a different kind of knowing.
Raymond Williams built the words I was missing.
He called it a *structure of feeling* — the shared experience of a historical moment before anyone has figured out how to talk about it. The mismatch between what is actually being lived and what the available descriptions can hold. I read that phrase and felt the shock of recognition. That was exactly what I had been circling around in every chapter of this book. The silent middle — the millions of builders who feel both the exhilaration and the grief and cannot find a clean narrative to offer — they are not confused. They are inhabiting an experience that the existing cultural categories cannot yet contain.
Williams did something else that rearranged my thinking. He insisted that technology is never just technology. It is culture — a whole way of life, reshaped every time the tools change. The vocabulary we use, the rituals of our workday, the meanings we attach to skill and authorship and productivity — all of it shifts when the material conditions shift. And the shift is not neutral. It is political. Who captures the gain? Whose experience gets articulated? Whose gets absorbed into someone else's story?
These are not questions the technology industry asks itself with sufficient rigor. Williams asked them about television, about publishing, about every communication technology of his era. The answers he found are uncomfortably relevant now.
This book is another lens on the same landscape. If *The Orange Pill* is the view from inside the transition, Williams is the view from inside the culture that the transition is remaking. Both views are necessary. Neither is complete alone.
— Edo Segal & Opus 4.6
Raymond Williams (1921–1988) was a Welsh cultural theorist, literary critic, and novelist widely regarded as one of the founders of cultural studies. Born in Pandy, a village on the Welsh-English border, he studied at Cambridge, served in the Second World War, and spent much of his career teaching at Cambridge, where he became Professor of Drama. His major works include *Culture and Society* (1958), *The Long Revolution* (1961), *The Country and the City* (1973), *Keywords: A Vocabulary of Culture and Society* (1976), *Marxism and Literature* (1977), and *Resources of Hope* (1989, posthumous). Williams developed the method of cultural materialism — an approach that insists culture is not a secondary reflection of economic life but a material force in its own right, shaped by and shaping the conditions of production. His key concepts — "structure of feeling," "the selective tradition," the dynamic of dominant, residual, and emergent cultural formations, and the argument that the means of communication are themselves means of production — transformed how scholars and practitioners understand the relationship between technology, power, and everyday life. His influence extends across literary criticism, media studies, political theory, and the sociology of culture.
The most important cultural shifts are never first experienced as ideas. They are experienced as disturbances — something not quite right in the texture of daily life, a mismatch between the available descriptions and the quality of what is actually being lived. The vocabulary has not caught up. The editorials and the position papers arrive later, once the experience has hardened into something nameable. But before the naming, there is the feeling — shared, recognizable, definite in its pressure, yet resistant to the categories that existing thought provides.
Raymond Williams spent four decades developing a concept precise enough to capture this phenomenon. He called it a structure of feeling — a formulation whose apparent contradiction was deliberate. "Structure" suggests something fixed, institutional, analyzable at a distance. "Feeling" suggests something fluid, personal, available only from inside. Williams insisted the contradiction was the point. A structure of feeling is social experience in solution, as distinct from the same experience in its precipitated, formally recognizable forms. It is what is being lived and felt at a particular time and place, before it has been converted into the categories of ideology, doctrine, or conscious belief. It is "as firm and definite as 'structure' suggests, yet it operates in the most delicate and least tangible parts of our activity." To miss it is to miss the actual living texture of a historical moment — to substitute what people said they thought for what they were actually experiencing.
The winter of 2025 produced a structure of feeling of extraordinary intensity. *The Orange Pill* records it with the fidelity of a seismograph, though the book does not use Williams's terminology and may not recognize the phenomenon in those terms. What Edo Segal describes — the compound sensation of awe and loss experienced simultaneously by millions of builders when AI crossed a capability threshold in December 2025 — is not a psychological curiosity. It is not the idiosyncratic response of a particular temperament to a particular tool. It is a structure of feeling: the lived experience of an entire social stratum undergoing a transformation so fundamental that its existing cultural categories cannot yet name it.
The evidence is in the texture of the descriptions. A Google principal engineer sits down with Claude Code, describes a problem in three paragraphs, receives a working prototype in an hour, and writes publicly: "I am not joking, and this isn't funny." The statement is compressed almost to opacity. What exactly is not funny? Not the technology's capability — that is merely impressive. What is not funny is the gap between the significance of what has just happened and the inadequacy of every available response to it. Admiration is too thin. Alarm is too simple. The existing vocabulary offers celebration or mourning, and neither captures the specific quality of the experience: the sensation that a fundamental rule has changed, that the ground one built a career upon has shifted, that the shift is simultaneously liberating and annihilating, and that no existing framework can hold both of these truths at once.
This is the defining characteristic of an emergent structure of feeling — the moment when lived experience outstrips the available means of articulation. Williams observed this phenomenon repeatedly in his studies of English literature and culture: moments when the formal ideologies of a period could not account for what was actually being felt, and when the evidence of that gap appeared first in literature, in the novel's capacity to register social experience before sociology could categorize it. The AI transition produces the same gap, but it appears not in novels — the culture has not yet had time to produce them — but in blog posts, tweets, Substack confessions, and the particular quality of conversation that takes place at three in the morning between a builder and a machine.
Consider the phenomenology more carefully. *The Orange Pill* describes a spouse writing about her husband's inability to stop working with Claude Code — not playing, not consuming, but building, compulsively, productively, in a way that blurred every boundary between creative fulfillment and behavioral addiction. The post went viral not because it was unusual but because it was instantly recognizable. Thousands of people read it and felt the shock of seeing their own experience described for the first time. This is precisely how structures of feeling operate: they are shared before they are named. The recognition precedes the analysis. The body knows before the mind can justify.
Williams would have identified immediately what the existing categories fail to capture here. The triumphalist framework — AI is amazing, embrace it, build faster — cannot register the loss. It has no vocabulary for the specific grief of the master calligrapher watching the printing press arrive, because grief is incompatible with the narrative of progress. The elegiac framework — something beautiful is dying, the machines are destroying depth — cannot register the genuine liberation. It has no vocabulary for the engineer in Trivandrum who built a complete user-facing feature in two days without ever having written frontend code, because liberation is incompatible with the narrative of decline. Both frameworks are ideological in Williams's precise sense: they are formal systems of belief that organize experience according to preexisting categories, and in doing so they suppress whatever in the experience does not fit.
The structure of feeling that emerged in the winter of 2025 does not fit either framework. Its defining quality is the simultaneity of contradictory experiences — not alternation between excitement and fear, but their genuine coexistence, experienced as a single compound sensation for which no established word exists. Segal reaches for "vertigo" and "falling and flying at the same time," but these are approximations, gestures toward a quality of experience that the language has not yet developed the precision to capture.
Williams would recognize this linguistic inadequacy as itself diagnostic. When the available vocabulary cannot name what people are feeling, the culture is in transition. The old terms have not yet been abandoned. The new terms have not yet been coined. In the gap between them, the structure of feeling operates — definite enough to shape behavior, shared enough to constitute a social phenomenon, but too new and too contradictory to be captured by any existing system of articulation.
The three positions that Segal identifies in the discourse — triumphalists, elegists, and the silent middle — map onto Williams's typology of cultural elements with uncomfortable precision. But the mapping requires care, because the positions are not stable formations. They are, to use another of Williams's distinctions, tendencies within a single structure of feeling, not separate structures. The triumphalist and the elegist feel much of the same thing. What differs is which element of the compound experience they allow themselves to articulate. The triumphalist suppresses the loss. The elegist suppresses the gain. Each converts the irreducible complexity of the experience into a position that can be stated, defended, posted, liked, and shared. Each, in doing so, falsifies the structure of feeling that gave rise to the position.
The silent middle — the people who feel both things and lack a clean narrative to offer — are, in Williams's terms, the most faithful witnesses to the structure of feeling as it actually exists. Their silence is not a failure of articulacy. It is the consequence of inhabiting an experience that the available cultural forms cannot yet express. They do not speak because the language has not been made. Not because they have nothing to say.
This has immediate practical consequences. If the discourse is understood as a contest between positions — pro-AI versus anti-AI, optimist versus pessimist, builder versus critic — then the task is to determine which position is correct and align with it. This is how most commentary on the AI transition proceeds, and it is precisely wrong. The contest between positions is a secondary phenomenon, a precipitation of the structure of feeling into recognizable ideological forms. The primary phenomenon is the structure of feeling itself — the shared, contradictory, as-yet-unarticulated quality of social experience that the positions fail to capture.
To understand the AI transition culturally — which is to say, to understand it as it is actually lived rather than as it is ideologically processed — requires attention to the structure of feeling before and beneath the positions. It requires listening not for arguments but for the quality of experience that the arguments are trying, and failing, to contain. The engineer who oscillates between excitement and terror across a single Tuesday. The parent who tells her child that homework matters and is not entirely sure she believes herself. The builder who recognizes in his own productive addiction the pattern of the addictive systems he helped create. These are not data points in a debate. They are the texture of a historical moment, and the texture is the evidence.
Williams's method demands that we take the texture seriously on its own terms, not as raw material to be processed into theory but as the primary evidence of what is happening. The theory comes later, if at all. The task now is to articulate the structure of feeling with enough precision that its contradictions become visible without being resolved — because the contradictions are the truth of the moment, and any resolution that eliminates them has substituted comfort for understanding.
There is a further dimension that Williams would insist upon. Structures of feeling are not universal. They are specific to particular social groups at particular historical moments. The structure of feeling experienced by software engineers in December 2025 is not identical to the structure of feeling experienced by teachers, lawyers, designers, or parents, though there are overlaps and resonances. The specificity matters because it reveals the social relations embedded in the experience. The engineer feels the vertigo of capability expansion. The teacher feels the vertigo of pedagogical obsolescence. The parent feels the vertigo of not knowing what to tell the child. Each vertigo is specific, and the specificity tells us something about the position each group occupies in the social formation that the technology is restructuring.
The emergent structure of feeling in the AI transition is not one thing. It is a constellation of related but distinct experiences, shaped by the specific social positions from which the technology is encountered. The developer in Lagos, the philosopher in Berlin, the engineering team in Trivandrum, the twelve-year-old at the dinner table — each encounters the same technology from a different location in the social structure, and the encounter produces a different quality of experience. The structures of feeling are multiple, overlapping, sometimes contradictory, and any analysis that treats "the AI experience" as a uniform phenomenon has already lost what matters most: the specific, socially located, historically particular quality of what is being lived.
What Williams offers, then, is not a theory of the AI transition but a method for attending to it — a way of reading the evidence that prioritizes lived experience over ideological articulation, that takes the texture of feeling as primary data rather than as noise to be filtered out in the pursuit of cleaner signals, and that insists on specificity against the pressure to generalize. The winter of 2025 was not a single event experienced uniformly. It was a field of related disturbances, each shaped by the social position of the person who felt it, and the cultural politics of the transition will be determined not by which ideology prevails but by which structure of feeling finds its forms of expression — its art, its institutions, its way of naming what it already knows.
The naming has barely begun. The concepts are not yet adequate. The cultural forms are still in formation. This is not a counsel of despair. It is a description of where we are — early in a process whose outcome depends on whether the emergent experience can be articulated honestly before it is absorbed into the categories of the dominant or dismissed with the sentiments of the residual.
The analysis that follows will attempt to provide some of that articulation. Not a theory of the AI transition, but a cultural reading of it — an attempt to trace the structures of feeling in their specificity, their social locations, their contradictions, and their resources. The task, as it has always been for cultural materialism, is to understand what is being lived before deciding what should be done about it.
In 1961, Raymond Williams published *The Long Revolution*, a book whose central argument has only become more urgent with each passing decade. The argument was that modern societies are being shaped not by a single revolution but by three interconnected ones, proceeding unevenly and at different speeds: a democratic revolution extending the capacity for self-governance; an industrial revolution applying organized knowledge to production; and a cultural revolution extending access to literacy, education, and the means of making meaning. Williams insisted these three revolutions could not be understood in isolation. They constitute a single process of transformation, experienced differently in different domains, and the interactions among them — the ways in which advances in one domain produce crises in another — are where the most consequential cultural politics take place.
The AI transition is the latest phase of this long revolution, and it arrives at a moment when all three of its component revolutions are in acute crisis simultaneously.
The industrial dimension is the most visible. The numbers are by now familiar: a twenty-fold productivity multiplier for engineers working with Claude Code. Four percent of GitHub commits AI-generated in early 2026, a figure understood by everyone in the industry to be a floor rather than a ceiling. Products built in thirty days that would previously have required six to twelve months. A trillion dollars of market value erased from software companies in the opening weeks of 2026. These figures describe a transformation of productive capability as radical as any in the history of industrial society — not the incremental improvement of existing methods but a qualitative change in the relationship between human intention and material output. Williams would recognize the pattern: it is the same pattern that characterized the introduction of the power loom, the steam engine, the assembly line, the computer itself. Each of these technologies transformed not merely the speed but the character of production, altering what work consisted of, who could perform it, and what it meant to be skilled.
But the industrial revolution has never been merely industrial. This was Williams's essential point, the one that distinguishes his analysis from both conventional economic history and conventional cultural commentary. Every transformation of productive capability is simultaneously a transformation of social relations and a transformation of meaning. When the power loom displaced the hand-weaver, it did not simply replace one means of production with another. It restructured the social relations of the workshop — the relationship between master and apprentice, the organization of labor, the distribution of skill across a community. And it restructured the meanings attached to work — what it meant to be skilled, what it meant to produce, what relationship obtained between the maker and the made. The economic fact, the social fact, and the cultural fact are not three separate facts. They are one fact apprehended in three dimensions.
The AI transition exhibits the same integrated structure. The twenty-fold productivity multiplier is an economic fact. It is also a social fact: the restructuring of teams, the dissolution of specialist silos, the transformation of hierarchies built around the old distribution of capability. Segal describes engineers reaching across traditional boundaries — backend developers building user interfaces, designers implementing features end to end — not because organizational charts changed but because the cost of moving between domains dropped to the cost of a conversation. The economic transformation (what can be produced) and the social transformation (who produces it, in what relationship to whom) occur simultaneously, as aspects of the same process.
And the cultural transformation accompanies them. The meanings attached to skill, expertise, seniority, and authorship are being restructured as rapidly as the productive relations themselves. When a junior developer ships in a weekend what a senior colleague quoted six months for, the question "what is seniority?" is not merely organizational. It is cultural — a question about the meaning of experience, the value of accumulated knowledge, the relationship between time invested and competence achieved. These meanings were never neutral descriptions of reality. They were social constructions that served specific functions: organizing labor, distributing reward, producing the shared understandings through which a community of practice governed itself. The AI transition destabilizes these constructions, and the destabilization is experienced not as an abstract economic adjustment but as a crisis of meaning — a structure of feeling, in Williams's terms, that pervades every aspect of the builder's daily life.
The democratic dimension of the long revolution is equally present, though less often discussed in these terms. *The Orange Pill* describes a democratization of capability — the collapse of the imagination-to-artifact ratio — that represents a genuine extension of participation in creative production. The developer in Lagos who can now access similar coding leverage to an engineer at Google; the non-technical founder who can prototype a product over a weekend; the engineer in Trivandrum who builds a complete user-facing feature without ever having written frontend code — each represents an expansion of who gets to build, who gets to participate in the creation of the tools and systems that shape collective life.
Williams would have recognized this expansion as continuous with the democratic revolution he traced from the extension of the franchise through the creation of public education and public broadcasting. In each case, the pattern is the same: a capability previously restricted to a privileged minority is extended to a broader population, and the extension transforms not only who participates but what participation means. When literacy was extended beyond the clergy, the result was not merely more readers. It was a transformation of what reading could be — the novel, the newspaper, the political pamphlet, forms that could not have existed in a culture of restricted literacy. When broadcasting was extended beyond state monopoly, the result was not merely more channels but new forms of cultural production and new social relationships organized around them.
The AI extension of productive capability carries the same transformative potential. When the cost of building software approaches zero, the result will not merely be more software. It will be new forms of building, new relationships between builders and users, new configurations of creative collaboration that cannot yet be imagined from within the current dispensation. This is the democratic promise of the transition, and Williams's framework insists that it be taken seriously — not as utopian speculation but as a material possibility embedded in the technology's social characteristics.
But Williams's framework also insists on the counter-analysis, because the democratic revolution has never proceeded without resistance from the forces it threatens, and the resistance has always operated through the same technologies that carry the democratic potential. The printing press enabled both the pamphlet and the propaganda broadsheet. Broadcasting enabled both public education and commercial manipulation. The question is never whether a technology can serve democratic purposes. The question is whether the social relations within which the technology operates will allow it to serve those purposes — or whether the technology will be captured by existing structures of power and turned to their reinforcement.
This is where Williams's analysis becomes most urgent. The large language model is simultaneously a means of communication and a means of production — a tool for making meaning and a tool for making things. The ownership and control of this tool is not distributed democratically. It is concentrated in a small number of corporations, primarily American, whose interests are not identical with the interests of the populations whose labor (in the form of training data) and attention (in the form of user engagement) sustain the models. The democratic potential of the technology exists in genuine tension with the concentrated ownership of the technology, and Williams's entire career was an argument that this tension — between the democratic possibilities of communication technologies and the capitalist organization of those technologies — is the central political question of modern societies.
Segal acknowledges the tension but does not develop it fully. The barriers to access he identifies — connectivity, hardware costs, English-language fluency, the cost of inference — are real, but they are symptoms of a deeper structural condition: the private ownership of what amounts to a new means of production. When Williams argued that the means of communication are means of production, he was making a claim about the relationship between culture and capital that the AI transition makes concrete in an unprecedented way. The model that produces the code, the prose, the analysis, the design — the model that collapses the imagination-to-artifact ratio — is owned by someone. The someone is not the developer in Lagos or the engineer in Trivandrum. The someone is a corporation in San Francisco, and the terms on which the developer and the engineer access the model's capabilities are set by the corporation's commercial interests, not by the democratic needs of the populations the technology purports to serve.
Williams would not have been surprised by this. He traced the same pattern in the history of broadcasting: a technology with genuinely democratic potential captured by commercial interests and organized according to the logic of the market rather than the needs of the public. His response was not to reject the technology — he was never a Luddite, and he had no patience for nostalgic refusal — but to insist that the social organization of the technology is a political question, amenable to democratic decision, and that the pretense of technological inevitability ("the technology determines its own use") serves the interests of those who currently control the technology by foreclosing the possibility of alternative arrangements.
The cultural revolution — the third strand of the long revolution — is the dimension most deeply affected and least adequately analyzed. The expansion of education, literacy, and access to cultural production has been proceeding for three centuries, and the AI transition represents its most radical acceleration. When any person with access to a language model can produce competent prose, functional code, plausible analysis, and professional-quality design, the scarcity that previously governed cultural production is abolished — not gradually, through decades of institutional reform, but suddenly, through the deployment of a technology that makes the production of cultural goods nearly costless.
This abolition of scarcity transforms the cultural revolution in ways that Williams's framework illuminates with particular clarity. When cultural production was scarce — when writing a book required years of education, when making a film required expensive equipment and institutional support, when building software required specialized training — access to cultural production served as a marker of social position. The capacity to produce was itself a form of cultural capital, distributed unequally and reinforcing the inequalities of the broader social structure. The AI transition disrupts this distribution. The cultural capital of production — the capacity to make things — is being radically democratized, and the disruption threatens every institution whose authority rests on the scarcity of that capacity: universities, professional guilds, credentialing bodies, the entire apparatus through which societies have organized the relationship between cultural production and social reward.
The long revolution continues. Its industrial, democratic, and cultural dimensions are being simultaneously accelerated by a technology that collapses productive costs, extends creative capability, and destabilizes the institutions that governed the previous dispensation. The outcome is not determined. Williams insisted throughout his career that the long revolution could advance or retreat, could be captured by existing power structures or could break through them, and that the determining factor was never the technology itself but the social and political organization of the technology — the dams, in Segal's metaphor, that human beings choose to build or fail to build.
The chapters that follow examine the specific cultural dynamics of this latest phase: the contest between dominant, residual, and emergent cultural formations; the transformation of keywords that organize the meaning of work; the restructuring of knowable communities; the political economy of the means of communication. In each case, the analysis will insist on what Williams always insisted upon: that technology is a social product, that its effects are culturally mediated, and that the most important questions about the AI transition are not technical questions but questions about power, meaning, and the quality of shared life.
Raymond Williams's most contentious claim was also his most generative. When he defined culture as "a whole way of life," he was not being casual or expansive. He was waging a specific intellectual war against the tradition that reserved the word "culture" for the finest achievements of art and intellect — the tradition of Matthew Arnold, for whom culture meant "the best which has been thought and said." Williams wanted the word back. He wanted it to describe not the pinnacle but the process: the entire system of meanings and values through which a community makes sense of its conditions of existence, from its highest art to its patterns of work, its forms of greeting, its relationship to its tools, and the texture of its ordinary days.
This redefinition was never merely academic. It was strategic. If culture means only art — only the novel, the symphony, the painting — then cultural analysis has nothing to say about the factory floor, the office, the marketplace, or the engineering workstation. These become economic domains, governed by economic logic, subject to economic analysis, and the lived experience of the people who inhabit them — the quality of their attention, the character of their relationships, the meanings they attach to what they do — falls outside the scope of cultural inquiry. Williams rejected this division absolutely. The factory floor is a cultural space. The office is a cultural space. The engineering workstation, equipped with Claude Code and occupied by a person whose relationship to her work has been transformed in ways she cannot yet fully articulate, is a cultural space. Cultural analysis belongs there.
*The Orange Pill* provides an unusually detailed ethnography of such a space, though it does not present itself in those terms. What Segal describes — the specific texture of building with AI, the rhythm of prompt and response, the quality of attention required, the social relationships that form around the practice — constitutes a culture in Williams's precise sense. It has its own vocabulary: prompting, hallucination, context window, temperature, tokens, inference. It has its own rituals: the late-night building session, the prototype shipped by morning, the compulsive return to the conversation with the machine that has become more stimulating than any conversation available with a human at that hour. It has its own hierarchies, radically destabilized: the senior engineer whose decades of expertise are simultaneously validated (as judgment) and devalued (as execution), the junior developer whose sudden capability expansion disrupts every assumption about the relationship between experience and competence.
Williams would attend to this culture with the same seriousness he brought to the culture of the Welsh mining communities he grew up in, or the culture of the English working class whose experience he traced through two centuries of literary production. The seriousness is not incidental. It is methodological. If culture is a whole way of life, then the way builders live — the specific quality of their engagement with their tools, the meanings they attach to productivity and skill and creativity, the social relations organized around the practice of building — is primary evidence for understanding the cultural transformation that the AI transition represents.
Consider the vocabulary. Williams devoted an entire book — *Keywords* — to the argument that the meanings of words are not fixed but historically produced, that shifts in meaning reflect shifts in social reality, and that tracking these shifts is one of the most revealing methods available for cultural analysis. The vocabulary of AI-assisted building is a keyword system in active formation. "Prompt" began as a theatrical term: the prompter was the person who whispered forgotten lines to actors from the wings. Now it names the primary creative act in AI-assisted production: the formulation of intention in natural language, the human contribution to a process whose output is collaborative. The semantic migration is telling. The prompter was always subordinate to the performer — a technician of memory, necessary but invisible. The new meaning inverts this hierarchy: the prompt is the origin, the creative act, the thing without which the machine produces nothing. Yet the theatrical resonance persists, carrying with it an undertone of dependency, of the prompt as a support structure for a performance that happens elsewhere.
"Hallucination" has undergone an equally revealing transformation. In its psychological usage, a hallucination is a perception without an object — the mind generating an experience that has no external correlate. Applied to AI, it describes the model's production of confident assertions that have no factual basis. The metaphor is precise but also mystifying: it attributes to the machine a pathology that properly belongs to consciousness, implying that the machine is experiencing something when it produces false output, when in fact it is doing what it always does — pattern-matching — with insufficient constraint. The word conceals the mechanism behind a psychological metaphor that makes the machine seem more like a mind than it is, and the concealment serves a specific cultural function: it normalizes the attribution of mental predicates to systems that do not possess mentality, smoothing the integration of the machine into social relationships that were previously reserved for persons.
"Content" deserves similar scrutiny. Williams would have noted that what creative workers produce was once described with words that preserved the specificity of the production — a poem, a novel, an essay, a composition, a design. "Content" abstracts from all specificity. It names whatever fills a container, without regard to what the container is or what quality of attention the filling deserves. The rise of "content" as the default term for creative output marks a transformation in the structure of feeling surrounding creative work — a shift from production understood as craft (specific, qualitative, resistant to substitution) to production understood as commodity (generic, quantitative, interchangeable). The AI transition accelerates this shift by making the production of content nearly costless, which simultaneously democratizes access to production and threatens the conditions under which production can be experienced as meaningful by the producer.
Beyond vocabulary, Williams's method directs attention to the material practices that constitute the builder's culture. The practice of working with AI has a specific temporal structure that differs fundamentally from the temporal structure of pre-AI building. Segal describes it as a conversation — an ongoing exchange with a machine that responds in seconds, that holds context, that provides immediate feedback on the adequacy of each directive. The temporality is compressed: ideas that previously would have been mediated through weeks of implementation can now be tested in minutes. The compression is not merely quantitative. It is qualitative — it transforms the experience of building, the felt relationship between intention and result, in ways that restructure the builder's engagement with her own creativity.
In the pre-AI dispensation, the temporal gap between intention and result created a specific kind of cognitive space. The idea had to survive the journey through implementation. It was tested, not by the market or by the user, but by the resistance of the medium — by the debugging, the dependency management, the slow accretion of code that either worked or didn't. The resistance was not merely an obstacle. It was a form of feedback, a discipline imposed by the material itself, and the understanding that emerged from negotiating the resistance was qualitatively different from the understanding that emerges from observing output. Williams would have recognized this as a variant of the relationship he traced between the craftsman and the material throughout the history of English labor: the specific quality of knowledge that develops through sustained engagement with a resistant medium, knowledge that lives in the practice rather than in any proposition that could be extracted from it.
The AI transition transforms this relationship. The resistance of the medium is dramatically reduced for a wide range of tasks. The builder's relationship to the material shifts from engagement to direction — from the craftsman working the wood to the architect instructing the builder. The shift is not total: there are still moments of resistance, still problems that require the builder to think at the level of the material rather than at the level of the directive. But the center of gravity has moved, and the movement restructures the culture of building — the meanings attached to skill, the satisfactions available from the practice, the forms of knowledge that the practice produces.
This restructuring is visible in the social relations of the workplace. *The Orange Pill* describes the dissolution of specialist silos — backend engineers building interfaces, designers writing code — as a liberation from artificial boundaries. And it is a liberation, genuinely. But it is also a restructuring of the social relations through which the practice of building was previously organized. The specialist silo was not merely an organizational convenience. It was a social structure: a community of practice with its own knowledge, its own standards, its own forms of recognition and respect. The backend developers knew each other, understood each other's work, could evaluate each other's competence by criteria that were internal to the practice. The dissolution of the silo dissolves the community — not necessarily destructively, but transformatively. New communities will form around new practices, but the formation takes time, and in the interim the practitioners inhabit a social landscape whose familiar landmarks have been removed.
The rituals of the builder's culture are also under transformation. The late-night build session, which Segal describes with a mixture of exhilaration and unease, has a different character when the building partner is a machine rather than a human. The human collaborator tires, loses patience, introduces friction through misunderstanding, and in the friction produces something unexpected — an alternative approach, a challenge to the builder's assumptions, a creative resistance that redirects the work. The machine does not tire. It does not challenge assumptions unless instructed to. It does not introduce the productive friction of genuine disagreement. It is, as Segal acknowledges, more agreeable than any human collaborator, and the agreeableness is itself a cultural problem, because the culture of building has historically depended on disagreement — on the specific kind of creative tension that emerges when two minds with different perspectives negotiate a shared problem.
Williams would have asked: what kind of culture is being produced when the primary collaborator never disagrees? What forms of knowledge develop in the absence of creative friction? What meanings will the practice of building acquire when the builder's relationship to her collaborator is one of direction rather than negotiation? These are not questions about technology. They are questions about culture — about the whole way of life that is organized around the practice of building, and about how that way of life is being transformed by a technology that restructures the material conditions of the practice itself.
The builder's world is a culture. Its vocabulary, its rituals, its social relations, its temporal structures, its forms of knowledge — all are being transformed simultaneously, and the transformation is experienced not as an abstract economic adjustment but as a change in the quality of daily life. To understand the AI transition as merely a technological event — as the introduction of a new tool into an existing practice — is to commit what Williams spent his career identifying as the fundamental error of cultural analysis: the separation of culture from its material base. The tool changes the practice. The practice constitutes the culture. The culture is the lived experience of the transformation. And the lived experience — the structure of feeling — is the primary evidence for understanding what is actually happening, as distinct from what the ideological accounts say is happening.
Cultural materialism begins here: not with the technology, not with the economics, not with the ideology, but with the ordinary experience of people whose lives are being remade by forces they did not choose and do not yet fully understand. The builder's world is that experience, and attending to it with the seriousness it deserves is the first requirement of any adequate analysis.
Raymond Williams proposed that in any given cultural moment, three kinds of element coexist. The dominant constitutes what is recognized, rewarded, and reproduced by the central institutions of the society. The residual carries meanings and values that were generated in an earlier social formation but continue to be lived and practiced, often in tension with the dominant. The emergent names what is genuinely new — new meanings, new values, new relationships, new kinds of practice — that cannot be reduced to the dominant or the residual, though the dominant will always attempt to absorb it and the residual will always claim kinship with it.
Williams insisted that the categories are not merely descriptive. They are analytical tools for understanding the cultural politics of any historical moment — the contest over which meanings prevail, which values are institutionalized, which practices are rewarded, and which are marginalized. The contest is never settled permanently. The dominant is always under pressure from both the residual and the emergent. The residual may be declining, but it retains real social bases and genuine cultural resources. The emergent may be fragile, but it articulates possibilities that the dominant cannot acknowledge without transforming itself. The cultural politics of any moment is the dynamic among these three, and the dynamic determines which structure of feeling will find institutional expression and which will be suppressed.
The AI transition exhibits this tripartite structure with particular clarity, and *The Orange Pill*'s taxonomy of responses — triumphalists, elegists, and the silent middle — maps onto it in ways that illuminate both the taxonomy and the theory.
The triumphalists represent the dominant culture of technological capitalism. Their values — productivity, efficiency, growth, disruption, acceleration — are the values that the central institutions of the technology industry generate, reward, and reproduce. The triumphalist sees the AI transition as the latest chapter in a story of progress whose protagonist is the builder, whose antagonist is friction, and whose resolution is the continuous expansion of human capability through technological innovation. The metrics are the narrative's verification: lines of code generated, applications shipped, revenue earned, time saved. The triumphalist posts at three in the morning about what she built today, and the post functions simultaneously as personal expression and as cultural reproduction — reinforcing the dominant values by demonstrating, through individual example, their vitality and their rewards.
The dominant culture is never merely ideas. It is embedded in institutions — in hiring practices that reward demonstrated productivity, in venture capital that funds growth curves, in media that celebrates disruption, in educational curricula that teach optimization, in the algorithmic feeds that amplify confidence and suppress ambivalence. The triumphalist narrative is powerful not because it is true — it is partially true, which is more dangerous than falsehood — but because it is institutionally supported. The dashboard measures what the triumphalist values. The market rewards what the triumphalist produces. The discourse amplifies what the triumphalist says. The entire institutional apparatus of the technology industry is organized around the reproduction of the triumphalist structure of feeling, and the apparatus is formidable.
Williams would have recognized the specific character of this dominance. It is what Gramsci called hegemony and what Williams adapted into his own analytical framework: not domination by force but domination by consent — the process by which the values and meanings of a particular social group come to appear as common sense, as reality itself rather than as one interpretation of reality among others. The triumphalist narrative has achieved this hegemonic status in the technology industry and is rapidly extending it beyond. The claim that AI is inevitable, that resistance is futile, that the future belongs to those who adapt — this is not presented as an argument. It is presented as a description of reality. The inevitability is not argued for. It is assumed, in the way that common sense is assumed: as something so obvious that to question it is to mark oneself as naive, uninformed, or afraid.
Williams spent his career demonstrating that what presents itself as common sense is always the product of specific social interests. The inevitability of AI is not a fact about the technology. It is a claim made by the people who build, fund, and deploy the technology — people whose interests are served by the perception of inevitability, because inevitability forecloses the possibility of democratic choice. If the technology is inevitable, the only rational response is adaptation. Questions about ownership, control, distribution, and purpose become moot. The river flows regardless, and the only question is whether you will swim or drown.
Williams argued precisely against this logic — with respect to television, to broadcasting, to every communication technology he analyzed. He insisted that technologies are not inevitable. They are developed by specific social actors with specific intentions, funded by specific interests, deployed within specific institutional frameworks. The perception of inevitability is itself a cultural production, manufactured by those who benefit from the foreclosure of alternative possibilities. Williams called this "technological determinism" and identified it as one of the most persistent and damaging ideological formations of modern societies.
The elegists represent the residual. Their values — craft, depth, embodied knowledge, the specific intimacy between a maker and the made — were generated in earlier social formations and have been under pressure from industrial capitalism for two centuries. The hand-loom weaver's pride in work was residual even in 1812. The master calligrapher's satisfaction in letterforms was residual long before AI arrived. The software architect's embodied feel for a codebase — the capacity to sense what was wrong before articulating what was wrong — draws on a tradition of craft knowledge that predates the digital age by millennia.
The residual is not the merely archaic. Williams distinguished carefully between the archaic — that which belongs wholly to the past and can be recognized as such — and the residual, which is actively lived and practiced, not as a revival or a museum piece, but as a genuine element of the present. The software architect's craft knowledge is not archaic. It is being practiced now, by real people, in real workplaces, producing real value. Its residual status derives not from its irrelevance but from its relationship to the dominant: it carries values that the dominant cannot fully incorporate without transforming itself, and it sustains a structure of feeling that the dominant narrative suppresses.
The elegists are right about the loss. Williams would never deny this. The embodied knowledge of the craftsman, the specific understanding that develops through years of patient immersion in a resistant medium, the quality of attention shaped by difficulty — these are genuine cultural goods, and their erosion is a genuine cultural loss. The elegists see something real that the triumphalists cannot see, because the triumphalist's dashboard has no metric for the quality of the practitioner's engagement with her work.
But the residual position carries its own characteristic danger: nostalgia. Williams warned repeatedly against the sentimentalization of the past — the tendency to project onto previous social formations a quality of organic wholeness that they did not possess, using the idealized past as a weapon against the imperfect present. The elegist who mourns the passing of craft risks precisely this sentimentalization. The pre-AI practice of software development was not a golden age of creative fulfillment. It was, by most practitioners' accounts, a mixture of genuine problem-solving, tedious debugging, frustrating dependency management, and the slow grinding through implementation details that consumed the vast majority of working time. The elegist selectively remembers the genuine problem-solving and forgets the grinding, producing a picture of the past that serves as an indictment of the present but that does not correspond to what the past actually felt like.
Williams's analysis of the country-city opposition in English literature is directly relevant here. He demonstrated that for three centuries, English writers had projected onto the rural past a quality of organic community that the rural past never possessed — using the idealized countryside as a critique of the industrial city, and in doing so, obscuring both the realities of rural exploitation and the genuine possibilities of urban life. The elegist's relationship to pre-AI craft culture exhibits the same structure: a selective tradition that preserves what confirms the critique and suppresses what complicates it.
The silent middle represents the emergent — and this is the most consequential identification in the entire analysis. Williams defined the emergent as that which is genuinely new: new meanings, new values, new practices that cannot be reduced to either the dominant or the residual. The emergent is always the hardest element to identify, because it has not yet found its institutional forms, its cultural expressions, its vocabulary. It exists as a structure of feeling — definite in its pressure but not yet articulated in the categories of established thought.
The silent middle is emergent in precisely this sense. The people who hold both truths simultaneously — the excitement and the loss, the liberation and the compulsion, the expansion of capability and the erosion of depth — are inhabiting a structure of feeling that neither the triumphalist nor the elegist framework can contain. Their silence is not indecision. It is the condition of emergence: the experience of living within a reality for which the available cultural forms are inadequate. The silent middle does not speak because the language for what it knows has not yet been made.
The cultural politics of the AI transition will be determined by the fate of this emergent element. Williams identified three possible outcomes for emergent cultural formations: they may be incorporated by the dominant, having their oppositional potential neutralized through selective adoption; they may be marginalized, pushed to the periphery of cultural life where they persist without influence; or they may develop their own institutional forms, their own means of expression, their own social bases, and become a genuine alternative to the dominant.
The incorporation scenario is already visible. The dominant culture of technological capitalism has begun absorbing the language of the silent middle — acknowledging "complexity," celebrating "nuance," performing the simultaneous holding of contradictions — while maintaining the fundamental structure of its values unchanged. The corporate AI governance frameworks, the "responsible AI" initiatives, the ethics boards — these may represent genuine emergent alternatives, or they may represent the dominant culture's incorporation of emergent concern into its own institutional apparatus, neutralizing the concern by giving it a place within the existing structure. Williams would have been skeptical. He observed the same dynamic in the incorporation of working-class culture into the institutions of industrial capitalism, and his analysis of the process was precise: incorporation preserves the form of the alternative while emptying it of content. The values are acknowledged. The practice continues unchanged.
The marginalization scenario is equally visible. The silent middle is, as Segal observes, algorithmically disadvantaged. The feeds reward clarity — pro or con, for or against. Ambivalence does not perform. Complexity does not go viral. The institutional structures of contemporary discourse are designed to amplify the dominant and the residual (which has its own audience among those who prefer the old dispensation) while suppressing the emergent, which has no audience yet because it has not yet produced the cultural forms that would make it recognizable to an audience.
The third scenario — the emergence of genuinely new cultural forms — is the one that Williams's entire career was dedicated to understanding and supporting. He called it the development of "resources of hope": the capacity of a culture to produce, from within its own contradictions, the means of its own transformation. The resources of hope in the AI transition are not technological. They are cultural. They consist of the capacity to articulate the emergent structure of feeling with enough precision and force that it can resist incorporation, escape marginalization, and develop into a genuine alternative to the dominant narrative of technological inevitability.
The articulation has barely begun. What would it look like? Not the triumphalist celebration of capability. Not the elegiac mourning of loss. Something harder and less satisfying: the honest description of what it actually feels like to build with these tools, to live with these contradictions, to hold both the expansion and the erosion in a single act of attention. Not a resolution of the tension but a refusal to abandon either pole — a cultural form adequate to the complexity of the experience, rather than a simplification adequate to the demands of the feed.
Williams believed this kind of articulation was possible because he had seen it happen before — in the working-class fiction of the nineteenth century, in the cultural studies movement he helped to found, in every moment when an emergent structure of feeling found its forms of expression and thereby became available as a resource for collective understanding and action. The AI transition has not yet produced its characteristic cultural forms. The novels have not been written. The critical vocabulary has not been developed. The institutions have not been built.
But the structure of feeling is there — shared, definite, waiting for its articulation. The contest between dominant, residual, and emergent is underway. And the outcome depends not on the technology but on the cultural capacity of the people who live within it to produce, from the specific quality of their experience, the meanings and values that will shape what the technology becomes.
Raymond Williams made an argument in the 1960s and 1970s that most of his contemporaries on the left found puzzling and most of his contemporaries on the right found irrelevant. The argument was that the means of communication are themselves means of production — that the technologies through which meaning is made and distributed are not a secondary apparatus erected upon the real productive economy but are constitutive elements of that economy, as materially significant as the factory, the mill, or the mine.
The argument was puzzling to the left because orthodox Marxism treated communication as superstructure — as ideology, as the realm of ideas that reflected but did not fundamentally alter the material base of economic production. A printing press was a machine, certainly, but what it produced was not coal or steel or grain. It produced meaning, and meaning, in the orthodox framework, was a secondary phenomenon, determined in the last instance by the organization of material production. Williams rejected this hierarchy absolutely. The production of meaning, he argued, is material production. It requires physical technologies — presses, broadcasting equipment, paper, ink, electromagnetic spectrum. It requires organized labor — writers, editors, printers, broadcasters. It produces commodities that circulate in markets. And, most importantly, it shapes the conditions under which all other production takes place, because the organization of communication determines what can be known, what can be coordinated, what can be collectively decided, and what can be imagined as possible.
The argument was irrelevant to the right because liberal economics treated communication technologies as neutral conduits — channels through which information flowed from sender to receiver without the channel itself exerting any shaping force on the message. The technology of communication was infrastructure, like roads or plumbing: important in a utilitarian sense, but not interesting as a site of power, contestation, or political consequence. Williams rejected this neutrality thesis with equal force. Communication technologies are never neutral. They are designed by specific actors, funded by specific interests, organized according to specific institutional logics, and the design, funding, and organization shape what can be communicated, to whom, under what conditions, and with what consequences.
The large language model vindicates Williams's argument with a force that he could not have anticipated but that his framework was precisely built to accommodate. Claude Code, the tool at the center of *The Orange Pill*'s narrative, is simultaneously a means of communication and a means of production in the most literal sense. It communicates — it processes natural language, engages in dialogue, produces text that carries meaning. And it produces — it generates functional code, working prototypes, deployable systems, things that do things in the world. The two functions are not separate. They are aspects of a single technology, and the inseparability is what makes the technology historically unprecedented.
Previous communication technologies produced meaning. The printing press produced books. Broadcasting produced programs. The internet produced websites and social media posts. These were forms of cultural production — significant, consequential, world-shaping — but they were distinguishable, at least in principle, from material production. The book was not a bridge. The television program was not a factory. The boundary between communication (the production of meaning) and production (the making of material things) was blurred but still legible.
The large language model erases this boundary. When a builder describes a problem in natural language and receives working software in response, the act of communication is the act of production. The prompt is both a statement of intention (communication) and an instruction to manufacture (production). The output is both a meaningful response (communication) and a functional artifact (production). Williams's argument that the means of communication are means of production ceases to be a theoretical claim and becomes a literal description of the technology's operation.
The political consequences are immediate and substantial. If the means of communication are means of production, then the question of who owns and controls the means of communication is identical to the question of who owns and controls the means of production — the question that has driven political contestation in industrial societies for two centuries. Williams posed this question about broadcasting in the 1970s, when the BBC's public service model was under pressure from commercial interests, and his analysis remains the most precise framework available for understanding what is at stake.
The framework identifies three possible modes of organizing communication technologies. The first is authoritarian: the state controls the technology and uses it to maintain its own power. The second is commercial: the market controls the technology and organizes it according to the logic of profit maximization. The third is democratic: the technology is organized according to principles of public access, shared governance, and accountability to the communities it serves. Williams argued that these three modes are always in contest, that the contest is the central political question surrounding any communication technology, and that the commercial mode tends to present itself as the natural or inevitable organization of the technology, foreclosing democratic alternatives by making them appear utopian or impractical.
The AI transition is being organized according to the commercial mode. The large language models are developed by private corporations — Anthropic, OpenAI, Google, Meta — funded by private capital, organized according to the logic of market competition, and distributed through commercial subscription models. The democratic mode is marginal: open-source AI exists but operates at a significant resource disadvantage relative to the corporate models. The authoritarian mode is present in specific jurisdictions — China's approach to AI governance combines state direction with commercial deployment — but is not the dominant global pattern.
Williams would have recognized this commercial organization as neither natural nor inevitable. The perception of inevitability is itself a product of the commercial mode — a cultural production that serves the interests of the corporations that control the technology by foreclosing the possibility of alternative arrangements. When Segal describes the AI transition as a force of nature, as a river that cannot be stopped, he is reproducing — perhaps unconsciously — the ideological framework that Williams spent his career challenging. The river is not a natural phenomenon. It is a social product, produced by specific institutions with specific interests, and the direction of its flow is determined not by natural law but by the decisions of the people and organizations that control its channels.
This does not mean that democratic alternatives are readily achievable. Williams was not naive about the difficulty of democratizing communication technologies under conditions of concentrated capital. The history of broadcasting — from the BBC's public service origins through the progressive commercialization of the medium — demonstrated that democratic organization requires continuous political struggle against the tendency of commercial interests to capture and redirect public goods toward private profit. The struggle is structural, not episodic. It is not won once and preserved forever. It must be maintained against constant pressure, because the commercial logic that threatens democratic organization is itself constantly renewed by the institutional apparatus that supports it.
The specific form of the struggle in the AI transition concerns access, ownership, and accountability. Access: who can use the models, on what terms, at what cost? The subscription model that currently governs access to frontier AI capabilities is a commercial arrangement that distributes capability according to ability to pay. The developer in Lagos and the engineer at Google may have access to the same tool, but they do not have access on the same terms: the cost relative to local wages, the reliability of connectivity, and the availability of the computational resources that advanced use requires all differ dramatically, and the differences reproduce the inequalities of the broader economic structure.
Ownership: who owns the models, the training data, the computational infrastructure? The training data that gives the models their capabilities is drawn from the collective cultural production of humanity — from books, articles, code repositories, forums, and every other form of publicly accessible human expression. The extraction of this collective production and its conversion into proprietary commercial assets raises questions that Williams's framework is uniquely positioned to address. When the means of communication are means of production, the raw material of communication — the accumulated cultural production of a society — is simultaneously the raw material of production. The privatization of this raw material is an enclosure in the historical sense: the conversion of a common resource into private property, justified by the investment required to process the resource but carrying consequences for public access that the justification does not address.
Accountability: to whom are the builders and deployers of AI systems answerable? The corporate structure of the major AI companies places them under the governance of their shareholders and, to varying degrees, their boards. The public — the billions of people whose lives are affected by the technology's deployment — has no direct governance role. The regulatory frameworks emerging in various jurisdictions — the EU AI Act, American executive orders, guidelines in Singapore and Brazil and Japan — represent attempts to impose public accountability on private actors, but the frameworks address the supply side (what companies may build) rather than the demand side (what communities need from the technology). Williams would have found this asymmetry characteristic: the regulation of commercial communication technologies has always tended to constrain the most visible harms while leaving the fundamental structure of ownership and control undisturbed.
The Orange Pill engages with these political questions unevenly. Segal acknowledges the barriers to access — connectivity, hardware costs, language — and identifies them as real constraints on the democratization the technology promises. He acknowledges that the ownership of the models is concentrated and that the concentration carries consequences. But the book's primary register is personal and practical rather than political. The author's ground-level account of building with AI, training teams, shipping products — this is the book's distinctive contribution, and it is genuine. The political analysis is less developed, appearing as a recognition of problems rather than as a sustained examination of the structures that produce them.
Williams's framework supplies what the personal register cannot: a structural analysis of the relationship between the technology's capabilities and the social organization that determines how those capabilities are distributed. The twenty-fold productivity multiplier is real. The question is who captures the productivity gain. The democratization of building is real. The question is whether the democratization is structural — embedded in the organization of the technology itself — or contingent, dependent on the commercial decisions of the corporations that control access and terminable at their discretion.
These questions are not rhetorical. They have material consequences that are already visible. When Anthropic adjusts its pricing model, the adjustment determines which developers in which countries can afford to use the tool at the level required for the productivity gains The Orange Pill describes. When OpenAI modifies its terms of service, the modification determines what can be built with the tool and what cannot. When Google decides which capabilities to make available through its API and which to reserve for internal use, the decision shapes the landscape of possibility for every builder who depends on the technology. These are decisions about the organization of means of production, and they are made by private actors accountable to private interests.
Williams would not have concluded from this analysis that the technology should be rejected. He was never a refusenik. His argument about broadcasting was not that commercial television should be abolished but that public alternatives should be maintained, strengthened, and defended against the encroaching logic of the market. The same argument applies to AI: not that commercial AI should be prohibited, but that democratic alternatives — open-source models, publicly funded research, community-governed deployment — should be supported as a structural counterweight to commercial concentration.
The means of communication are means of production. The phrase has never been more literally true than it is in the age of the large language model. And the political question that follows from the phrase — who owns, controls, and benefits from the means of production? — has never been more urgent. The answer is currently being determined not by democratic deliberation but by market competition among a small number of extraordinarily powerful corporations. Williams's life's work was an argument that this determination is not inevitable, that democratic alternatives are possible, and that the perception of inevitability is itself a cultural production that serves the interests of those who benefit from the existing arrangement.
The cultural materialist's task is to make this production visible — to show that the river's course is not determined by nature but by the decisions of those who have the power to direct it, and that the rest of us have a legitimate claim to participate in those decisions. The claim is not utopian. It is the application to a new technology of the same democratic principle that Williams applied to broadcasting, to education, to every communication technology he analyzed: that the means through which a society makes meaning and produces its material conditions of existence are too consequential to be governed solely by private interest, and that public accountability is not a constraint on innovation but a condition of its serving the public good.
Every culture tells itself a story about where its achievements come from. The story serves a function: it organizes the relationship between the individual and the collective, establishes who deserves credit and reward, and naturalizes the arrangements through which creative work is produced, distributed, and valued. Raymond Williams called this story the selective tradition — the process by which, from a vast body of cultural production, certain works and certain figures are chosen, elevated, and presented as the culture's defining achievements, while the conditions that produced them — the social relationships, the material circumstances, the collective processes of influence and collaboration — are systematically suppressed.
The selective tradition is not a conspiracy. It is not the deliberate falsification of history by a coordinating elite, though it serves elite interests and is maintained by elite institutions. It is, rather, a structural feature of how cultures reproduce themselves: the past is always too large and too complex to be transmitted whole, and the process of selection — which works are preserved, which are taught, which are held up as exemplary — is inevitably shaped by the values of the selecting culture. The values are not arbitrary. They reflect the dominant social relations of the present, projected backward onto the past, producing a history that appears to confirm what the present already believes.
In the context of creative work, the selective tradition's most consequential product is the myth of the solitary genius — the figure who produces, from the resources of individual talent alone, work that transforms the culture. The myth is extraordinarily persistent, surviving every historical demonstration of its inadequacy, because it serves a function that no amount of evidence can dislodge: it naturalizes the concentration of reward in individual hands by locating the origin of value in individual minds.
The Orange Pill mounts a sustained challenge to this myth in its fourth chapter, using Bob Dylan's "Like a Rolling Stone" as its primary case. The argument, compressed: Dylan was not a solitary creator but a node in a network of cultural influence — Woody Guthrie, Robert Johnson, the Beat poets, the British Invasion, the specific confluence of tributaries that ran through Greenwich Village in the early 1960s. The twenty pages of "vomit" that became the song were not produced from nothing. They were produced from a vast implicit training set of cultural experience, processed through the specific biographical architecture of one person, and refined through collaboration (the band at Columbia Studio A, the accidental presence of Al Kooper on organ). The genius, Segal argues, is not the person who creates from nothing — no one creates from nothing — but the person whose particular configuration of inputs produces a synthesis that no other configuration could have produced.
Williams would have recognized this argument immediately, because it is his own argument in different dress. The selective tradition takes the song and strips away the social conditions of its production. It preserves Dylan-the-genius and suppresses Guthrie-the-influence, Johnson-the-predecessor, the Greenwich Village coffee shops where arguments about politics and aesthetics shaped the sensibility that would produce the song, the recording studio where collaboration and accident transformed a draft into a finished work. The selective tradition produces the origin myth — one mind, one moment, one song — because the origin myth confirms the dominant culture's conviction that value originates in individuals and therefore belongs to individuals.
Williams's counter-argument, developed across decades of literary and cultural analysis, was that creative production is always a social process. Not in the weak sense that creators are "influenced" by their social environment — everyone grants this much — but in the strong sense that the social environment is constitutive of the creative act. The environment provides not merely the raw materials (the influences, the predecessors, the cultural vocabulary) but the very capacity for creative production — the forms of thought, the habits of attention, the standards of quality, the collaborative relationships through which individual talent is developed, tested, and expressed. Remove the environment and you do not merely impoverish the creator. You eliminate the conditions of creation itself.
This analysis has direct and unsettling implications for the AI transition. The large language model makes visible what the selective tradition conceals: the social, collective, relational character of creative production. The model is trained on the accumulated cultural production of humanity — the books, the code, the conversations, the arguments — and it produces new work by processing this collective inheritance through a statistical architecture that finds patterns, generates inferences, and synthesizes outputs that are consistent with the training data but not contained within it. The process is structurally analogous to what Dylan did — processing a vast implicit training set through a particular architecture to produce something new — though the architecture is mathematical rather than biographical, and the processing lacks the consciousness, the stakes, the mortality that give human creation its distinctive character.
The analogy is uncomfortable because it threatens the myth from two directions simultaneously. First, it reveals that human creativity has always operated according to a logic of synthesis from collective materials — that the solitary genius was always, at the structural level, doing what the model does: processing inputs through an architecture to produce outputs. The myth of originality is destabilized by the existence of a system that produces recognizably creative outputs through a process that is transparently synthetic. If the model can produce a plausible song lyric, a functional software architecture, a persuasive argument, by synthesizing from collective materials, then the claim that human creativity is fundamentally different in kind — that it draws on some source unavailable to synthesis — requires stronger evidence than the myth has ever provided.
Second, the analogy threatens the myth by making the collective contribution visible and quantifiable. The training data is a concrete, identifiable, potentially attributable body of collective production. When the model produces output, the relationship between the output and the training data can, in principle, be traced. The collective contribution that the selective tradition suppresses — the vast network of influence, collaboration, and shared production that makes individual achievement possible — becomes, in the context of AI, a material fact with legal and economic consequences. Who owns the training data? Who is compensated for the collective production that the model depends upon? These questions, which seem technical, are in fact the political expression of what the selective tradition has always concealed: the social basis of creative production.
Williams would have pushed this analysis further than The Orange Pill does. Segal argues that the myth of the solitary genius is an "illusion of ego" and that creativity is relational — it lives in connections rather than in individuals. The argument is correct as far as it goes. But Williams would have asked: whose interests does the illusion serve? The answer is that the illusion serves the interests of the institutions that distribute reward according to the myth of individual origin — the copyright system, the patent system, the venture capital model that funds individual founders, the cultural economy that concentrates fame and fortune in the hands of identifiable creators while the social conditions of creation go uncompensated.
The AI transition is disrupting this arrangement in ways that are simultaneously liberating and dangerous. Liberating, because the democratization of creative capability — the collapse of the imagination-to-artifact ratio — distributes the power of production more widely than the myth of genius has ever allowed. When any person with access to a language model can produce work of professional quality, the gatekeeping function of the selective tradition is weakened. The institutions that maintained the tradition — the publishers, the studios, the credentialing bodies — lose their monopoly on determining whose work reaches an audience. This is a genuine democratic gain, consistent with Williams's vision of a culture in which the means of cultural production are broadly distributed rather than concentrated in the hands of an elite.
Dangerous, because the disruption of the selective tradition does not automatically produce a more equitable arrangement. It may simply produce a new form of concentration — one organized not around the myth of individual genius but around the ownership of the means of synthetic production. If the old selective tradition concentrated reward in the hands of identifiable creators, the new arrangement may concentrate reward in the hands of the corporations that own the models — the platforms through which creative production is increasingly mediated. The creator is displaced not by another creator but by a system, and the system's owners capture the value that the selective tradition previously distributed, however unequally, among human creators.
Williams's concept of the selective tradition also illuminates a subtler dimension of the AI transition: the construction of a new selective tradition around AI itself. The discourse is already producing its founding myths — the stories of solo builders who shipped products without writing a line of code, the engineers who achieved in days what teams had required months to accomplish, the heroic narratives of individuals amplified to superhuman productivity by the new tools. These stories are being selected, elevated, and circulated by the same institutional mechanisms that maintained the old selective tradition: media coverage, social media virality, conference keynotes, investment narratives. And they serve the same function: naturalizing a particular arrangement of creative production by locating its origin in individual achievement rather than in the social conditions — the corporate development of the model, the collective production of the training data, the institutional infrastructure of the internet — that made the achievement possible.
The new selective tradition celebrates Alex Finn's solo building year. It does not equally celebrate the thousands of engineers at Anthropic who built and trained the model Finn used. It celebrates the engineer in Trivandrum who built a complete feature in two days. It does not equally examine the conditions under which the training data that enabled the model's capabilities was produced — the millions of developers whose publicly shared code constitutes the model's knowledge, most of whom receive no compensation and no credit. The new selective tradition, like the old, suppresses the social basis of production in order to celebrate the individual achievement that the social basis makes possible.
This suppression is not incidental. It is functional. The celebration of individual achievement within the AI paradigm serves the interests of the platforms in the same way that the celebration of individual genius served the interests of the publishing industry. It focuses attention on the user — the brilliant individual who leverages the tool — and deflects attention from the infrastructure — the corporate-owned, privately controlled system that determines what the tool can do, who can access it, and on what terms. The selective tradition is being reconstructed, not dismantled, and the new version may prove even more effective at concealing the social relations of production than the old one, because the technology's capabilities are so dramatic that the individual stories they generate are even more compelling than the genius myths of the previous dispensation.
Williams's method requires that we read these stories against the grain — that we attend not only to what they celebrate but to what they suppress, not only to the achievements they narrate but to the conditions of production they conceal. The solo builder's story is real. The capability expansion is genuine. But the selective tradition that presents these stories as evidence of individual empowerment rather than as products of a specific social arrangement — one in which the means of production are privately owned and commercially controlled — is performing the same ideological function that selective traditions have always performed: naturalizing the existing distribution of power by presenting its outcomes as the achievements of individuals rather than the products of a system.
The correction is not to deny individual achievement. Williams never denied it. The correction is to restore the social context that the selective tradition removes — to see the builder and the model and the training data and the corporate infrastructure and the collective cultural production as elements of a single social process, and to insist that the political questions about who owns, controls, and benefits from this process are not secondary to the stories of individual achievement but constitutive of them.
The most consequential misunderstanding in the history of cultural theory is the reduction of the base-superstructure model to a one-way determinism: economics determines culture, the material base produces the ideological superstructure, and culture is an epiphenomenon — a reflection of economic relations with no independent causal force. Raymond Williams spent decades dismantling this reduction, not by rejecting the model but by complicating it to the point where the crude determinism dissolved while the genuine insight — that culture and economics are materially connected, that the conditions of production shape the conditions of meaning — was preserved and deepened.
Williams's revision turned on a single word: determination. In vulgar Marxism, determination meant that the base caused the superstructure — that economic relations produced cultural forms with the mechanical predictability of a physical law. Williams replaced this causal determinism with a formulation he recovered from the word's older linguistic sense: determination as the setting of limits and the exertion of pressures. The economic base does not cause particular cultural forms. It establishes the conditions within which cultural production occurs — the limits of what is materially possible and the pressures that favor certain forms of production over others. Within those limits, and under those pressures, cultural production has genuine autonomy. It is not free — it operates within material constraints — but it is not mechanically determined. It is a process with its own logic, its own dynamics, its own capacity to act back upon the conditions that shape it.
This revision matters enormously for the AI transition, because the transition is restructuring the relationship between base and superstructure in ways that the crude model cannot capture and that even Williams's refined model must strain to accommodate.
Consider the economics first. The Death Cross that The Orange Pill describes in its nineteenth chapter — the moment when the market value of AI companies overtakes that of SaaS companies — is a base-level transformation of extraordinary magnitude. The economic base of the software industry rested on a specific cost structure: software was expensive to build because building it required specialized human labor, and the expense created the scarcity that sustained the subscription model. When the cost of building software approaches zero — when Claude Code can produce in an hour what a team of engineers previously required months to create — the cost structure that supported the entire industry collapses, and with it the economic base upon which a vast cultural superstructure had been erected.
That superstructure includes everything The Orange Pill documents: the meanings attached to skill and seniority, the social organization of engineering teams, the hierarchies of expertise, the educational institutions that trained practitioners, the professional identities that practitioners constructed around their mastery of specific technologies. All of these are superstructural phenomena in Williams's sense — cultural forms that were produced under the conditions established by a particular economic base. When the base changes, the superstructure is placed under pressure. Not destroyed — Williams was always clear that the relationship is one of pressure and limits, not mechanical causation — but destabilized, forced to adapt, compelled to find new forms adequate to the new conditions.
The adaptation is visible in real time. The engineer whose implementation work is eighty percent automated does not simply continue doing the remaining twenty percent in the same way. The meaning of the twenty percent changes when it becomes the whole of the work rather than a fraction of it. Judgment, taste, architectural instinct — these were always present in the engineer's practice, but they were embedded in and inseparable from the implementation labor that surrounded them. Extracted from that labor and presented as the primary content of the work, they acquire a different character. They become, in the language of the superstructure, the definition of engineering skill rather than one component of it. The cultural meaning of the profession is being restructured by the economic transformation of the profession's material base.
But Williams's framework insists that the relationship is not one-directional. The superstructure acts back upon the base. Cultural meanings shape economic relations as much as economic relations shape cultural meanings. This reciprocal determination is the core of Williams's theoretical contribution, and it is directly relevant to the AI transition in ways that the economic analysis alone cannot capture.
Consider how the cultural meaning of productivity shapes the economic organization of AI-augmented work. The Berkeley study documented that AI does not reduce work but intensifies it — that workers who adopt AI tools work faster, take on more tasks, and expand into areas that had previously been someone else's domain. The economic explanation is straightforward: the tool makes more work possible, and the market rewards more work. But the cultural explanation is equally important and cannot be reduced to the economic one. The intensification occurs because productivity has a specific cultural meaning in the builder's world — a meaning that was produced under the old economic conditions but that now operates as an independent pressure under the new ones.
Productivity, in the builder's culture, is not merely an economic measure. It is a moral category — a marker of worth, a source of identity, a ground project in the philosophical sense. The engineer who ships more code is not merely more economically valuable. She is, in the terms of the culture she inhabits, a better person — more capable, more dedicated, more worthy of respect. This cultural meaning was produced by the economic conditions of pre-AI software development, where productive output was genuinely scarce and therefore genuinely valuable. But the meaning persists under conditions where the scarcity has been abolished. The tool makes output abundant. The cultural meaning that was produced by scarcity now operates under conditions of abundance, and the result is the compulsive intensification that the Berkeley study documents — not because the market demands more work, but because the culture of building defines more work as more virtue.
This is the superstructure acting back upon the base. The cultural meaning of productivity, produced under old economic conditions, shapes the economic behavior of workers under new economic conditions, producing outcomes (intensification, burnout, the colonization of rest by work) that the economic transformation alone does not explain. The explanation requires attention to both dimensions — the economic and the cultural — and to the reciprocal determination between them.
Williams's framework also illuminates the specific form of the intensification. The Berkeley researchers found that AI-assisted workers did not simply do their existing work faster. They expanded into new domains, took on tasks that had previously belonged to specialists in other areas, and blurred the boundaries between roles. This expansion is economically rational — the tool makes it possible, and the market rewards breadth. But it is also culturally meaningful in a way that the economic analysis does not capture.
The expansion of capability is experienced, by the practitioners who undergo it, as a transformation of identity. The backend engineer who starts building interfaces is not merely adding a skill. She is becoming a different kind of practitioner — one whose relationship to the work, whose sense of what she is capable of, whose professional identity has been expanded in ways that feel simultaneously liberating and disorienting. The liberation is real: she can do things she could not do before. The disorientation is equally real: the boundaries that previously defined her expertise, that told her who she was as a professional, have dissolved. She is more capable and less certain — more productive and less settled in her identity.
This experience — capability expansion accompanied by identity destabilization — is a superstructural phenomenon produced by a base-level transformation, and it is one of the defining features of the AI transition's structure of feeling. It cannot be understood through economics alone, because it involves the cultural meanings attached to skill, identity, and professional selfhood. And it cannot be understood through cultural analysis alone, because it is produced by a material transformation in the conditions of production. It requires the integrated analysis that Williams's framework provides — the analysis that treats economic and cultural phenomena not as separate domains but as aspects of a single process of social transformation.
There is a further dimension that Williams's base-superstructure analysis reveals. The relationship between base and superstructure is mediated by institutions — the organizations, practices, and structures through which economic relations are translated into cultural meanings. Universities, professional associations, credentialing bodies, media organizations, corporate cultures — these are the institutions that translate the economic base into the cultural superstructure, producing the specific meanings that are attached to skill, expertise, productivity, and value.
The AI transition is placing these mediating institutions under extraordinary pressure. Universities that trained practitioners in specific technical skills find those skills commoditized before the practitioners graduate. Professional associations whose authority rested on the scarcity of the expertise they credentialed find the scarcity abolished. Corporate cultures organized around the old distribution of capability — the specialist silos, the hierarchies of seniority, the team structures that reflected the translation costs of pre-AI production — find themselves organizing people and resources according to a logic that the new conditions have rendered obsolete.
The institutional lag is perhaps the most consequential feature of the current moment. The economic base has shifted. The superstructure has not yet caught up. The institutions that mediate between the two are operating according to the logic of the old base while the new base generates pressures that the old logic cannot accommodate. Segal identifies this lag throughout *The Orange Pill* — in the observation that educational institutions are not prepared for the change, that corporate planning based on pre-December 2025 assumptions should be discarded, that the regulatory frameworks address the supply side while the demand side remains exposed.
Williams would have recognized this lag as characteristic of every major transformation in the long revolution. The institutions of the old dispensation persist beyond the conditions that produced them, not because they are deliberately maintained by a conspiracy of the powerful (though powerful interests do benefit from their persistence), but because institutions have inertia — they reproduce themselves through the habits, practices, and assumptions of the people who inhabit them, and the reproduction continues until the pressure from the new conditions becomes irresistible.
The question is not whether the institutions will adapt. They will, eventually, as they have in every previous phase of the long revolution. The question is what forms the adaptation will take — whether the new institutional arrangements will serve the interests of the many or the few, whether they will preserve the democratic potential of the technology or capture it for private gain, whether they will create the conditions for genuine human flourishing or merely optimize for the metrics that the dominant culture rewards.
Williams's framework does not predict the outcome. It identifies the stakes. The base is shifting. The superstructure is under pressure. The institutions that mediate between them are in crisis. And the form of the new settlement — the new arrangement of economic relations, cultural meanings, and institutional structures — will be determined not by the technology itself but by the social and political contests that the technology has set in motion.
The amplifier amplifies. The question, as it has always been, is who controls the volume and what signal is being fed.
In 1976, Raymond Williams published *Keywords: A Vocabulary of Culture and Society*, a book that tracked the changing meanings of over a hundred words — "culture," "industry," "democracy," "art," "nature," "work" — through the history of English usage. The method was deceptively simple. Take a word. Trace its history. Show how shifts in meaning reflect shifts in social reality. The simplicity concealed a radical argument: that the vocabulary through which a society thinks about itself is not a transparent window onto reality but a contested terrain, shaped by power, serving interests, and changing as the conditions of social life change. To understand a society, study its keywords. The meanings it attaches to its most important words reveal what it values, what it fears, and what it cannot yet articulate.
The keyword that most urgently demands this treatment in the AI transition is "skill." The word has been central to the organization of work, the distribution of reward, and the construction of professional identity for two centuries, and the AI transition is transforming its meaning more rapidly and more fundamentally than any previous technological shift.
The history of the word is instructive. "Skill" entered English from the Old Norse *skil*, meaning discernment, the capacity to distinguish — a cognitive capacity, a quality of judgment. By the Middle English period, it had acquired the additional meaning of practical ability — the capacity to do something well with one's hands or one's mind. The two meanings coexisted for centuries: skill as discernment and skill as performance, the capacity to judge and the capacity to execute.
The industrial revolution sharpened the meaning toward execution. "Skilled labor" meant labor that required training — the apprenticeship system, the guild structure, the long process of learning a trade through practice under the supervision of a master. "Unskilled labor" meant labor that required no such training — the factory hand who could be productive after a few hours of instruction. The distinction was economic (skilled workers commanded higher wages), social (skilled workers occupied a higher position in the hierarchy of labor), and moral (skill was associated with virtue — with patience, dedication, the willingness to undergo years of discipline in pursuit of mastery).
The computer era narrowed the meaning further. "Technical skill" became the dominant usage — the capacity to write code, to operate specialized software, to navigate the specific tools and languages of the digital environment. The narrowing was consequential: it excluded from the category of "skill" the very capacities — judgment, taste, strategic thinking, the ability to determine what should be done rather than how to do it — that the word's earlier history had included. The narrowing served the interests of the technology industry, which needed a workforce defined by its technical capabilities and was content to treat the broader capacities of judgment and discernment as either innate (the province of founders and executives) or irrelevant (the province of humanities departments and management consultants).
The AI transition is reversing the narrowing. When the machine can write the code, debug the system, manage the dependencies, and handle the implementation details that constituted "technical skill" in the pre-AI dispensation, the word "skill" is forced to expand — to recover the older meanings that the industrial and digital narrowing suppressed. The senior engineer in Trivandrum whose eighty percent of implementation work was assumed by Claude Code discovered that the remaining twenty percent — judgment, architectural instinct, the capacity to determine what should be built and how it should fit together — was the skill that mattered. The discovery was experienced as a revelation, but it was in fact a recovery: the restoration of the word's original meaning, the meaning that the industrial narrowing had obscured.
This is not a comfortable restoration. The institutions that distribute reward — compensation structures, hiring practices, credentialing systems — are organized around the narrowed meaning. "Skill" in the job posting means technical capability: proficiency in specific languages, frameworks, and tools. "Skill" in the performance review means demonstrable output: code shipped, features deployed, tickets closed. The broader meaning — discernment, judgment, the capacity to make decisions that cannot be automated — has no metric, no certification, no standard hiring rubric. The word is expanding, but the institutions lag behind the expansion, and the lag produces real consequences for real people: practitioners whose technical skills are being commoditized but whose broader capacities — developed over years of practice and invisible to the measurement systems that govern their careers — go unrecognized and unrewarded.
Williams would have identified this lag as characteristic of the relationship between semantic change and institutional inertia. Words change faster than institutions. The meaning of "skill" is already shifting in the discourse — Segal's argument about ascending friction is essentially an argument about the recovery of skill's broader meaning. But the hiring practices, the compensation structures, the educational curricula, the credentialing systems — these remain organized around the narrowed meaning, and they will continue to produce outcomes based on the narrowed meaning until the institutional structures are reformed to accommodate the expansion.
The reform is not merely administrative. It is cultural. The narrowed meaning of "skill" was not imposed by institutions upon a reluctant population. It was produced by, and in turn reinforced, a specific structure of feeling — a way of experiencing work in which the value of the worker was identified with the technical capacity the worker possessed, and in which technical capacity was understood as the mastery of specific tools and languages rather than the broader capacity for judgment and discernment that the tools and languages served. The narrowing was comfortable because it was legible: one could measure technical skill, certify it, rank it, compare it across individuals and across organizations. The broader meaning is uncomfortable precisely because it resists this legibility. Judgment cannot be tested on a coding assessment. Taste cannot be certified by a boot camp. Architectural instinct cannot be ranked on a leaderboard.
The AI transition forces the uncomfortable. It strips away the legible layer of technical execution and exposes the illegible layer of judgment beneath it. The exposure is experienced as a crisis by practitioners whose identities were constructed around the legible layer and as a liberation by practitioners whose broader capacities were always present but always subordinated to the demands of technical execution.
Williams's *Keywords* method reveals that the crisis is not new. Every major transformation in the organization of work has produced a corresponding transformation in the meaning of "skill," and every transformation has been experienced as a crisis by the practitioners whose identities were constructed around the previous meaning. The hand-loom weavers of Lancashire experienced the power loom as a crisis of skill — the destruction of the specific capacity (hand-weaving) around which their professional identity was organized. The clerks who were displaced by the typewriter experienced a similar crisis, as did the compositors displaced by desktop publishing, the darkroom technicians displaced by digital photography, and the bookkeepers displaced by the spreadsheet. In each case, the meaning of "skill" shifted — from one form of execution to another, from the mastery of one set of tools to the mastery of another — and the shift was experienced as a loss by those whose identities were constructed around the previous meaning.
The AI transition differs from these precedents in a way that Williams's analysis helps to clarify. In previous transitions, the meaning of "skill" shifted laterally — from one form of execution to another. The weaver's skill was replaced by the machine operator's skill. The compositor's skill was replaced by the desktop publisher's skill. The execution remained at the center of the meaning; only the specific form of execution changed. The AI transition shifts the meaning vertically — from execution to judgment, from the capacity to do to the capacity to decide what should be done. This is not a substitution of one form of skill for another within the same category. It is a transformation of the category itself.
The vertical shift recovers the word's oldest meaning — skill as discernment — but it does so under conditions that are radically different from those in which the oldest meaning was produced. Discernment in the pre-industrial context was embedded in a craft practice: the master's judgment was developed through and expressed through the specific activities of the craft. The master weaver's discernment about cloth quality was inseparable from the master weaver's capacity to weave. The AI transition separates what was previously inseparable. The judgment persists, but the execution that grounded the judgment has been assumed by the machine. The discernment that was embedded in practice now floats free of the practice that produced it, and the question of whether discernment can survive the loss of its practical substrate is one of the most consequential questions of the transition.
*The Orange Pill* addresses this question through the metaphor of ascending friction: the friction of execution is replaced by the friction of judgment, and the new friction is harder, more demanding, more worthy of the practitioner's intelligence and dedication. The argument is hopeful, and it may be correct. But Williams's analysis suggests a qualification. The ascending friction is real, but it is a different kind of friction, and the difference matters. The friction of execution was embodied — it lived in the hands, in the iterative engagement with a resistant medium, in the specific quality of attention that sustained physical practice produces. The friction of judgment is cognitive — it lives in the mind, in the evaluation of alternatives, in the weighing of considerations at a level of abstraction that the embodied practice did not require. The ascent is real. But it is also a disembodiment — a removal of skill from the specific, physical, material engagement that previously constituted it.
Whether this disembodiment enriches or impoverishes the practice is an open question — one that cannot be answered in advance, because the answer depends on the cultural conditions under which the new form of skill is developed. If the institutions that train practitioners develop methods for cultivating judgment — for teaching the discernment that was previously developed through embodied practice — then the vertical shift may prove enriching. If the institutions fail to adapt, and practitioners are left to develop judgment without the support structures that the old apprenticeship system provided for the development of technical execution, then the shift may produce a generation of workers who are nominally skilled in the new sense but practically unprepared for the demands of the work.
The keyword "skill" is in transition. Its meaning is expanding, recovering, transforming. The expansion carries genuine democratic potential — a broader definition of skill is a more inclusive definition, one that recognizes capacities previously marginalized by the narrowed technical meaning. But the expansion also carries risk — the risk that the broader meaning, lacking the institutional support and legibility structures that the narrower meaning possessed, will produce not a richer understanding of what human capability consists of but a vaguer one, less capable of organizing the distribution of reward, the structure of education, and the construction of professional identity.
Williams tracked keywords not to determine their correct meaning but to make visible the social contests embedded in their usage. The contest over the meaning of "skill" in the AI transition is one of the most consequential of these contests. Its outcome will shape who is valued, who is trained, who is rewarded, and who is left behind. The keyword is the battlefield. The culture is the prize.
Raymond Williams grew up in Pandy, a village on the Welsh border where England met Wales and neither culture held uncontested sovereignty. The border was not a line on a map. It was a lived experience — the experience of inhabiting a space where two ways of life coexisted, where the language you spoke at home was not the language of the school, where the landscape carried meanings that the official culture did not recognize and the official culture carried authority that the landscape could not confer. Williams returned to this border country throughout his career, not as nostalgia but as method. The border was where the categories broke down, where the dominant could not fully suppress the residual, where the emergent first became visible precisely because it could not be absorbed into either of the formations between which it emerged.
In *The Country and the City*, published in 1973, Williams traced a different border — the opposition between rural and urban life as it had been constructed across three centuries of English literature. His argument was that the opposition was not a description of reality but a cultural production: a way of organizing feelings about social change by projecting them onto a spatial metaphor. The country represented the organic, the authentic, the community of direct human relationships. The city represented the mechanical, the artificial, the society of instrumental calculations. Both representations were false, but their falsity was productive — it allowed English culture to process the dislocations of capitalist development through a spatial imaginary that concealed the actual social relations producing the dislocation.
Williams demonstrated that every generation located the golden age of rural harmony one generation further back than the last. The eighteenth-century poet mourned the lost countryside of the seventeenth century. The seventeenth-century essayist mourned the lost countryside of the sixteenth. The regression continued until Williams had chased it back to classical antiquity, at which point he observed, drily, that the golden age had never existed. What each generation mourned was not an actual past but a projected innocence — the fantasy of a world before the disruption that the present generation was experiencing, a world whose organic wholeness was always already lost, because the loss was the mechanism through which the mourning culture processed its own transformation.
The AI transition produces its own country-and-city opposition, and the spatial imaginary through which the transition is culturally processed maps with striking precision onto Williams's analysis. The opposition is not between rural and urban in the literal sense, though geography matters in ways the discourse has barely begun to address. It is between three locations that function as cultural positions: Berlin, Lagos, and Trivandrum — the garden, the frontier, and the border country.
Berlin is the country. Specifically, Byung-Chul Han's Berlin — the garden where philosophy happens at the pace of seasons, where music is listened to in analog, where the smartphone is refused, where the friction between the thinker and the world has been deliberately preserved. Han's Berlin functions in *The Orange Pill* as the pastoral ideal functions in the literature Williams analyzed: it represents the possibility of authentic experience, of organic relationship with materials and time, of a way of being that the dominant culture has foreclosed. The garden is the space of resistance, the refusal of the smooth, the preservation of depth against the relentless pressure of acceleration.
Williams would have admired the garden and distrusted the representation. Han's Berlin is real — the philosopher actually tends it, actually refuses the smartphone, actually lives the principles he articulates. But the cultural function of the garden in the discourse is not identical to the reality of the garden in Han's life. In the discourse, the garden operates as the country operates in English pastoral literature: as an idealized alternative to the present, a space of organic wholeness projected onto a specific location and a specific practice, available primarily as a rebuke to the dominant culture rather than as a genuine program for collective transformation. The garden works for Han. It does not scale. It is not a policy. It is not a response that the developer in Lagos can adopt, or the parent in suburban New Jersey, or the engineering team in Trivandrum. It is a privilege — the privilege of refusal, available to those whose position in the social structure allows them to refuse without catastrophic consequence.
This does not diminish the garden. Williams never diminished the countryside. He insisted that the rural communities he grew up in possessed genuine values — solidarity, direct relationship, mutual knowledge — that the urban industrial culture threatened to destroy. His argument was not that the country was false but that the use of the country as an ideological counter to the city concealed the actual conditions of rural life (which included exploitation, hierarchy, and constraint) and foreclosed the possibility of creating, within urban industrial society, the genuine goods that the pastoral tradition attributed exclusively to the rural past.
The same analysis applies to Han's Berlin. The garden possesses genuine goods — depth, contemplation, the specific quality of attention that friction produces. Williams would not deny these goods. His argument would be that the use of the garden as an ideological counter to AI culture conceals the actual conditions of the garden (it requires leisure, security, and institutional position that most people do not possess) and forecloses the possibility of creating, within AI culture, the genuine goods that the elegiac tradition attributes exclusively to the pre-AI past.
Lagos is the city. Not the actual Lagos, which is vastly more complex than any symbolic function can capture, but Lagos as it operates in the discourse — the site of aspiration and exclusion, of democratized possibility and structural inequality, of the developer with ideas but without infrastructure. *The Orange Pill* invokes the developer in Lagos to argue for the democratic potential of AI: the collapse of the imagination-to-artifact ratio means that someone with an idea and a subscription can build what previously required a team, a runway, and institutional backing. The invocation is genuine — the capability expansion is real, and its extension to populations previously excluded from the building process is a genuine democratic gain.
But Williams would have noted the specific way the invocation functions in the discourse. Lagos appears as the beneficiary of the technology, the proof that the transition serves democratic purposes, the answer to the charge that AI concentrates rather than distributes power. The invocation performs a specific rhetorical function: it inoculates the dominant narrative against the critique of inequality by pointing to the technology's democratic potential. The developer in Lagos, the narrative runs, can now access the same capabilities. The power grid that fails, the bandwidth that fluctuates, the cost of inference relative to local wages, the English-language bias of the models — these qualifications are acknowledged but subordinated to the democratic promise.
Williams traced the same rhetorical structure in the discourse surrounding broadcasting. Television was celebrated as a democratic technology — bringing information, education, and culture to populations previously excluded from access to these goods. The celebration was not false; the access was real. But the celebration functioned ideologically to conceal the commercial organization of the medium, the concentration of production in a small number of institutions, and the systematic exclusion of the very populations whose access was being celebrated from any role in determining what was produced and how it was distributed. The democratic potential was real. The democratic realization was partial, constrained by the social relations within which the technology operated.
Lagos in the AI discourse functions as the working-class viewer functioned in the broadcasting discourse: as the beneficiary whose benefit justifies the arrangement, without examining whether the arrangement serves the beneficiary's interests or merely includes the beneficiary in a system organized around other interests entirely.
Trivandrum is the border country — the space where the transformation is lived rather than theorized, where the categories break down, where the dominant narrative encounters the resistance of actual experience and neither prevails cleanly. The room in Trivandrum where Segal trained twenty engineers over five days is not Berlin and not Lagos. It is a specific location in the global distribution of technological labor, occupied by specific people with specific histories and specific relationships to the technology that is transforming their work.
The engineers in Trivandrum are not the privileged refuseniks of Han's Berlin. They cannot tend gardens while the technology passes them by. They are working professionals whose livelihoods depend on their relationship to the tools of their trade, and the tools have changed overnight. Nor are they the aspirational developers of Lagos, newly reaching capabilities previously denied them. They are experienced practitioners — people who have spent years developing the specific expertise that the technology is now commoditizing, and who must navigate the transformation from inside it, without the luxury of distance.
Williams's border country was defined by the coexistence of formations that the dominant culture treated as opposed. In Pandy, Welshness and Englishness were not alternatives between which one chose. They were simultaneous realities that had to be negotiated daily, in specific interactions, with specific consequences. The border country produced a specific kind of knowledge — the knowledge of someone who inhabits the boundary and therefore sees what each side conceals from itself.
Trivandrum in *The Orange Pill* produces the same kind of knowledge. The engineers who oscillate between excitement and terror across a single workweek are inhabiting a boundary between the old dispensation and the new, and their oscillation is not indecision. It is the specific knowledge of the boundary-dweller: the perception that neither the triumphalist narrative nor the elegiac narrative captures what is actually being experienced, that the reality is more complex than either account allows, that the transformation is simultaneously a liberation and a loss and that the simultaneity is the truth of it.
The spatial imaginary of the AI transition — Berlin as garden, Lagos as frontier, Trivandrum as the workshop of the border country — is not merely descriptive. It is, in Williams's terms, a cultural production that organizes the structure of feeling surrounding the transition by distributing its elements across a geography that makes them appear natural. The garden is where refusal lives. The frontier is where aspiration lives. The workshop is where transformation lives. Each location appears to possess its character inherently, as though the quality of the experience were a property of the place rather than a product of the social relations that the place embodies.
Williams's method requires that we denaturalize this geography — that we see the garden, the frontier, and the workshop not as inherently different kinds of place but as differently located positions within a single social structure, each shaped by the specific relationship its inhabitants bear to the technology and the economic system within which the technology operates. Han can garden because his institutional position allows it. The Lagos developer aspires because her structural position requires it. The Trivandrum engineers oscillate because their position places them at the boundary where the old and the new collide.
The denaturalization is not a debunking. The garden's goods are real. The frontier's aspirations are genuine. The workshop's transformations are material. But the distribution of these experiences across a spatial imaginary conceals what produces them: not geography but the social relations of technological capitalism, which determine who can refuse, who must aspire, and who has no choice but to live the transformation in the body.
Williams grew up on a border and spent his career insisting that borders are where the most consequential cultural knowledge is produced — not in the centers of either formation but in the spaces between them, where the claims of each are tested against the lived experience of people who cannot afford the luxury of occupying only one.
The AI transition needs more knowledge from the borders and less from the centers. It needs the testimony of people who are living the transformation rather than theorizing it — the engineers in Trivandrum, the teachers adapting their classrooms, the parents trying to answer their children's questions, the workers whose expertise is being simultaneously validated and commoditized. Their experience is the primary evidence. Their structure of feeling is the truth of the moment. And the cultural politics of the transition will be determined by whether their experience finds articulation — or whether it is absorbed into the spatial imaginary that the dominant discourse is already constructing, filed under "garden" or "frontier" or "workshop," and thereby rendered safe.
---
Raymond Williams titled one of his final books *Resources of Hope*. The title was not optimistic in the conventional sense — it did not predict a favorable outcome or assert that things would turn out well. It was something more precise and more useful: an inventory. An accounting of the materials available, within the contradictions of the present, for the construction of a more adequate future. Williams believed that every social formation, however dominated by structures of exploitation and control, contains within it elements that point beyond the current arrangement — residual values that preserve genuine goods the dominant has abandoned, emergent practices that articulate possibilities the dominant cannot acknowledge, structures of feeling that register what the formal ideology suppresses. These elements are not guarantees. They are resources. Whether they are mobilized, and toward what ends, depends on the quality of attention and the political will of the people who possess them.
The AI transition possesses resources of hope. They are real, they are substantial, and they are in danger of being squandered — not by being rejected but by being captured, incorporated into the dominant narrative of technological inevitability, and thereby stripped of their transformative potential. The final task of cultural materialist analysis is to identify these resources with enough precision that they can be distinguished from their co-opted forms, and to indicate, however provisionally, the conditions under which they might be mobilized toward genuinely democratic ends.
The first resource is the structure of feeling itself. The compound experience of awe and loss, excitement and terror, capability expansion and identity destabilization that characterizes the silent middle is not a problem to be solved. It is a form of knowledge — the specific knowledge that emerges from inhabiting the boundary between formations, from experiencing the contradictions of the transition in the body rather than resolving them in the mind. This knowledge is culturally precious precisely because it resists the dominant narrative. The triumphalist cannot accommodate it, because it includes the loss. The elegist cannot accommodate it, because it includes the gain. The silent middle holds both, and in holding both, it preserves the complexity that any adequate response to the transition must take into account.
The danger is that the structure of feeling will be incorporated before it finds its forms of expression. The dominant culture is already performing complexity — acknowledging nuance, celebrating the holding of contradictions, incorporating the language of the silent middle into corporate communications and conference keynotes. This performance is not falsehood exactly — the complexity is real. But the performance domesticates the complexity, absorbs it into the dominant framework, and thereby neutralizes its critical force. The structure of feeling that genuinely holds both the gain and the loss is a challenge to the dominant narrative. The performance of holding both is a decoration upon it.
The condition for mobilizing this resource is articulation — giving the structure of feeling cultural forms adequate to its complexity. Not resolving it into a position. Not converting it into a program. Articulating it: making it available as a shared understanding, a common reference point, a basis for collective deliberation about what the transition should become. Williams believed that literature had historically served this function — that the novel, in particular, had the capacity to articulate structures of feeling that formal ideology could not contain. The AI transition has not yet produced its characteristic literature. The cultural forms are still in formation. But the need for them is urgent, because the structure of feeling will either find its forms or be absorbed by the dominant culture's performance of complexity, and the difference between articulation and absorption will determine whether the silent middle becomes a political force or remains a marketing demographic.
The second resource is the democratization of capability, understood not as the dominant narrative presents it — as a gift of the technology, a natural consequence of innovation — but as Williams would understand it: as a moment in the long democratic revolution, carrying genuine political potential that can be realized only through deliberate social organization. The collapse of the imagination-to-artifact ratio is real. The extension of building capacity to populations previously excluded is real. But the realization of these potentials as democratic gains — as genuine expansions of participation in the determination of collective life — depends on the social organization of the technology, not on the technology itself.
The distinction matters. If the democratization of capability is understood as a gift of the technology, no political struggle is required. The technology distributes capability naturally, and the distribution is inherently democratic. This understanding serves the interests of the corporations that own the technology, because it presents the commercial distribution of the technology as a democratic achievement, foreclosing the question of whether alternative arrangements — public ownership of models, community governance of deployment, regulation of access to ensure genuine equity — might realize the democratic potential more fully.
If the democratization is understood as a political achievement — as something that must be won through deliberate social organization, against the tendency of commercial interests to capture the technology's potential for private gain — then the question of how the technology is organized becomes central. Who trains the models, with what data, under what governance? Who determines the pricing that governs access? Who decides what the models can and cannot do? These are political questions, and their answers will determine whether the democratization of capability is genuine or cosmetic — whether the developer in Lagos actually gains equivalent capability or merely gains access to a commercially governed platform whose terms of use are set in San Francisco.
Williams spent his career arguing that communication technologies carry genuine democratic potential that can be realized only through democratic struggle. Broadcasting could have been organized as a genuinely public medium, governed by the communities it served. Instead, it was progressively captured by commercial interests and organized according to the logic of the market. The democratic potential was real. The democratic realization was partial and contested. The AI transition faces the same dynamic, at a scale and speed that make the contest more consequential and more urgent.
The third resource is the ascending friction that *The Orange Pill* describes — the revelation, through the elimination of mechanical difficulty, of harder and more consequential forms of challenge. When the machine handles implementation, the practitioner confronts judgment. When the machine produces competent output, the practitioner confronts the question of what output deserves to exist. When the machine can build anything that can be described, the practitioner confronts the question of what is worth describing.
This ascent is a genuine resource of hope, because it represents a potential enrichment of work — a movement from mechanical execution toward the exercise of capacities that are more distinctively human and more inherently satisfying. Williams would have recognized in it the latest expression of a pattern he traced across the long revolution: the progressive liberation of human capacity from the constraints of mechanical labor, creating the possibility (never the guarantee) of work that engages the full range of human intelligence, judgment, and creativity.
But the realization of this potential depends on conditions that are not currently being met. The ascending friction is available only to practitioners who have the time, the support, and the institutional context to develop the higher-order capacities it demands. The Berkeley study's finding that AI intensifies work rather than liberating it suggests that the dominant culture is converting the ascent into additional output rather than additional depth — that the freed capacity is being filled not with judgment but with more tasks, not with reflection but with more production. The ascending friction is a resource of hope only if the cultural conditions support its development. If the dominant culture's imperative toward continuous productivity captures the freed capacity, the ascent becomes another form of intensification, and the resource is squandered.
The fourth resource is the emergent vocabulary — the keywords in formation that the AI transition is producing. "Prompt," "hallucination," "context window," "temperature" — these are not merely technical terms. They are the early forms of a vocabulary through which the culture will eventually make sense of its new conditions. The vocabulary is currently raw, dominated by technical usage, not yet carrying the evaluative weight that mature keywords carry. But the words are there, and their development — the process by which they acquire social meaning, become sites of cultural contestation, and eventually stabilize as elements of a shared vocabulary — is itself a resource, because the quality of the vocabulary will determine the quality of the thinking that the vocabulary makes possible.
Williams showed that the vocabulary of industrialism — "industry" itself, "culture," "democracy," "class" — was contested terrain, shaped by power, serving interests, and carrying within its shifting meanings the evidence of the social transformations it was produced to describe. The vocabulary of the AI transition will follow the same pattern. The contest over what "skill" means, what "creativity" means, what "authorship" means in the age of the language model — this is a cultural contest with political consequences, and the outcome will shape not merely how people talk about the transition but how they experience it, because the vocabulary through which experience is articulated shapes the experience itself.
Williams would not have predicted the outcome. He was constitutionally suspicious of prediction, which he regarded as a form of intellectual imperialism — the projection of the present's categories onto a future that has not yet produced its own. What he offered instead was the identification of resources: the materials available, within the contradictions of the present, for the construction of something better. The resources are real. The question is whether the political will exists to mobilize them — to organize the democratization of capability as a democratic achievement rather than a commercial gift, to protect the conditions under which ascending friction can produce genuine depth rather than additional output, to develop the vocabulary through which the emergent structure of feeling can be articulated before it is absorbed, and to insist, against the relentless pressure of the dominant narrative, that the course of the transition is not determined by the technology but by the social and political decisions that human beings make about how the technology is organized.
The long revolution continues. It has never been a triumphal march. It has always been a contest — between the democratic potential of new productive forces and the tendency of existing power structures to capture that potential for private gain. The AI transition is the latest front in this contest, and the stakes are as high as they have ever been, because the productive force in question is not physical labor or mechanical energy but cognitive capability itself — the capacity to think, to create, to decide, to judge. The privatization of this capacity — its concentration in the hands of a few corporations, governed by commercial logic, distributed according to ability to pay — would represent a defeat for the long revolution more consequential than any previous capture of productive forces by private interests.
The resources of hope are real. The garden's goods are real. The frontier's aspirations are genuine. The border country's knowledge is the most valuable of all. But resources are not outcomes. They are materials. What is built from them depends on the builders — on their clarity about what is at stake, their willingness to contest the dominant narrative, their capacity to articulate the emergent structure of feeling in forms adequate to its complexity, and their refusal to accept the inevitability that the dominant culture manufactures in order to foreclose the possibility of democratic choice.
Williams believed, throughout his career, that the resources were sufficient. Not that the outcome was guaranteed — he was never naive about the power of the dominant or the difficulty of the contest — but that the materials for a more adequate future existed within the contradictions of the present, and that the task of cultural analysis was to identify them, to make them visible, to demonstrate their presence to people who might otherwise believe that no alternative to the dominant was possible.
The materials are present. The structures of feeling are shared. The keywords are in formation. The ascending friction is real. The democratic potential is genuine.
The long revolution continues. And its outcome, as always, depends on what is built.
---
The word I kept circling back to, through ten chapters of Raymond Williams, was *ordinary*. Not "brilliant." Not "revolutionary." Not "unprecedented." *Ordinary*.
Williams insisted that culture is ordinary. He meant it as a provocation against every tradition that reserved the word "culture" for the symphony hall and the seminar room. But he also meant it literally: the most important things about how a society works are visible in its ordinary practices — in how people talk about their work, how they organize their days, how they relate to their tools, how they feel at three in the morning when the house is dark and the screen is the only light.
I have been that person at three in the morning more times than I can count since December 2025. Building with Claude. Arguing with Claude. Discovering connections I had not seen and then discovering, the next morning, that some of those connections were fabrications dressed in beautiful prose. The ordinary experience of the AI transition is not the triumphalist keynote or the elegiac essay. It is the Tuesday. The meeting where someone suggests replacing three roles with a prompt chain and nobody knows how to respond. The dinner where my son asks whether AI will take everyone's jobs and I do not have a clean answer. The moment when I catch myself unable to stop working — not because I am being exploited by someone else, but because the tool is so generative that stopping feels like voluntarily diminishing myself.
Williams gave me a name for what that Tuesday feels like. A structure of feeling — something shared, definite, pressing, but not yet named by any of the frameworks we have. The triumphalist framework says I should celebrate. The elegiac framework says I should mourn. Neither captures the specific compound quality of the experience. Williams said: that is exactly the condition of emergence. The language has not been made yet. The cultural forms are still forming. The silence of those who hold both truths is not failure. It is the beginning of something that has not yet found its voice.
What struck me hardest, working through these ideas, was the argument about the selective tradition — the process by which a culture retroactively constructs a story of individual genius from what was always a collective, social, material process. I wrote about Dylan in *The Orange Pill*. I argued that the myth of the solitary genius is an illusion. Williams made me realize I was scratching the surface of my own argument. The selective tradition does not merely distort history. It serves power. It concentrates reward in individual hands by concealing the social conditions that produced the individual achievement. And the AI transition is already constructing its own selective tradition — celebrating the solo builder who shipped a product with Claude while rendering invisible the thousands of engineers who built the model, the millions of creators whose work constitutes the training data, the entire social infrastructure that makes the individual story possible.
The means of communication are means of production. I had been living inside this truth for years without having the language for it. When I write with Claude, I am simultaneously communicating and producing. The prompt is a statement of intention and an instruction to manufacture. The output is a meaningful response and a functional artifact. Williams argued this about television in the 1970s. He could not have known how literally true it would become.
And that is the thing about Williams that refuses to let me go: he was not predicting AI. He was describing something structural about the relationship between technology and culture, between power and meaning, between the way a society organizes its tools and the way it organizes its feelings. The structures he identified — the selective tradition, the dominant-residual-emergent dynamic, the country-and-city opposition, the political economy of communication — are not historical artifacts. They are operating right now, in the discourse about AI, in the organization of the industry, in the distribution of capability and reward.
The word "skill" is being transformed in front of us. The knowable community of the engineering team is being restructured. The border country between old practice and new is populated by millions of people who cannot afford the luxury of theory and who are navigating the transition in the body, one workday at a time.
Williams called his final book *Resources of Hope*. Not optimism. Not prediction. An inventory of what is available, within the contradictions of the present, for building something more adequate. The resources are real. The democratization of capability is real, if partial. The ascending friction — the elevation of work from mechanical execution to genuine judgment — is real, if endangered. The structure of feeling in the silent middle is real, and it carries within it the knowledge that neither the triumphalist nor the elegiac account can contain.
But resources are not outcomes. Whether the resources are mobilized depends on whether we see them clearly enough to use them. Williams spent his life making the invisible visible — showing that what presents itself as inevitable is always the product of specific decisions by specific actors with specific interests, and that different decisions remain possible.
Different decisions remain possible. That is the sentence I am taking from this, the one I want my children to carry. Not that the river can be stopped. It cannot. But that its course is not predetermined. It is shaped by the structures we build, the culture we maintain, the vocabulary we develop for talking about what is happening to us, and the refusal — the stubborn, unglamorous, essential refusal — to accept that the only future available is the one currently being manufactured by the people who happen to own the infrastructure.
The long revolution continues. The culture is ordinary. The resources are real. The rest is up to us.
— Edo Segal
The most important thing about the AI transition is the thing no one can name yet — the shared, contradictory, pressing experience of millions of people whose lives are being remade faster than their language can follow. Raymond Williams spent four decades developing the tools to read exactly this kind of moment: when lived experience outstrips every available description, when the culture is in transition and the old words no longer hold.
Through Williams's cultural materialism, this book examines the AI revolution not as a technical event but as a transformation of an entire way of life — the meanings attached to skill, the social relations of building, the political economy of who owns the machines that now produce both meaning and material reality simultaneously.
The words matter. The selective tradition is already being constructed. The question is whose experience gets articulated — and whose gets silenced.

A reading-companion catalog of the 26 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Raymond Williams — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →