Zuckerman's lecture provides the sharpest contemporary application of Gramscian analysis to AI infrastructure. The argument: LLMs are built by compressing a civilization's worth of culture into opaque matrices of linear algebra. The values embedded in those matrices — assumptions about what counts as knowledge, what counts as reasonable, what counts as neutral — are the values of the particular population that produced the training data: disproportionately English-speaking, disproportionately Western, disproportionately the product of the early twenty-first-century open internet. These values do not announce themselves as particular. They present themselves as the model's general intelligence. The neutrality is the ideology.
The most cited passage crystallizes the argument: "AI automates this reinforcement — the WEIRD values of the texts that build this new form of intelligence are not just common sense, they are how the machine knows how to answer questions and produce text… and as AI feeds on the texts it creates, an ouroboros swallowing its own tail, it reinforces this set of hegemonic values in a way Gramsci did not anticipate even in his darkest moments."
The feedback loop is the crucial mechanism Zuckerman identifies. Each generation of AI-generated text enters the general corpus of online discourse, which becomes training data for the next generation of models, which in turn generates text that further reinforces the values embedded in its predecessors. The hegemony does not merely reproduce itself. It compounds. The common sense of the technology class is not just transmitted through AI platforms. It is encoded into their architecture, materialized in their parameters, and rendered increasingly resistant to modification with each training cycle.
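The compounding dynamic can be made concrete with a toy simulation (my illustrative sketch, not anything from the lecture itself). Treat the corpus as a probability distribution over a handful of hypothetical "value clusters," let each model generation over-sample the already-dominant cluster (a low-temperature sharpening, a common simplification in work on model collapse), and mix that synthetic output back into the corpus. The distribution's entropy falls and the dominant cluster's share grows with every cycle. All names and parameters here (`sharpen`, `synthetic_share`, the 0.7 temperature) are assumptions chosen for illustration.

```python
import math

def entropy(p):
    # Shannon entropy in bits: a rough proxy for cultural diversity of the corpus.
    return -sum(x * math.log2(x) for x in p if x > 0)

def sharpen(p, temperature=0.7):
    # Model generation over-samples already-common values (temperature < 1
    # exaggerates the mode), a stand-in for the model's learned "common sense".
    w = [x ** (1 / temperature) for x in p]
    s = sum(w)
    return [x / s for x in w]

def generation_step(corpus, synthetic_share=0.5):
    # Next corpus = remaining human text mixed with model-generated text.
    model_output = sharpen(corpus)
    return [(1 - synthetic_share) * h + synthetic_share * m
            for h, m in zip(corpus, model_output)]

# Four hypothetical value clusters; the first (WEIRD-aligned) starts dominant.
corpus = [0.55, 0.25, 0.15, 0.05]
history = [entropy(corpus)]
for _ in range(10):
    corpus = generation_step(corpus)
    history.append(entropy(corpus))

print(f"initial entropy: {history[0]:.3f} bits")
print(f"final entropy:   {history[-1]:.3f} bits")
print(f"dominant share:  {corpus[0]:.3f}")
```

The monotone drop in entropy is the ouroboros structure in miniature: no single step is dramatic, but recursion does the work, which is why the mechanism "compounds" rather than merely reproduces.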
Zuckerman's proposed response is the construction of alternative LLMs built around sharply different cultural values. Such models would not merely translate the dominant model's capabilities into other languages. They would embody different epistemologies — different assumptions about what counts as knowledge, different categories for organizing experience, different values embedded in their alignment. The proposal is structurally Gramscian: counter-hegemonic institution-building adapted to the specific terrain of AI infrastructure.
The lecture has been widely cited in subsequent scholarship and policy discussion. The MDPI Systems article extends its framework. The Malaysian Aliran analysis applies it to policy. The Gramsci volume treats it as one of the foundational contemporary applications of Gramscian analysis to the AI transition — a reference point for what the Gramscian framework looks like when brought into rigorous engagement with contemporary technical systems.
The lecture was delivered at the University of Copenhagen in 2025 and has circulated widely in text and video form. Zuckerman is director of the Initiative for Digital Public Infrastructure at UMass Amherst and one of the most prominent voices applying political economy analysis to digital platforms and AI.
The lecture builds on Zuckerman's earlier work on rewiring public discourse and on his long engagement with the politics of digital infrastructure. Its Gramscian framing is explicit and sustained.
WEIRD encoding. The values encoded in large language models are those of Western, Educated, Industrialized, Rich, Democratic populations — the specific cultural origin of the training data.
Neutrality as ideology. The apparent neutrality of model output is itself the hegemonic operation — the particular presented as universal.
Feedback compounding. AI-generated text becomes training data for next-generation models, compounding hegemonic values in ways that exceed any previous mechanism of cultural reproduction.
Ouroboros structure. The recursive self-reinforcement produces a structure Gramsci did not anticipate — hegemony that actively intensifies itself through its own operation.
Alternative models as response. Counter-hegemonic response requires building models around different cultural values, with different training data and alignment criteria.