The cultural technology thesis is perhaps the most consequential reframing of artificial intelligence since the technology entered mainstream discourse. Articulated in a landmark 2025 Science paper by Alison Gopnik with political scientist Henry Farrell, statistician Cosma Shalizi, and sociologist James Evans, the thesis argues that large language models should be understood not as agents, not as minds-in-progress, not as proto-consciousnesses on the verge of waking up, but as cultural and social technologies — tools for the transmission and synthesis of information that human beings have already generated. The relevant analogs are not the science-fiction robot or the emerging artificial person. They are writing, the printing press, and the internet: technologies that reshape cognition and society not by thinking but by changing how existing thought moves through the world.
The distinction between an agent and a cultural technology is not merely semantic. It determines what questions get asked, what risks get attention, what regulations are designed, and what future is prepared for. If LLMs are agents, the questions are about containment, alignment, and control: how do we make sure the new minds do what we want? This framing generates the particular anxieties that dominate the AI safety conversation: superintelligence, existential risk, the alignment problem.
If LLMs are cultural technologies, the questions are entirely different. They are the questions societies have always asked when a new cultural technology arrives: Who gets access? How does it change the distribution of knowledge and power? What happens to the institutions that the previous technology supported? How does it reshape the cognitive habits of the people who use it? These are questions about media, about culture, about the political economy of information — questions that require social science, not just engineering.
The evidence for the classification is operational. An LLM is trained on an enormous corpus of human-generated text and learns the statistical regularities of that corpus. When it generates output, it produces text statistically consistent with the patterns it has learned. It is, as Gopnik puts it, an imitation engine: a system that has become extraordinarily good at producing outputs that look like what knowledgeable humans would produce, because it has been trained on outputs that billions of knowledgeable humans have actually produced. This is a revolutionary accomplishment. It is also a different kind of accomplishment from generating genuinely novel knowledge, and the teapot experiments show the difference empirically.
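To make the imitation-engine idea concrete, here is a toy sketch of the principle: a bigram model that counts which word follows which in a small corpus and then generates text by sampling from exactly those counts. The corpus, function names, and the bigram choice are illustrative assumptions for this page, not anything from the Science paper, and real LLMs learn vastly richer statistics with neural networks over tokens rather than word-pair counts; the point is only that generation reproduces patterns already present in the training data.

```python
import random
from collections import defaultdict

# A tiny illustrative corpus (an assumption of this sketch, not real data).
corpus = (
    "the press transmitted existing insights . "
    "the press amplified human minds . "
    "human minds produced new insights ."
).split()

# Learn the corpus's statistical regularities: for each word, record
# every word that follows it in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, n_words=8):
    """Emit text statistically consistent with the training corpus:
    every adjacent word pair produced here also occurs in the corpus."""
    out = [start]
    for _ in range(n_words):
        followers = transitions.get(out[-1])
        if not followers:  # no observed continuation for this word
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
# e.g. "the press amplified human minds . human minds produced"
```

Every word pair the sketch emits already occurs in its training corpus: the generator recombines what it has seen but never produces a transition it was not trained on. That is the toy analog of the distinction the thesis draws between transmitting existing knowledge and generating new knowledge.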
The cultural technology thesis does not diminish AI. Writing mattered. The printing press mattered. The internet mattered. Each of these cultural technologies reshaped society more profoundly than the arrival of any individual agent could have. The argument is that the kind of mattering is different. Transmitting the accumulated knowledge of humanity with unprecedented speed and versatility is enormously valuable, but it is not the same as generating new knowledge. The printing press did not produce a single original insight; it made existing insights vastly more accessible, and the human minds it amplified produced the new insights that the press then transmitted.
The thesis was published as 'Large AI Models Are Cultural and Social Technologies' in Science in 2025, co-authored by Gopnik, Henry Farrell (Johns Hopkins SAIS), Cosma Shalizi (Carnegie Mellon statistics), and James Evans (University of Chicago sociology). The paper synthesized arguments that Gopnik had been developing in talks and op-eds for several years with complementary strands from political economy and computational social science. Its most influential move was to displace the 'intelligent agent' framing that had dominated AI discourse and to place LLMs in the lineage of prior cultural technologies whose effects historians and social scientists had been studying for decades. Its core claims can be condensed into five propositions.
Not agents, but media. LLMs are tools for transmitting and synthesizing human knowledge, not new kinds of minds.
Imitation engines, not discovery engines. LLMs produce outputs statistically consistent with training data; they do not generate genuinely novel hypotheses.
The historical analogy is the printing press. The relevant precedents are writing, printing, and the internet: technologies that reshaped cognition without being minds.
Different questions, different risks. The cultural-technology framing directs attention toward access, distribution, institutional disruption, and cognitive ecology — questions the agent framing obscures.
Amplification depends on input. A cultural technology amplifies whatever signal it carries; the quality of the output depends on the quality of what goes in.
The thesis has been contested from multiple directions. AI-safety researchers argue that even if current LLMs are cultural technologies, agentic systems built on top of them may cross into agent territory, and the framing risks disarming caution too early. Philosophers of mind have questioned whether the distinction between 'genuine agent' and 'cultural technology' is as sharp as Gopnik claims, given the difficulty of specifying what makes any system genuinely agentic. Gopnik's defenders note that the thesis is not a prediction about future systems but a corrective classification for current ones — a call to get the present picture right before extrapolating into speculation.