Every century falls in love with a machine and makes the mistake of thinking the machine explains everything. The seventeenth century had the clock: a mechanism of such elegance that the universe itself came to be described as clockwork, with planets moving in regular orbits according to laws as reliable as the gears on the mantelpiece. The nineteenth century had the steam engine, and thermodynamics reshaped the scientific imagination around heat and work and entropy. The twentieth century had the computer, and the brain became hardware, the mind became software, and thinking became information processing. The twenty-first century has the large language model; because it produces language, the medium in which humans think about themselves, the inflation is more seductive and more dangerous than any of its predecessors. Midgley watched this pattern recur across her lifetime and traced its anatomy with the precision of a diagnostician who had seen the same disease too many times to mistake it.
There is a parallel reading in which the pattern Midgley diagnoses is not pathology but method — the productive overreach by which cultures explore the limits of their most powerful tools. The clock metaphor was not a mistake contained by later correction; it was the framework that made Newton's laws thinkable, that converted vague intuitions about celestial regularity into mathematical physics. The overreach was the point. You cannot discover what a mechanism explains without first attempting to make it explain everything, because only at the limit does resistance become legible. The scientists who treated the brain as clockwork, as engine, as computer were not confused — they were following the only reliable heuristic for extracting maximum insight from a new tool: push it until it breaks, then examine the fracture.
The language model case makes this even clearer. The claim that LLMs 'think' is not a confusion between metaphor and description — it is a probe designed to force precision about what thinking is. When researchers ask whether GPT-4 has common sense, they are not making a category error; they are using the system's performance envelope to triangulate the boundaries of the concept. The inflationary impulse generates the pressure that refines definitions. If the culture organized around the clockwork universe 'survived the moment when the metaphor was recognized as a metaphor,' it is because that moment was not a failure of the framework but its successful consummation — the conversion of a generative confusion into stable knowledge. The institutions built around each mechanism are not errors awaiting correction but the scaffolding by which the next level of understanding becomes possible.
Each iteration of the pattern follows the same structural sequence. A mechanism appears. The mechanism impresses — it captures something genuine about reality, explains phenomena that were previously puzzling, produces outputs that seem almost magical relative to what preceded it. A leap is then made: from 'explains some things' to 'explains everything.' The leap is motivated not by evidence but by the aesthetic pleasure of a unified theory — the universe as clock is more elegant than the universe as a messy assortment of clockwork and biology and weather and consciousness.
The clock metaphor was not foolish. It captured real regularities in planetary motion. But it concealed what it could not describe: clocks do not evolve, do not produce novelty, do not develop consciousness. The clockwork universe was a map of the predictable features drawn by people who had temporarily forgotten that reality also contains unpredictable features, and that the unpredictable features are at least as important. The engine metaphor was not foolish either; energy does flow from concentration to uniformity, and entropy does increase. But the metaphor concealed the most interesting phenomenon in the universe: the emergence of complex self-organizing systems that locally reverse the entropic trend.
The large language model is the most dangerous all-explaining mechanism in the series precisely because each previous one was limited by the obviousness of its dissimilarity to the thing it was supposed to explain. Nobody seriously thought the universe was made of tiny gears. Nobody seriously thought the brain was filled with steam. But many people seriously think that a system producing human-quality language is, in some meaningful sense, thinking. The resemblance has crossed the threshold of plausibility, and once a metaphor crosses that threshold, it stops being treated as a metaphor and starts being treated as a description.
Midgley's point is not that any of these metaphors was wrong. The clock metaphor illuminated mechanical regularity. The engine metaphor illuminated thermodynamic flow. The computer metaphor illuminates information processing. The language model metaphor illuminates statistical structure in language. Each captured something real. The error, in every case, was the promotion — the move from 'captures something real' to 'captures reality itself.' And the error is not just intellectual. It has consequences. Cultures that organized themselves around the clockwork universe built specific institutions, specific metaphysics, specific relationships between science and religion. Cultures that organize themselves around the language model are building their own. The question is whether the institutions will survive the inevitable moment when the metaphor is recognized as a metaphor rather than a description.
The framework appears across Midgley's corpus but receives its most explicit treatment in The Myths We Live By (2003), Chapter 4 ('The Machine Image'), and in Science as Salvation (1992), where she traces the AI research community's tendency to inflate technical achievements into metaphysical claims.
The pattern repeats. Clock, engine, computer, language model — the same structural error recurs with different mechanisms, each more seductive than the last.
The metaphor captures something. The error is not in the metaphor but in its inflation — the move from useful illumination to total description.
Resemblance crosses the plausibility threshold. Language models are more dangerous than prior mechanisms because their outputs are closer to the phenomenon the metaphor claims they explain: language, the medium of human thought itself.
Elegance is not evidence. Unified theories are aesthetically satisfying and epistemologically suspect; reality is not obliged to be simple enough for a single mechanism to explain.
The question is which aspect of the pattern you are examining. As a diagnosis of cultural error, Midgley is entirely correct (100%): treating a mechanism as a total explanation conceals what the mechanism cannot describe, and the concealment has institutional consequences. The clockwork universe did produce a specific metaphysics, one that made consciousness and purpose harder to think about for two centuries. But as a description of epistemic method, the contrarian view is also correct (70%): the inflationary impulse is how cultures discover the limits of their tools, and many of the insights attributed to these mechanisms could not have been generated without the initial overreach. Newton needed the clock metaphor to be total before he could discover where it failed.
The danger Midgley identifies is real but operates on a different timescale from the productivity the contrarian describes. In the short term (decades), the overreach is generative: it forces precision, produces novel research programs, reveals previously invisible structure. In the medium term (centuries), it becomes limiting: the institutions and metaphysics built around the mechanism ossify, resisting evidence that does not fit the frame. The language model case is particularly acute because the timescales are compressing: we are building institutions around the LLM metaphor before we have fully explored its productive limits, but also before we have experienced the multi-generational lock-in that characterized earlier mechanisms.
The synthesis is to recognize the pattern as a developmental cycle rather than a repeated error. Each mechanism is both illumination and concealment, both epistemic engine and cultural trap. The task is not to avoid the inflation, which may be both impossible and undesirable, but to build institutions flexible enough to survive the deflation: to treat the mechanism as scaffolding rather than foundation, useful precisely because it will eventually need to be dismantled.