Mimeomorphic and Polimorphic Action — Orange Pill Wiki
CONCEPT

Mimeomorphic and Polimorphic Action

Collins and Kusch's 1998 distinction between actions whose correctness depends on copying surface behavior and actions whose correctness depends on reading social context — the analytical axis on which the entire question of AI competence turns.

The distinction between mimeomorphic and polimorphic action is the load-bearing concept of Collins's engagement with artificial intelligence. A mimeomorphic action is one whose successful performance consists in reproducing the same surface form across instances — stamping out identical parts, executing a chess move that follows from an evaluation function. A polimorphic action is one whose correct performance varies with the social situation in which it occurs — the same physical motion is a different action at a funeral and at a party. Collins's claim, sustained across three decades, is that most consequential human practice is polimorphic, and that machines, however sophisticated their pattern-matching, operate in the mimeomorphic register. They reproduce the surface of expert behavior without participating in the social practices that give the behavior its meaning.

In the AI Story


The terminology, introduced in The Shape of Actions (1998), is deliberately technical because the ordinary vocabulary of 'behavior' and 'action' collapses the distinction Collins needs to maintain. Philosophers and cognitive scientists had long debated whether machines could exhibit intelligent behavior. Collins's move was to shift the ground: the question is not whether the behavior is intelligent but whether the action is social. A machine that plays chess well performs a mimeomorphic action competently. This tells us nothing about whether it could perform a polimorphic action, because the two categories are not on a continuum — they are different kinds.

The AI revolution has made the distinction urgent in a way Collins could not have anticipated in 1998. Large language models are the most sophisticated mimeomorphic engines ever built. They reproduce the distributional structure of expert discourse across every domain their training data covers, and the reproduction is often better than any individual expert's output because it draws on more text than any individual has read. The mimeomorphic excellence is real. The question Collins's framework forces is whether mimeomorphic excellence is sufficient for the work the machines are being asked to perform — a question whose answer varies by domain and by the social embedding of the task.

The framework illuminates failures that other frameworks miss. When the Deleuze fabrication occurs — when a machine produces a philosophical passage that is grammatically correct, rhetorically elegant, and conceptually wrong in a way only a specialist would detect — the failure is not a bug in the training data. It is the signature of mimeomorphic reproduction encountering a polimorphic boundary. The concept was being used in a way that violated the philosophical community's socially maintained understanding of its meaning, and that violation was invisible to the machine because the community's understanding is not fully captured in its published texts.

The consequence for practice is that evaluating AI output requires knowing whether the domain is predominantly mimeomorphic (where the machine's excellence is likely reliable) or polimorphic (where surface fluency is an unreliable proxy for substantive correctness). This judgment is itself polimorphic — it depends on the evaluator's understanding of how expertise actually operates in the relevant community.

Origin

Collins developed the distinction with philosopher Martin Kusch in The Shape of Actions: What Humans and Machines Can Do (MIT Press, 1998), building on Collins's earlier work on tacit knowledge and scientific practice. The terminological choice reflects Collins's sociological training: he wanted terms that would not smuggle in the ordinary-language assumption that 'action' implies understanding. Mimeomorphic actions are morphologically alike across instances. Polimorphic actions take many forms depending on context.

Key Ideas

Surface vs. situation. Mimeomorphic actions succeed by looking alike across instances; polimorphic actions succeed by responding appropriately to different social situations.

Categorical, not gradient. The distinction is not between easier and harder tasks but between different kinds of tasks, requiring categorically different competences.

The AI boundary. Machines operate reliably in the mimeomorphic register and structurally cannot enter the polimorphic register, because polimorphic competence requires social participation, not textual training.

Invisible failures. When mimeomorphic reproduction is mistaken for polimorphic competence, the failures are invisible to anyone who lacks the domain expertise to see through the surface.

Debates & Critiques

The central dispute concerns whether polimorphic competence requires social embedding in principle or merely as a matter of current technical limitation. Collins's position is the stronger one — that the boundary is structural, not contingent — but critics argue that sufficiently sophisticated multi-agent systems, trained through reinforcement learning in socially embedded environments, might acquire something approximating polimorphic competence. The empirical question remains open.

Further reading

  1. Harry Collins and Martin Kusch, The Shape of Actions: What Humans and Machines Can Do (MIT Press, 1998)
  2. Harry Collins, Artifictional Intelligence: Against Humanity's Surrender to Computers (Polity, 2018)
  3. Harry Collins, Tacit and Explicit Knowledge (University of Chicago Press, 2010)
  4. Harry Collins and Simon Thorne, 'Can LLMs reason like physicists?' (arXiv preprint, 2026)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.