Frame Problem — Orange Pill Wiki
CONCEPT

Frame Problem

The structural impossibility of specifying in advance which features of a situation are relevant to a given task—first identified by McCarthy and Hayes in 1969, diagnosed by Dreyfus as a symptom of the deeper philosophical error in treating intelligence as disembodied computation.

The frame problem was identified by John McCarthy and Patrick Hayes in their 1969 paper 'Some Philosophical Problems from the Standpoint of Artificial Intelligence.' It asks: how does a system know which features of a situation to attend to, and which to ignore, when it acts? When a robot lifts a cup, does it need to represent that the table does not disappear, that gravity still applies, that the cup's color does not change? The problem is that the list of potentially relevant features is indefinite, and any attempt to enumerate them in advance either omits something that matters or multiplies representations past any finite bound. Dreyfus argued that the frame problem was not a technical puzzle to be solved by cleverer representation schemes but a symptom of the deeper philosophical error in treating intelligence as the manipulation of explicit representations—an error that phenomenology had already diagnosed decades earlier.

In the AI Story


The original formulation was technical: given a set of axioms describing a situation, how does the reasoning system know which axioms remain true after an action and which require updating? Early AI researchers tried to solve the problem through frame axioms, situation calculus, nonmonotonic logic, and a succession of increasingly sophisticated technical machinery. Each approach either failed to capture the flexibility of human reasoning or collapsed under combinatorial explosion.
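The combinatorial cost of the naive approach can be sketched in a few lines. The toy fluents and actions below are hypothetical, not drawn from any actual system; the point is only that in a simple situation-calculus encoding, every fluent an action does *not* change must be carried forward by an explicit frame axiom, so the number of such axioms grows roughly as the product of the number of actions and the number of fluents.

```python
# Hypothetical sketch: explicit frame axioms in a naive
# situation-calculus style. Fluent and action names are illustrative.

fluents = ["cup_on_table", "light_on", "door_closed", "cup_is_blue"]

# Effect axioms: what each action actually changes.
effects = {
    "lift_cup":     {"cup_on_table": False},
    "toggle_light": {"light_on": None},   # None means "flip the value"
    "open_door":    {"door_closed": False},
}

def apply(state, action):
    """Apply an action to a state. Every fluent NOT mentioned in the
    action's effect axiom requires an implicit frame axiom asserting
    that it is unchanged; we count those axioms as we go."""
    new_state = {}
    frame_axioms_used = 0
    for f in fluents:
        if f in effects[action]:
            v = effects[action][f]
            new_state[f] = (not state[f]) if v is None else v
        else:
            # One frame axiom per untouched fluent per action.
            new_state[f] = state[f]
            frame_axioms_used += 1
    return new_state, frame_axioms_used

state = {"cup_on_table": True, "light_on": False,
         "door_closed": True, "cup_is_blue": True}
state, n = apply(state, "lift_cup")
print(n)  # 3 frame axioms for a single action over just 4 fluents
```

With four fluents and three actions the bookkeeping is trivial; with the indefinitely many features of a real situation, the enumeration either omits something relevant or explodes, which is exactly the dilemma the nonmonotonic-logic programme tried and failed to escape.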

Dreyfus and his philosophical allies—including Daniel Dennett, whose 1984 essay 'Cognitive Wheels' is the classic philosophical treatment—argued that the technical failures reflected a philosophical mistake. Human beings do not solve the frame problem because they do not face it: their embodied engagement with a meaningful world always already provides a structure of relevance that the Cartesian picture of mind-confronting-world cannot recover. The question 'which features are relevant?' does not arise for a practitioner absorbed in her work, because the situation itself discloses what matters.

The frame problem reappears in a transformed register with large language models. The models do not enumerate features or apply explicit rules. They extract statistical regularities from text in which humans have already implicitly solved the frame problem. The models' extraordinary fluency in many situations is, from this perspective, a consequence of absorbing the textual residue of human beings who did not face the frame problem because they were in the world. But the approximation has edges, and at the edges—in situations sufficiently novel that the statistical patterns do not transfer—the frame problem returns: the model cannot determine what is relevant to a genuinely new situation because it has no direct contact with the meaningful structure that determines relevance for embodied beings.

The fluent fabrications that plague current AI systems are, in Dreyfus's framework, frame-problem failures in disguise. The system generates output that is statistically consistent with its training data but that fails to track what is actually relevant to the specific situation—because 'what is actually relevant' is a feature of embodied engagement with a meaningful world, not a feature of textual patterns.

Origin

McCarthy and Hayes introduced the frame problem in 'Some Philosophical Problems from the Standpoint of Artificial Intelligence' (1969). The problem received its most influential philosophical treatment in Dennett's 'Cognitive Wheels: The Frame Problem of AI' (1984), which presented the problem through the parable of a robot trying to retrieve its battery from a room containing a bomb.

Dreyfus's analysis of the frame problem as a symptom rather than a soluble technical puzzle appears throughout his work from the 1970s onward, but the most developed treatment is in What Computers Still Can't Do (1992). There he argued that any representational system would face the problem, and that the only genuine solution was to abandon the representational picture of mind altogether.

Key Ideas

Indefinite relevance. The features potentially relevant to any action are indefinite in number, and no finite enumeration can capture what an embodied agent implicitly knows to attend to.

Symptom, not puzzle. Dreyfus treated the frame problem as a symptom of the philosophical error of treating intelligence as manipulation of explicit representations.

The embodied dissolution. Embodied agents do not solve the frame problem; they do not face it, because their engagement with a meaningful world always already provides a structure of relevance.

Return in statistical guise. Large language models mask the frame problem through statistical fluency but reencounter it at the edges of their training distribution, where plausible outputs can be generated without tracking what is actually relevant.

Debates & Critiques

Daniel Dennett's treatment of the frame problem differed from Dreyfus's in an important way: Dennett argued the problem was genuinely difficult but soluble through better cognitive architecture, while Dreyfus argued it was a pseudo-problem that arose only from the false picture of mind as representation. The current AI moment has vindicated neither position cleanly: the statistical approach has made the problem functionally manageable in many domains while leaving its philosophical core untouched.


Further reading

  1. John McCarthy and Patrick Hayes, 'Some Philosophical Problems from the Standpoint of Artificial Intelligence,' in Machine Intelligence 4 (Edinburgh University Press, 1969)
  2. Daniel Dennett, 'Cognitive Wheels: The Frame Problem of AI,' in Minds, Machines and Evolution, ed. Christopher Hookway (Cambridge University Press, 1984)
  3. Hubert L. Dreyfus, What Computers Still Can't Do (MIT Press, 1992)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.