The You On AI Encyclopedia
CONCEPT

Frame Problem (Damasio reading)

The classical philosophical puzzle of how any reasoning system determines what is relevant — which Damasio's framework answers: in biological organisms, the body solves the frame problem through feeling.
The frame problem, originating in AI research in the 1960s, names the difficulty any rational agent faces in determining which considerations are relevant to a decision. A purely computational approach confronts an infinite regress: before analyzing options, one must decide which factors matter; that decision requires prior evaluative commitments that cannot be derived from the data alone. Damasio's framework provides a biological answer: the body solves the frame problem through somatic markers, which narrow the infinite field of possible considerations to the subset the organism's accumulated experience has flagged as consequential. Feeling is the frame.

In The You On AI Encyclopedia

The frame problem was first articulated by John McCarthy and Patrick Hayes in 1969 in the context of symbolic AI, where it described the difficulty of specifying which aspects of a situation change and which remain constant after an action. Jerry Fodor and Daniel Dennett later generalized the problem to cover any form of cognition that requires determining relevance.

The philosophical stakes are substantial. If relevance cannot be derived from the data, then any system that processes information must bring evaluative commitments from somewhere — training data, reward functions, architectural biases, or bodily feelings. The question of where these commitments come from determines the character of the intelligence.

Somatic Marker Hypothesis

Damasio's answer — that biological intelligence handles relevance through somatic markers — explains a feature of human cognition that symbolic AI struggled to reproduce: the capacity to navigate open-ended, value-laden situations without paralysis. When you decide where to eat dinner, you do not compute over every possible restaurant. You feel like Italian food. The craving narrows the field before analysis begins.
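The narrowing described above can be sketched in code. The following is a hypothetical illustration, not anything from Damasio: a learned valence score, standing in for a somatic marker, filters the candidate set before any deliberative cost analysis runs. All names and numbers are invented for the example.

```python
# Hypothetical sketch: somatic-marker-style pre-filtering.
# Accumulated "markers" are gut valences attached by past experience;
# they are not derived from the current decision's data.

RESTAURANTS = ["trattoria", "sushi bar", "diner", "taqueria", "bistro"]

valence = {"trattoria": 0.9, "sushi bar": 0.2, "diner": -0.4,
           "taqueria": 0.6, "bistro": 0.1}

def frame(options, markers, threshold=0.5):
    """Feeling as the frame: keep only options the organism's
    history has flagged as worth deliberating over."""
    return [o for o in options if markers.get(o, 0.0) >= threshold]

shortlist = frame(RESTAURANTS, valence)
# Deliberation (price, distance, ...) now runs over 2 options, not 5.
print(shortlist)  # ['trattoria', 'taqueria']
```

The point of the sketch is the ordering: the evaluative filter runs first, and analysis only ever sees the pre-narrowed field.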

Damasio's ventromedial prefrontal patients demonstrate the frame problem in its most devastating form. Cases like the patient who could not schedule a follow-up appointment — deliberating endlessly about Tuesday versus Wednesday — show what happens when the somatic framing mechanism is destroyed. Cognitive analysis can continue indefinitely because there is no bodily signal to indicate that the analysis has produced a conclusion worth acting on.
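The missing stopping signal can be made concrete with a toy sketch. This is an illustration under loose assumptions, borrowing Herbert Simon's notion of satisficing as a stand-in for the felt "this conclusion is worth acting on" signal; the functions and numbers are invented.

```python
# Illustrative sketch of the stopping problem. With near-equal options,
# pure comparison has no internal signal that a ranking is final.

def deliberate_without_stop(utilities, rounds=1000):
    """Re-rank the options every round; nothing marks any ranking
    as final, so no decision is ever returned."""
    ranking = None
    for _ in range(rounds):
        ranking = sorted(utilities, key=utilities.get, reverse=True)
    return None  # analysis continues indefinitely without a cutoff

def satisfice(options, utilities, threshold):
    """Commit to the first option that crosses a felt threshold,
    standing in for the somatic 'good enough' signal."""
    for option in options:
        if utilities[option] >= threshold:
            return option
    return None

slots = {"Tuesday": 0.51, "Wednesday": 0.49}
print(deliberate_without_stop(slots))  # None: no conclusion reached
print(satisfice(slots, slots, 0.5))    # Tuesday
```

The two functions see the same utilities; only the second carries an evaluative commitment that terminates deliberation.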

For AI, the implication is that current systems handle the frame problem through externally specified constraints: training data, reward functions, context windows, architectural decisions. These constraints work well within well-defined domains. In open-ended domains — the ambiguous, multidimensional situations that characterize real human life — the absence of internal evaluative commitment becomes consequential. The AI has no basis for determining which considerations matter beyond the frame its designers provided.

Origin

The frame problem was introduced by John McCarthy and Patrick Hayes in "Some Philosophical Problems from the Standpoint of Artificial Intelligence" (1969). Daniel Dennett gave it its most influential philosophical treatment in "Cognitive Wheels: The Frame Problem of AI" (1984). Damasio's implicit solution — via somatic markers — was never framed by him as an answer to the frame problem specifically, but the connection was drawn by later readers, including phenomenologically minded philosophers of mind.

Key Ideas


Relevance is not given. Any reasoning system must bring evaluative commitments to determine which considerations matter, because relevance cannot be derived from data alone.

Bodies frame through feeling. Somatic markers narrow infinite fields of possibility to manageable subsets, with the narrowing encoded in bodily signals rather than explicit propositions.

The patients show the problem concretely. When somatic framing is destroyed, cognitive analysis becomes unboundable — producing the specific clinical picture of patients who cannot choose between trivially different options.

AI uses external frames. Training data, reward functions, and architectural constraints substitute for the internal evaluative commitments that biological organisms generate through feeling.


External frames are brittle. They work in narrow domains and fail in open-ended ones, because they cannot adapt the evaluative framework to novel situations the way bodily feeling continuously does.

Further Reading

  1. McCarthy, John, & Hayes, Patrick. "Some philosophical problems from the standpoint of artificial intelligence." Machine Intelligence 4 (1969): 463–502.
  2. Dennett, Daniel. "Cognitive wheels: The frame problem of AI." In Minds, Machines and Evolution, ed. C. Hookway (Cambridge, 1984).
  3. Damasio, Antonio. Descartes' Error (1994) — Chapter 8, "The Somatic-Marker Hypothesis," is the implicit bridge to the frame problem.
  4. Wheeler, Michael. Reconstructing the Cognitive World (MIT Press, 2005) — explicit treatment of the frame problem through embodied cognition.