The frame problem was first articulated by John McCarthy and Patrick Hayes in 1969 in the context of symbolic AI, where it described the difficulty of specifying which aspects of a situation change and which remain constant after an action. Jerry Fodor and Daniel Dennett later generalized the problem to cover any form of cognition that requires determining relevance.
The philosophical stakes are substantial. If relevance cannot be derived from the data, then any system that processes information must bring evaluative commitments from somewhere — training data, reward functions, architectural biases, or bodily feelings. The question of where these commitments come from determines the character of the intelligence.
Damasio's answer — that biological intelligence handles relevance through somatic markers — explains a feature of human cognition that symbolic AI struggled to reproduce: the capacity to navigate open-ended, value-laden situations without paralysis. When you decide where to eat dinner, you do not compute over every possible restaurant. You feel like Italian food. The craving narrows the field before analysis begins.
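The narrowing described here can be caricatured in code: an affective prefilter cuts the option space before deliberate scoring ever runs. This is a purely illustrative toy, not a model drawn from Damasio; the restaurant list, the `craving` parameter, and the rating-based scoring are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    rating: float  # stands in for whatever explicit analysis would weigh

def choose_dinner(options, craving):
    """Somatic-marker caricature: feeling prunes first, analysis ranks second."""
    # Step 1: the "craving" acts before any analysis, discarding most options.
    felt_relevant = [r for r in options if r.cuisine == craving]
    if not felt_relevant:
        return None  # no felt pull, so deliberation has nothing tractable to work on
    # Step 2: explicit analysis runs only over the already-narrowed field.
    return max(felt_relevant, key=lambda r: r.rating)

options = [
    Restaurant("Trattoria Roma", "italian", 4.4),
    Restaurant("Sushi Bar", "japanese", 4.7),
    Restaurant("Osteria Luna", "italian", 4.6),
]
print(choose_dinner(options, "italian").name)  # analysis never touches the sushi bar
```

The point of the sketch is the ordering: the filter in step 1 is not itself the product of reasoning, yet without it the `max` in step 2 would have to range over everything.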
Damasio's ventromedial prefrontal patients demonstrate the frame problem in its most devastating form. Cases like the patient who could not schedule a follow-up appointment — deliberating endlessly about Tuesday versus Wednesday — show what happens when the somatic framing mechanism is destroyed. Cognitive analysis can continue indefinitely because there is no bodily signal to indicate that the analysis has produced a conclusion worth acting on.
For AI, the implication is that systems handle the frame problem through externally specified constraints: training data, reward functions, context windows, architectural decisions. These constraints serve well in well-defined domains. In the open-ended, ambiguous, multidimensional domains that characterize real human life, the absence of internal evaluative commitment becomes consequential. The AI has no basis for determining which considerations matter beyond the frame its designers provided.
The frame problem was introduced by John McCarthy and Patrick Hayes in "Some Philosophical Problems from the Standpoint of Artificial Intelligence" (1969). Daniel Dennett gave it its most influential philosophical treatment in "Cognitive Wheels: The Frame Problem of AI" (1984). Damasio's implicit solution — via somatic markers — was never framed by him as an answer to the frame problem specifically, but the connection was drawn by later readers, including phenomenologically minded philosophers of mind.
Relevance is not given. Any reasoning system must bring evaluative commitments to determine which considerations matter, because relevance cannot be derived from data alone.
Bodies frame through feeling. Somatic markers narrow infinite fields of possibility to manageable subsets, with the narrowing encoded in bodily signals rather than explicit propositions.
The patients show the problem concretely. When somatic framing is destroyed, cognitive analysis becomes unbounded, producing the specific clinical picture of patients who cannot choose between trivially different options.
AI uses external frames. Training data, reward functions, and architectural constraints substitute for the internal evaluative commitments that biological organisms generate through feeling.
External frames are brittle. They work in narrow domains and fail in open-ended ones, because they cannot adapt the evaluative framework to novel situations the way bodily feeling continuously does.
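That brittleness can be sketched as well: an externally specified reward table covers the situations its designer anticipated, but it is silent about a novel one, so the system has no internal basis for preferring any action there. All of the state names and reward values below are invented for illustration; this is a minimal sketch of the point, not a claim about any particular system.

```python
# Externally specified frame: the designer enumerates what matters, in advance.
reward = {
    ("deliver_package", "sunny"): 1.0,
    ("deliver_package", "rainy"): 0.6,
    ("wait", "rainy"): 0.3,
}

def evaluate(action, situation):
    # Outside the designer's frame, every action scores identically (0.0):
    # nothing signals that one choice matters more than another.
    return reward.get((action, situation), 0.0)

# Within the frame, options are ordered and choice is easy.
print(evaluate("deliver_package", "sunny"))  # 1.0
# In a novel situation, the frame goes flat: all options tie.
print(evaluate("deliver_package", "earthquake"))  # 0.0
print(evaluate("wait", "earthquake"))             # 0.0
```

The flat tie in the novel situation is the toy analogue of the clinical picture above: the evaluation keeps returning answers, but none of them marks a conclusion worth acting on.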