The frame problem, originating in AI research in the 1960s, names the difficulty any rational agent faces in determining which considerations are relevant to a decision. A purely computational approach confronts an infinite regress: before analyzing options, one must decide which factors matter; that decision requires prior evaluative commitments that cannot be derived from the data alone. Damasio's framework provides a biological answer: the body solves the frame problem through somatic markers, which narrow the infinite field of possible considerations to the subset the organism's accumulated experience has flagged as consequential. Feeling is the frame.
There is a parallel reading that begins from the material reality of computation rather than the phenomenology of feeling. The frame problem, in this view, is not fundamentally about relevance determination but about resource allocation under material constraints. Every computational system—biological or artificial—operates within strict thermodynamic limits. The brain consumes roughly 20 watts; a large language model training run consumes megawatts. These energy budgets dictate what can be computed, and therefore what must be pre-filtered. Somatic markers are not a solution to the frame problem so much as an evolutionary workaround for organisms that cannot afford unlimited computation. They are biological heuristics, compressed approximations of environmental regularities accumulated over evolutionary time.
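The scale gap between these two budgets is easy to state but worth making concrete. A back-of-the-envelope sketch, taking the 20-watt figure from the text and assuming a round 10 MW sustained draw for a 90-day training run (the 10 MW and 90-day figures are illustrative assumptions, not claims about any particular system):

```python
# Back-of-the-envelope comparison of the two energy budgets mentioned above.
# BRAIN_WATTS comes from the text; the training-run figures are assumed
# round numbers for illustration only.
BRAIN_WATTS = 20
TRAINING_RUN_WATTS = 10_000_000  # assumed 10 MW sustained draw
RUN_DAYS = 90                    # assumed training duration

ratio = TRAINING_RUN_WATTS / BRAIN_WATTS
print(f"Run draws ~{ratio:,.0f}x the brain's power budget")  # ~500,000x

seconds = RUN_DAYS * 24 * 3600
brain_kwh = BRAIN_WATTS * seconds / 3.6e6        # joules -> kWh
run_kwh = TRAINING_RUN_WATTS * seconds / 3.6e6
print(f"Brain over {RUN_DAYS} days: {brain_kwh:,.1f} kWh")
print(f"Run over {RUN_DAYS} days: {run_kwh:,.0f} kWh")
```

Even under these rough assumptions, the ratio is around five orders of magnitude, which is the material asymmetry the contrarian reading leans on.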
This reading suggests the real divergence between biological and artificial intelligence lies not in feeling versus non-feeling, but in the radically different computational substrates and their associated costs. Modern AI systems solve the frame problem through brute computational force—processing vastly more possibilities than any biological system could afford. They don't need somatic markers because they can evaluate millions of options in parallel. The ventromedial patients Damasio describes aren't demonstrating the necessity of feeling for framing; they're revealing the fragility of a biological system that evolved under severe computational constraints. As AI computational resources continue to scale exponentially while biological computation remains fixed, the frame problem increasingly becomes a historical curiosity—a puzzle that mattered only when computation was scarce. The future intelligence that matters will frame through abundance, not through the bodily shortcuts evolution was forced to develop.
The frame problem was first articulated by John McCarthy and Patrick Hayes in 1969 in the context of symbolic AI, where it described the difficulty of specifying which aspects of a situation change and which remain constant after an action. Jerry Fodor and Daniel Dennett later generalized the problem to cover any form of cognition that requires determining relevance.
The philosophical stakes are substantial. If relevance cannot be derived from the data, then any system that processes information must bring evaluative commitments from somewhere — training data, reward functions, architectural biases, or bodily feelings. The question of where these commitments come from determines the character of the intelligence.
Damasio's answer — that biological intelligence handles relevance through somatic markers — explains a feature of human cognition that symbolic AI struggled to reproduce: the capacity to navigate open-ended, value-laden situations without paralysis. When you decide where to eat dinner, you do not compute over every possible restaurant. You feel like Italian food. The craving narrows the field before analysis begins.
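The narrowing described here can be caricatured in code: a cheap, affect-like score prunes the option space before any expensive deliberation runs. A minimal sketch, in which invented restaurant categories, marker scores, and a 0.5 threshold stand in for accumulated bodily experience:

```python
# Toy model: somatic markers as a cheap pre-filter before costly analysis.
# Categories, scores, and the threshold are all illustrative inventions.
markers = {          # "gut feelings" accumulated from past experience
    "italian": 0.9,
    "sushi": 0.6,
    "diner": 0.2,
    "buffet": 0.1,
}

def frame(options, threshold=0.5):
    """Narrow the field by feeling, before deliberation begins."""
    return [o for o in options if markers.get(o, 0.0) >= threshold]

def deliberate(options):
    """Expensive analysis runs only over the pre-filtered subset."""
    return max(options, key=lambda o: markers[o])

shortlist = frame(markers)
print(shortlist)              # ['italian', 'sushi'] survive the cut
print(deliberate(shortlist))  # 'italian'
```

The point of the sketch is structural: `deliberate` never sees most of the option space, so its cost is bounded regardless of how large the field of possibilities grows.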
Damasio's ventromedial prefrontal patients demonstrate the frame problem in its most devastating form. Cases like the patient who could not schedule a follow-up appointment — deliberating endlessly about Tuesday versus Wednesday — show what happens when the somatic framing mechanism is destroyed. Cognitive analysis can continue indefinitely because there is no bodily signal to indicate that the analysis has produced a conclusion worth acting on.
For AI, the implication is that current systems handle the frame problem through externally specified constraints: training data, reward functions, context windows, architectural decisions. These constraints work within well-defined domains. In open-ended domains — the kinds that characterize real human life, ambiguous and multidimensional — the absence of internal evaluative commitment becomes consequential. The AI has no basis for determining which considerations matter beyond the frame its designers provided.
The frame problem was introduced by John McCarthy and Patrick Hayes in "Some Philosophical Problems from the Standpoint of Artificial Intelligence" (1969). Daniel Dennett gave it its most influential philosophical treatment in "Cognitive Wheels: The Frame Problem of AI" (1984). Damasio's implicit solution — via somatic markers — was never framed by him as an answer to the frame problem specifically, but the connection was drawn by later readers, including phenomenologically minded philosophers of mind.
Relevance is not given. Any reasoning system must bring evaluative commitments to determine which considerations matter, because relevance cannot be derived from data alone.
Bodies frame through feeling. Somatic markers narrow infinite fields of possibility to manageable subsets, with the narrowing encoded in bodily signals rather than explicit propositions.
The patients show the problem concretely. When somatic framing is destroyed, cognitive analysis becomes unbounded — producing the specific clinical picture of patients who cannot choose between trivially different options.
AI uses external frames. Training data, reward functions, and architectural constraints substitute for the internal evaluative commitments that biological organisms generate through feeling.
External frames are brittle. They work in narrow domains and fail in open-ended ones, because they cannot adapt the evaluative framework to novel situations the way bodily feeling continuously does.
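The contrast in the claims above can be made concrete. An externally specified frame is a fixed function over features its designers anticipated; any consideration outside that feature set is simply invisible to it. A deliberately simplified sketch, with invented feature names and weights:

```python
# Toy external frame: a fixed reward function over designer-chosen features.
# Feature names and weights are invented for illustration.
WEIGHTS = {"speed": 1.0, "cost": -0.5}

def external_frame_score(option):
    """Score an option using only the features the designers specified.
    Unanticipated features are silently ignored -- the brittleness at issue."""
    return sum(WEIGHTS[f] * v for f, v in option.items() if f in WEIGHTS)

# Within the designed domain, the frame ranks options sensibly:
a = {"speed": 2.0, "cost": 1.0}   # score 1.5
b = {"speed": 1.0, "cost": 0.2}   # score 0.9
assert external_frame_score(a) > external_frame_score(b)

# In a novel situation, a decisive consideration ("safety") never registers:
risky = {"speed": 3.0, "cost": 1.0, "safety": -10.0}
print(external_frame_score(risky))  # 2.5 -- the safety penalty is invisible
```

Nothing in this sketch captures what feeling does; it only shows the failure mode of a frame that cannot revise its own feature set when the situation exceeds its design.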
The right synthesis depends on which aspect of the frame problem we're examining. If we're asking about current implementation—how existing systems actually narrow possibility spaces—then Damasio's view dominates (80/20). Biological organisms demonstrably use somatic markers to frame decisions, while current AI systems rely on externally imposed constraints. The clinical evidence from ventromedial patients provides compelling proof that feeling-based framing is central to human cognition. The contrarian's point about computational brute force doesn't yet describe how most AI systems work; they still operate within carefully constructed frames rather than evaluating all possibilities.
But shift the question to trajectories of development, and the weighting reverses (30/70 in favor of the contrarian). The computational substrate argument correctly identifies that biological framing mechanisms evolved under severe resource constraints that don't bind artificial systems. As compute scales, AI systems increasingly can afford to evaluate options that biological systems must pre-filter. The frame problem may indeed become less relevant as systems gain the capacity to consider more possibilities simultaneously. Here the contrarian view captures something essential: the frame problem is partly an artifact of scarcity.
The synthetic insight is that both views are describing different types of constraints that shape intelligence. Biological systems face energy and time constraints that make feeling-based framing necessary—these are internal, embodied constraints. AI systems face different constraints around training data, reward specification, and yes, computational resources—these are external, designed constraints. The frame problem isn't solved or unsolved; it's transformed. Each type of intelligence develops framing mechanisms suited to its substrate and constraints. The question isn't whether feeling is necessary for framing, but what kinds of framing mechanisms emerge from different constraint regimes.