CONCEPT

The Background Problem

The vast, tacit, culturally constituted fabric of shared understanding against which every explicit act of thought occurs—and which, Dreyfus argued, cannot be formalized without infinite regress and cannot be fully captured even by statistical approximation from text.

The background is Dreyfus's term for the totality of shared practices, common-sense assumptions, and tacit understandings that make intelligent action possible. It is the knowledge that a restaurant is not a place to lie down, that 'Can you pass the salt?' is not a question about physical capability, that a colleague's 'Fine' is not always fine. These understandings are not stored as rules, not retrieved from memory, not the product of inference from premises. They are the medium within which explicit thought operates, constituted by the way embodied beings inhabit a culturally shared world. Dreyfus identified the background as the fundamental obstacle to artificial intelligence in any form—first for symbolic AI, which tried to formalize it and failed catastrophically, then for statistical AI, which approximates it with extraordinary sophistication but whose remaining gaps, Dreyfus argued, become most consequential precisely where common sense matters most.

In the AI Story


Classical symbolic AI attempted to encode the background explicitly. Douglas Lenat's Cyc project, begun in 1984, spent decades and tens of millions of dollars entering millions of common-sense assertions by hand—'water flows downhill,' 'people generally do not enjoy being hit in the face with a fish.' The project produced a very large database. It did not produce common sense. The difference between a very large database and common sense is the difference between a map and the territory: however detailed the map becomes, it remains a representation, and a representation is not the thing it represents.
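To make the contrast concrete, the sketch below caricatures the formalization strategy as a lookup table of hand-entered assertions. It is a toy in Python, not Cyc's actual logic-based representation; the assertions, the lookup function, and its name are illustrative inventions. The point is structural: an explicit store answers only what was explicitly entered.

```python
# Toy caricature of the formalization strategy: a hand-entered assertion
# store in the spirit of (but vastly simpler than) Cyc. Every fact must
# be typed in explicitly; nothing follows for free.

assertions = {
    ("water", "flows"): "downhill",
    ("restaurant", "is_for"): "eating",
}

def lookup(subject: str, relation: str) -> str | None:
    """Answer only what was explicitly entered; everything else is a gap."""
    return assertions.get((subject, relation))

print(lookup("restaurant", "is_for"))           # 'eating'  (hand-entered)
print(lookup("restaurant", "is_for_lying_on"))  # None      (never entered)
# The store can grow without bound, yet every unanticipated query falls
# through: however detailed the map, it is not the territory.
```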

Large language models approach the background from a radically different direction. They do not attempt to formalize it. They absorb it—or more precisely, they absorb the textual traces that embodied practice leaves behind. When millions of humans write about restaurants, their writing implicitly encodes the background knowledge that one does not lie on the floor in a restaurant. The model learns this not as an explicit rule but as a statistical regularity. The functional result is often indistinguishable from genuine common sense.
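A minimal sketch of how a regularity can be absorbed without ever being stated, using toy trigram counts in place of a neural language model; the corpus, the attested function, and its name are illustrative, not any real system's machinery:

```python
# Toy stand-in for statistical absorption: no rule about restaurants is
# written down anywhere; a regularity simply emerges from counted text.
from collections import Counter

corpus = (
    "we sat down at the restaurant and ordered . "
    "they sat down at the restaurant and ate . "
    "she lay down on the bed and slept . "
    "he lay down on the couch and rested ."
).split()

trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

def attested(phrase: str) -> bool:
    """True if every three-word window of the phrase occurs in the corpus."""
    w = phrase.split()
    return all(trigrams[t] > 0 for t in zip(w, w[1:], w[2:]))

print(attested("sat down at the restaurant"))  # True  -- what writers produce
print(attested("lay down at the restaurant"))  # False -- what they never do
# Nothing encodes "one does not lie down in a restaurant" as a rule; the
# gap in the counts is the textual trace the background leaves behind.
```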

This is what makes the new AI genuinely different from the old AI, and what requires Dreyfus's critique to be updated rather than merely repeated. Cyc failed because it tried to formalize the background. Large language models succeed, to a remarkable degree, because they do not formalize it—they approximate it through statistical regularities in the data that the background has shaped. But approximation and possession are different things, and the difference becomes visible at the edges, precisely where common sense matters most.

Common sense is most needed when the situation is novel, when the standard patterns do not apply, when the background understanding that is ordinarily invisible must be brought to bear on a problem the usual routines cannot handle. A human navigates a novel situation by drawing on the full depth of her embodied background. The model extrapolates from statistical patterns in its training data, and when the novel situation is sufficiently distant from those patterns, the extrapolation fails. And it fails plausibly rather than obviously: the outputs look right, read well, and pass cursory inspection, yet are wrong in ways only a person with genuine background understanding can detect.
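The failure mode can be seen in miniature with any statistical fit, as in the sketch below: a linear model trained on a narrow range (standing in, loosely, for patterns in training data) keeps producing smooth, confident-looking numbers outside that range, where they are simply wrong. The data and model are invented for illustration.

```python
# Toy demonstration of plausible failure at the edges: a model fit on one
# region keeps producing fluent-looking numbers outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 50)              # the "familiar" situations
y_train = np.sin(x_train) + rng.normal(0.0, 0.01, 50)

# A line fits nearly perfectly inside the range, since sin(x) ~ x there.
slope, intercept = np.polyfit(x_train, y_train, 1)

def predict(x: float) -> float:
    return slope * x + intercept

for x in (0.5, 3.0):                             # in-distribution vs. novel
    print(f"x={x}: predicted {predict(x):+.3f}, actual {np.sin(x):+.3f}")
# x=0.5 agrees closely; x=3.0 yields a fluent number that is far off.
# The failure announces nothing -- it reads like just another answer.
```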

Origin

The background problem was developed across Dreyfus's work but received its most extended treatment in What Computers Still Can't Do (1992). The concept draws on Wittgenstein's later philosophy—particularly the idea that rule-following rests on a background of shared practices that cannot themselves be made explicit as rules—and on Heidegger's analysis of the Bewandtnisganzheit, the totality of involvements that constitutes the meaningful context within which any entity shows up as what it is.

The application to statistical AI was developed in Dreyfus's later essays, including 'Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian' (2007), which engaged specifically with Rodney Brooks's situated robotics and with the emerging connectionist approaches that large language models would eventually instantiate.

Key Ideas

Beyond formalization. The background cannot be written down as a finite set of rules or propositions because the practices that constitute it are the condition for making rules and propositions meaningful.

Statistical approximation. Large language models approximate the background by absorbing its textual traces, producing outputs that function as though grounded in genuine common sense.

The residue problem. The gap between approximation and possession becomes visible precisely where the background matters most—in novel situations that the statistical patterns cannot predict.

The invisibility of failure. Background failures in AI output do not announce themselves as failures; they produce plausible prose that only a reader with genuine background can recognize as wrong.

Debates & Critiques

Some researchers argue that with sufficient scale, statistical approximation becomes functionally equivalent to possession—that the gap Dreyfus identifies will close asymptotically as models grow. Dreyfus's framework, as this volume develops it, responds that the gap is not a function of scale but of structure: a system without embodied engagement cannot, even in principle, possess the kind of background that embodied engagement constitutes. The approximation may become arbitrarily good, but the remaining failures will be precisely at the edges where background understanding is most critical.


Further reading

  1. Hubert L. Dreyfus, What Computers Still Can't Do (MIT Press, 1992)
  2. Hubert L. Dreyfus, 'Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian,' Philosophical Psychology 20:2 (2007)
  3. John Searle, 'Minds, Brains, and Programs,' Behavioral and Brain Sciences 3 (1980)
  4. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe (Blackwell, 1953)