Situated action is Lucy Suchman's 1987 reframing of intelligent behavior as the responsive navigation of specific circumstances rather than the execution of plans. Drawing on ethnomethodology and her fieldwork at Xerox PARC, Suchman argued that plans function as resources for action, not determinants of it — more like travel itineraries than blueprints. Competence lives in the practitioner's capacity to read the situation and respond to what is actually there, improvising when the plan and the territory diverge. The concept became foundational for human-computer interaction, cognitive science, and the critique of classical AI, and it returns with fresh urgency in the age of large language models, which generate plans addressed to described situations rather than encountered ones.
The concept emerged from Suchman's observation that the photocopier help system at PARC assumed users had plans the system could recognize and support. Actual users did not have plans in this sense. They had vague intentions, partial understandings, and interpretive frameworks shaped by prior experience. They improvised constantly, reading the machine's displays and responding to what they found. When the response produced an unexpected result, they adjusted. The intelligence in their activity was not in any pre-formed plan but in the ongoing, real-time responsiveness to the specific circumstances of that particular moment.
Suchman's reframing was a direct challenge to the dominant planning paradigm in artificial intelligence, which treated intelligent behavior as the formation and execution of internal representations. Herbert Simon and Allen Newell had built careers on this assumption. In 1993 Alonso Vera and Simon published a formal rebuttal in Cognitive Science arguing that situated action could be reabsorbed into planning. Suchman's response crystallized the debate that has defined the philosophy of AI ever since: is intelligence in the plan or in the situation?
The concept extends beyond photocopier interfaces to every domain where competent practice is studied. Julian Orr's research on Xerox field technicians showed the most effective repair workers were not the most procedure-faithful but the most improvisationally responsive. The surgeon who encounters unexpected adhesions, the air traffic controller reconfiguring traffic in real time, the jazz musician phrasing to the room — all exhibit situated action as the signature of expertise. The concept became foundational to ethnomethodology-informed workplace studies and to the critical strand of human-computer interaction.
In the age of large language models, situated action acquires new diagnostic power. The tools generate outputs that look like competent action, but Suchman's framework insists on a distinction: the machine generates plans addressed to described situations, while competent action navigates encountered situations. The gap between plans and actions has not closed; it has been displaced — from the human implementer who once navigated it to the machine that now generates within it.
Suchman arrived at Xerox PARC in 1979 as an anthropologist among physicists and computer scientists. Management hired her because users were struggling with machines the engineers considered well-designed. Her ethnographic method — watching what people actually did rather than what they were supposed to do — was unfamiliar in the lab. The methodology came from ethnomethodology, conversation analysis, and Erving Goffman's sociology of interaction. The synthesis of these traditions with computer science produced a framework that neither field possessed independently.
The concept was formally introduced in Plans and Situated Actions: The Problem of Human-Machine Communication (1987), Suchman's revised Berkeley dissertation. The book's quiet title concealed a radical argument that landed like a bomb in the AI research community. Her 2007 expansion, Human-Machine Reconfigurations, extended the framework to military systems, algorithmic governance, and the politics of AI development.
Plans as resources, not determinants. A plan is something actors consult, use loosely, or abandon depending on what the situation demands — not a blueprint that determines what happens next.
Intelligence in the situation. Competent action lives in the responsive, adaptive, improvisational activity through which a person navigates circumstances no plan could fully anticipate.
The gap between described and encountered. Every representation is simpler than the reality it represents. Action happens in the gap, which is structural and cannot be eliminated by better representations.
Retrospective reconstruction. Plans often appear to precede action because people construct them after the fact, telling themselves stories about what they intended based on what they actually did.
Open worlds vs closed worlds. Human practice occurs in open worlds with unbounded contingencies; AI systems operate on closed-world representations — and the boundary between them is where situated intelligence lives.
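The closed-world point can be made concrete with a toy sketch. A plan encoded as a finite mapping handles only the contingencies its author anticipated; any encountered state outside that vocabulary simply has no entry. All names below are hypothetical, chosen for illustration:

```python
# Illustrative sketch only: a toy "closed-world" plan whose vocabulary of
# situations is fixed in advance, contrasted with an encountered situation
# that falls outside it. All state and action names are hypothetical.

# The plan: a closed-world mapping from anticipated states to actions.
COPY_PLAN = {
    "ready": "press_start",
    "out_of_paper": "load_tray",
    "toner_low": "replace_toner",
}

def planned_response(state: str):
    """Return the plan's action, or None when the encountered state
    lies outside the plan's closed-world representation."""
    return COPY_PLAN.get(state)

# Anticipated contingencies are handled...
print(planned_response("out_of_paper"))   # load_tray

# ...but a contingency outside the representation gets no response at all.
# Navigating this structural gap is what the framework calls situated action.
print(planned_response("jam_in_duplexer"))  # None
```

The dictionary's fixed key set plays the role of the closed world: better engineering can enlarge it, but it remains finite, while the encountered situation does not.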
Critics including Simon and Vera argued that situated action could be formally modeled as sophisticated planning with conditional rules — that the distinction was one of degree rather than kind. Suchman responded that the disagreement concerns where intelligence lives: in the representation or in the responsive engagement. Four decades of AI progress have not resolved the debate; large language models arguably intensify it by producing outputs that look like situated responses while operating entirely on representations.