The gap between plans and actions is Suchman's name for the irreducible distance between any representation of intended activity and the actual, situated unfolding of that activity. Plans address described situations; actions address encountered ones. The gap is not a defect of poor planning but a structural feature of the relationship between any map and any territory. It is the space where competent practitioners improvise, where situated knowledge accumulates, and where, crucially, AI outputs must be evaluated — because AI generates plans, and only someone who has navigated the territory can judge whether a plan will hold when it meets reality. In the age of large language models, the gap has not closed; it has been displaced from the human practitioner to the machine.
The gap is Suchman's single most consequential concept for thinking about AI, because it clarifies what AI does and what AI cannot do. AI systems — from the Xerox photocopier help system of the 1980s to contemporary large language models — generate plans: representations of how a described situation might be addressed. They operate on closed worlds (training data, prompts, conversation context) and produce outputs whose quality is a function of how well those outputs match the statistical patterns learned during training. Whether the plans hold in the open world — whether the generated code works in this deployment environment, whether the generated brief succeeds in this courtroom, whether the generated diagnosis is correct for this patient — is a question the machine cannot answer.
In the pre-AI world, the gap was navigated by human implementers. The engineer translated a specification into code, encountering along the way the specific dependencies, edge cases, and emergent behaviors that no specification could anticipate. The navigation was where situated knowledge accumulated. Each encounter with resistance taught the implementer something about the territory that the map did not contain. The gap was productive friction — the site where practitioners were formed.
AI displaces this navigation. The AI system encounters the dependencies and edge cases and resolves them according to patterns derived from training data. The user provides a description at one end and receives an artifact at the other. The middle — where understanding develops — has been evacuated of human presence. The gap still exists, but the human is no longer in it. This displacement is the source of the deep unease that runs through The Orange Pill and that Suchman's framework names with precision.
The gap's structural character means it cannot be closed by better AI. Representations are useful precisely because they simplify; a representation as complex as the reality it represents would not be a representation but a duplicate. The permanent gap between map and territory is where human situated intelligence has always operated and where it must continue to operate — unless institutions accept a trajectory in which AI-generated plans are deployed without anyone present who could evaluate them against the actual territory they address.
The concept emerged from Suchman's PARC photocopier studies, where she observed that the machine's instructional plans and the users' actual activity were systematically misaligned. The misalignment was not a failure of the machine's designers — who were intelligent and careful — but a structural consequence of the relationship between any plan and any action. Her development of the concept drew on phenomenology, particularly the tradition of embodied engagement with ready-to-hand reality, though she rarely cited the Heideggerian lineage directly.
The gap's central role in AI critique sharpened over Suchman's career, receiving its fullest treatment in her 2007 Human-Machine Reconfigurations and in her more recent work on military AI and algorithmic targeting, where the consequences of treating plans as actions are measured in human lives.
Plans address descriptions; actions address encounters. The asymmetry is structural. No plan can specify the action because no description can exhaust the situation.
The gap is permanent. Better representations narrow specific gaps but cannot eliminate the gap as such. Simplification is what representations do; the residue is where situated intelligence operates.
Situated knowledge develops in the gap. Practitioners are formed by navigating the specific circumstances plans could not anticipate. Remove the navigation and you remove the formation.
AI displaces rather than closes the gap. The gap now falls in two places: between the user's description and the machine's output, and between the machine's output and deployment reality, with no human navigator in either.
Treating plans as actions is the structural error. The most dangerous institutional response to AI is to accept generated outputs as if they had been tested against the territory they address. They have only been tested against the description.