AI outputs are plans, not actions. This is Suchman's most clarifying proposition for thinking about large language models, algorithmic targeting, and every domain in which AI-generated results are deployed and trusted. A plan is a representation of action made in advance; an action is what actually occurs when an agent engages with specific circumstances. Plans address described situations — the prompt, the specification, the training corpus. Actions address encountered situations — the real deployment environment, the actual courtroom, the specific patient. The gap between described and encountered is permanent, and the catastrophic institutional error is treating plans as actions: accepting AI-generated outputs as if they had already been tested against the reality they claim to address.
The proposition extends Suchman's original distinction between plans and situated action to the specific case of AI-generated outputs. A plan is useful: it provides orientation, structure, a starting point. What a plan cannot do is guarantee that the action it describes will succeed in the circumstances the actor will actually encounter. The same is true of AI-generated code, briefs, diagnoses, and analyses. They are plans — representations of how the described situation might be addressed — not actions that have already been tested against the territory.
The distinction has practical bite across every domain of AI deployment. AI-generated code addresses the described problem; whether it works in production, with the actual dependencies, under actual load, for actual users, depends on situated knowledge that the code does not contain. AI-drafted legal briefs address the legal problem as described; whether the argument succeeds depends on situated judgment about this judge, this case, this jurisdiction. AI-generated diagnoses address the patient as characterized; whether the diagnosis is correct for this specific patient depends on the situated clinical judgment that hands-on examination develops.
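The software case can be made concrete. The sketch below is illustrative only, not drawn from Suchman's work, and every name in it (Proposal, EvaluatedAction, evaluate_in_situ) is hypothetical. It encodes the plan/action distinction as a type distinction: an AI output enters the system as a Proposal and cannot reach deployment until situated checks, run against the encountered environment rather than the described one, promote it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: plans and actions as distinct types, so that a raw
# model output can never be deployed on the strength of the described
# situation alone. All names here are invented for illustration.

@dataclass
class Proposal:
    """An AI output: a representation of action, addressed to a described situation."""
    content: str
    described_situation: str  # the prompt or spec the generator actually saw

@dataclass
class EvaluatedAction:
    """A proposal that has been judged against the encountered situation."""
    content: str
    evidence: list[str]  # records of the situated checks it passed

# A situated check touches the real deployment environment (actual
# dependencies, actual data, actual load) and reports pass/fail plus a note.
SituatedCheck = Callable[[str], tuple[bool, str]]

def evaluate_in_situ(proposal: Proposal, checks: list[SituatedCheck]) -> Optional[EvaluatedAction]:
    """Promote a plan to an action only if every situated check passes."""
    evidence = []
    for check in checks:
        passed, note = check(proposal.content)
        if not passed:
            # The described situation diverged from the encountered one;
            # the plan stays a plan.
            return None
        evidence.append(note)
    return EvaluatedAction(content=proposal.content, evidence=evidence)

def deploy(action: EvaluatedAction) -> None:
    """Deployment accepts only EvaluatedAction, never a raw Proposal."""
    ...
```

Nothing in the sketch makes the checks themselves adequate; it only makes the confusion of plan with action a type error rather than an invisible default.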
Suchman's 2024 analysis of algorithmic targeting dramatizes the consequences with lethal clarity. Target recommendations from AI systems are plans: proposals based on statistical patterns in signals intelligence. The military personnel tasked with evaluating them face the same structural problem as engineers evaluating AI-generated code: they must judge generated plans against situated realities. When the volume of plans exceeds the evaluative capacity, plans are treated as actions. The output is accepted as if it had been tested against the territory. In intelligence work this produces misattributions; in targeting it produces civilian deaths.
The orientation Suchman's proposition implies is specific: receive every AI output as a proposal, not a conclusion. Evaluate every proposal against the specific circumstances of deployment. Maintain the situated knowledge that evaluation requires, through deliberate institutional investment in the practices that produce it. This orientation is neither the uncritical adoption the triumphalists advocate nor the wholesale refusal the critics recommend — it is the disciplined insistence that plans and actions are different things, and that confusing them is the specific institutional mistake that produces the worst AI-era failures.
The proposition is implicit in Suchman's 1987 framework and made explicit in her recent work, particularly her analyses of autonomous weapons and algorithmic governance. It has become one of the most cited propositions in critical AI studies precisely because it gives working language to practitioners and regulators who need to articulate what AI does and does not accomplish.
The application to military systems, where Suchman's analytical contribution has been most sustained, has given the proposition both urgency and precision. Her writing on what she has called 'the algorithmically accelerated killing machine' describes in detail what happens when plans generated at machine speed are treated as actions by humans without time to evaluate them.
Plans describe; actions happen. The difference is structural, not cosmetic. Plans can be made in advance; actions can only occur in encounter.
AI is a plan generator. Every AI output is a proposal for how a described situation might be addressed, not a tested response to an encountered one.
The catastrophic error. Treating plans as actions — accepting outputs without situated evaluation — is the characteristic institutional pathology of AI deployment.
Speed overwhelms evaluation. When outputs accumulate faster than practitioners can evaluate them, plans are accepted by default. The tempo itself becomes the mechanism of failure, as the sketch following these propositions makes concrete.
The responsible orientation. Receive every output as a proposal. Evaluate against specific circumstances. Invest in the situated knowledge evaluation requires. Do this because the gap between plans and actions is permanent and someone must navigate it.
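The tempo point yields to simple arithmetic. The sketch below is a toy model with invented rates, not anything from Suchman's analyses: when a system generates plans faster than practitioners can evaluate them, the surplus is, by construction, accepted or acted on without situated evaluation.

```python
def unevaluated_fraction(generation_rate: float, review_rate: float) -> float:
    """Fraction of generated plans that receive no situated evaluation.

    generation_rate: plans produced per unit time (machine speed)
    review_rate: plans one practitioner can seriously evaluate per unit time
    Toy model: review capacity saturates; everything beyond it is
    accepted by default or queued past relevance.
    """
    if generation_rate <= review_rate:
        return 0.0  # every plan can be evaluated before anyone acts on it
    return 1.0 - review_rate / generation_rate

# Invented rates for illustration: a system emitting 200 plans per hour
# against a reviewer who can evaluate 10 per hour leaves 95% of plans
# unexamined. The tempo, not the generator's accuracy, is the mechanism
# of failure.
print(f"{unevaluated_fraction(200, 10):.0%}")  # -> 95%
```

Under this model the remedies are exactly the ones the responsible orientation names: more situated evaluative capacity, or a generation tempo throttled to match the capacity that exists.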