Human-machine interpretive asymmetry is Suchman's term for the structural one-sidedness of interactions between people and computational systems. Humans bring to any interaction the full apparatus of social intelligence: they interpret behavior as meaningful, attribute intention and understanding, read context, and assign significance. Machines respond to inputs according to their programming or their training, without any understanding of the user's situation, intentions, or state. The interaction looks like a conversation. It is not one. It is a human doing all the interpretive work and a machine producing outputs sophisticated enough to sustain the projection. The asymmetry was already visible in Suchman's 1987 photocopier studies; it becomes more consequential as the machine's outputs become more plausible.
The concept emerged from Suchman's close study of users interacting with the Xerox photocopier help system at PARC. Users consistently read the system's displays as communicative acts — as if the machine were telling them something, asking them something, signaling what they should do. They attributed intention to the machine's behavior because that is what human social intelligence does with any entity producing behavior complex enough to be interpreted. The machine did no equivalent interpretation of the user. The full weight of the interaction's meaning fell on one side.
The asymmetry is not a defect to be corrected. It is a structural consequence of the fact that social interpretation is a capacity built by human beings in human communities through developmental histories that machines do not have. Even if a machine could simulate the outputs of social interpretation — and contemporary large language models can simulate them with remarkable fluency — the simulation is not the activity. The machine produces outputs that look like understanding; the human does the understanding of those outputs.
This matters for a specific reason that The Orange Pill's account of being 'met' by Claude makes vivid. When a user experiences an AI interaction as collaborative, reciprocal, or understanding, she is describing her own interpretive achievement, not the machine's capability. The experience is genuine. The attribution is a category error. And the asymmetry deepens as the machine's outputs become more sophisticated, because sophistication supports the projection more effectively. The user does not merely attribute understanding to the machine; she enters a state that genuinely feels like intellectual partnership, because the machine's responses are coherent enough to sustain the feeling across extended interactions.
The quality of any AI-assisted work therefore depends entirely on the quality of the human's interpretive capacity — her ability to evaluate machine outputs against the reality those outputs claim to address. And that capacity, Suchman's framework insists, is itself the product of situated experience. The asymmetry is thus doubly consequential: the machine cannot check its own outputs, and the user's ability to check them is being eroded by the very tools that generate them.
The concept was articulated most fully in Plans and Situated Actions (1987) and extended in Human-Machine Reconfigurations (2007). It draws on the ethnomethodological insistence that social order is an ongoing interpretive achievement — a tradition Suchman brought to computer science from her training under Jack Whalen and through her engagement with the work of Harold Garfinkel and Harvey Sacks.
In more recent work — including her 2023 analysis of the uncontroversial 'thingness' of AI and her AI Now Institute interviews — Suchman has sharpened the concept to address the specific case of large language models, where the asymmetry is simultaneously most consequential and most difficult to see.
Humans attribute; machines compute. The interpretive labor is entirely on the human side. The machine produces outputs; the human reads them as meaningful.
The illusion of conversation. What looks like dialogue is a human interpreting machine outputs through social intelligence and a machine responding to inputs through computational rules. The form is conversational; the structure is not.
Sophistication deepens the asymmetry. As AI outputs become more fluent, they more effectively sustain the human's interpretive projection. The asymmetry becomes harder to see as it becomes more consequential.
Feeling met is a human achievement. The experience of being understood by AI is the user's own social intelligence operating on outputs that can sustain the attribution. It is not evidence of machine understanding.
The regulation falls on the user. A human collaborator provides cues (fatigue, disagreement, shifting energy) that regulate the interaction. AI provides none. The user must supply all the cues — against the pull of an interaction that sustains the illusion of mutual engagement.