The Xerox photocopier help system was an early expert-system interface designed to guide users through complex copying procedures by inferring their goals and offering step-by-step instructions. Built on the assumption that users have explicit plans the machine can recognize and support, the system became the unintended empirical site for Suchman's 1987 demolition of the planning paradigm in AI. Her video studies of pairs of users attempting double-sided copies revealed that users did not have plans in the system's sense — they had vague intentions, partial understandings, and interpretive practices the machine could not anticipate. The resulting analysis in Plans and Situated Actions transformed human-computer interaction and remains the structural model for thinking about how AI interfaces misunderstand their users.
The help system was developed at Xerox PARC during the early 1980s as part of a broader effort to make photocopiers more usable. Xerox's market advantage depended on the workplace deployment of increasingly sophisticated machines, and users' struggles with those machines were a genuine business problem. The help system embodied the prevailing engineering theory of user support: users have goals, goals can be inferred from behavior, and support consists of providing step-by-step instructions tailored to the inferred goal. The system's designers were technically sophisticated and acted in good faith. Their theory of the user was nonetheless wrong.
Suchman's method was to video-record pairs of users attempting double-sided copies and to analyze the recordings using the close attention to interactional sequence developed in ethnomethodology and conversation analysis. What emerged was a portrait of users improvising constantly, interpreting the machine's displays through frameworks the designers had not anticipated, forming hypotheses about what the machine was telling them, and proceeding by trial and error when the hypotheses failed. The intelligence in the users' activity was real, but it bore no resemblance to the planning model the help system was built to support.
The photocopier case became paradigmatic because the mundanity was the point. AI researchers of the era preferred to model exotic cognitive performers — chess grandmasters, theorem provers, medical diagnosticians. Suchman's choice of a mundane task demonstrated that even the simplest human-machine interactions exhibit the improvisational, interpretive, situated character that the planning paradigm could not capture. If planning models failed for making double-sided copies, they could hardly succeed for the activities the field treated as central to intelligence.
The structure of the photocopier interaction — a machine generating plans, a user doing all the interpretive work, the asymmetry concealed by the appearance of conversation — has proved to be the template for every subsequent generation of AI interface. The large language models of the 2020s present the same structural situation at incomparably higher sophistication: the machine generates plausible outputs, the user interprets them through social intelligence, and the asymmetry is harder to see precisely because the outputs are more sophisticated.
Suchman's initial observations were made as part of her ethnographic work at PARC, beginning around 1980. The formal study drew on methodological traditions unfamiliar to the PARC engineering culture: Harold Garfinkel's ethnomethodology, Harvey Sacks's conversation analysis, and Erving Goffman's interactional sociology. The video corpus and its analysis formed the empirical core of her Berkeley dissertation, which became Plans and Situated Actions.
The help system had a theory of the user. That theory — users execute plans toward goals the system can recognize — was coherent, technically implementable, and empirically wrong.
Users did not execute plans. They formed intentions, interpreted displays, improvised responses, and adjusted on the fly. Their activity was intelligent in a way the planning model could not describe.
Interpretive asymmetry. The user brought full human social intelligence to the interaction; the machine brought only procedural responses. The interaction looked like conversation but was structurally one-sided.
Mundanity as method. Suchman's choice of photocopying — not chess, not theorem proving — demonstrated that the improvisational character of intelligence is present even in the simplest human-machine interactions.
The template persists. Contemporary AI interfaces reproduce the photocopier's structure at higher sophistication: the machine generates plans, the human does the interpretation, and the asymmetry deepens as the outputs become more impressive.