Open worlds and closed worlds is Suchman's analytical distinction between the bounded domains in which computational systems operate successfully and the unbounded, emergent reality in which human practitioners must actually live and act. A closed world is one in which the variables are known, the contingencies are bounded, and a plan can specify the action in advance: the chessboard, the training corpus, the described situation. An open world is one in which the variables cannot be fully enumerated, the contingencies are unbounded, and action must be improvised in response to what the actor actually encounters: the deployment environment, the courtroom, the battlefield. AI systems — no matter how sophisticated — operate on representations of open worlds, and the boundary between representation and reality is where situated human intelligence has always lived.
The distinction is central to understanding what AI can and cannot do. In her 2025 AI Now Institute interview, Suchman observed that 'robotics has been successful to the extent that the worlds in which robots operate have been effectively closed.' The same applies to large language models: they operate on closed-world representations — training data, prompts, conversation history — and produce outputs conditioned on those representations. They do not operate on open worlds, because open worlds are not reducible to the representations that can be fed to a computational system.
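The point can be made concrete in code. The sketch below is illustrative rather than drawn from Suchman, and every name in it (build_context, generate, the message format) is hypothetical; what it shows is structural. Everything a language model conditions on must first be serialized into a finite context, and anything not encoded there is absent from the model's world.

```python
# Illustrative sketch: a language model's "closed world" is exactly the
# finite, serialized context it receives. All names here are hypothetical.

def build_context(system_prompt: str, history: list[dict], user_msg: str) -> str:
    """Flatten the whole interaction into one closed representation."""
    lines = [f"[system] {system_prompt}"]
    for turn in history:
        lines.append(f"[{turn['role']}] {turn['text']}")
    lines.append(f"[user] {user_msg}")
    return "\n".join(lines)

context = build_context(
    system_prompt="You are a deployment assistant.",
    history=[{"role": "user", "text": "The build fails on staging."}],
    user_msg="Why might that be?",
)

# The open world (the actual staging host, its dependency versions, its
# live network conditions) appears here only insofar as someone described it.
# output = generate(context)  # hypothetical call; conditioned only on `context`
```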
Human practice always occurs in open worlds. The deployment environment has specific dependencies at specific versions with specific interaction patterns that no description captures. The patient has a specific body, a specific history, and specific circumstances that the clinical presentation represents only partially. The courtroom has specific judges, specific opposing counsel, specific juries with specific backgrounds, and specific case facts whose relevance becomes apparent only in situ. In every case, the practitioner's competence consists in navigating the gap between the closed-world representation (specification, chart, brief) and the open-world reality.
The closed world is not defective; it is the only kind of world computational systems can have. The problem arises when closed-world outputs are treated as adequate to open-world situations — when the AI's generated plan is accepted as if it had been tested against the territory it addresses. This is the institutional pathology documented most vividly in Suchman's recent work on military AI and algorithmic targeting: the closed-world output of the targeting system is accepted as a determination about an open-world situation (this specific group of people in this specific place at this specific time), with lethal consequences when the representation diverges from reality.
The distinction implies something specific about the developmental conditions for competent AI use. A practitioner who has only ever interacted with the AI's closed-world representation of her domain — the described problem, the generated output, the iterative prompt cycle — has experience of a closed world. When she encounters the open world (the actual deployment, the live client, the real patient), she faces a gap her experience has not prepared her to navigate. The situated knowledge that would allow her to cross the boundary between closed and open develops only through direct engagement with open-world reality — engagement that AI-assisted workflows increasingly eliminate.
The distinction runs throughout Suchman's work but is articulated most explicitly in her 2007 Human-Machine Reconfigurations and her recent public writing. It draws on Philip Agre's earlier work on situated AI and on broader traditions in phenomenology and STS that distinguish between representational and engaged modes of knowing.
The military application of the distinction — developed in Suchman's analyses of drone warfare and algorithmic targeting — has given the concept particular urgency. The gap between the closed-world signal intelligence of targeting systems and the open-world reality of the people targeted is where civilian lives are lost.
Representations are necessarily closed. A computational system can only operate on what can be encoded. The encoding is always simpler than the reality it encodes (a schematic sketch of this point follows these statements).
Practice is always open. Real action occurs in contexts whose contingencies cannot be enumerated in advance. Situated intelligence is the capacity to act competently in this openness.
The boundary is structural. It cannot be closed by better representations, because representations are useful precisely because they simplify.
Treating closed as open is the institutional error. Accepting AI outputs as adequate to the open world — as if the plan had been tested against the territory — is the characteristic failure of AI deployment.
Situated knowledge crosses the boundary. Practitioners who have engaged open-world reality can evaluate closed-world outputs against what they know of the territory. Practitioners formed only by closed-world interaction cannot.
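The first of these statements can be given schematic form in code. The sketch below is not Suchman's example; the schema and field names are hypothetical. It illustrates how a representation fixes in advance what any downstream computation can see: whatever the schema omitted is structurally unavailable, however consequential it may be in the open world.

```python
from dataclasses import dataclass, field

# Hypothetical schema: its fields were decided in advance, and only what
# fits them can be recorded. The actual patient exceeds the record.

@dataclass
class PatientRecord:
    age: int
    diagnosis_code: str
    vitals: dict[str, float] = field(default_factory=dict)

def triage_priority(record: PatientRecord) -> int:
    # Any computation over the representation can use only these fields;
    # what the schema left out cannot enter the calculation at all.
    return 1 if record.vitals.get("heart_rate", 0.0) > 120 else 2
```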