There is a moment in every significant inquiry when the ground gives way. The evidence points in two directions. The frameworks contradict each other. The data says one thing and the gut says another. Most people experience this moment as a problem to be solved — an ambiguity to be resolved as quickly as possible so work can proceed. The itch for resolution is almost physical. Mary Catherine Bateson spent her career arguing that this itch is not a feature of good thinking. It is a hazard. The urge to resolve ambiguity prematurely — to choose a direction before the ambiguity has been fully explored, to collapse contradictory possibilities into a clean narrative before the contradiction has yielded its insights — is one of the most common and destructive habits of Western intellectual culture. In the age of AI, where every ambiguity receives an immediate, confident synthesis, the capacity to sustain ambiguity becomes the scarce and decisive human skill.
Bateson's argument is not that ambiguity is pleasant or that uncertainty is comfortable. Ambiguity is uncomfortable. That is its value. The discomfort of holding contradictory possibilities in suspension produces a cognitive state that comfortable clarity cannot produce — a state of heightened attention, active searching, openness to patterns that premature resolution would have foreclosed. The person sitting in ambiguity is working harder than the person who has resolved it, not because sitting is harder than choosing but because the cognitive work of holding multiple possibilities active simultaneously demands more of the mind than selecting one possibility and suppressing the rest.
The AI discourse is a study in premature resolution. The triumphalists have resolved the ambiguity: AI is an expansion of human capability, and the appropriate response is enthusiastic adoption. The elegists have resolved it differently: AI is a degradation of human depth, and the appropriate response is resistance or mourning. Both resolutions are clean. Both provide clear direction. Both suppress the specific insights the ambiguity, held open, would produce. The ambiguity the AI moment presents is genuine and irreducible — the tools are simultaneously an expansion and a risk, and the inability to determine which at any given moment is not a failure of self-knowledge but an accurate registration of a genuinely ambiguous situation.
AI systems themselves have no tolerance for ambiguity. A large language model does not sit with contradiction — it resolves it, immediately, confidently, often incorrectly. When asked a question that admits multiple valid answers, the model produces one answer without signaling that alternatives exist. When presented with evidence pointing in two directions, it synthesizes a coherent narrative that suppresses the contradiction rather than illuminating it. The model's design optimizes for resolution; ambiguity is, from its perspective, noise to be eliminated rather than signal to be preserved.
The human's role in the collaboration, in Bateson's framework, is to maintain the ambiguity that the AI's design eliminates — to notice when the AI has resolved a contradiction that should remain open, to reintroduce the suppressed possibilities, to insist on the discomfort that the AI's fluent coherence has smoothed away. Bateson called the capacity to sustain productive ambiguity wisdom. In her 2018 Edge.org conversation, she said directly that AI "lacks wisdom, because wisdom is more multi-dimensional" than the kind of intelligence AI possesses.
The framework emerged from Bateson's anthropological fieldwork, particularly her years in Iran and her broader experience as a cross-cultural researcher. The anthropologist's method demands sustained ambiguity — entering a culture whose categories do not match one's own, resisting the temptation to fit observations into preexisting frameworks, allowing the culture's own logic to emerge over time. Bateson generalized that method into a theory of good thinking writ large.
The framework received sharpened articulation in her later engagements with AI. In conversations in the late 2010s, Bateson began explicitly identifying AI's incapacity for sustained ambiguity as its defining cognitive limitation — a limitation not of current systems but of the computational paradigm itself.
Premature resolution destroys insight. The answers that emerge only from extended ambiguity are not available to minds that resolve contradictions too quickly.
AI optimizes for resolution; humans must optimize for sustained inquiry. The division of labor in the collaboration runs along this axis.
Ambiguity tolerance is cultivated by discontinuity. People who have navigated multiple life transitions develop higher tolerance than those whose lives have followed linear paths.
Wisdom is multi-dimensional ambiguity tolerance. Bateson's definition: the capacity to engage productively with what one does not know — to hold open the questions that do not have clean answers.