Ambiguity as Organizational Resource — Orange Pill Wiki
CONCEPT

Ambiguity as Organizational Resource

March's argument that ambiguity — not knowing what the question is — enables exploration, and that its premature resolution by AI forecloses the interpretive alternatives from which genuine organizational novelty emerges.

Most organizations treat ambiguity as a problem to be eliminated. March spent decades arguing that this assumption is not merely wrong but dangerous — that ambiguity, properly understood, is one of the most valuable resources an organization possesses, and that the drive to eliminate it produces organizations that are clear, decisive, and unable to adapt.

The argument distinguishes ambiguity from uncertainty. Uncertainty is the condition of not knowing which of several well-defined outcomes will occur; organizations handle it through probability, scenario planning, and expected-value calculation. Ambiguity is a different and more fundamental condition — not knowing what the question is, having multiple equally plausible interpretations of the same situation, none of which can be validated with available information. Ambiguity is not uncertainty about the answer; it is uncertainty about the question. AI resolves ambiguity with an efficiency that March's framework identifies as structurally dangerous.

In the AI Story


March and Olsen argued in Ambiguity and Choice in Organizations (1976) that ambiguity is the normal condition of organizational life, not the exception. Most significant organizational decisions are made under conditions where goals are unclear, technology is uncertain, and participation is fluid. These are not pathological conditions but the conditions under which complex organizations routinely operate. The critical implication — the one that challenges clarity-as-virtue orthodoxy — is that ambiguity enables exploration. When an organization does not know exactly what it wants, it is free to discover what it wants through action. When a situation admits multiple interpretations, the organization can pursue several simultaneously, allowing evidence to accumulate before committing to one.

Conversely, when ambiguity is eliminated — when the organization has committed to a single interpretation, a single set of preferences, a single understanding — exploration ceases. The organization knows what it wants and acts to get it. This is exploitation: effective, efficient, and constrained to the interpretive framework adopted. The framework may be wrong; preferences may be poorly specified; interpretation may be partial. But commitment to clarity has foreclosed the alternatives, and the organization will not discover inadequacy until the framework fails under conditions it was not designed to handle.

AI eliminates ambiguity through a specific mechanism: immediate, confident, well-structured responses to ambiguous situations. Before AI, a practitioner encountering an ambiguous problem would sit with the ambiguity. The sitting was uncomfortable — the specific cognitive distress of not knowing what to do, the same state of not-knowing that meditation traditions call 'beginner's mind' and that creativity researchers identify as a precondition for insight. The distress was productive precisely because it was uncomfortable: it motivated continued search, continued exploration of the interpretive space, continued openness to the possibility that the first interpretation was not the best. AI resolves the discomfort instantly. The practitioner describes the ambiguous problem; Claude responds with a clear, confident analysis selecting one interpretation from the many available. The ambiguity is gone. The path is clear. The practitioner can act.

Segal's account of working with Claude illustrates both the power and the danger. He describes an impasse on Han's diagnosis of smoothness that Claude resolved through a connection to laparoscopic surgery. The resolution was productive; the chapter that resulted is better than either participant could have produced alone. But the resolution also foreclosed alternatives. The moment Claude provided the connection, the interpretive space collapsed. The ambiguity that had kept Segal searching was resolved by one particular connection, and the other connections that the ambiguity was protecting became invisible. They were not rejected. They were never discovered. The confident, well-structured response occupied the space that the ambiguity had held open.

Origin

March developed his theory of ambiguity in collaboration with Johan Olsen in the 1970s. Their 1976 book Ambiguity and Choice in Organizations remains the foundational text. The argument drew on observations of university administration, political decision-making, and technology adoption — contexts where rational-choice models consistently failed to predict actual behavior. The ambiguity framework explained these failures not as cognitive limitations but as structural features of decision-making under conditions where preferences, alternatives, and outcomes were themselves subjects of inquiry rather than given parameters.

The framework's relevance to AI is specific. Byung-Chul Han's critique of smoothness, which Segal engages in The Orange Pill, is recognizable in March's terms as a critique of premature ambiguity resolution. The smooth surface — polished, seamless, friction-free — is a surface from which ambiguity has been eliminated. Han frames this aesthetically; March frames it organizationally; the structural argument is the same.

Key Ideas

Ambiguity versus uncertainty. Uncertainty is not knowing the answer; ambiguity is not knowing the question. Different conditions require different responses.

Normal condition. Ambiguity is the routine state of complex organizations, not a deviation from an idealized state of clarity.

Enables exploration. Multiple simultaneous interpretations keep the interpretive space open; premature resolution forecloses alternatives that might have been superior.

AI as ambiguity-resolver. The tool's confident, well-structured responses eliminate productive discomfort, selecting one interpretation from many without marking the selection as contingent.

The selection conceals itself. AI's output does not say 'here is one of several interpretations'; it says 'here is the analysis,' as though only one analysis were possible.

Debates & Critiques

Whether ambiguity should be preserved at all levels or only at strategic ones is contested. Operational ambiguity is genuinely costly, and AI's ability to resolve it is genuinely valuable; the Berkeley study documents real productivity gains from exactly this kind of resolution. The defensible position is neither to embrace nor to resist ambiguity universally but to distinguish the levels at which each response applies. Operational clarity plus strategic ambiguity is a difficult organizational posture to sustain, because the same learning systems that reward operational clarity will tend to reward strategic clarity as well — making the distinction a matter of deliberate institutional design rather than spontaneous organizational behavior.

Further reading

  1. James G. March and Johan P. Olsen, Ambiguity and Choice in Organizations (1976).
  2. James G. March, 'Bounded Rationality, Ambiguity, and the Engineering of Choice,' Bell Journal of Economics 9 (1978).
  3. Byung-Chul Han, Saving Beauty (2015).
  4. John Keats, letter to George and Tom Keats, 21 December 1817 (on negative capability).
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.