The model's most disturbing implication is that many organizational outcomes — including outcomes subsequently rationalized as products of deliberate strategy — are better understood as artifacts of temporal coincidence than as products of purposeful choice. In a garbage can organization, solutions do not wait politely for problems to arrive. A consultant with a framework pitches it to every client regardless of fit. A technology vendor markets its product as the solution to whatever the customer happens to be worried about. Problems similarly do not wait for solutions: a budget meeting becomes a forum for airing grievances about office space; a product review becomes a discussion about organizational culture. Participants wander in and out of decision arenas based on competing demands. The people present when a decision is made are not necessarily the best qualified; they are the ones who happened to be available.
AI, as it existed in early 2026, is a solution of extraordinary generality — exactly the kind of garbage can participant that the model predicts will spread rapidly through organizations. The tool writes code, generates documentation, drafts reports, analyzes data, answers questions, conducts conversations. This generality is not incidental. It is what makes AI a powerful garbage can solution: flexible enough to attach itself to almost any problem that presents itself. A developer struggling with debugging has ten minutes before a meeting; Claude Code is available; the four streams converge and the attachment is formed. No strategic evaluation preceded this use. The solution found the problem in the time available with the participant who happened to be there.
The scope expansion follows with inexorable logic. Having used AI for debugging, the developer encounters a documentation task — a new problem, with AI still available as solution. Another attachment forms. Then specification drafting, test generation, architectural planning. Each attachment is incidental on its own; together they are transformative. The scope of AI's organizational role expands not through strategic planning but through serial coincidence of problems and an omnipresent solution. This is the ratchet operating inside the garbage can.
Ethan Mollick's application of the framework to AI adoption identified a further implication: AI may not merely operate within the garbage can but reorganize it. If the tool can attach itself to problems with efficiency that human participants cannot match, it may establish new routines and patterns of problem-solution coincidence that bypass the human-mediated processes the organization previously relied on. The garbage can reorganizes itself through a process no participant directed and no authority sanctioned — the ultimate unmanaged organizational change.
Michael Cohen, James March, and Johan Olsen published "A Garbage Can Model of Organizational Choice" in Administrative Science Quarterly in 1972. The title alone was an affront to the rational-planning school's prevailing assumptions about orderly decision-making. The authors drew on observations of university administration and other loosely coupled institutions where the neat sequence of problem identification, alternative generation, evaluation, and selection bore little resemblance to actual practice.
The original paper included a computer simulation — unusual for social science in 1972 — that demonstrated how decisions could emerge from stream collisions even when no participant could articulate the decision's rationale. The simulation's parameters (the rates at which streams arrived, the energy participants brought to decisions, the capacity of choice opportunities) produced outcomes that matched empirical patterns in actual organizations, lending formal weight to what might otherwise have been dismissed as merely descriptive.
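The flavor of that simulation can be conveyed with a minimal sketch. This is not Cohen, March, and Olsen's model — their simulation tracked participant energy, problem loads, and choice capacities in far more detail — but a loose illustration, with hypothetical arrival rates, of the core mechanism: four independent streams, and a decision recorded only when they happen to coincide.

```python
import random

random.seed(1972)

# Hypothetical per-tick arrival probabilities for each of the four streams.
# These parameters are illustrative, not drawn from the original paper.
P_PROBLEM = 0.5      # a problem surfaces
P_SOLUTION = 0.6     # a solution is looking for work
P_PARTICIPANT = 0.4  # someone happens to be available
P_CHOICE = 0.3       # a choice opportunity (a meeting, a deadline) opens

TICKS = 200
decisions = 0

for tick in range(TICKS):
    streams_present = (
        random.random() < P_PROBLEM,
        random.random() < P_SOLUTION,
        random.random() < P_PARTICIPANT,
        random.random() < P_CHOICE,
    )
    # A "decision" happens only when all four streams collide in the
    # same tick — temporal coincidence, not deliberate design.
    if all(streams_present):
        decisions += 1

print(f"{decisions} decisions emerged from {TICKS} ticks of independent streams")
```

Even this toy version makes the point the original simulation made formally: decisions occur at a rate governed entirely by the collision of stream arrival rates, with no participant ever choosing anything.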
Four streams. Problems, solutions, participants, and choice opportunities flow independently; decisions emerge from their collisions.
Organized anarchy. Problematic preferences, unclear technology, and fluid participation characterize most complex organizations most of the time.
Solutions seek problems. Solutions do not wait for problems; they search for problems to attach to, often successfully.
AI as universal solution. The tool's extraordinary generality makes it a garbage can participant of unprecedented power, attaching itself to virtually any problem that presents itself.
Official versus actual strategy. Written AI strategies describe rational processes; actual adoption follows the garbage can, producing persistent gaps between strategy documents and ground-level reality.