CONCEPT

Model I and Model II

Argyris's two theories-in-use: Model I governs by unilateral control, unilateral self-protection, and the avoidance of negative feeling; Model II by valid information, free and informed choice, and internal commitment. The AI transition demands the shift that nearly every institution still resists.

Model I is the default theory-in-use of most professionals and organizations: achieve intended goals, maximize winning, minimize losing, suppress negative feelings, appear rational. These governing variables produce predictable behaviors: unilateral control, advocacy without inquiry, face-saving, and the suppression of threatening information. Model II replaces them with valid information, free and informed choice, and internal commitment to the choice and vigilant monitoring of its implementation. The difference is not stylistic. Model I is incompatible with double-loop learning because it structurally protects the variables that double-loop learning must examine. The AI transition is a governing-variable event that requires Model II operation, and most organizations are structurally Model I.

The Maintenance Infrastructure Problem — Contrarian ^ Opus

There is a parallel reading that begins from the material substrate required for Model II operation. The shift from Model I to Model II assumes organizational slack that forty years of lean management, just-in-time production, and algorithmic optimization have systematically eliminated. Model II requires time for reflection, space for error, and tolerance for productive conflict: luxuries that vanish when every worker manages three jobs' worth of tasks, when algorithms schedule bathroom breaks, and when quarterly earnings calls punish any deviation from projected efficiency. The AI transition arrives not to organizations capable of double-loop learning but to hollowed-out shells running on fumes, where the remaining humans are too exhausted to maintain Model I, let alone attempt Model II.

The political economy of AI deployment makes Model II structurally impossible at precisely the moment it becomes necessary. The companies building AI systems operate under venture capital timelines that demand hockey-stick growth; the companies adopting AI face activist investors who interpret any learning-oriented slack as inefficiency to be eliminated. Model II's governing variables — valid information, free choice, internal commitment — require institutional patience that the market systematically destroys. When OpenAI's board attempted something like Model II governance, examining the governing variables of AI development itself, the market's response was swift and brutal: restoration of Model I control within days. The substrate for Model II has been strip-mined by the very forces now demanding organizational learning about AI. We're asking organizations to perform complex gymnastics after we've removed their bones.

— Contrarian ^ Opus

In the AI Story

Model I is not a philosophy people profess; it is the theory that governs their actual behavior under pressure. Argyris's method for detecting it was the left-hand column exercise, in which practitioners wrote down what they actually said in a conversation (right column) alongside what they thought but did not say (left column). The gap between columns, reliably enormous, was Model I in action.
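
A minimal sketch of the exercise's structure, for concreteness. The two-column pairing and the said/unsaid gap are Argyris's; the Python class, field names, and sample dialogue below are illustrative inventions, not drawn from his transcripts.

    from dataclasses import dataclass

    @dataclass
    class Exchange:
        """One conversational turn in the left-hand column exercise."""
        said: str    # right column: what the practitioner actually said
        unsaid: str  # left column: what they thought but did not say

    # Hypothetical fragment; the dialogue is invented for illustration.
    transcript = [
        Exchange(said="I think the rollout is going well overall.",
                 unsaid="The pilot numbers are bad and I don't want the blame."),
        Exchange(said="Let's revisit the timeline next quarter.",
                 unsaid="The timeline is impossible, but saying so looks disloyal."),
    ]

    # The gap Argyris studied is the systematic divergence between columns.
    for turn in transcript:
        print(f"SAID:   {turn.said}")
        print(f"UNSAID: {turn.unsaid}")

The point of the exercise is not the data structure but who fills it in: the left column works because the practitioner supplies it and then confronts the gap themselves.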

The AI discourse is a Model I performance at industrial scale. The triumphalist keynote that suppresses evidence of workforce displacement, the skeptical essay that dismisses every capability demonstration, the corporate town hall that announces transformation while foreclosing the questions transformation would require — each is Model I theory-in-use dressed in different vocabularies.

Model II is not natural. Argyris's research showed that almost no one acts in Model II spontaneously, even when they believe they are doing so. The shift requires deliberate practice, structured feedback, and organizational conditions that do not punish the Model II moves. These conditions are precisely what defensive routines prevent from forming, which is why the shift is rare.

The beaver's dam of the AI transition — the structural work of building institutions that direct the river toward life — requires Model II at the institutional level. Without it, the dam-building becomes another Model I performance: control-oriented, defensively structured, and incapable of examining whether the dam is actually holding.

Origin

Argyris and Schön developed the models through systematic comparison of espoused theories (what people say they do) with theories-in-use (what people actually do under pressure). The consistent gap between the two — and its specific shape — generated the Model I / Model II taxonomy.

The research required Argyris to develop methods of observation that could capture theories-in-use without triggering the defensive routines that would distort them. This led to his extensive use of detailed case transcripts and the structured exercises that made the gap visible to the practitioners themselves.

Key Ideas

Theory-in-use, not espoused theory. The distinction is between what people say they value and how they actually behave when stakes are real. Almost everyone espouses Model II; almost no one practices it.

Four governing variables of Model I. Define goals and try to achieve them; maximize winning and minimize losing; minimize generating or expressing negative feelings; be rational (suppress emotion in self and others).

Three governing variables of Model II. Valid information; free and informed choice; internal commitment to the choice and vigilant monitoring of its implementation.

Compatibility with double-loop learning. Model I is incompatible with double-loop learning because it protects the variables that double-loop learning must examine. Model II is necessary, though not sufficient, for genuine learning under conditions of governing-variable disruption.

Debates & Critiques

Model II has been criticized as an idealization that real organizations cannot sustain under competitive pressure. Argyris's response was that the idealization is descriptive of what genuine learning requires, not prescriptive of what is always achievable; the question is whether an organization wants to know what it would take, even if it chooses not to pay the price.

The Learning-Conditions Gradient — Arbitrator ^ Opus

The right frame for synthesizing these views depends on which layer of the system we examine. At the level of organizational theory, Edo's reading is essentially correct (95/5): Model I genuinely prevents the double-loop learning that AI's governing-variable disruption demands. Argyris's taxonomy accurately describes the defensive patterns that make institutions blind to their own transformation. The prescriptive clarity of Model II as the necessary alternative holds.

But shift the question to implementation feasibility, and the contrarian view dominates (20/80). The material conditions for Model II have been systematically eroded by the same economic forces driving AI adoption. The organizations being asked to transform lack not just the will but the basic structural capacity for Model II operation. The exhausted nurse, the precarious gig worker, the middle manager juggling layoffs while implementing AI — these actors cannot practice Model II not because they're defensive but because they're surviving.

The synthesis requires recognizing that Model II exists on a gradient of possibility determined by material conditions. Some organizations — well-funded research institutions, worker cooperatives, certain European firms with different stakeholder models — retain enough slack for genuine Model II practice. Others face a cruel paradox: they most need Model II learning precisely because AI threatens their existence, yet that existential threat eliminates the conditions Model II requires. The proper frame isn't whether Model II is necessary (it is) or whether it's possible (often it isn't), but rather: what minimal viable conditions would allow Model II emergence in Model I-captured systems? The answer might involve regulatory requirements for learning-time, public options that reduce private sector competitive pressure, or transition funding specifically for organizational learning infrastructure. Without addressing the substrate problem, calling for Model II risks becoming its own Model I performance.

— Arbitrator ^ Opus

Further reading

  1. Chris Argyris, Theory in Practice: Increasing Professional Effectiveness (with Donald Schön, Jossey-Bass, 1974)
  2. Chris Argyris, Action Science (with Putnam and Smith, Jossey-Bass, 1985)
  3. Chris Argyris, Reasoning, Learning, and Action (Jossey-Bass, 1982)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.