Successive limited comparisons is the analytical engine of the branch method. Where the root method would enumerate all possible alternatives and evaluate each against all possible values, successive limited comparisons proceeds differently: limit the comparison to two to five alternatives that differ incrementally from the current situation, evaluate them only on the dimensions where they differ, choose the one whose consequences are most acceptable, and repeat. The method is successive because each comparison produces information that informs the next one. It is limited because no single comparison attempts to be comprehensive. And it is comparative because alternatives are evaluated against each other, not against an abstract ideal.
The power of the method derives from its constraints. By limiting the number of alternatives, the analyst can examine each in serious detail. By limiting the dimensions of evaluation to those where alternatives differ, the analyst conserves cognitive resources for the comparisons that matter. By proceeding successively, the analyst preserves optionality: each step is small enough to reverse if it proves wrong, and the information generated by each step improves the next one.
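The procedure described above can be sketched in a few lines of code. The sketch below is purely illustrative: the policy names, dimensions, and scores are invented assumptions, not drawn from Lindblom, and the scoring rule (summing rough acceptability scores over differing dimensions) is one simple way to operationalize "most acceptable consequences."

```python
# A minimal sketch of one step of successive limited comparisons.
# All names, dimensions, and scores are illustrative assumptions.

def differing_dimensions(alternatives):
    """Return only the dimensions on which the alternatives differ."""
    dims = set().union(*(a["scores"] for a in alternatives))
    return {d for d in dims
            if len({a["scores"].get(d) for a in alternatives}) > 1}

def compare(alternatives):
    """One limited comparison: rank 2-5 alternatives, but only on the
    dimensions where they differ; identical dimensions are ignored."""
    assert 2 <= len(alternatives) <= 5, "keep the comparison limited"
    dims = differing_dimensions(alternatives)
    def acceptability(alt):
        return sum(alt["scores"].get(d, 0) for d in dims)
    return max(alternatives, key=acceptability)

# Three incremental policy variants, scored on rough consequences
# (higher = more acceptable). In practice these scores would be revised
# after each step, and the chosen alternative would seed the next round.
status_quo = {"name": "no integration",
              "scores": {"skill acquisition": 1, "equity": 2, "cognition": 2}}
guided     = {"name": "integrate with guidelines",
              "scores": {"skill acquisition": 3, "equity": 2, "cognition": 2}}
unguided   = {"name": "integrate without guidelines",
              "scores": {"skill acquisition": 3, "equity": 1, "cognition": 1}}

choice = compare([status_quo, guided, unguided])
print(choice["name"])  # prints "integrate with guidelines"
```

The "successive" part of the method lives outside any single call: after acting on the chosen alternative, the analyst observes consequences, revises the scores, generates fresh incremental variants, and compares again.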
The Orange Pill's framework of Swimmer, Believer, and Beaver is successive limited comparison at its most effective. Three concrete alternatives, each defined in relation to the others, evaluated against their practical consequences rather than against a comprehensive value framework. The comparison illuminates each position in ways that a taxonomy of all possible responses never could. The reader does not need to evaluate the positions against abstract values — she can evaluate them against her own experience, her own assessment of practical consequences. Two readers who disagree about ultimate values can still agree on what the Swimmer's refusal, the Believer's acceleration, and the Beaver's intervention actually produce.
Applied to AI education policy, the method compares three alternatives: integrating AI tools with specific guidelines, integrating them without guidelines, and not integrating them at all. Each district evaluates these alternatives on the specific dimensions where they differ — cognitive development, skill acquisition, equity implications — in the specific context of its students and community. The evaluation is imperfect. The information is incomplete. The decision is revisable. And the decision can be made, which is more than the comprehensive method can promise.
The method's genius is that it accommodates disagreement. Parties who disagree about ultimate purposes can still agree on the direction of improvement from the current position. A school district does not need a comprehensive theory of education to decide whether this year's AI policy should be modified based on last year's experience. The agreement required for action is much less demanding than the agreement the comprehensive method requires.
Lindblom's 1959 article, 'The Science of Muddling Through,' introduced the term alongside 'branch method' and 'muddling through' as complementary descriptions of the same analytical strategy — each emphasizing a different feature of the approach.
Comparison, not optimization. The method compares alternatives against each other, not against an abstract optimum. The alternative chosen is the best of the options considered, not the best of all possible options.
Marginal evaluation. Alternatives are evaluated on the dimensions where they differ. Dimensions on which they are identical are ignored — not because they do not matter but because comparing identical values is wasted cognitive effort.
Iterative learning. Each comparison generates information for the next one. The accumulated learning across many comparisons builds knowledge that no single comparison could produce.
Agreement through practice. Agreement on alternatives is more achievable than agreement on values, because practical consequences are more observable than abstract commitments.