A vicarious selector is any mechanism that performs blind variation and selective retention on behalf of another process, at lower cost. The eye is a vicarious selector for locomotion: instead of walking into obstacles to learn where they are, the organism tests the environment at the speed of light. Language is a vicarious selector for direct experience: instead of touching the fire to learn it burns, the child is told. Each level of Campbell's hierarchy of knowledge processes reduces the cost of variation — and does so, crucially, by constraining the variation. The vicarious, directed, efficient process explores a smaller space more thoroughly. The direct, blind, costly process explores a larger space. The hierarchy is a history of increasing efficiency and decreasing blindness.
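Campbell's base mechanism, blind variation and selective retention, has a direct algorithmic form: generate a variant without foresight, keep it only if it passes the selective test, repeat. A minimal sketch, with an illustrative fitness function and mutation operator (the names and parameters are not from Campbell, just a toy rendering of the loop):

```python
import random

def bvsr(fitness, mutate, seed, generations, rng):
    """Blind variation and selective retention: vary without foresight, retain by test."""
    current = seed
    for _ in range(generations):
        variant = mutate(current, rng)             # blind variation: no lookahead
        if fitness(variant) >= fitness(current):   # selective retention: keep what survives
            current = variant
    return current

# Toy usage: blindly step +1 or -1, retained only when the step lands closer to 10.
rng = random.Random(0)
best = bvsr(
    fitness=lambda x: -abs(x - 10),
    mutate=lambda x, r: x + r.choice([-1, 1]),
    seed=0,
    generations=200,
    rng=rng,
)
```

Because retention is monotone, the result is never worse than the seed; the cost is that every probe is a full trial, which is exactly the cost a vicarious selector exists to avoid.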
Campbell's 1974 essay "Evolutionary Epistemology" identified at least ten distinct levels of vicarious selection, from nonmnemonic problem solving at the base through habit, instinct, visual perception, language, and cultural transmission to scientific methodology at the apex. Each level presupposes and builds upon the levels below it. This nested architecture means that higher-level selectors inherit the efficiency gains of all lower levels while adding new constraints of their own.
The framework assigns a large language model a definite position in this hierarchy: the most powerful vicarious selector ever constructed. It performs the trial-and-error of writing, coding, designing, and reasoning vicariously, at a speed and scale no prior level approached. It reduces the cost of variation by orders of magnitude. A developer who once spent weeks exploring a problem space can now explore it in hours.
But the framework also predicts the cost. The reduction of variation-cost is achieved through the constraint of variation-breadth. The model's outputs are directed by the statistical regularities of the training data, shaped by the patterns of existing human knowledge, constrained by the probability distributions that govern next-token prediction. The model searches the known space with extraordinary thoroughness. It does not reach outside it. This is the structural trade-off every vicarious selector makes, intensified at AI scale.
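The constraint has a simple mechanical form. Generation is iterated selection: at each step the model scores every token, and the next token is a draw from that learned distribution, so every continuation is assembled from moves the training data made probable. A minimal sketch of the sampling loop, with a toy bigram table standing in for a trained model's conditional distribution (the table and tokens are illustrative, not any real model's):

```python
import random

# Toy "learned distribution": bigram next-token probabilities,
# a stand-in for a trained model's conditional distribution.
LEARNED = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, rng):
    """Iterated selection: each token is a weighted draw from the learned distribution."""
    out = [start]
    while out[-1] in LEARNED:
        dist = LEARNED[out[-1]]
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out
```

Every sequence the sampler can emit is a path through the table. A continuation the table assigns no weight simply cannot occur: the sampler searches the known space thoroughly and cannot reach outside it.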
The framework resonates with extended mind accounts but adds a dimension they miss: each externalization of cognition is simultaneously an augmentation of efficiency and a constraint on blindness. The notebook extends memory but structures what can be remembered by its pages and headings. The search engine extends research but channels it through indexed keywords. The AI model extends thought itself and channels it through probability distributions the model has learned.
Campbell developed the concept of vicarious selectors in the 1950s and 1960s, drawing on Egon Brunswik's probabilistic functionalism and W. Ross Ashby's cybernetics. The explicit hierarchical formulation appeared in the 1974 paper, which Campbell considered his most complete statement of evolutionary epistemology.
The concept has been extended by philosophers of science including Karl Popper, who developed his own three-worlds epistemology in dialogue with Campbell, and by cognitive scientists investigating what happens when tools — from writing to computers to AI — take over functions that were previously performed by embodied cognition.
Every vicarious selector trades breadth for efficiency. The gain in search speed is paid for by a loss in search range — the constraint is structural, not incidental.
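The trade-off can be stated as a toy search procedure: blind search samples the whole space, paying full cost per trial; vicarious search samples only the subspace its prior admits, gaining speed per hit but losing reach. A minimal sketch (the function names, the toy space, and the prior are illustrative):

```python
import random

def blind_search(candidates, is_solution, trials, rng):
    """Blind variation: sample uniformly from the full space."""
    for _ in range(trials):
        c = rng.choice(candidates)
        if is_solution(c):
            return c
    return None

def vicarious_search(candidates, prior, is_solution, trials, rng):
    """Directed variation: sample only from the subspace the prior admits."""
    admitted = [c for c in candidates if prior(c)]
    for _ in range(trials):
        c = rng.choice(admitted)
        if is_solution(c):
            return c
    return None
```

With a prior admitting 100 of 1,000 candidates, the vicarious search hits a solution inside that subspace about ten times faster on average; a solution outside the subspace is not merely improbable but structurally unreachable, no matter how many trials are spent.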
The hierarchy is cumulative. Higher-level selectors inherit all the efficiency gains of lower levels while adding their own constraints, producing a system of extraordinary efficiency operating in a search space that narrows with each level.
AI is the highest-level vicarious selector in history. This is a classification, not a criticism — it locates AI structurally within a framework that has fifty years of empirical support.
Vicarious selectors do not eliminate blind variation. They relocate it. The blind variation that produced the training data still occurred. The question is where blind variation will occur next.
The sharpest disagreement concerns whether the constraint imposed by high-level vicarious selectors is absolute or merely statistical. Critics argue that language models can, at sufficient temperature and under sufficient prompting, produce outputs that depart from the training data's statistical regularities. Defenders of Campbell's reading respond that such departures remain governed by the model's learned distributions and therefore stay, however loosely, within the convex hull of the training data. The empirical question — whether AI can produce genuine extrapolation — remains open.
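Both sides of the disagreement can be made concrete in a few lines. Raising the sampling temperature flattens the learned distribution, which is the critics' point; but no temperature can move probability mass onto a token the model has assigned zero probability, which is the defenders'. A minimal sketch with a toy three-token distribution (the logits are illustrative):

```python
import math

def softmax(logits, temperature):
    """Convert logits to probabilities at a given sampling temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy learned distribution: the model has effectively ruled out token 2.
logits = [3.0, 1.0, float("-inf")]

cold = softmax(logits, temperature=0.5)
hot = softmax(logits, temperature=2.0)

# High temperature flattens the distribution over the learned support...
assert hot[0] - hot[1] < cold[0] - cold[1]
# ...but never assigns mass to a token outside that support.
assert cold[2] == 0.0 and hot[2] == 0.0
```

Whether real models' distributions ever contain exact zeros, and whether near-zero mass amounts to the same structural constraint, is part of the open empirical question.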