The regress argument is the logical engine of Ryle's case against the intellectualist legend and, by extension, against the picture of mind that treats all competence as the application of theoretical knowledge. The argument runs: if performing an action intelligently requires first contemplating a rule about how to perform it, then the contemplation is itself an action that can be performed well or badly. Applying the rule intelligently therefore requires further rules about how to apply rules, which require further rules still, without terminus. The regress is vicious — it cannot stop anywhere without arbitrarily privileging one level over all others, and it cannot continue forever within the finite time of an actual action. Since intelligent action manifestly occurs, the intellectualist legend must be wrong. Knowing how is not reducible to knowing that.
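The shape of the argument can be made concrete in a short sketch (a hypothetical illustration in code, not anything Ryle wrote): model the intellectualist requirement as a function that, to apply a rule at one level, must first apply a meta-rule one level up. The call chain has no principled base case, so the requirement is never discharged.

```python
def apply_rule(level: int, max_depth: int) -> int:
    # On the intellectualist picture, applying the rule at this level
    # intelligently first requires intelligently applying a meta-rule
    # at level + 1 -- the same demand all over again. There is no
    # principled base case; max_depth is an artificial cutoff so the
    # structure can be inspected at all.
    if level >= max_depth:
        raise RecursionError(f"no terminus after {level} levels")
    return apply_rule(level + 1, max_depth)

def regress_terminates(max_depth: int) -> bool:
    # However deep we allow the chain to go, the demand for a prior
    # rule is never satisfied: the regress is vicious.
    try:
        apply_rule(0, max_depth)
        return True
    except RecursionError:
        return False

print(regress_terminates(500))  # → False
```

Raising the cutoff only postpones the failure; no finite depth satisfies the requirement, which is the point of calling the regress vicious rather than merely long.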
The argument has the elegance of a mathematical proof and the durability to match. It does not depend on empirical claims about how humans or machines work. It depends only on the logical point that rule-application is itself an action, and that if all intelligent action requires prior rules, the requirement can never be satisfied. The intellectualist legend is incoherent at its foundation, not merely false in its predictions.
The argument applies with particular force to classical symbolic AI, which attempted to capture human intelligence in explicit rule systems. Hubert Dreyfus built his influential critique of classical AI on exactly Ryle's regress: every rule requires further rules to specify the conditions of its application, and those rules require further rules in turn. The frame problem, as AI researchers came to call the issue, is the computational shadow of Ryle's philosophical regress. Classical AI could not escape the regress, which is why it failed.
Deep learning bypasses the regress by abandoning rules altogether. Neural networks develop practical competence through training rather than rule-encoding, and the competence they acquire is precisely knowing how in Ryle's sense — dispositions to respond in certain ways under certain conditions, not explicit propositional knowledge that is then applied. The architecture of contemporary AI is, unintentionally, a vindication of Ryle's regress argument: the only way to build intelligent systems is to avoid the rule-application structure the intellectualist legend required.
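The contrast can be sketched in a few lines (a toy illustration under obvious simplifying assumptions, not a claim about any production architecture): a minimal perceptron acquires the disposition to respond like logical OR purely from corrective feedback on examples. The rule "output 1 if either input is 1" is never written down anywhere; the competence lives in the adjusted weights.

```python
import random

# Training data for logical OR: the system will be shaped by these
# examples, not given a rule that describes them.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def respond(x):
    # The disposition: a pattern of response under conditions,
    # not the application of a stored proposition.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training adjusts the disposition by correction -- the way practice
# shapes a skill -- rather than by installing further rules.
for _ in range(50):
    for x, target in examples:
        error = target - respond(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print(all(respond(x) == t for x, t in examples))  # → True
```

Nothing in the trained system can be pointed to as "the rule" whose application would demand a further rule, which is how the architecture sidesteps the regress rather than answering it.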
The pedagogical implications follow the same logic. If intelligent performance cannot be grounded in prior rules, then educational systems that focus on transmitting propositional knowledge as preparation for later practical application are working on a model the argument shows to be impossible. Practice must be present from the start — not as an application of theory but as the ground from which theory is later abstracted. The teacher who stops grading essays and starts grading questions has restructured education around the reality Ryle's argument describes.
The argument is developed in chapter 2 of The Concept of Mind (1949), where Ryle uses it to establish the irreducibility of knowing how to knowing that. The earliest version appeared in his 1945 Presidential Address to the Aristotelian Society, 'Knowing How and Knowing That', and Ryle refined the argument in subsequent essays. Lewis Carroll's 1895 paper 'What the Tortoise Said to Achilles' presented a related regress in the philosophy of logic, and Ryle was aware of the connection.
Logical, not empirical. The argument depends only on the structure of rule-application, not on facts about brains or machines.
Vicious, not innocent. The regress does not terminate in a privileged foundation; the chain of required rules extends without end, so the intellectualist requirement can never be satisfied.
Confirmed by classical AI's failure. The computational shadow of Ryle's argument is the frame problem, which defeated rule-based AI and forced the turn to neural networks.
Vindicated by deep learning. Systems that achieve intelligence do so by building dispositions, not by encoding rules. The architecture confirms the argument.
Defenders of rule-based cognition have attempted various responses: positing an innate set of terminal rules that require no further rule for application (Chomsky, Fodor); arguing that some rules are self-applying and thus break the regress; distinguishing between the logical structure of competence and the processing structure of performance. The Ryle volume treats these as interesting but ultimately unpersuasive — each response either concedes the point (admitting that some capacities must be non-propositional) or pushes the regress back one step without terminating it.