The regress argument that establishes the irreducibility of knowing how to knowing that is deceptively simple. If intelligent performance required the prior contemplation of a rule, the contemplation itself could be done well or badly — which would require a further rule about how to contemplate, which would require a further rule, and so on without limit. The intellectualist legend — the view that practical competence derives from prior theoretical knowledge — cannot even get started. 'Intelligent practice is not a stepchild of theory,' Ryle wrote. 'It is the ancestress of theory.'
The arrival of large language models has produced the most dramatic empirical confirmation of Ryle's thesis in the history of technology. Claude writes Python code without holding propositions about Python to be true, without affirming that Python is interpreted, without possessing any of the apparatus of propositional belief. And yet Claude's code runs, accomplishes tasks, handles edge cases. The knowing how is present. The knowing that is absent — or rather, it is absent in the form the intellectualist legend demanded. A 2024 paper in Information drew the connection explicitly: 'Deep neural networks do produce knowledge how, but, because of their opacity, they do not in general produce knowledge with a rationale.'
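The point can be made concrete with the kind of routine code the paragraph describes — a hypothetical illustration, not an example drawn from the paper. The function below runs correctly and handles its edge case, yet nothing in it asserts or presupposes any proposition about Python itself:

```python
def mean(values):
    """Return the arithmetic mean of a sequence, or None if it is empty."""
    values = list(values)
    if not values:  # edge case: empty input
        return None
    return sum(values) / len(values)

print(mean([1, 2, 3]))  # 2.0
print(mean([]))         # None
```

The competence is entirely in the disposition the code enacts; no statement of the form 'Python is interpreted' or 'division returns a float' appears anywhere in it.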
The educational system, built on the intellectualist legend, tests knowing that almost exclusively. State the causes, reproduce the formula, recite the definition. For most of the history of formal education this was a tolerable proxy, because the student who could reproduce the facts had usually engaged with the material enough to develop some of the associated dispositions. AI has destroyed the correlation. The machine now produces propositional outputs with a fluency that makes the human encyclopedist look amateurish, which means a student equipped with Claude can produce the shadow of understanding without any of the substance.
The unbundling of the two kinds of knowledge is the most economically consequential feature of the AI transition. Dispositions long thought inseparable — the capacity to write code and the capacity to judge what code to write; the capacity to compose prose and the capacity to evaluate whether the prose is true — turn out to be independent. The machine possesses the productive dispositions. It does not possess the evaluative ones. The judgment economy is the economic name for this unbundling.
The distinction was introduced in Ryle's 1945 Presidential Address to the Aristotelian Society, 'Knowing How and Knowing That,' and developed at length in chapter 2 of The Concept of Mind (1949). Ryle presented the distinction as already familiar to competent speakers of English — something ordinary language preserved but philosophers had systematically neglected.
The regress argument. If knowing how reduces to knowing that, intelligent rule-application requires further rules, and the regress is infinite. Since intelligent action manifestly occurs, the reduction fails.
Practice precedes theory. Rules are abstractions from competent performance, not prescriptions that make performance competent. The skilled practitioner builds theory from practice, not practice from theory.
The AI vindication. Neural networks develop practical competence without propositional knowledge — exactly the dissociation Ryle's framework predicted. Claude knows how to write code without knowing that Python is interpreted.
The pedagogical consequence. Education organized around testing knowing that is training for a competence machines now perform better. Knowing how — judgment, taste, questioning — is the form of knowledge that retains human value.