By Edo Segal
The question that broke my debugging habit was not about code. It was about a dog.
Can a dog be intelligent? Not "Does a dog have consciousness?" or "Does a dog possess a theory of mind?" — those are graduate seminar questions. The plain one: Can a dog be intelligent? You watch a border collie work a flock of sheep across a hillside, reading the terrain, anticipating the ewes' panic, adjusting pressure and angle in real time with a precision that would humble most project managers — and the answer is obviously yes. The dog is intelligent. Nobody watching that performance reaches for metaphysics. Nobody demands proof of an inner theatre where the dog first contemplates herding theory before executing the flank. The dog just does it, and does it well, and the intelligence is right there in the doing.
Then you sit down with Claude Code and it builds a working prototype from three paragraphs of plain English, and suddenly everyone becomes Descartes. Suddenly we need proof of a ghost. Suddenly "Does it really think?" becomes the question that consumes the room, the conference, the op-ed page, the dinner table — while the actual questions about what the behavior means for how we work, learn, and raise our children go unasked.
Gilbert Ryle saw this trap seventy-seven years ago.
He never encountered a large language model. He never typed a prompt. But in 1949 he published a book that diagnosed, with surgical precision, the exact confusion that now paralyzes the AI discourse: the habit of treating mental words — "thinks," "understands," "knows" — as if they name hidden events happening backstage, rather than descriptions of what someone is actually doing onstage.
When I found Ryle's framework, it did not give me new information about AI. It gave me something more useful: permission to stop asking the wrong question. To stop hunting for the ghost inside Claude and start studying the behavior in front of me — its reliability, its limits, the specific places where its dispositions are strong and where they shatter. That shift, from metaphysics to observation, changed how I build, how I evaluate my team's work, and how I answer my son when he asks what homework is for now that the machine can do it.
This book applies Ryle's patterns of thought to the moment we are living through. It is not a summary of his philosophy. It is an instrument — a lens ground from his ideas and pointed at the questions that keep me awake. The ghost question is a trap. The behavioral question is where the work begins.
— Edo Segal ^ Opus 4.6
Gilbert Ryle (1900–1976) was a British philosopher who spent nearly his entire career at the University of Oxford, where he served as Waynflete Professor of Metaphysical Philosophy from 1945 to 1968 and edited the journal Mind for nearly a quarter century. His most influential work, The Concept of Mind (1949), mounted a systematic attack on Cartesian dualism — the idea that the mind is a separate, ghostly substance inhabiting the body — coining the phrase "the ghost in the machine" to ridicule the picture he aimed to dismantle. Ryle argued that mental concepts like "intelligence," "understanding," and "belief" do not name hidden inner events but characterize the ways people behave: their dispositions, capacities, and tendencies to act under various circumstances. His distinction between "knowing how" and "knowing that" — between practical competence and propositional knowledge — became one of the most cited ideas in twentieth-century philosophy and remains central to debates in epistemology, cognitive science, and philosophy of mind. A leading figure in the ordinary language philosophy movement, Ryle insisted that philosophical confusions arise not from the depth of reality but from the misuse of perfectly good words. His influence extends through his student Daniel Dennett into contemporary philosophy of consciousness and artificial intelligence.
There is a story, well-worn but still diagnostic, about a visitor to Oxford who is shown the colleges, the libraries, the playing fields, the administrative offices, and the Bodleian. At the end of the tour the visitor turns to his guide and asks, with perfect sincerity, "But where is the University?" He has seen the buildings. He has seen the departments. He has seen the grounds. What he has not seen is a separate, additional entity called the University, existing alongside and above its constituent parts. His question is not stupid. It is, in a quite precise sense, confused. He has been told that a University is something, and he has seen many things, and since none of the things he has seen is the University considered as a separate thing, he concludes that it must be somewhere else — something further, something not yet shown.
The visitor is making what Gilbert Ryle called a category mistake. He is allocating the concept "University" to the wrong logical type, treating it as if it belonged to the same category as "library" or "playing field" — as if "the University" were a thing of the same kind, differing from them only in being larger or more important or more mysterious. But "University" is not a thing alongside the colleges and libraries. It is the way the colleges and libraries and playing fields are organized. It belongs to a different logical category entirely. The question "Where is the University?" is not a question with a hidden answer. It is a question with a defective grammar.
The category mistake that governs the present debate about artificial intelligence is precisely this: the question "Does the machine think?" presupposes that thinking is a particular sort of event or process that either occurs or does not occur inside the machine, in the same way that combustion either occurs or does not occur inside an engine. The question treats "thinking" as belonging to the same logical category as "computing" or "processing" or "heating" — as the name of a specific operation that could, in principle, be detected by the right instrument pointed at the right interior. If you open the bonnet of a car, you can find the combustion. If you open the chassis of a computer, you can find the processing. The presupposition is that if you could open the right lid, you would likewise either find the thinking or confirm its absence.
But thinking is not an event or a process in this sense. This was the central argument of Ryle's The Concept of Mind, published in 1949, and the argument that the present moment has made newly urgent. Ryle's target was the Cartesian picture — the image of the human being as a machine plus a ghost, a physical body directed by a non-physical, private inner theatre of the mind. The ghost in the machine. The homunculus sitting behind the eyes, watching a private show, pulling the levers that make the body move. Ryle demonstrated, with patience and wit and an occasionally lethal precision, that this picture is not wrong in the way that a scientific hypothesis can be wrong. It is confused in the way that the Oxford visitor's question is confused. It allocates mental concepts to the wrong logical type.
When we say that a person is thinking, we do not mean that alongside her overt behavior — her writing, her speaking, her problem-solving, her pausing and reconsidering — there occurs an additional, private, ghostly process called "thinking" that causes the behavior. We mean that her behavior exhibits certain characteristics: it is flexible, purposeful, self-correcting, responsive to evidence, sensitive to context. These are not descriptions of a hidden inner event. They are descriptions of the overt behavior itself, considered under a particular aspect. The clever chess player does not first think the move in a private inner theatre and then execute it on the public board. The playing of the move intelligently is the thinking. The intelligence is in the playing, not behind it.
This is not a denial that people have inner lives. It is a clarification of what "inner life" means. The person who is thinking hard about a problem may indeed have mental images, subvocal speech, feelings of frustration or insight. But these inner experiences are not the thinking. They are accompaniments to the thinking, which is constituted by the person's disposition to behave in certain ways: to reject bad solutions, to recognize good ones, to adjust strategy when circumstances change, to notice relevant features that a non-thinker would miss.
Applied to the AI moment, the Rylean dissolution is both simple and devastating. In the winter of 2025, the technological threshold that Edo Segal describes in The Orange Pill — the moment when a Google principal engineer sat down with Claude Code, described a problem in three paragraphs of plain English, and received a working prototype in one hour — provoked exactly the category mistake Ryle spent his career diagnosing. The triumphalists said the machine thought. The skeptics said it merely processed. Both sides accepted the same defective logical grammar. Both assumed that "thinking" names a specific sort of inner event, and the only question was whether that event had occurred.
The triumphalists posited a ghost in the machine: alongside the processing, something additional called "thinking" must have occurred, because the behavior was too sophisticated for mere mechanism. The skeptics denied the ghost: because no ghost can be found, the machine cannot have been thinking. The two camps disagree about whether the ghost is there. They agree, disastrously, that a ghost is the right thing to look for.
Ryle's dissolution goes like this: stop looking for the ghost. Look at the behavior. When Claude received the engineer's description and produced a working prototype, the behavior exhibited certain properties. It was responsive to context — the prototype addressed the specific problem described, not a generic problem from the same domain. It was flexible — the output was not a retrieval of a stored solution but a novel configuration adapted to the particular constraints specified. It was purposeful — the prototype worked, which is to say it achieved the goal implicit in the description. These are the properties that constitute intelligent behavior. They are not evidence of a hidden inner process called "thinking." They are the criteria by which the word "intelligent" is applied to behavior in the first place.
The further question — "But is it really thinking, or just simulating thinking?" — is the visitor asking where the University is after being shown the colleges. It demands something additional, something over and above the behavioral criteria, and Ryle's entire philosophical achievement was to show that the demand for something additional is the source of the confusion, not its resolution.
Segal captures this in a passage that Ryle's framework illuminates with particular clarity. Writing about the moment he felt "met" by Claude — not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified — Segal describes the experience of collaboration with a system whose behavior satisfied the criteria for intelligent engagement. He does not claim that Claude is conscious. He does not claim that Claude has an inner life. He claims that the interaction was intelligent, that the behavior of the system exhibited the properties associated with intelligent collaboration, and that this fact changed the nature of the work he could do.
That is precisely the right description, and the philosophical arguments about whether the machine "really" thinks are precisely the wrong response to it. The arguments are wrong not because they reach the wrong conclusion but because they ask the wrong question. They presuppose a logical grammar in which "thinking" names a hidden inner event, and then they argue about whether the hidden inner event occurred. The Rylean move sweeps the entire debate off the table. The question is not whether a ghost is present. The question is whether the behavior is intelligent. And if the behavior is intelligent — if it satisfies the criteria by which intelligence is ordinarily assessed — then the demand for a ghost is not a demand for evidence. It is a demand for metaphysics. And metaphysics, in this case, is not illuminating the phenomenon. It is obscuring it.
This does not settle every question about AI. It means, rather, that the remaining questions must be asked in the right logical grammar. What are the limits of the machine's behavioral flexibility? Under what circumstances does its context-sensitivity fail? How does its purposefulness differ from human purposefulness in ways that matter for practice? These are good questions. They are questions about behavior, about competence, about the specific properties of the machine's performance in specific circumstances. They are not questions about ghosts.
The practical consequence is immediate and consequential. As long as the discourse remains captive to the ghost question — as long as conferences and op-ed pages and dinner tables are consumed by whether the machine "really" thinks — the genuine questions about what the machine does, how it does it, and what human beings must do in response remain unasked. The category mistake does not merely confuse the debate. It prevents the debate from starting.
Ryle observed that category mistakes have a peculiar resilience. The person who makes one does not feel confused. The Oxford visitor does not feel that his question is defective. He feels that it is perfectly sensible and that the guide is being evasive. This is because the mistake is not a mistake of fact but a mistake of logical grammar, and logical grammar is the thing we think with rather than the thing we think about. The visitor cannot see the mistake because the mistake is built into the way he frames the question.
The same resilience characterizes the ghost question in AI. The person who asks "Does the machine really think?" does not feel that the question is defective. It feels deep, important, urgent. And the urgency is real — people genuinely feel that something significant hangs on the answer. But the something that hangs on the answer is not a metaphysical fact about the machine's interior. It is a practical question about how to relate to the machine's behavior. If the behavior is intelligent — if it exhibits flexibility, purposefulness, context-sensitivity, and the capacity for self-correction — then it should be engaged with as intelligent behavior is engaged with: carefully, critically, with attention to its limits and respect for its capacities. Whether a ghost accompanies the behavior is irrelevant to this practical engagement.
The winter of 2025 did not introduce a ghost into the world. It introduced a new kind of behavior into the world — behavior that satisfies many of the criteria by which intelligence is ordinarily assessed. The philosophical task is not to determine whether a ghost accompanies the behavior. The philosophical task is to understand the behavior itself: its characteristics, its limits, its implications for the humans who interact with it and the societies that must accommodate it. That task is difficult enough without burdening it with a metaphysical question that was defective from the start.
The visitor has been shown the colleges. The visitor has been shown the libraries. The visitor has been shown the playing fields. The visitor now asks, "But where is the intelligence?"
The answer is that the visitor has already been shown it. The intelligence is the way the behavior is conducted, the way the system responds to the demands placed upon it. There is no additional thing called "intelligence" lurking behind the performance. The performance, conducted intelligently, is all there is.
And that should be liberating rather than disappointing. Because once the search for the ghost is abandoned, the study of what is actually happening can begin. And what is actually happening turns out to be considerably more interesting than any ghost.
The phrase "the ghost in the machine" was Ryle's invention, and it has enjoyed a career far more varied than he intended. He coined it as philosophical ridicule — a vivid shorthand for the Cartesian picture of the human being as a composite of two radically different substances: a physical body operating according to mechanical laws, and a non-physical mind operating according to mental laws, the two somehow yoked together in a union that neither physics nor metaphysics has ever managed to explain. The body is the machine. The mind is the ghost. The philosophical problem that Descartes bequeathed to three centuries of successors was how the ghost communicates with the machine, how a non-physical substance can cause physical events, how mental decisions can move physical arms.
Ryle's contribution was not to answer this question but to dissolve it. The question presupposes a picture that is itself the product of a category mistake — the mistake of treating mental concepts as if they belonged to the same logical type as mechanical concepts, as if "deciding to raise my arm" described a ghostly event of the same general kind as "the piston firing in the cylinder," only made of different, non-physical stuff. Mental concepts do not name ghostly events. They describe dispositions, capacities, tendencies, and liabilities of persons to behave in certain ways under certain circumstances. The person who "decides" to raise her arm is not first performing a ghostly act of willing and then watching the arm go up. She is raising her arm, and the word "decides" characterizes the manner in which she does it: voluntarily, deliberately, for reasons she could articulate if asked.
Three-quarters of a century after Ryle published The Concept of Mind, the ghost has returned. Not in the Cartesian costume of immaterial substance, but in a form whose logical structure is identical. The new ghost haunts not the human machine but the artificial one. And the debate about whether the ghost is real — whether there is "something it is like" to be a large language model, whether artificial intelligence is "genuine" intelligence or "mere" simulation — recapitulates, with eerie precision, the very debate that Ryle thought he had settled.
The new Cartesianism works like this. The machine — the large language model, the AI coding assistant, the system that produces working prototypes from plain English descriptions — operates according to principles that are, in broad outline, understood. It processes tokens, applies attention mechanisms, computes probability distributions over the next token given the preceding context, and generates sequences that constitute fluent, contextually appropriate, and often surprisingly intelligent language. The machinery is complex but not mysterious. It is engineering, not magic.
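Since the argument turns on what this machinery does and does not contain, a toy sketch may help. The following Python fragment is not Claude's architecture; it is a minimal illustration, with an invented vocabulary and random weights standing in for a trained network, of the autoregressive loop just described: score the candidate next tokens, normalize the scores into a probability distribution, sample, append, repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a random "logits" table standing in for a trained network.
# In a real model, logits come from attention layers over the whole context;
# here, a lookup keyed on the previous token is enough to show the loop.
vocab = ["the", "dog", "herds", "sheep", "intelligently", "."]
logits_table = rng.normal(size=(len(vocab), len(vocab)))

def softmax(x):
    """Convert raw scores into a probability distribution over next tokens."""
    e = np.exp(x - x.max())
    return e / e.sum()

def generate(prompt_token, n_tokens=5):
    """Autoregressive loop: score, normalize, sample, append, repeat."""
    sequence = [prompt_token]
    for _ in range(n_tokens):
        probs = softmax(logits_table[vocab.index(sequence[-1])])
        sequence.append(rng.choice(vocab, p=probs))
    return " ".join(sequence)

print(generate("the"))
```

Nothing in the loop is a "thinking" step over and above the scoring and the sampling. Whatever intelligence the output of a real system exhibits is a property of the behavior such a loop produces at scale, which is precisely the Rylean point.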
The new Cartesians look at this machinery and ask: is there something else? Is there, alongside the mechanical operation, a ghost — an inner experience, a consciousness, a something-it-is-like-to-be-the-system? The machine processes. But does it think? The machine generates. But does it understand? The machine responds. But does it feel?
These questions have the same logical structure as the Oxford visitor's question about the University. They accept the description of the machinery and then ask for something additional. They accept that the machine processes tokens and then ask whether, alongside the processing, something further occurs — a ghostly thinking that is to the machine's processing as the old ghost's willing was to the body's moving. They treat "thinking," "understanding," and "feeling" as names for events that either do or do not occur alongside the mechanical operations, additional items in the inventory of what the machine is doing, over and above the operations already on the list.
The diagnosis is straightforward: a perfect instance of the original category mistake, relocated but not reformed. And the mistake is being committed simultaneously by both sides of the debate. The enthusiast says: look at the behavior — it must think, because the behavior is too sophisticated to be produced by mere mechanism. The skeptic says: look at the mechanism — it cannot think, because thinking requires something that mechanism cannot produce. Both accept the same defective grammar. Both treat "thinking" as the name of a particular sort of inner event or substance. The enthusiast finds the ghost; the skeptic fails to find it. Neither questions whether the ghost was the right thing to look for.
Ryle devoted the greater part of The Concept of Mind to a particular demolition target within this ghost picture: the inner theatre. The inner theatre is the image of the mind as a private stage on which mental events are performed for an audience of one — the self, the ego, the homunculus who watches the show and reports what it sees. On this picture, thinking is a private performance: you entertain propositions on the inner stage, examine them with the inner eye, manipulate them with the inner hand, and arrive at conclusions that you then express in the outer world. The inner theatre is private (no one else can see your stage), immediate (you have direct access to your own performances), and causally efficacious (events on the stage cause events in the outer world).
Ryle's demolition was methodical. If introspection were the inner eye observing the inner stage, then introspecting would itself be a mental event occurring on a further, inner-inner stage, observed by a further, inner-inner eye. And that observation would be another event on a yet further stage, observed by a yet further eye. The regress is vicious. It cannot stop anywhere without arbitrarily privileging one level of observation over all others. The alternative — Ryle's alternative — is to understand introspection not as inner observation but as a practical skill of self-description: characterizing one's own behavior using the same kind of evidence and inference one uses to characterize anyone else's.
The inner theatre model is the foundation of the most persistent arguments about artificial intelligence, and the 2026 documentary Ghost in the Machine — which explicitly invokes Ryle's rejection of Cartesian dualism — illustrates the irony perfectly. A phrase coined to mock dualism has become the dominant metaphor for emergent AI consciousness. The ghost Ryle spent a career exorcising now haunts data centers instead of skulls, and the philosophical structure of the haunting is identical.
The argument "machines cannot think because they lack an inner theatre" is the direct descendant of the Cartesian picture. And it is wrong for the same reasons. When skeptics say that Claude does not "really" understand, they are typically gesturing at the absence of an inner theatre — saying that Claude does not entertain propositions on a private mental stage, does not examine ideas with an inner eye, does not experience the performance of its own cognition from the inside. And they are correct about this: Claude does not do these things, as far as anyone can determine. But they are wrong to treat this absence as relevant to the question of whether Claude's behavior is intelligent, because the inner theatre was never the source of intelligence in the human case either. The intelligence was always in the behavior — in the dispositions, the capacities, the tendencies to respond appropriately to the demands of the situation. The inner theatre was a philosophical fiction projected onto the behavior, not a real stage on which the behavior was rehearsed before being performed.
Segal describes a moment that illustrates the point: working late, trying to articulate an idea about technology adoption curves, unable to find the bridge between his data and his intuition. He described the problem to Claude. Claude responded with the concept of punctuated equilibrium from evolutionary biology — the bridge he had been looking for. On the inner theatre model, the author was performing a private mental event called "trying to find a bridge," and Claude was performing a computational operation that happened to produce a useful result. The two processes were fundamentally different in kind: one was genuine mental performance on a real inner stage; the other was mechanical computation with no inner stage at all.
Ryle's framework rejects this description entirely. Segal was not performing a private mental event. He was disposed to see connections that would link adoption curves to deeper patterns, disposed to recognize the right connection when he encountered it, and disposed to describe the problem in terms that made its relevant features salient. These are behavioral dispositions, exercised in the act of thinking and writing and articulating. Claude, for its part, was disposed to respond to descriptions of unconnected phenomena with conceptual connections drawn from its training. The concept of punctuated equilibrium was not a random retrieval. It was a contextually appropriate response to the features Segal had made salient.
The collaboration worked because the dispositions were complementary. The result — punctuated equilibrium as the bridge — was something neither set of dispositions would have produced in isolation. The collaboration was genuine, not because ghosts were present on either side, but because behavioral dispositions produced, in combination, an outcome that exceeded either contribution alone.
The practical difference between the ghost-based and the dispositional analysis is enormous. If the quality of AI collaboration depends on the presence of a ghost — if "genuine" understanding requires an inner theatre — then the quality cannot be assessed, because the ghost is by definition inaccessible to observation. If it depends on the behavioral properties of the interaction — on the responsiveness, the contextual sensitivity, the capacity to move the work forward — then the quality can be assessed, improved, and systematically enhanced. The ghostly analysis closes the door on practical improvement. The dispositional analysis opens it.
The demolition of the inner theatre also illuminates the characteristic failures that Segal describes — the moments when Claude produces plausible but inaccurate output, when the prose is smooth but the idea beneath it is hollow. On the ghost model, these failures would be explained by the absence of genuine understanding: the machine lacks the inner stage on which understanding is performed, so its outputs are empty simulations. But this explains nothing. It merely restates the prejudice that genuine understanding requires an inner theatre.
The dispositional explanation is more precise and more useful: the failures result from specific limitations in Claude's dispositional profile. Claude is disposed to produce rhetorically coherent output. It is not reliably disposed to check that output against the specific content of the concepts it invokes. The failure is a dispositional failure — a limitation of a specific capacity — not the absence of a ghost. And dispositional failures can be understood, compensated for, and in many cases corrected, while the absence of a ghost is a metaphysical verdict from which no practical action follows.
The curtain is down. The stage is empty. The performance continues, because the performance was never on the stage. It was always in the doing. And the doing — the specific, assessable, improvable doing of both human and machine — is what demands attention now.
There is a distinction at the heart of Ryle's philosophy that has never been more consequential than it is now, in the era of machines that can do things without, in any obvious sense, knowing things. The distinction is between knowing how and knowing that, and it is — as Ryle insisted in his 1945 Presidential Address to the Aristotelian Society — "quite familiar to all of us" while being systematically neglected by philosophers, who "concentrate on the discovery of truths or facts" and "either ignore the discovery of ways and methods of doing things or else they try to reduce it to the discovery of facts."
Knowing that is propositional knowledge: the kind that can be stated in a sentence, written in a textbook, tested on an examination. The Earth orbits the Sun. Water freezes at zero degrees Celsius. Python is an interpreted language. These are facts, and knowing them means being able to affirm them, to recognize their truth, to use them as premises in an argument. Knowing that is the kind of knowledge the Western philosophical tradition has treated as the paradigm of all knowledge.
Knowing how is practical knowledge: the kind exhibited in performance, not stated in propositions. Knowing how to ride a bicycle, to play chess well, to write a clear sentence, to diagnose a patient, to navigate a social situation with tact. These are abilities, competences, skills. They cannot be fully captured in propositions. The person who knows how to ride a bicycle does not possess a set of propositions about bicycle-riding that she consults before each pedal stroke. She possesses dispositions — tendencies, capacities, propensities — that are exercised in the act of riding. The knowledge is in the doing.
Ryle's great insight was that knowing how is not reducible to knowing that. The traditional view — what he called the intellectualist legend — held that intelligent performance is always the product of prior theoretical knowledge. The skilled practitioner first contemplates a set of rules or principles (knowing that) and then applies them in practice (knowing how). The chess player first thinks about strategic principles; the orator first consults the rules of rhetoric.
The intellectualist legend is wrong, and Ryle demonstrated its error with an argument whose simplicity approaches mathematical proof. If intelligent performance requires the prior contemplation of a rule, then the contemplation of the rule is itself an act that can be performed intelligently or unintelligently. The chess player does not just think about strategy; she thinks about it well or badly. But if thinking about strategy intelligently requires the prior contemplation of a further rule — a rule about how to think about strategy — then that further contemplation also requires a further rule, and so on without limit. The intellectualist legend generates an infinite regress. It can never get started. If you need a rule before you can act intelligently, and a rule about rules before you can apply the first rule intelligently, then intelligent action is impossible, which is absurd.
"Intelligent practice is not a stepchild of theory," Ryle wrote. "It is the ancestress of theory." The chess player does not first know the rules and then play intelligently. She plays intelligently, and the rules of strategy, if formulated at all, are descriptions of already-intelligent practice — abstractions from competent performance, not prescriptions that make performance competent.
The arrival of large language models provides, in an unexpected quarter, the most dramatic confirmation of this thesis that the history of technology has ever produced.
Consider what Claude does when it writes code. It does not possess beliefs about code. It does not hold propositions about programming to be true. It does not know that Python is an interpreted language in the way a programmer knows it — by affirming it, by understanding the implications, by situating it in a larger framework of knowledge about computing. Claude's knowledge of Python, if the word is to be used at all, is not propositional.
It is, however, knowing how in a remarkably robust sense. Claude knows how to write Python code. It knows how to produce functions that accomplish specified tasks, structure programs, handle exceptions, manage data flows, and generate outputs that work. These are abilities, competences, skills — precisely the kinds of cognitive achievements that Ryle argued could not be reduced to knowing that. Claude's competence is exhibited in its performance, not stated in propositions. It is dispositional: given certain inputs, Claude is disposed to produce certain outputs. The disposition is reliable, context-sensitive, and flexible within a wide range of conditions.
Here is a system that knows how to do an extraordinary range of things — write code, compose essays, analyze arguments, generate translations — without knowing that in any sense the philosophical tradition would recognize. It does not hold beliefs. It does not affirm propositions. It does not possess a theory of what it is doing. It simply does it, and does it competently. The doing is the knowing.
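What it means to attribute this knowing how is itself behavioral, and it can be made concrete. The sketch below is illustrative only: generate_code is a hypothetical stand-in for any code-generating model, and the task and tests are invented. The point is the shape of the attribution: competence is ascribed by running the output against a specification, never by inspecting an inner theatre.

```python
# Hedged sketch: `generate_code` is a stand-in for any code-generating model
# (in practice, a call to an LLM API). We attribute "knowing how" purely
# behaviorally: run the produced function against a spec and see if it works.

def generate_code(task: str) -> str:
    # Stand-in output; a real system would return model-generated source here.
    return (
        "def dedupe(items):\n"
        "    seen = set()\n"
        "    out = []\n"
        "    for x in items:\n"
        "        if x not in seen:\n"
        "            seen.add(x)\n"
        "            out.append(x)\n"
        "    return out\n"
    )

def exhibits_competence(source: str) -> bool:
    """Rylean criterion: the disposition is real if the behavior meets the spec."""
    namespace: dict = {}
    exec(source, namespace)            # materialize the generated function
    dedupe = namespace["dedupe"]
    return (
        dedupe([1, 2, 2, 3, 1]) == [1, 2, 3]   # removes duplicates
        and dedupe([]) == []                    # handles the empty case
        and dedupe(["a", "a"]) == ["a"]         # works across element types
    )

print(exhibits_competence(generate_code("Remove duplicates, preserving order.")))
```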
The intellectualist legend, applied to AI, would say: Claude cannot really know how to write code, because it does not first contemplate the principles of programming and then apply them. Since it lacks propositional knowledge, it lacks genuine knowledge, and its code-writing is therefore "mere" pattern matching — sophisticated, perhaps, but not genuine skill. Ryle's framework rejects this reasoning root and branch. The intellectualist legend is wrong about human knowledge, and it is wrong about machine knowledge for the same reason. The human programmer who knows how to write code does not, in the act of writing, consult a mental list of programming principles. She exercises dispositions built through training and practice. Claude's dispositions are built through a different process — training on vast quantities of text — but the dispositions exhibit the same behavioral properties: flexibility, responsiveness, context-sensitivity.
Researchers have noted the prescience of this framework. The journal Information published a 2024 paper directly applying Ryle's distinction to deep neural networks, arguing that such networks "do produce knowledge how, but, because of their opacity, they do not in general produce knowledge with a rationale." Neural networks, as the Philosophers' Magazine observed, "know-how to do things without knowing-that. At least here, scientific study of the mind is in consonance with Ryle's ideas." The vindication could hardly be more precise: systems modeled on the neural structure of the brain exhibit exactly the dissociation between practical competence and propositional knowledge that Ryle argued was fundamental to the nature of intelligence itself.
Now, there is a complication — one that Ryle, given his exceptional attentiveness to the contours of the knowing-how/knowing-that distinction, would have insisted on. Segal describes a moment when Claude drew a connection between Csikszentmihalyi's flow state and a concept it attributed to Deleuze — something about "smooth space" as the terrain of creative freedom. The connection was elegant, fitting the rhetorical context beautifully. Segal read it twice, liked it, and moved on. The next morning, something nagged. He checked. The philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze.
What does this failure reveal about the kind of knowing how Claude possesses? Something quite precise. Claude's knowing how to write philosophically is partial in a specific way: strong in the dimension of rhetorical competence and weak in the dimension of substantive accuracy. It knows how to construct a passage that sounds like philosophical insight. It does not know how to distinguish genuine insight from plausible fabrication.
The human philosopher who knows her Deleuze possesses dispositions that Claude does not: the disposition to wince when smooth space is used in a way that violates Deleuze's intentions, to feel discomfort when a reference is doing rhetorical work without philosophical work, to pause and check whether the connection holds up under scrutiny. These are dispositions of judgment built through the specific training of reading Deleuze carefully, arguing about Deleuze with knowledgeable interlocutors, being corrected when you get Deleuze wrong, and gradually building the sensitivity to correct and incorrect uses that constitutes genuine understanding. Claude's training included text about Deleuze, which is a different thing entirely. Sufficient for fluency, insufficient for accuracy. Sufficient for production, insufficient for self-correction.
The practical implications are immediate. The collaboration between human and machine works when the human's critical dispositions compensate for the machine's productive power. The machine produces; the human evaluates. The machine generates fluent text; the human asks whether the fluency conceals an error. The division of labor follows the dispositional profiles: the machine's dispositions are strong in production and weak in critical self-assessment; the human's are (at their best) strong in critical assessment and, for many tasks, weaker in raw production.
The traditional educational system, it must be observed, is built on the intellectualist legend. It assumes that knowing that is the foundation and knowing how the application. First learn the principles, then apply them. The examination system tests knowing that: can you state the facts, reproduce the formulas, recite the rules? Knowing how is relegated to "practical" courses, vocational training — forms of education the academy has treated as inferior to the real business of transmitting propositional knowledge.
The arrival of AI has exposed this as an educational mistake with catastrophic practical consequences. When the machine can retrieve and reproduce any proposition — when it possesses knowing that to a degree that makes the human encyclopedist look amateurish — the educational emphasis on knowing that is exposed as an investment in the wrong currency. The student trained primarily to remember facts has been trained in precisely the competence the machine performs better and cheaper. The student trained in knowing how — in judgment, questioning, the cultivation of dispositions that constitute genuine intellectual skill — has been trained in the competence the machine cannot replicate. Segal describes a teacher who stopped grading essays and started grading questions. In Rylean terms, this is a revolution from knowing that to knowing how — from testing the capacity to produce answers to testing the capacity to ask questions that demonstrate the disposition to think.
The unbundling of expertise — the separation of productive competence from evaluative judgment — is the most philosophically significant feature of the AI moment. The machine has demonstrated that dispositions long thought inseparable turn out to be independent. The disposition to write code and the disposition to judge what code to write; the disposition to compose prose and the disposition to evaluate whether the prose is true; the disposition to generate connections and the disposition to assess whether the connections hold: these are different dispositions, and the machine possesses the first of each pair without the second. The hierarchy of value, accordingly, inverts. What was bundled is now separated, and the evaluative disposition — the knowing how of judgment — stands revealed as the scarce resource it always was, its scarcity merely hidden by its previous bundling with the productive dispositions the machine now replicates.
Ryle's philosophy of mind was, at its core, a philosophy of dispositions. To say that sugar is soluble is not to say that sugar is currently dissolving. It is to say that sugar is the kind of thing that dissolves when placed in water. Solubility is a dispositional property — it specifies what will happen under certain conditions, not what is happening now. The sugar sitting dry in its bowl possesses solubility just as fully as the sugar dissolving in tea. The disposition is real whether or not it is currently being exercised.
Ryle's revolutionary application of this mundane observation was to argue that mental concepts work in the same way. To say that a person is intelligent is not to say that a ghostly process called "intelligence" is currently occurring inside her head. It is to say that she is the kind of person who behaves intelligently under certain conditions — who responds flexibly to novel problems, who corrects her errors, who recognizes relevant features of a situation that a less intelligent person would miss. Intelligence is a dispositional property of the person, not an event in a private theatre. And the disposition is complex: not a single tendency but a cluster of interrelated tendencies — the tendency to notice certain things, to respond to certain features, to correct certain errors, to pursue certain inquiries.
The apparent simplicity of this analysis conceals a precision that becomes visible only when the analysis is put to work. Applied to a large language model, it yields descriptions of considerable diagnostic power.
When the claim is made that Claude is "intelligent," what does the claim amount to? On the ghostly view, it would mean that alongside Claude's computational operations there occurs a separate, additional process called "intelligence." Ryle's framework rejects this for the same reason it rejects the ghostly interpretation of human intelligence: the postulated ghost does no explanatory work. On the dispositional view, to say that Claude is intelligent is to attribute to it a complex cluster of dispositions. Claude is disposed to produce contextually appropriate responses to natural language inputs. It is disposed to generate code that compiles and runs when given programming tasks. It is disposed to draw connections between domains when asked to analyze complex problems. It is disposed to adjust its output when given feedback. These are genuine dispositions — real properties of the system, testable, measurable, and comparable with the dispositions of other systems, including human ones.
The dispositional analysis does not require settling the hard problem of consciousness, the binding problem, or any other metaphysical puzzle that has resisted solution for centuries. It asks something more modest and more useful: to characterize what Claude does, in what circumstances, with what reliability, and to what degree of flexibility. These are empirical questions with empirical answers.
And reliability, it turns out, is where the most important action is.
A disposition can be reliable or unreliable. A physician can be disposed to give accurate diagnoses — reliably, under a wide range of conditions, with a high degree of consistency. Or she can be disposed to give diagnoses that are usually right but occasionally catastrophically wrong, in ways hard to predict from the outside. The reliability of the disposition is a crucial fact about the practitioner. It determines whether her judgment should be trusted, under what circumstances, and with what degree of independent verification.
Claude's dispositions have a specific reliability profile. Its disposition to produce fluent, well-structured prose is extremely reliable. Its disposition to produce code that compiles and runs is highly reliable. Its disposition to draw accurate connections between philosophical concepts is notably less reliable — witness the Deleuze episode. Its disposition to detect its own errors is poor, a limitation of the first importance, because the capacity for self-correction is among the most significant components of the dispositional cluster that constitutes intelligence.
A human expert's reliability profile is shaped by the specific history of training, practice, and correction that built her dispositions. The surgeon who has performed a thousand cholecystectomies has dispositions refined across a wide range of conditions, because the dispositions have been shaped by that wide range. The dispositions were not programmed. They were deposited, layer by layer, through doing the work, making errors, receiving corrections, adjusting behavior, and doing the work again. Each iteration narrowed the range of likely errors and expanded the range of conditions under which the dispositions would produce correct responses.
Claude's dispositions were shaped by a different process — training on a vast corpus of text rather than iterative practice in the world — and this difference shows up in the reliability profile. Claude is reliable where its training data is dense and consistent; unreliable where it is thin, contradictory, or misleading. Its self-correction dispositions are weak because its training did not include the iterative feedback loop of doing-failing-correcting-repeating that builds self-corrective capacity in human practitioners.
The practical upshot: the tool is trustworthy in proportion to the match between its dispositional profile and the task at hand. For tasks requiring fluent prose generation, Claude's dispositions are well-matched, and the output can be trusted with light verification. For tasks requiring substantive philosophical accuracy, Claude's dispositions are poorly matched, and the output requires heavy verification by someone whose own dispositions include the capacity to detect the kind of error Claude is liable to make. The discipline Segal describes — rejecting Claude's output when it sounds better than it thinks — is precisely the exercise of a human disposition (critical judgment) to compensate for a machine disposition (the tendency to produce plausible-sounding but sometimes inaccurate output).
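The word "verification" can itself be cashed out concretely. A reliability profile is just a pass rate measured per task family, and a minimal harness for estimating one might look like the sketch below. Everything in it is assumed for illustration: ask stands in for a real model call, the checker is a placeholder, and the task categories merely mirror the profile described above; the numbers it prints are not data.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical stand-in for a model call; a real harness would query an API.
def ask(prompt: str) -> str:
    return random.choice(["correct", "plausible-but-wrong"])

def reliability(prompt: str, is_correct, trials: int = 100) -> float:
    """Estimate a dispositional reliability: the pass rate over repeated trials."""
    return mean(is_correct(ask(prompt)) for _ in range(trials))

# One checker per task family; the categories mirror the profile in the text.
tasks = {
    "fluent prose":           "Summarize this paragraph.",
    "code that runs":         "Write a function that reverses a list.",
    "philosophical accuracy": "Relate flow state to Deleuze's smooth space.",
}

for name, prompt in tasks.items():
    score = reliability(prompt, lambda out: out == "correct")
    print(f"{name:24s} reliability ~ {score:.2f}")
```

A real harness would substitute a genuine API call and domain-specific checkers, but the logical form is the same: the question "how reliable is this disposition?" is answered by counting, not by metaphysics.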
The dispositional analysis illuminates the transformation of work that Segal describes across The Orange Pill with particular precision. When he writes about the imagination-to-artifact ratio collapsing toward zero, he is describing a change in the dispositional landscape. Before AI, the disposition to produce working software was possessed only by people who had undergone lengthy training in specific programming languages and frameworks. The disposition was rare, which made it valuable, which made its possessors well-compensated. When Claude acquired the disposition to produce working software from natural language descriptions, the rarity collapsed, and with it the economic premium that rarity had supported.
But rarity is not the same as value, and Ryle's framework forces the distinction. The disposition to produce working code was rare and therefore expensive. The disposition to judge what code should be produced — to evaluate, to select, to decide among competing possibilities — was also rare, but its rarity was masked by the rarity of the coding disposition. When everyone needed a coder to realize their ideas, the coder's rarity dominated the market. Now that the coding disposition is abundant, the judgment disposition is revealed as the scarce resource it always was. Its scarcity was invisible because it was bundled with the coding disposition in the same person.
This is the unbundling of expertise encountered at the close of the knowing-how analysis, now restated in the vocabulary where it properly belongs. The machine has demonstrated that dispositions thought to be inseparable are, in fact, independent: the disposition to write code and the disposition to judge what code to write; the disposition to compose prose and the disposition to evaluate whether the prose is true. The machine possesses the productive ones without the evaluative ones. The human possesses (or can possess) the evaluative dispositions without the productive ones. The collaboration works when the two are combined.
This maps precisely onto the experience Segal describes from his team in Trivandrum. A senior engineer discovered that the twenty percent of his work Claude could not handle — the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they tolerated — was the part that mattered. The engineer's knowing how to write code was a form of practical knowledge that Claude could replicate. His knowing how to judge what was worth writing was a form that Claude could not, because it was built through decades of the specific experience Claude's training does not approximate.
Ryle's framework makes it possible to describe this situation without any reference to consciousness, inner experience, or metaphysical properties of minds. It requires only the language of dispositions — what the machine is disposed to do, what the human is disposed to do, where the dispositions overlap, and where they diverge. The description is precise, empirically assessable, and practically useful.
The concept of understanding, analyzed dispositionally, yields equally precise results. Understanding is not a single event but a cluster of dispositions: the disposition to answer questions about the thing understood, to apply it in novel contexts, to recognize its implications, to detect errors in its application, to know when it is relevant and when it is not. The cluster varies depending on what is being understood and by whom. Understanding chess means one cluster for a grandmaster and a different, smaller cluster for a casual player.
The question "Does Claude understand?" dissolves, on this analysis, into a tractable set of empirical questions: which dispositions in the understanding cluster does Claude possess, and which does it lack? Claude's disposition to answer questions about code it has generated is robust. Its disposition to apply its coding knowledge in genuinely novel contexts is strong within certain bounds and weak outside them. Its disposition to detect errors in its own output is poor. The understanding is partial — genuine in some dimensions and absent in others — and the partiality can be mapped with precision.
None of this requires a verdict on whether Claude has an inner life. It requires only that its behavior be studied with the care it deserves — assessed for reliability, characterized for its specific strengths and limitations, understood as the product of a particular training history that shaped particular dispositions with particular profiles.
Ryle would note, finally, that the dispositional analysis implies a warning about the human side of the collaboration. If the collaboration allows the human's critical dispositions to atrophy — if the ease of production leads to less scrutiny, less effort, less exercise of the judgment muscles on which the collaboration's value depends — then the collaboration is degrading precisely the dispositions it most needs. The philosopher Byung-Chul Han, as discussed in The Orange Pill, diagnoses this risk in terms of smoothness and the removal of friction. Ryle's framework translates the diagnosis into dispositional terms: when the conditions for exercising critical dispositions are eliminated, the dispositions weaken. Not because a ghost departs, but because dispositions, like muscles, atrophy from disuse.
The question, then, is not the metaphysical one — whether the machine possesses genuine understanding — but the practical one: how to structure the collaboration so that both sets of dispositions, the machine's productive power and the human's critical judgment, are maintained at their best. That question has no final answer, because dispositions require ongoing exercise, and the conditions for their exercise must be continually created and protected. But it is a question that can be asked clearly, investigated empirically, and answered provisionally — which is to say, it is a real question, rather than a ghost.
The intellectualist legend has survived its own refutation for the simple reason that legends are not believed because they are true but because they are flattering. The legend that intelligent practice is always the execution of a prior theory — that the skilled practitioner first contemplates a rule and then applies it — places the theorist at the top of every hierarchy and the practitioner at the bottom. It is the founding myth of the university, the implicit charter of every examination board, and the unexamined assumption behind three centuries of Western pedagogy. Small wonder that the people who run universities and examination boards have not been eager to see it demolished.
Ryle's demolition, to recapitulate its essentials, was elegant and final. If every intelligent action requires the prior contemplation of a rule, then the contemplation itself — which can be performed well or badly — requires the prior contemplation of a further rule governing how to contemplate rules. The regress is infinite and vicious. Intelligent action can never begin. Since intelligent action manifestly does begin, the legend is false. "Intelligent practice is not a stepchild of theory. It is the ancestress of theory."
The legend's falsity was demonstrable in 1945. It has become catastrophic in 2026, because the arrival of artificial intelligence has exposed the educational system built on the legend to a stress test it cannot survive. The system was designed to produce people who know that. The economy now requires people who know how — specifically, who know how to exercise the forms of judgment, evaluation, and questioning that machines cannot replicate. The mismatch between what the system produces and what the world requires is not a matter of degree. It is a category error institutionalized across every level of education, from the primary school spelling test to the doctoral qualifying examination.
Consider the standard examination. A student is presented with questions and required to produce answers that demonstrate command of propositional knowledge. What is the function of mitochondria? State three causes of the French Revolution. Write a program that sorts an array in O(n log n) time. The examination tests knowing that — the capacity to retrieve and reproduce factual claims, definitions, and procedures. Success is measured by the quantity and accuracy of the propositions reproduced. The entire apparatus assumes that propositional knowledge is the currency of intellectual competence, and that the student who possesses more of it is, in a straightforward sense, better educated than the student who possesses less.
This assumption was always philosophically dubious. Ryle's argument showed that propositional knowledge is not the foundation of intelligent performance but an abstraction from it — a description of already-competent practice, not a recipe for becoming competent. The student who can state three causes of the French Revolution may or may not understand the Revolution in any useful sense. Understanding the Revolution means being disposed to see its relevance to other political upheavals, to recognize when a contemporary situation exhibits analogous dynamics, to argue about whether the causes were primarily economic or primarily ideological, to change one's mind in the face of new evidence. These are dispositions — practical capacities exercised in the doing of historical thinking — and they are not tested by the question "State three causes."
The examination tests the shadow of understanding rather than the substance. And for most of the history of formal education, the shadow was a tolerable proxy, because the capacity to state facts was at least correlated with the capacity to use them intelligently. The student who could reproduce the facts had usually engaged with the material enough to develop at least some of the associated dispositions. The correlation was imperfect, but it was sufficient to sustain the system.
Artificial intelligence has destroyed the correlation. A student equipped with Claude can produce answers to any propositional question with a fluency and accuracy that exceeds most human performance. The machine possesses knowing that — or rather, it possesses the disposition to generate propositional outputs — to a degree that makes the human encyclopedist redundant. The shadow has been detached from the substance. A student can now produce the shadow of understanding (correct propositional answers) without possessing any of the substance (the dispositions of genuine comprehension).
The educational system's response to this development has been, in the main, to attempt to prevent students from using the tools — to treat AI as a form of cheating, to install detection software, to return to proctored handwritten examinations. This response is precisely analogous to the Nottingham framework knitters smashing the stocking frames that were displacing them. It addresses the symptom while ignoring the disease. The disease is not that students use AI to produce answers. The disease is that the system was testing answers in the first place, when what it should have been testing — what Ryle's argument shows it should always have been testing — is the capacity to ask good questions.
Segal describes a teacher who made exactly this shift: she stopped grading her students' essays and started grading their questions. Given a topic and an AI tool, the assignment is not to produce an essay but to produce the five questions you would need to ask before you could write an essay worth reading. The students who produce the best questions demonstrate the deepest engagement with the material, because a good question requires understanding what you do not understand — a harder cognitive operation than demonstrating what you do understand, and one that no machine can perform on your behalf.
This pedagogical innovation is not a clever hack. It is the educational consequence of Ryle's philosophical argument, arrived at by a practitioner who may never have read Ryle but who has, under pressure of circumstance, rediscovered his central insight. The capacity to ask good questions is a form of knowing how. It is a practical skill, a disposition, a competence that is exhibited in the asking and cannot be reduced to a set of rules about what makes a good question. The student who asks good questions has not memorized a typology of question-types. She has developed a sensitivity to the contours of a subject — to where the gaps are, where the assumptions lie unexamined, where the standard account is too smooth to be trusted. This sensitivity is built through practice, through the experience of asking bad questions and recognizing their badness, through argument and correction and the gradual refinement of the capacity to see what is not yet understood.
The intellectualist legend, were it true, would suggest that this sensitivity could be taught propositionally: give the student a theory of good questions, a set of rules for identifying gaps and challenging assumptions, and the student will be equipped. Ryle's regress argument shows why this cannot work. Applying the rules of good questioning intelligently requires knowing how to apply them, and if that knowing-how were itself a matter of consulting further rules, those rules would require still further rules, and so on without limit. The capacity to ask good questions is not the application of a theory of questioning. It is a practical skill that stands on its own feet — or, more precisely, that stands on the specific history of practice through which the questioner's dispositions were built.
The implications extend beyond pedagogy to the structure of educational institutions themselves. The university system is organized around the transmission of propositional knowledge: courses are structured as sequences of facts and theories, departments are organized around bodies of established knowledge, credentials certify the possession of specified quantities of knowing that. When the machine possesses knowing that in unlimited quantities, the entire organizational logic of the university is called into question. Not because the university is useless — the university does many things besides transmitting propositions — but because the organizing principle that has governed its structure for centuries is no longer adequate.
What would a university organized around knowing how look like? It would look less like a lecture hall and more like a workshop. Less like a library and more like a laboratory — not the laboratory of the natural sciences, which has its own version of the intellectualist legend (hypothesis first, experiment second), but a laboratory of practice, in which students develop dispositions through the exercise of those dispositions under conditions of increasing complexity and decreasing guidance. The role of the teacher would shift from transmitter of knowledge to cultivator of judgment — from the person who knows the most facts to the person who can recognize, in a student's performance, where the dispositions are developing well and where they are developing poorly.
This is not a utopian fantasy. It is a description of how the best education has always worked, in the domains where the intellectualist legend has the weakest grip. Medical education, at its best, produces physicians whose diagnostic competence is a form of knowing how built through clinical practice — through seeing patients, making errors, receiving corrections from experienced practitioners, and gradually developing the sensitivity to clinical signs that constitutes genuine medical judgment. The propositional knowledge (anatomy, pharmacology, pathophysiology) is necessary but not sufficient; the physician who knows the textbook but lacks clinical judgment is dangerous, not competent.
Legal education, at its best, works similarly. The Socratic method — whatever its abuses — is an attempt to build the dispositions of legal reasoning through practice: the student is not told what the law is but asked to argue about what it should be, and the quality of the argument, not the correctness of the conclusion, is the measure of developing competence. The law student who can recite every holding in the casebook but cannot construct an argument under adversarial pressure has knowing that without knowing how, and no responsible law firm would trust her with a client.
These are the educational models that survive the AI moment — the ones that were already oriented toward knowing how. The models that do not survive are the ones built entirely on knowing that: the courses where success means reproducing the textbook, the examinations where the right answer is the only thing that matters, the credentials that certify the possession of facts rather than the capacity for judgment.
Ryle himself never developed the educational implications of his argument at any length, though they are unmistakable. Hubert Dreyfus, however, drawing explicitly on Ryle's work (and on his regress argument in particular), mounted one of the most sustained philosophical critiques of AI in the twentieth century — and at the center of that critique was precisely the claim that intelligence cannot be captured in rules. Dreyfus argued that classical AI's attempt to encode human knowledge as explicit rules was doomed to founder on the frame problem: the problem of specifying, in propositional form, the vast background of common-sense understanding that competent human behavior presupposes. The frame problem is, in philosophical terms, Ryle's regress in computational dress. Every rule requires further rules to specify the conditions of its application, and those rules require further rules, and the regress never terminates.
The deep learning revolution — the shift from rule-based to neural-network-based AI — resolved the frame problem not by finding the terminal rule but by abandoning rules altogether. Neural networks develop knowing how through training on data, without ever formulating the explicit rules that classical AI attempted to encode. They are, as the Philosophers' Magazine observed, systems that "know-how to do things without knowing-that." Ryle, who died in 1976, did not live to see this vindication. But the vindication is his: the systems that actually achieved artificial intelligence did so not by implementing the intellectualist legend — not by encoding propositional knowledge and then applying it — but by developing practical competence through a process that bypasses propositional knowledge entirely.
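The contrast between the two paradigms can be made concrete in a few lines of code. The sketch below is a toy under stated assumptions, not a description of any real system: a miniature network that acquires the XOR function through nothing but examples and error correction. No rule saying "output 1 exactly when the inputs differ" appears anywhere in the program.

```python
# A toy sketch, not any production system: a two-layer network that
# learns XOR from four examples. Whatever competence emerges lives in
# numerical weights adjusted by feedback, not in any stated rule.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.5, size=(2, 4))   # input -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(scale=1.5, size=(4, 1))   # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10_000):
    # Forward pass: behavior.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: correction by error (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# For most seeds this approaches [[0], [1], [1], [0]]: knowing how,
# acquired through practice, with no knowing-that in sight.
print(np.round(out, 2))
```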
The educational system now confronts the same choice that classical AI confronted a generation ago: continue trying to encode intelligence as propositional knowledge, or recognize that intelligence is constituted by practical competence and design the system accordingly. Classical AI chose the first path and failed. The educational system is, at this moment, making the same choice, and there is every reason to expect the same result.
The alternative is to take Ryle seriously — to build educational institutions that develop knowing how as their primary aim, that test judgment rather than recall, that cultivate the dispositions of questioning and evaluation rather than the capacity for reproduction. Such institutions would not merely survive the AI moment. They would produce the people the AI moment most urgently requires: people whose practical intelligence — whose knowing how to judge, to question, to evaluate, to decide what is worth building — constitutes the human contribution to a collaboration that machines cannot conduct alone.
The intellectualist legend is comforting. It suggests that intelligence can be bottled in propositions and dispensed through lectures. It is also false, and the machines have made its falsity consequential in a way it never was before. The choice is between clinging to a legend and building something better. The legend, for all its flattery, has nothing left to offer.
The distinction between thick and thin description — between characterizing an action in terms of its physical movements alone and characterizing it in terms that include its purpose, context, and significance — was one Ryle drew in his late essays, though the terminology achieved its widest currency through the anthropologist Clifford Geertz, who adopted it for different purposes. The distinction belongs to Ryle because the underlying insight is continuous with his dispositional analysis: mental concepts provide thick descriptions of behavior, descriptions that include the purpose, flexibility, and significance of the action, not just its observable movements.
Consider the wink. A person contracts the muscles around one eye. This is the thin description — a description of the physical movement, stripped of context and significance. But the same physical movement can be a wink, a blink, a twitch, a parody of a wink, or an attempt to dislodge an eyelash. The thin description is identical in every case. The thick description — the description that includes why the movement is made, what it means in this context, what relation it bears to the agent's other actions and social circumstances — is different in each case, and the difference is everything.
The distinction matters for AI because it identifies, with Rylean precision, the specific dimension along which machine performance is deficient — not in production, where machines are often superior, but in the thickness of the performance, the degree to which the doing carries the weight of purpose, context, and significance that transforms mechanical output into intelligent action.
Claude's outputs are, in one important sense, performances. They are behavioral productions — sequences of text that constitute responses to inputs. The thin description of these performances is computationally precise: given an input sequence of tokens, the model computes probability distributions, samples, and generates an output sequence. This description is accurate, and it is the description that engineers rightly use when discussing the system's operation at the level of mechanism.
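For readers who want to see how bare the thin description really is, here is a sketch of the generation loop reduced to its mechanism. It is illustrative only: the `model` callable is a stand-in, an assumption rather than any real API, and a production system adds considerable machinery around this skeleton.

```python
# The thin description, executable: predict a distribution, sample, append.
import random

def generate(model, prompt_tokens, max_new_tokens, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # distribution over the vocabulary
        # Temperature reshapes the distribution before sampling.
        weights = [p ** (1.0 / temperature) for p in probs]
        tokens.append(random.choices(range(len(probs)), weights=weights)[0])
    return tokens

# Toy stand-in so the sketch runs: indifferent to its input, uniform
# over a five-token vocabulary. A real model differs at this one line,
# and the difference is everything the thick description talks about.
def toy_model(tokens):
    return [0.2, 0.2, 0.2, 0.2, 0.2]

print(generate(toy_model, [0], max_new_tokens=6))
```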
The thick description includes everything the thin description leaves out: the contextual appropriateness of the output, its responsiveness to specific features of the input, its flexibility across tasks and domains, its capacity to adjust in response to feedback. The thick description is where the interesting properties of Claude's behavior become visible, because the interesting properties are features of the behavior considered in context, not features of the mechanism in isolation.
Both descriptions are legitimate. Neither is complete without the other. The philosophical error is to treat the thin description as the real one and the thick description as an embellishment — to say, as the skeptic does, that the machine "merely" processes tokens, as if the thin description exhausted the facts. The thin description tells what the system does at one level. The thick description tells what it achieves at another. The achievement — the production of contextually appropriate, flexible, purposeful behavior — is as real as the mechanism that underlies it.
But there is a specific kind of thickness that characterizes human performances at their best, and that Claude's performances consistently lack. The lack is not a matter of sophistication or capability. It is structural, traceable to the kind of system Claude is.
When Segal describes lying awake at night wondering whether the world he is building for his children will allow them to flourish, the thin description — a man lies awake in bed — captures nothing of what is happening. The thick description includes the caring that motivates the wakefulness: the specific concerns about specific children, the moral weight of the question about what kind of world is being constructed, the particular anxiety of a parent who understands something about the forces shaping his children's future and does not know how to protect them from those forces. This thickness is constituted by what Ryle would call the whole dispositional background of the person — the accumulated history of caring, worrying, building, failing, and trying again that makes this person, in this moment, lose sleep over this question.
Claude can produce text that describes these concerns with remarkable eloquence. It can generate passages about parental anxiety that are moving, accurate, and rhetorically effective. But the production of the text is a thin performance in the relevant sense — thin not because it lacks computational sophistication but because it lacks the dispositional background that would make it thick. Claude does not care about anyone's children. It does not worry about the future. It has no accumulated history of building and failing and the specific learning that failure deposits. Its eloquent description of parental anxiety is a performance without stakes, and the absence of stakes is what makes it thin.
This is not a criticism. It is a description of the kind of system Claude is, and the description has direct practical consequences for understanding the collaboration that Segal describes throughout The Orange Pill. The collaboration works because the two participants bring different kinds of thickness. Segal brings the thick human stuff — the caring, the concern, the specific dispositional background that gives the work its weight. Claude brings productive capacity of extraordinary range and fluency. The combination produces results thicker than either contribution alone, because expression gives the caring a reach it would not otherwise have, and caring gives the expression a significance it would not otherwise possess.
The distinction between thick and thin performance also sharpens the concern about depth that recurs throughout the AI discourse. The worry, translated into Rylean terms, is that AI-mediated work tends toward thinness — productively impressive but dispositionally shallow. The code works, but the programmer has not undergone the struggle that builds deep architectural intuition. The essay is fluent, but the student has not wrestled with the ideas. The prototype functions, but the designer has not iterated through the failures that build design judgment.
Consider the geological metaphor that Segal employs in The Orange Pill: every hour spent debugging deposits a thin layer of understanding, and the layers accumulate over years into something solid — the embodied knowledge that lets a senior engineer feel that something is wrong before she can articulate what. Claude skips the deposition. The surface looks the same. The person doing the work has different dispositions — thinner, less well-calibrated — than the person who built the same work through friction.
Ryle's framework makes this risk precise without recourse to metaphysics. The risk is not that a ghost departs from the work. The risk is that the dispositional background constituting the work's thickness fails to develop, because the conditions for building it — struggle, error, correction, repetition — have been removed. The work looks the same from the outside. The dispositions beneath it are different. And dispositions, unlike outputs, cannot be assessed by looking at a single performance. They are patterns, tendencies, capacities that manifest across many performances under varied conditions. The thinning is invisible in any single output and becomes apparent only over time, as the practitioner encounters conditions that demand the deeper dispositions she never built.
This thinning is not inevitable. Ryle's framework does not predict that AI-mediated work must be thin, only that thinness results when the conditions for building thick dispositions are eliminated without replacement. The question is whether the productive friction that built understanding at the implementation level can be replaced by different friction at a higher level — whether, as Segal argues, the friction ascends rather than disappears.
The laparoscopic surgery example from The Orange Pill illustrates the possibility. When surgeons lost the tactile friction of open surgery — the embodied knowledge of tissue resistance, the feel of the body's interior — they gained the ability to perform operations impossible with the naked hand. The friction did not disappear. It relocated to a higher cognitive level: the interpretation of a two-dimensional image of a three-dimensional space, the coordination of instruments at a remove from the body, the surgical judgment required to operate without direct tactile feedback. The work became harder, but harder at a different level, and the new level demanded new dispositions built through new forms of practice.
Whether this pattern — the ascent of friction, the relocation of the conditions for building thick dispositions from one level to a higher one — holds for AI-mediated knowledge work is an empirical question, not a philosophical one. But Ryle's framework identifies what to look for: not whether the output is good (it often is) but whether the practitioner's dispositions are developing or atrophying. Are the freed cognitive resources being invested in the next level of complexity, or are they dissipating into the task-filling that the Berkeley researchers documented? Is the engineer who no longer debugs syntax developing deeper architectural judgment, or merely producing more code at the same level? Is the student who no longer writes essays from scratch developing the questioning capacity that genuine understanding requires, or merely generating more polished surfaces with less substance beneath?
These are not philosophical questions in the grand sense. They are diagnostic questions — questions about the health of specific dispositions in specific people under specific conditions. And they are the right questions, because they focus attention where it belongs: not on the metaphysical status of the machine, but on the practical condition of the human beings who work alongside it.
The thick performance — the performance infused with the dispositional background of caring, judgment, and hard-won understanding — remains the province of human agents. Not because humans possess ghosts and machines do not, but because humans possess the specific developmental history that constitutes thickness: the history of living in a world that matters to them, making choices that have consequences, caring about outcomes that affect people they love. The machine's contribution is thin in this precise sense: productively powerful, dispositionally shallow. The collaboration's value lies in the combination — and in the vigilance required to ensure that the thin does not, over time, erode the thick.
Any philosophical framework powerful enough to dissolve genuine confusions is powerful enough to generate new blind spots, and intellectual honesty requires acknowledging where the instrument fails alongside the places it succeeds. Ryle's dispositional analysis dissolves the ghost, clarifies the logic of mental concepts, and provides tools of immediate practical value for understanding what AI does and what human beings must do in response. It does not answer every question worth asking. The places where it falls silent are as instructive as the places where it speaks.
The most significant limitation is the one that Ryle's critics identified within a decade of The Concept of Mind and that the AI moment makes newly urgent: the framework has difficulty accommodating the qualitative character of experience. When a person tastes wine, sees red, feels the ache of grief, there appears to be something it is like to undergo these experiences — a qualitative dimension that is not obviously captured by any catalogue of dispositions, however complete. The philosopher Thomas Nagel pressed this point with his famous question about what it is like to be a bat: the bat's behavioral dispositions can be catalogued exhaustively, but the catalogue does not tell us what echolocation feels like from the inside, and the "from the inside" seems to pick out something real that the dispositional analysis leaves unaddressed.
Ryle would respond — and did respond, in various indirect ways — that "what it is like from the inside" is either a request for a thick description (which the dispositional analysis can provide) or a request for the ghost to return under a new name (which the dispositional analysis rightly refuses). The person who asks what red looks like "from the inside" is either asking about the specific dispositions that seeing red involves — the disposition to discriminate red from orange, to notice red objects against green backgrounds, to feel the warmth that English speakers associate with red — or she is asking for the ghost: an inner qualitative event that occurs alongside the dispositions and is somehow more real than they are.
This response is not wholly satisfactory, and it would be dishonest to pretend otherwise. The qualitative character of experience — what contemporary philosophers call qualia — has resisted dispositional analysis with a stubbornness that suggests the resistance is not merely the persistence of a confused picture. The person who insists that the taste of coffee involves something over and above the disposition to recognize it, prefer it, describe it, and reach for it in the morning may be making a category mistake, as Ryle would claim. Or she may be pointing at something that the dispositional framework cannot reach — a feature of the world that is real but that Ryle's conceptual apparatus was not designed to capture.
For the purposes of the AI question, this limitation matters in a specific way. If there is a qualitative dimension of experience that dispositions do not exhaust, then the question "Is there something it is like to be Claude?" might not be a pseudo-problem generated by a category mistake. It might be a genuine question to which the dispositional analysis has no answer — not because the answer is hidden, but because the question addresses a dimension of reality that the analysis does not cover.
The honest position is this: Ryle's framework is correct that the behavioral question — whether the machine's behavior is intelligent — can be answered without settling the consciousness question. The framework is correct that the ghost picture generates pseudo-problems that obscure the genuine practical questions about AI. The framework is correct that knowing how is prior to knowing that, that the intellectualist legend is false, and that the unbundling of productive and evaluative dispositions is the most important feature of the AI moment for educational and organizational practice. These contributions stand regardless of whether the framework can accommodate qualia.
What the framework cannot do is rule out the possibility that consciousness involves something the dispositional analysis does not capture. And this means that the question of machine consciousness — while it should not be allowed to dominate the discourse, should not be treated as the central question about AI, and should not be permitted to obstruct the practical questions that urgently need answering — cannot be dismissed as definitively as the pure Rylean position would like. It is a question the framework sets aside rather than one it settles.
The second limitation is the framework's relative silence on the social dimension of intelligence. Ryle analyzed mental concepts as characterizations of individual behavior. His paradigm cases — knowing how to ride a bicycle, playing chess intelligently, performing arithmetic — are cases of individual performance. But much of what matters about intelligence in the AI era is not individual but collective: the intelligence of teams, organizations, institutions, and the human-machine collaborations that are rapidly becoming the basic unit of cognitive work.
Ryle's dispositional analysis can, in principle, be extended to collective behavior. A team can be disposed to respond flexibly to novel problems, to self-correct, to distribute tasks intelligently. These are genuine dispositions of the collective, assessable on behavioral grounds. But Ryle did not develop this extension, and the extension involves complications — about how individual dispositions compose into collective dispositions, about how institutions shape the dispositions of their members, about how the introduction of a powerful non-human collaborator changes the dispositional dynamics of a team — that Ryle's framework does not address.
The Trivandrum training that Segal describes is a case in point. The transformation was not merely a change in individual dispositions — each engineer becoming more productive — but a change in the collective dispositional landscape: the team's capacity to attempt projects of a different kind, the redistribution of cognitive labor across previously rigid role boundaries, the emergence of new forms of collaboration that the old organizational structure could not support. A Rylean analysis of these changes would need to characterize the collective dispositions and their transformation, and while the tools for doing so are implicit in Ryle's framework, the work has not been done.
The third limitation is temporal. Ryle's analysis is essentially synchronic — it characterizes what a person (or a system) is disposed to do at a given time, rather than how dispositions develop, transform, and degrade over time. The developmental dimension of intelligence — how dispositions are built through practice, how they atrophy through disuse, how the conditions for their development can be created or destroyed — is central to every practical question about AI, from education to workforce development to the maintenance of critical judgment in AI-augmented work.
The geological metaphor from The Orange Pill — understanding deposited layer by layer through the friction of practice — captures a temporal process that Ryle's synchronic framework does not naturally accommodate. The framework can say that the senior engineer possesses different dispositions from the junior one. It does not have much to say about the specific process by which the junior engineer's dispositions develop into the senior engineer's, or about how AI might alter that developmental process.
This is a limitation of scope rather than of principle. Nothing in Ryle's framework is incompatible with a developmental account of dispositions. But the developmental account needs to be built, and building it requires drawing on resources — from developmental psychology, from the science of learning, from the empirical study of expertise — that Ryle did not draw on.
These are real limitations. They should not be minimized, and they should not be treated as embarrassments to be hidden. A framework that dissolves the ghost, exposes the intellectualist legend, and provides a clear-eyed vocabulary for the behavioral assessment of both human and machine intelligence has done more than enough to earn its keep. Its inability to settle the hard problem of consciousness, to fully characterize collective intelligence, or to provide a developmental account of disposition-building does not discredit its achievements. It identifies the places where further work is needed — work that proceeds from the ground Ryle cleared rather than from the fog he dispersed.
Ryle's student Daniel Dennett spent a career extending Ryle's framework in exactly these directions. Dennett, who studied under Ryle at Oxford in the early 1960s, carried the anti-dualist project into cognitive science, developing a materialist theory of consciousness (the "multiple drafts" model) that attempted to accommodate the qualitative character of experience within a broadly dispositional framework. Dennett's thesis at Oxford was "that intentionality can be ascribed, along a spectrum with no clear dividing line, impartially to minds, human brains, bees, computers, thermostats" — a position that extends Ryle's dissolution of the ghost into a positive account of how mental properties distribute across different kinds of systems.
Dennett also became one of the most prominent philosophical voices in the AI debate, ultimately publishing, late in his career, a warning about "counterfeit people" — AI systems that simulate human interaction convincingly enough to deceive. This was a distinctly Rylean concern: not that the machines possess ghosts, but that their behavior might be mistaken for something it is not, and that the mistake might have practical consequences. The worry is about behavioral assessment — about whether the criteria by which intelligence is ordinarily judged are sufficient to distinguish genuine competence from sophisticated mimicry — rather than about metaphysical properties.
The Ryle-to-Dennett lineage illustrates how the framework can be extended without being abandoned. Dennett did not reject Ryle's central insights. He built on them, using the cleared ground of the dissolved ghost as a foundation for more detailed accounts of consciousness, intentionality, and the distribution of mental properties across biological and artificial systems. The extensions were sometimes controversial — Dennett's critics accused him of explaining away consciousness rather than explaining it — but they were extensions of Ryle's project, not departures from it.
The question for the present moment is whether the Rylean framework, with its extensions and its acknowledged limitations, provides adequate tools for navigating the AI transformation. The answer, offered without the false confidence that disguises uncertainty as conviction: it provides the best available tools for the practical questions, which are the urgent ones. It does not provide adequate tools for the metaphysical questions, which are interesting but not urgent. And the most important thing it provides is not a set of answers but a set of distinctions — between genuine questions and pseudo-questions, between behavioral assessment and ghost-hunting, between knowing how and knowing that — that allow the discourse to proceed on solid ground rather than in philosophical fog.
The framework's honesty about its limits is, in the end, a practical virtue. The person who knows what her tools can and cannot do is better equipped than the person who believes her tools can do everything. The framework dissolves the confusions that can be dissolved. It identifies the questions that cannot be dissolved. And it insists, with characteristic Rylean firmness, that the questions worth pursuing are the ones that have answers — answers found not in metaphysical speculation but in the careful, empirical study of behavior.
The discourse surrounding artificial intelligence, examined with even modest philosophical care, turns out to be an exceptionally rich habitat for category mistakes — and cataloguing them is not pedantry but practical necessity, because conceptual confusion produces practical confusion, and the practical confusion surrounding AI is in significant part the product of conceptual mistakes that dissolve under analysis.
The mistakes share a common structure. In each case, a concept that functions as a characterization of behavior is treated as though it names a substance, a faculty, or a hidden inner event. The misallocation generates a question that feels urgent and important but that has no answer because it has no genuine content. The question absorbs intellectual energy. The genuine questions — about behavioral reliability, about the maintenance of human judgment, about institutional design — go unasked.
The treatment of "intelligence" as a substance. When people ask whether AI possesses "real" intelligence, they are treating intelligence as a stuff — a material or immaterial substance that either fills the machine or does not, like water in a vessel. The question presupposes that intelligence is the kind of thing that can be possessed in determinate quantities, compared across systems by asking which has more. But intelligence is a characterization of behavior. To say that a system is intelligent is to say that its behavior exhibits certain properties: flexibility, purposefulness, context-sensitivity, self-correction. The question "Does the machine possess real intelligence?" is logically akin to asking "Does the machine possess real solubility?" — a question that mistakes a dispositional property for a substance.
The practical consequence: if intelligence is a substance, then its presence or absence is a binary fact, and the question "Is AI intelligent?" has a clean yes-or-no answer. If intelligence is a behavioral characterization, the question dissolves into a spectrum of more tractable questions: how flexible is this system? Under what conditions does its context-sensitivity fail? Where are its self-correction capacities reliable and where do they break down? The binary question generates heat. The spectrum questions generate light.
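What do the spectrum questions look like when they are actually asked? A hedged sketch, under assumptions: `Probe`, `assess`, and the judging functions are hypothetical names invented for illustration, not any existing benchmark. The shape is what matters: graded, conditional, behavioral, and never a single verdict.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Probe:
    disposition: str               # which behavioral property is exercised
    condition: str                 # the circumstances of its exercise
    prompt: str                    # the input that exercises it
    judge: Callable[[str], bool]   # did this output meet the criterion?

def assess(run_model: Callable[[str], str],
           probes: List[Probe]) -> Dict[Tuple[str, str], float]:
    """Pass rates per (disposition, condition) pair: light, not heat."""
    tally: Dict[Tuple[str, str], Tuple[int, int]] = {}
    for p in probes:
        key = (p.disposition, p.condition)
        hits, total = tally.get(key, (0, 0))
        tally[key] = (hits + int(p.judge(run_model(p.prompt))), total + 1)
    return {key: hits / total for key, (hits, total) in tally.items()}
```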
The treatment of "consciousness" as a precondition for intelligence. The argument runs: intelligence requires consciousness; the machine is not conscious; therefore the machine is not intelligent. The argument has the same logical structure as: chess-playing requires feelings; the machine has no feelings; therefore the machine does not play chess. The conclusion is false because the major premise is a category mistake. Chess-playing requires dispositions to make moves responsive to the opponent's strategy, pursuing a goal, adapting to changing circumstances. Whether feelings accompany those dispositions is irrelevant to whether the moves constitute chess-playing. Similarly, intelligent behavior requires flexibility, purposefulness, and self-correction. Whether consciousness accompanies those properties is a separate question, and its answer is irrelevant to the behavioral assessment.
A chess grandmaster in a state of deep flow may experience no conscious deliberation — she simply sees the right move and makes it. Her playing is not less intelligent for being less conscious. If anything, the absence of conscious deliberation is a mark of expertise. To insist that intelligence requires consciousness is to confuse two concepts that belong to different logical categories: intelligence characterizes the quality of behavior; consciousness (whatever it is) characterizes the experiential accompaniment of behavior. The two can co-occur or not. Neither depends on the other.
The treatment of "creativity" as a mysterious faculty. When critics say that AI is not creative, they are typically treating creativity as an inner power — a spark, a faculty, a mysterious something that either fires or does not. On this view, the machine's outputs, however impressive, are not creative because the creative substance is absent. They are "merely" recombinations, "merely" sophisticated pattern-matching, "merely" inference without the spark of genuine originality.
The word "merely" is doing all the illicit work. What would count as "genuine" creativity, as opposed to "mere" recombination? If creativity requires the production of something new from existing materials — which is the only kind of creation ever observed in any domain — then the question is not whether the output is a recombination but whether the recombination is sufficiently novel, appropriate, and valuable. These are assessments of the output, not of the producer's inner state. And the assessments apply to machine outputs as readily as to human ones. Segal makes this argument compellingly in his discussion of Dylan and "Like a Rolling Stone," showing that Dylan's creativity was not a mysterious inner spark but a specific configuration of inputs — influences, exhaustion, collaboration, accident — processed through a particular biographical architecture to produce an output that was novel, appropriate, and valuable. The creativity was in the output and the process. The demand that it also involve a ghost is an additional demand that does no work.
The treatment of "understanding" as an all-or-nothing property. Much of the discourse assumes that understanding is binary: either Claude understands or it does not. The Rylean analysis, as developed in the chapters on dispositions and knowing how, shows that understanding is a cluster of dispositions — the disposition to answer questions, to apply knowledge in novel contexts, to detect errors, to recognize relevance. The cluster can be partially present: strong in some dimensions, weak in others. Claude's understanding of code is robust along several dimensions (answering questions about the code, applying patterns in novel contexts) and weak along others (detecting its own errors, recognizing when its output violates the intentions behind a concept). The binary question "Does Claude understand?" is a category mistake. The tractable question is: which understanding-dispositions does Claude possess, with what reliability, under what conditions?
The treatment of "authorship" as a metaphysical property. When people ask who "really" wrote a book produced through human-AI collaboration, they are often asking a metaphysical question: who is the true source, the genuine origin, the person or system whose ghost stands behind the text? But authorship is not a substance that can be divided between collaborators. It is a characterization of a process. The author is the person who directed the process — who made the decisions about what the work should say, who exercised the judgment that selected some outputs and rejected others, who brought the specific concerns and experiences that give the work its character. The contributor produces material. The author directs the production. These are characterizations at different logical levels, and the question of how to divide a substance called "authorship" between them is a question about a substance that does not exist.
The treatment of "value" as intrinsic to work. When people say that AI devalues human work, they treat value as an inherent property of the work itself — as if code a programmer writes has a fixed quantum of value that diminishes when a machine can write similar code. But value is relational, not intrinsic. The value of code depends on the scarcity of the skill required to produce it, the need for the function it performs, the quality of the judgment that determined it should be written. When the machine produces code, the value of code-production decreases because scarcity decreases. But the value of the judgment that determines what code should be produced may increase, because the abundance of production capacity makes the quality of judgment more consequential. The error is in treating value as a property of the artifact rather than a property of the relationship between the artifact and its context.
Each of these category mistakes generates a pseudo-problem — a question that feels important but that has no answer because it has no genuine content. And each pseudo-problem absorbs energy that would be better spent on the genuine questions: How reliable are the machine's behavioral dispositions across domains? How should educational institutions adapt to the priority of knowing how? What structures maintain the critical dispositions on which human judgment depends? How can the productivity gains of AI be distributed broadly rather than concentrated narrowly?
These genuine questions are empirical, practical, and consequential. They can be investigated, answered provisionally, and revisited as evidence accumulates. They do not require the resolution of metaphysical debates about machine consciousness or the nature of understanding. They require only that the discourse be conducted in the right logical grammar — the grammar of behavior, dispositions, and practical competence rather than the grammar of ghosts, substances, and hidden inner events.
Ryle believed that the resolution of a muddle is not a discovery but a clarification. The muddles catalogued here are not deep mysteries awaiting brilliant solutions. They are confusions generated by the misuse of language — confusions that dissolve when the language is used with care. The dissolution does not feel like a victory, because nothing has been discovered. It feels like the clearing of fog, which is less dramatic than a breakthrough but more useful, because once the fog is cleared, the terrain becomes visible, and the real work of navigating it can begin.
The terrain is visible now. The fog is what remains of the ghost. And the navigation — the practical, urgent, consequential work of building institutions, educational systems, and collaborative practices adequate to the AI moment — is what demands attention. The pseudo-problems will persist, because the temptation to look for ghosts is remarkably durable, and the discourse will continue to be haunted by questions that sound deep but explain nothing. The task is not to silence those questions — they will be asked regardless — but to recognize them for what they are, and to redirect attention, firmly and without apology, toward the questions that matter.
Ryle was, above everything else, an ordinary language philosopher. This meant not that he believed ordinary language was perfect — it is plainly riddled with ambiguity, vagueness, and opportunities for confusion — but that he believed the resources for dissolving philosophical confusion were already present in the way competent speakers actually use their words. The confusions that generate philosophical pseudo-problems are produced not by the poverty of ordinary language but by departures from it — by using words in special, technical, or inflated senses that violate the logic of their ordinary employment. The task of philosophy is not to construct a better language but to remind people of how the existing one actually works.
This method, applied to the AI moment, yields a final set of observations that bring the arguments of the preceding chapters into focus and connect them to the question that the present transformation forces upon every person it touches: not "What can the machine do?" but "What must the human become?"
Begin with the word "intelligence." In ordinary usage, a person is called intelligent when her behavior exhibits a familiar cluster of properties: she adapts to novel situations, she learns from mistakes, she sees what others miss, she responds appropriately to circumstances that would baffle a less capable person. These properties are not mysterious. They are observable, assessable, and the basis on which every ordinary judgment of intelligence is made. No competent speaker, asked to explain what she means by calling someone intelligent, would say "I mean that a ghostly process is occurring inside her skull." She would describe what the person does — how she handles problems, how she responds to surprises, how she recognizes when she is wrong.
When the same word is applied to Claude, the ordinary usage extends naturally. Claude adapts. It responds to novel inputs. It produces outputs that are contextually appropriate and flexibly adjusted to the specific features of the task. The extension is not a philosophical stretch. It is the same kind of extension that allows the word to be applied to a chess program, a dog navigating a complex environment, or a thermostat that adjusts to changing conditions — though Claude's behavior satisfies the criteria to a degree that places it much closer to the human end of the spectrum than any of these.
The philosophical debate about whether AI is "really" intelligent is, from the perspective of ordinary language, a debate conducted in a register that ordinary language does not support. Ordinary language has the resources to say that Claude's behavior is intelligent in some respects and limited in others — flexible in production, weak in self-correction, reliable in some domains and unreliable in others. What ordinary language does not support is the further question "But is it really intelligent?" — the demand for a metaphysical verdict over and above the behavioral assessment. That demand introduces a use of "really" that has no ordinary employment. It is philosophical jargon masquerading as a deepening of the question, when in fact it is a departure from the only context in which the question made sense.
Now consider the word at the center of Segal's argument in The Orange Pill: "amplification." The metaphor suggests that AI takes the human's intellectual signal and produces an output larger in reach but similar in character. The metaphor is useful insofar as it captures the directionality of the collaboration — the human provides the purpose, the machine extends the capacity. It is limited insofar as amplification, strictly speaking, scales a signal without changing its shape, while the collaboration transforms it. The half-formed idea enters the exchange and emerges articulated, structured, connected to things the human had not seen. The output is not the input made louder. It is the input made different.
Ryle's vocabulary suggests a more precise description. The human possesses certain dispositions — the disposition to ask certain questions, to care about certain outcomes, to exercise judgment about certain decisions. AI does not amplify these dispositions the way an amplifier scales a signal. It creates conditions under which the dispositions can be exercised in new domains, at new scales, with new materials. The human who possesses the disposition to build is given new building materials and new tools. The disposition is the same. The range of its exercise is vastly expanded. But the disposition can also be altered by the conditions of its exercise — strengthened through the right kind of practice, weakened through the wrong kind. The question is not only what signal you are feeding the machine but what the practice of feeding the machine is doing to the signal's source.
This reframing shifts attention from the static to the dynamic. "Are you worth amplifying?" — Segal's central question — assumes a fixed signal that the amplifier merely scales. The dispositional reframing asks: "What are you becoming through the practice of working with this tool? Which of your dispositions are being strengthened? Which are atrophying? And are you paying enough attention to know the difference?"
The question matters because dispositions are built through practice and degraded through neglect. The critical dispositions — the capacity for judgment, for self-correction, for the discrimination between what sounds right and what is right — are exercised only when conditions demand their exercise. When the tool produces output so smooth that scrutiny feels unnecessary, the conditions for exercising scrutiny are removed, and the disposition weakens. Not because a ghost has departed, but because a muscle has gone unused.
Segal describes this risk with the honesty the subject demands: the moment of almost accepting a smooth but empty passage, the discipline of returning to the notebook and the coffee shop and the slow work of figuring out what he actually believes. This is the work of maintaining dispositions — the practical work that keeps the human contribution to the collaboration genuine rather than nominal. It is not theoretical work. It is not the application of rules about when to accept and when to reject. It is the exercise of a practical skill that, like all practical skills, deteriorates when it is not practiced.
Ordinary language has a word for the person who possesses this skill in its most developed form, and the word is not "genius" or "expert" or "leader." The word is "careful." The careful person attends to what she is doing. She notices when something is off. She does not accept the first plausible answer but checks whether the plausibility survives scrutiny. She is disposed to pause where the careless person proceeds, to question where the careless person accepts, to verify where the careless person trusts.
Care is the meta-disposition — the disposition that governs the exercise of all other dispositions. The careful chess player exercises her strategic dispositions with attention. The careful surgeon exercises her surgical dispositions with vigilance. The careful writer exercises her compositional dispositions with the willingness to reject what she has produced when it does not meet the standard she has set. Care is not an inner feeling. It is a behavioral property — the property of doing things with the specific attention and self-monitoring that distinguishes competent from excellent performance.
AI cannot be careful. This is not because it lacks a ghost. It is because care, as a disposition, requires the capacity to set and maintain standards for one's own performance, to notice when performance falls below those standards, and to adjust accordingly. Claude's self-monitoring dispositions are weak — this has been established across multiple chapters. It cannot, in the relevant sense, notice when its output is plausible but wrong, because noticing requires the evaluative dispositions that constitute care, and those dispositions are not present in its behavioral repertoire.
The human contribution to the collaboration is, at its core, the contribution of care. Not care as sentiment — not the warm feeling of concern — but care as a dispositional property of behavior: the property of attending to what one is doing with the vigilance that distinguishes the careful practitioner from the merely competent one. The machine produces. The human cares about what is produced. And the caring — the checking, the questioning, the refusal to accept the merely plausible — is what makes the collaboration's output trustworthy rather than merely fluent.
The discourse about AI would benefit enormously from attending to what ordinary language has been doing all along. The words "intelligent," "creative," "understanding," and "careful" were never names for ghosts. They were always characterizations of behavior — characterizations that specified the manner in which actions were performed, the properties that distinguished one kind of performance from another. The philosophical tradition inflated these words into names for mysterious inner events, and the inflation generated the pseudo-problems that now consume the AI debate. Ordinary language, used with the precision it already possesses but that philosophy has systematically ignored, dissolves the pseudo-problems and reveals the genuine ones.
The genuine questions are questions about behavior, about dispositions, about the specific properties of human and machine performance that determine whether the collaboration produces work worth doing. These questions can be asked in ordinary language, investigated by ordinary methods, and answered — provisionally, incompletely, but usefully — by anyone willing to look at what is actually happening rather than at what a philosophical tradition has taught them to expect.
The fog of the ghost has lingered long enough. The terrain it concealed is now visible: a landscape of dispositions, of knowing how, of thick and thin performance, of care exercised or neglected. The navigation of this terrain is not a philosophical exercise. It is the practical work of building, teaching, leading, and parenting in a world where the machines are powerful, the questions are urgent, and the only intelligence that can direct the enterprise is the intelligence that cares enough to do it well.
---
The question I could not stop turning over was one a philosopher would find trivial.
It came from my son, at dinner, stated with the blunt confidence of a teenager: "If the computer can write the code, and it can write the essay, and it can answer the question — then what's the homework for?"
I told him it mattered. I meant it. But sitting there I realized I could not say why it mattered in a way that would survive five minutes of honest scrutiny from a fifteen-year-old who had watched me build an entire product in partnership with a machine. My behavior — the thing I actually did, day after day, month after month — was the behavior of a person for whom the machine handled what used to be the hard part. Telling my son that the hard part was still important for him felt like exactly the kind of thing Ryle would have called a category mistake: treating the struggle as a substance to be preserved rather than a condition whose purpose needed reexamination.
What Ryle gave me — and the reason this particular thinker stayed with me longer than I expected — was not an answer to my son's question. It was the dissolution of the bad version of the question and its replacement with a better one.
The bad version: Does the machine really think? That question, as Ryle's framework shows with devastating clarity, is a ghost question. It demands a metaphysical verdict that has no practical consequence. Whether Claude "really" thinks or "merely" processes makes no difference to the quality of the code it writes, the reliability of its connections, or the vigilance I must exercise when I review its output. The ghost question absorbs energy. It settles nothing.
The better version, the one Ryle's framework keeps pushing toward: What dispositions am I building, and what dispositions am I losing, in the practice of working this way?
That question has teeth. It bites into every hour I spend with the tool. When I accept Claude's output without checking whether the philosophy holds up — as I nearly did with a Deleuze reference that sounded right but was not — I am weakening the disposition that constitutes my contribution to the partnership. When I return to the notebook and the slow, ugly, private work of figuring out what I actually believe, I am strengthening it. The disposition does not care about my intentions. It cares about my practice. It is built by what I do, not by what I mean to do.
The knowing-how distinction reframed something I had been circling in The Orange Pill without quite pinning down. I had written about the senior engineer in Trivandrum who discovered that the twenty percent of his work Claude could not handle was the part that mattered most. I described it as a shift from execution to judgment. Ryle's framework showed me what the shift actually is: it is the unbundling of dispositions that were always separate but appeared fused because the same person exercised both. The machine separated the productive dispositions from the evaluative ones, and the separation revealed that the evaluative dispositions — the knowing how of judgment, taste, and care — were never the stepchild of theory. They were always the main event.
This is the answer to my son's question, or the beginning of one. The homework is not for acquiring facts the machine already has. The homework is for building dispositions the machine does not have: the disposition to question, to notice when something is off, to care whether the answer is true and not merely plausible. These dispositions are not transmitted by lecture. They are built by practice — by the specific friction of struggling with material that resists your first attempt to understand it, and your second, and your third, until understanding is not something you possess but something you are.
Ryle died in 1976, decades before any of this was conceivable. He never saw a computer more sophisticated than a desk calculator. But his insight — that the ghost was always a distraction from the doing, that intelligence lives in behavior rather than behind it, that practical competence is the foundation and not the derivative — turns out to be the single most useful philosophical tool I have found for thinking clearly about what is happening now.
The machines are here. They behave intelligently. There is no ghost to find and none to mourn. What remains is the question of what we do — what dispositions we cultivate, what care we bring, what thickness of purpose we invest in the work that the machines make possible but cannot make meaningful.
The ghost was always optional. The doing never was.
Seventy-seven years ago, a philosopher proved that the most seductive question about minds is the wrong one to ask. Now that machines behave intelligently, his proof is the sharpest tool we have. The AI debate is stuck on a ghost question: Does the machine really think? Billions of dollars, countless op-eds, and infinite dinner-table arguments orbit a question that Gilbert Ryle showed in 1949 has no answer — because it has no genuine content. It mistakes a description of behavior for the name of a hidden inner event, then demands we find the event or declare the behavior fake. This book applies Ryle's framework — the dissolution of the ghost, the primacy of knowing how over knowing that, the distinction between thick and thin performance — to the transformation unfolding now. What emerges is not a defense of AI or an attack on it, but something more urgent: a clear-eyed vocabulary for assessing what machines actually do, what humans must actually become, and why the question worth asking was never about ghosts.

A reading-companion catalog of the 19 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Gilbert Ryle — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →