Free-floating rationales are reasons that exist in the structure of a system without being represented anywhere as explicit thoughts. The peacock's tail has a rationale — sexual selection has shaped it toward a specific function — but no peacock and no designer formulated the rationale. Dennett coined the term to capture the peculiar fact that evolution produces rational designs without reasoners, and that much of biological intelligence is this kind of structured response to reasons that nothing in the system has ever thought. The concept maps directly onto large language models: the weights encode countless free-floating rationales for linguistic and inferential moves, none of them articulated by any agent, including the network itself.
The concept was developed most fully in Darwin's Dangerous Idea (1995) and elaborated in From Bacteria to Bach and Back (2017). It is the bridge between Dennett's account of Darwinian process and his account of mind: once you see that evolution produces free-floating rationales, you can see that most of what a brain does is also of this kind, with explicit reasoning as a late, local, and expensive overlay.
For AI, the implications are immediate and underappreciated. A trained neural network is a vast repository of free-floating rationales — weighted dispositions that respond appropriately to reasons the network has never formulated. When such a system produces an output that looks reasoned, we face a genuine question: is this mere pattern-matching, or is it the same kind of structured responsiveness to reasons that constitutes most biological intelligence? Dennett's answer is that the distinction is less sharp than the question assumes.
The framework connects to competence without comprehension but adds a specific claim: the competences are rational in a precise sense — they track reasons — even when nothing in the system represents the reasons as such. This is how evolution built the immune system, how learning built the physician's diagnostic intuition, and how gradient descent builds the language model's apparently thoughtful completions.
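The claim that gradient-shaped weights can track reasons without representing them can be made concrete with a toy sketch. Below, a perceptron (used here as a minimal, hand-traceable stand-in for gradient descent) learns the AND function from examples. After training, the weights reliably track the rule, yet nothing in the system contains a symbolic statement of it: the rationale is free-floating in the numbers. The function names and the choice of AND are illustrative, not from Dennett.

```python
# Toy illustration of a free-floating rationale: error-driven learning
# (the perceptron rule, a simple stand-in for gradient descent) shapes
# weights that come to track the AND rule, yet no statement of the rule
# exists anywhere in the system -- only numbers.

def train_perceptron(data, epochs=10, lr=1.0):
    """Learn weights and bias from (input, target) pairs; zero init, deterministic."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y          # error signal, analogous to a loss gradient
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The "reason" for the final weight values is the AND function, but that
# reason is never articulated: it lives in the structure, not in a rule.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([(x, predict(w, b, *x)) for x, _ in AND])  # predictions match AND
```

Asked "why are the weights what they are?", the trained system has no answer available to it; the explanation exists only at the level of the training process. Scaled up by many orders of magnitude, this is the structure of the articulation gap discussed below.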
Read alongside The Orange Pill's account of AI-augmented creative work, the concept explains a specific phenomenon: collaborators report that Claude seems to track rationales the user had not articulated. The explanation is not mystical. The network's weights encode free-floating rationales extracted from enormous quantities of human text. The system responds to reasons it does not formulate, which is exactly what brains, bees, and bacterial gene regulation do.
Dennett introduced the concept in Darwin's Dangerous Idea (1995) to explain how evolution can be rational without being guided by a rational agent. By 2017, in From Bacteria to Bach and Back, he had expanded it into a general framework for thinking about all pre-linguistic rationality — from bacterial chemotaxis to primate social cognition.
The late-career application to machine learning appeared in his Possible Minds contribution (2019) and in several essays and interviews before his death in 2024. His position: the old categories of 'real reasoning' versus 'mere computation' were philosophical ghosts, and free-floating rationales were the honest replacement.
Reasons without reasoners. Design can be rational — can track reasons for its structure — without any agent in the system having formulated those reasons.
Evolution as the canonical case. Four billion years of biological rationality have been produced by a process that comprehends nothing and reasons about nothing.
Networks inherit the pattern. Trained neural networks embody vast numbers of free-floating rationales extracted from the human text they were trained on — and then deploy those rationales without articulating them.
The articulation gap. The fact that a system cannot state its reasons does not mean it has none; it means the reasons are encoded in the structure rather than in a separate representational layer.