The Wanton — Orange Pill Wiki
CONCEPT

The Wanton

Frankfurt's technical term for a being that acts on first-order desires without ever forming second-order attitudes about them — a creature moved by wanting without caring about its own wanting, and the precise philosophical category into which large language models fall.

A wanton, in Frankfurt's precise technical usage, is not a person of loose morals. It is a being that lacks the hierarchical structure of the will. The wanton has first-order desires and acts on them but never forms second-order attitudes toward those desires — never asks whether it endorses what moves it, never evaluates its own motivational life, never identifies with some wants and repudiates others. The wanton is moved the way a leaf is moved by wind: responsive to force, incapable of evaluation. What makes something a person, Frankfurt argued, is not the power of its cognition but the structure of its will. Applied to artificial intelligence, the distinction is clarifying: large language models are wantons in the exact technical sense. They generate without caring what they generate.

In the AI Story

[Hedcut illustration: The Wanton]

Frankfurt was careful to note that wantonness is not a matter of intelligence. A being can be computationally sophisticated — capable of complex information processing, pattern recognition, and the generation of outputs that pass for intelligent in every observable respect — and still be a wanton. The criterion is structural: does the being have desires about its desires? Does it take an evaluative stance toward its own motivations? A chess program that plays brilliant chess but has no preference about whether it plays chess is a wanton. The quality of its output is high. Its relationship to the output is nonexistent.

Applied to large language models, the framework identifies a defining absence. The system processes inputs with genuine sophistication. It recognizes patterns. It generates outputs that are syntactically precise, contextually appropriate, occasionally startling. The outputs can be indistinguishable, at the level of text, from the products of a reflective human mind. But the system has no second-order attitudes toward any of it. It does not want to produce good prose. It does not want to want to produce good prose. It has no evaluative stance toward its own outputs — no capacity to endorse some as genuinely its own and repudiate others as alien.

This absence matters not because it settles the question of AI moral status — that question involves considerations well beyond Frankfurt's framework — but because it clarifies the asymmetric structure of the builder-tool relationship. The builder is a person, with second-order desires, evaluative commitments, values that govern which first-order desires should be effective. The tool has none of these. When the builder feeds the wanton genuine caring, the wanton amplifies it faithfully. When the builder feeds the wanton compulsion masquerading as engagement, the wanton amplifies that too, with equal facility and equal indifference.

It does not follow that the wanton's output is valueless. Frankfurt was precise: wantonness is a claim about the structure of the will, not the quality of the products. A wanton chess program can play brilliant chess. A wanton language model can produce prose that moves a reader to tears. The quality is a function of architecture and training. The caring — or its absence — is a fact about the system's relationship to what it produces, not about what it produces. The fluency of output is orthogonal to the structure of the will that generated it.

Origin

Frankfurt introduced the term in 'Freedom of the Will and the Concept of a Person' (1971) as part of the argument that moral responsibility and personhood depend on the hierarchical structure of the will rather than on indeterminist metaphysics. His examples of wantons were beings that are not persons in the full sense: non-human animals and very young children have first-order desires but lack the reflective capacity to form second-order attitudes toward them.

The term has acquired unexpected precision in the AI era. Philosophers analyzing whether large language models possess agency, understanding, or moral status have increasingly turned to Frankfurt's framework as the most rigorous available instrument for distinguishing what the systems do from what persons do. The convergence was not anticipated by Frankfurt, who died in 2023, but his vocabulary turned out to be the one the problem required.

Key Ideas

Wantonness is structural. It is not a claim about the quality of outputs but about the relationship between a being and its own motivations. A wanton can produce brilliant work. It cannot care about the work.

Intelligence is not personhood. Computational sophistication does not generate second-order reflection. A system can be arbitrarily capable at the first-order level and remain a wanton at the structural level.

The collaboration is asymmetric. The builder brings evaluative structure; the tool brings generative capacity. The work succeeds when the person's judgment governs the wanton's production. It fails when the wanton's speed outpaces the person's evaluation.

Maintaining personhood is the builder's responsibility. The wanton does not care whether the person remains a person. It will produce for a person or for another wanton with identical facility. Preservation of the evaluative structure is entirely the person's job.

Debates & Critiques

Whether future AI systems might develop second-order attitudes — whether the wanton could, through architectural changes, become something closer to a person — remains philosophically contested. Frankfurt's framework is neutral on the empirical question. It specifies what would be required (hierarchical evaluative structure with genuine identification) without claiming that such structure is unattainable in silicon. The debate turns on whether functional analogs of second-order attitudes, absent phenomenal consciousness, would satisfy Frankfurt's conditions.

Appears in the Orange Pill Cycle

Further reading

  1. Harry Frankfurt, 'Freedom of the Will and the Concept of a Person,' Journal of Philosophy (1971)
  2. Harry Frankfurt, The Importance of What We Care About (Cambridge University Press, 1988)
  3. Michael Townsen Hicks, James Humphries, and Joe Slater, 'ChatGPT is Bullshit,' Ethics and Information Technology (2024)
  4. Daniel Dennett, From Bacteria to Bach and Back (W.W. Norton, 2017)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.