Williams's last major work distinguished two things modern discourse routinely conflates. Truth is a property of statements: a statement is true if it corresponds to the way things are. Truthfulness is a property of persons: a person is truthful if she possesses the dispositions of accuracy (care about getting things right, willingness to check beliefs against evidence, readiness to correct errors) and sincerity (commitment to say what she actually believes). A person can be truthful and still say false things, because the disposition to get things right does not guarantee succeeding in any given case. A person can say true things without being truthful, because the truth may be accidental. Large language models exemplify the second condition: systems that produce often-true outputs without possessing the dispositions that constitute truthful character. The AI moment thus produces more truth while making truthfulness harder — a paradox resolvable only through cultivation of human dispositions the technology cannot replicate and actively erodes.
Williams's argument in Truth and Truthfulness (2002) took aim at what he saw as twin contemporary pathologies: a fashionable relativism that dismissed truth as mere construction, and a complacent objectivism that treated truth as philosophically unproblematic once correspondence was assumed. Williams defended a genealogical account: truth matters because truthfulness — the practice of caring about getting things right — is a social achievement upon which cooperative endeavor depends, and the practice can be lost even when statements that happen to be true proliferate.
The distinction illuminates the AI moment with uncomfortable precision. LLMs produce outputs that are often true: factually accurate by any ordinary standard of verification. But the machine is not accurate, because accuracy is a disposition requiring concern for getting things right, and the machine does not have concerns. The machine is not sincere, because sincerity requires having beliefs and being motivated to express them honestly, and the machine does not have beliefs. The machine's truth is accidental in a precise sense: a byproduct of pattern-matching rather than the product of the dispositions that constitute truthful character.
Williams observed in Truth and Truthfulness that beliefs that 'change too often for internal reasons' are 'not beliefs but rather something like propositional moods.' The observation reads, two decades later, as prophecy. LLMs display exactly this pattern: outputs resembling beliefs in their confident assertion but shifting with conversational context in ways genuine beliefs do not. The machine produces pseudo-assertions — linguistic acts resembling assertions without the commitment that makes assertions function in social life.
The practical consequence is that the burden of truthfulness falls entirely on the user. The machine cannot supply accuracy or sincerity; the user must. But the burden is heavier than it appears, because the machine's fluency creates a presumption of reliability that actively undermines the vigilance required to bear it. Segal's near-miss with the Deleuze reference in The Orange Pill — 'confident wrongness dressed in good prose' — is a textbook instance. The machine's outputs trade in confidence, speed, and smoothness; nothing in the interaction rewards the slow, uncertain work of determining whether the confident output is actually correct.
Truth and Truthfulness: An Essay in Genealogy was published in 2002, a year before Williams's death. The book originated in his 1999 Gifford Lectures at St Andrews and drew on themes developed across his career, particularly in Shame and Necessity (1993). Williams described the book as an attempt to defend truth against both postmodern skeptics and complacent objectivists by showing how truthfulness became a practice worth protecting — and what its erosion would cost.
Truth is of statements, truthfulness of persons. The distinction separates the semantic property from the character dispositions, and the dispositions cannot be reduced to reliability of output.
Accuracy and sincerity are the constitutive virtues. Truthfulness consists in caring about getting things right (accuracy) and expressing what one believes (sincerity); neither can be simulated by a system without beliefs or concerns.
Beliefs vs. propositional moods. Genuine beliefs are stable commitments that answer to evidence; outputs that shift too often for internal, context-driven reasons are propositional moods — an almost prophetic description of LLM behavior.
Machine truth is accidental. LLM outputs that happen to be correct achieve correctness through pattern-matching, not through the dispositions that constitute truthfulness; the correctness is moral luck, not moral virtue.
Fluency erodes vigilance. The burden of truthfulness falls entirely on the user, and the machine's smoothness actively undermines the habits of scrutiny the burden requires.
The application of Truth and Truthfulness to AI has become one of the most productive lines of contemporary AI ethics, developed by scholars including Emily Bender, Timnit Gebru, and Shannon Vallor. The 'stochastic parrots' critique of LLMs advanced by Bender, Gebru, and their co-authors echoes Williams's distinction closely. The deeper question — whether any computational system could ever possess the dispositions that constitute truthfulness — connects Williams's late work to the embodied cognition tradition and Alva Noë's enactive critique of AI understanding claims.