How Do I Know That? — Orange Pill Wiki
CONCEPT

How Do I Know That?

Tufte's three-question discipline — How do I know that? How do you know that? How do they know that? — applied to every claim, every output, every display in the age of AI-generated evidence.

Tufte has ended his Microsoft Machine Learning Summit keynote, like many of his talks, with three questions he asks in every context where evidence informs decisions: How do I know that? How do you know that? How do they know that? Three sentences, three applications of the lie-factor principle extended into epistemic hygiene. The questions are beautiful in their economy — no unnecessary words, no decoration, no evasion. Just the demand: show me the evidence. Show me the evidence that this output is trustworthy. Show me the evidence that this code is sound. Show me the evidence that this conclusion follows from these premises.

Tufte's July 2025 response to a physician's post about Microsoft's medical AI demonstrated the discipline in practice. He asked whether other datasets had been examined but left unpublished, noted that the accompanying graphic required memorizing a complex color code, and invoked the observation by former editors of the New England Journal of Medicine and The Lancet that half of published research papers are false. Three sentences; the lie factor identified in the time it takes to compose a tweet.

In the AI Story

[Hedcut illustration: How Do I Know That?]

The three questions form a disciplined sequence. How do I know that? — applied to the claims I am about to make, checking my own epistemic warrant before speaking. How do you know that? — applied to the claims I am receiving, checking the evidence behind what I am being told. How do they know that? — applied to the claims made by parties cited or referenced, checking the chain of evidence that supposedly supports the intermediate source.

Each question applies with specific force to AI-generated output. The builder who receives a fluent paragraph citing philosophical concepts, statistical findings, or technical claims should ask all three. How does the AI know what it is asserting? (Short answer: it does not; it pattern-matches against training data.) How does the builder know the AI is reliable on this particular topic? (Short answer: only through independent verification.) How would the reader of the AI-assisted output know to trust the chain? (Short answer: only if the chain of evidence is visible and traceable.)

The questions function as a baloney detection kit in Sagan's sense, applied specifically to the characteristic failures of AI-era evidence presentation. Where Sagan's kit addresses pseudoscience, astrology, and psychic claims, Tufte's questions address the more insidious failure of confident AI output whose surface signals reliability while its substance has not been checked.

The discipline is not passive skepticism. It is the active cultivation of the reflex to ask, before accepting any claim, what evidence supports it. Tufte has argued that this reflex is the core competency of inferential literacy — the cognitive skill the age of AI most urgently requires. Not the ability to use the tools; that is rapidly becoming trivial. The ability to evaluate what the tools produce at the speed they produce it.

Origin

Tufte has used versions of the three questions throughout his teaching and public appearances for decades. Their most explicit articulation appears in his keynote lectures to the machine-learning community, including his Microsoft Machine Learning Summit keynote and his ChinaVis address, where the three-question formula has become a signature closing.

Key Ideas

Three questions, three sentences. The economy of the form is deliberate — maximum epistemic discipline in minimum verbal space.

Applied reflexively. The first question applies to my own claims, the second to claims I receive, the third to the evidence chain behind intermediate sources.

The discipline is active. Not passive skepticism but the cultivated reflex to ask for evidence before accepting claims.

Specifically diagnostic for AI. The characteristic failure mode of AI output — fluent claims without verified evidence — is exactly what the three questions are designed to catch.

Inferential literacy as core competency. The age of AI demands not better tools but better evaluators, and the three questions are the cognitive instrument those evaluators wield.

Appears in the Orange Pill Cycle

Further reading

  1. Edward Tufte, Beautiful Evidence (Graphics Press, 2006)
  2. Carl Sagan, The Demon-Haunted World (Random House, 1995)
  3. Edward Tufte, The Visual Display of Quantitative Information (Graphics Press, 1983)
  4. Harry Frankfurt, On Bullshit (Princeton, 2005)
  5. Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.