This page lists every Orange Pill Wiki entry hyperlinked from Lorraine Daston — On AI: 11 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Daston's characterization of AI as our first encounter with an alien form of intelligence — an entity that operates outside the social, moral, and institutional frameworks within which human knowledge has always been produced and evaluated.
The three-element institutional infrastructure — feedback mechanisms, professional standards, and educational programs — that Daston's framework identifies as necessary for calibrating trust in any knowledge-producing technology.
The structural difficulty of developing accurate user models of a knowledge technology's reliability — and the specific way AI impairs both conditions (error detection and error identification) on which calibration historically depends.
The central diagnostic concept of Daston's AI volume: a surface property of a knowledge technology's output that activates learned trust heuristics beyond the epistemic warrant the underlying process provides.
The structural diagnostic of the AI transition: the breaking of the centuries-long correlation between surface prose fluency and deep domain expertise that had made fluency a reliable proxy for authority throughout the history of literate…
Daston's foundational concept for the system of affect, trust, and obligation that governs knowledge-producing communities — the social infrastructure without which scientific claims cannot be reliably evaluated.
The load-bearing diagnostic claim of Daston's AI volume: that the specific sequence of overconfidence, costly errors, and eventual institutional calibration has repeated across five centuries of knowledge technologies with sufficient regu…
The governance regime change in which the accumulated textual, visual, and computational output of millions of individuals was appropriated for AI training under terms their original contribution did not contemplate — the paradigmatic case …
Daston's 2022 distinction between rules accompanied by copious contextual judgment (thick) and rigid, mechanically executable specifications (thin) — and the historical observation that thin rules always depend on thick rules cleaning up a…
Daston and Galison's name for the pre-photographic regime of scientific representation — in which illustrators synthesized ideal forms from accidental specimens, and the image's authority derived from the trained judgment of the expert han…