This page lists every Orange Pill Wiki entry hyperlinked from Matthew Crawford — On AI. 22 entries total. Each is a deep dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open an entry; within entries, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Crawford's distinction between authoring a work and directing its construction — two forms of agency that produce categorically different experiences and categorically different practitioners.
Crawford's term for the progressive replacement of human judgment by automated systems in decisions affecting the public — a form of power that operates through opacity rather than coercion.
The study of how AI-saturated environments shape the minds that live inside them — the framework for asking what becomes of judgment, curiosity, and the capacity for sustained attention when answers become abundant and friction is engineered away.
The geological accumulation of knowledge deposited through struggle — the kind that lets a senior engineer feel a codebase the way a physician feels a pulse, and the kind smooth interfaces quietly prevent from forming.
Goods contingently attached to a practice but not constitutive of it — money, prestige, power, status — the goods that markets can measure and that AI systems can amplify.
Crawford's application of MacIntyre's distinction to AI-mediated work — the goods produced by genuine practice vs. the commodities markets reward.
Goods recognizable only through participation in the practice that produces them — the elegance of a well-designed system, the diagnostic intuition of a physician, the taste that distinguishes excellence from mere competence.
A coherent and complex form of socially established cooperative human activity through which internal goods are realized — the conceptual pivot of MacIntyre's ethics and the unit of analysis for understanding what AI threatens.
Crawford's name for the metaphysical assumption that every particular thing can be substituted by its standardized double — a worldview the AI age makes both more pervasive and more consequential.
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
Frederick Winslow Taylor's 1911 framework for the systematic transfer of productive knowledge from workers to management — the intellectual template that Noble showed numerical control completed in the machine shop and that AI now completes.
Crawford's name for the specific cognitive and moral formation that occurs through sustained submission to material reality that refuses to flatter the practitioner.
Crawford's name for the integrated physical, temporal, and social environment in which craft work produces focused attention through material demand — the counterpoint to the screen-based attention ecology.
Crawford's foundational concept for reality's refusal to be fooled — the material judge whose verdict is independent of the practitioner's intentions, credentials, or rhetoric.
Crawford's 2024 essay arguing that outsourcing cognitive work to AI is voluntary self-absence from the tasks through which identity is formed and expressed.
Crawford's 2021 Senate testimony naming algorithmic governance as a new priesthood — concentrating power in those who mediate between the public and algorithmic processes the public cannot inspect.
Crawford's 2025 essay extending Marx's analysis of industrial capitalism to cognitive labor — arguing that AI concentrates cognitive power in the corporations that own the infrastructure.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Crawford's 2023 lecture identifying the tacit ideology that legitimizes replacing human judgment with automated systems through four premises about human inadequacy.