This page lists every Orange Pill Wiki entry hyperlinked from Matthew B. Crawford — On AI. 28 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within an entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Crawford's political-philosophical argument that AI-based governance produces a new form of authority insulated from democratic accountability — algorithmic power that is not required to give an account of itself.
The structural paradox Crawford's framework reveals: the tool's effectiveness depends on judgment built through engagement the tool eliminates — so the tool progressively undermines the conditions for its own effective use.
The dominant framework in cognitive science since the 1950s: the mind is a computer, thinking is computation, and consciousness is the execution of the right program — the position Noë argues is profoundly wrong in its foundations.
The class of evaluations that can be passed by output lacking the understanding it purports to represent — specifications, plausibility judgments, peer reviews by evaluators whose frameworks share the assumptions being tested.
The research tradition — converging from neuroscience, philosophy, and robotics — that mind is not separable from body, and whose empirical maturity over four decades has made the computational theory of mind increasingly hard to defend.
Crawford's diagnostic distinction between understanding earned through sustained engagement with resistant material and output that mimics the surface characteristics of understanding without the embodied foundation.
Goods recognizable only through participation in the practice that produces them — the elegance of a well-designed system, the diagnostic intuition of a physician, the taste that distinguishes excellence from mere competence.
The specific epistemic hazard AI introduces — output optimized to sound right rather than to be right, producing confident simulation of expertise that passes surface evaluation while lacking the foundation to catch its own errors.
A coherent and complex form of socially established cooperative human activity through which internal goods are realized — the conceptual pivot of MacIntyre's ethics and the unit of analysis for understanding what AI threatens.
Crawford's name for the metaphysical assumption that every particular thing can be substituted by its standardized double — a worldview the AI age makes both more pervasive and more consequential.
Taylor's systematic framework for organizing work through observation, measurement, task decomposition, and the separation of planning from execution — the operating system of twentieth-century production, and the unexamined inheritance that…
The empirical finding, central to Bainbridge's framework, that manual and cognitive skills deteriorate when not exercised — and that automation systematically removes exactly the exercises through which expertise is maintained.
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
The structural property of large language models by which the reasoning behind their outputs is not inspectable in the form a human reviewer would need to evaluate it — extending structural secrecy from the organization into the tool itself.
The structural challenge that AI creates by eliminating the bodily engagement through which expertise was historically developed and transmitted between generations.
Crawford's claim that sustained attention to resistant material is not merely a cognitive skill but a moral achievement — and that AI-mediated workflows threaten the conditions under which such attention can be cultivated.
Crawford's foundational concept for reality's refusal to be fooled — the material judge whose verdict is independent of the practitioner's intentions, credentials, or rhetoric.
Crawford's philosophical instrument — the engine as the paradigmatic incorruptible evaluator that reveals whether a diagnosis is genuine by refusing to run when the diagnosis is wrong.
Crawford's name for the existential condition arising when AI occupies the cognitive territory through which practitioners would have developed their identity as competent persons in the world.
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
Crawford's 2024 essay arguing that outsourcing cognitive work to AI is voluntary self-absence from the tasks through which identity is formed and expressed.
Crawford's 2025 essay extending Marx's analysis of industrial capitalism to cognitive labor — arguing that AI concentrates cognitive power in the corporations that own the infrastructure.
Crawford's 2009 book — an autobiographical and philosophical inquiry into the value of manual work — that established him as the leading contemporary philosopher of craft and argued that the knowledge economy systematically undervalues the intellectual…
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
American political scientist (1936–2024), Sterling Professor at Yale, whose work on peasant politics, state power, and resistance produced the single most influential framework for diagnosing the failures of comprehensive planning — and the…