This page lists every Orange Pill Wiki entry hyperlinked from Larry Laudan — On AI. 32 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
Problems a tradition's own commitments predict should not exist — more diagnostic of degeneration than ordinary unsolved problems, because their existence indicates that something in the framework's core is wrong.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The Gramscian-Hanian condition in which the subject exploits herself and calls it freedom — the overseer's function having been transferred from the factory floor to the interior of the self through decades of hegemonic cultural work.
Laudan's category for internal tensions within a theoretical framework — contradictions the tradition's own commitments generate and cannot resolve without modification — distinguished from external empirical questions.
The progressive decay of the capacity for sustained, unaided concentration that occurs when practitioners rely continuously on AI assistance — incremental, imperceptible, and grounded in the neuroscience of synaptic pruning.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The operational framework for rational AI adoption that emerges from Laudan's analysis: conditional commitment, acknowledgment of residual problems, continuous evaluation, and distributed epistemic responsibility — not a position but a practice.
Laudan's operative distinction: a tradition is progressive when it expands to address anomalies while preserving its problem-solving capacity, degenerative when it contracts to exclude them or dismisses them as non-problems.
Laudan's flexible replacement for Kuhn's paradigms — general frameworks that identify problems, specify standards of evaluation, and compete through comparative problem-solving effectiveness rather than appeal to fixed neutral ground.
Problems the new tradition cannot solve within its own commitments because solving them would require maintaining the very practices the new tradition exists to replace — losses structurally unpreservable, not merely unaddressed.
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
The developmental event — paradigmatically the twelve-year-old's 'What am I for?' — that marks the emergence of philosophic understanding and requires educational engagement that supports rather than closes the question.
The point where Laudan's framework reaches its boundary: the twelve-year-old's "What am I for?" is not a problem in the technical sense — it has no solutions, only responses — and its exploration requires practices the problem-solving mode…
The research tradition in the AI discourse organized around depth preservation — measuring progress by the maintenance of craft, embodied knowledge, and the formative friction of struggle, and identifying AI as a threat to the conditions …
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effects.
The structural feature of research traditions by which their enabling assumptions become invisible to their practitioners — the water the fish cannot see — requiring deliberate effort to make visible what allows the tradition to function a…
Laudan's paradigm conceptual problem of the AI transition: flow states and auto-exploitation are behaviorally indistinguishable, their competing theoretical frameworks make opposed predictions, and no empirical observation currently differentiates them.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Laudan's operational replacement for truth-based evaluation: a theory or tradition is progressive when it solves more problems and generates fewer anomalies than its competitors, assessed comparatively and revised continuously.
The specific behavioral configuration — compulsive AI-augmented engagement experienced as exhilaration from within and pathology from without — produced by a reinforcing loop without a balancing counterpart.
Edo Segal's name for the vast majority experiencing the full emotional complexity of the AI transition without a clean narrative to organize it — most accurate in perception, least audible in discourse.
A structural bias in AI-era inquiry: the asymmetry between producing output (fast, cheap) and evaluating it (slow, expensive) tilts every interaction toward acceptance and systematically erodes the capacity for independent judgment.
The research tradition in the AI discourse organized around capability expansion and democratization — measuring progress by productivity gains, adoption speed, and the compression of the imagination-to-artifact ratio.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
Hungarian-American psychologist (1934–2021), father of flow theory, Nakamura's mentor and collaborator across four decades, whose foundational mapping of the peak experience provided the framework Nakamura extended into vital engagement.
American philosopher and historian of science (1922–1996), author of The Structure of Scientific Revolutions (1962), whose account of paradigms and incommensurability set the intellectual problem Laudan's career was constructed to solve.