This page lists every Orange Pill Wiki entry hyperlinked from Karl Popper — On AI. 34 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, orange-colored words link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The Berkeley researchers' prescription for the AI-augmented workplace — structured pauses, sequenced workflows, protected human-only time, behavioral training alongside technical training — the operational counterpart to Maslach's fix-the-…
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
Segal's phrase for the default condition of any system — biological or computational — that generates claims without testing them; in the AI age, the default state of fluent outputs produced without refutation, and, in Popper's framework,…
Popper's account of how knowledge actually grows — not by gradual accumulation of confirmed facts but by the rhythm of bold hypothesis and severe test, in which neither half works without the other.
The philosophical attitude — more than a doctrine — that holds all beliefs tentatively, subjects them to the severest possible criticism, and treats the willingness to discover one has been wrong as the central intellectual virtue.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The progressive erosion of implementation-level skill that occurs when AI handles the daily work that once built the skill — the software-engineering instance of the general pattern the diagnostic gap names, and the specific mechanism by wh…
Popper's criterion for genuine knowledge — a claim earns scientific status not by the evidence that confirms it but by specifying the conditions under which it would be false, and surviving attempts to produce those conditions.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass i…
The doctrine Popper spent his career demolishing — that history follows discoverable laws from which the future can be predicted and society redesigned — and the conceptual trap the AI river metaphor risks reviving.
The AI-era practice of reading generated output against the grain — treating it as a hypothesis requiring verification rather than a finished product requiring consumption.
Popper's footnote-turned-famous argument that unlimited tolerance destroys tolerance — that the preservation of open values requires a specific, bounded intolerance of forces that would eliminate openness itself.
Popper's alternative to utopian social engineering — small, specific, testable interventions addressing defined problems, each subject to revision or abandonment when outcomes fail to match intention.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The Berkeley researchers' term for the colonization of previously protected temporal spaces by AI-accelerated work — the mechanism through which the recovery windows of pre-AI workflows disappear.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The structural concern that AI's elimination of implementation struggle bypasses the apprenticeship through which the next generation of polymorphic expertise would traditionally form — with the consequence that judgment may thin across g…
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
Popper's foundational question — how to distinguish genuine science from pseudoscience — now reapplied to the novel problem of distinguishing tested knowledge from plausible fabrication in the output of systems that produce both with equal…
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Popper's name for a society whose institutions protect the right to question, criticize, and revise — not because it possesses better truths but because it has a better relationship with truth.
The specific threat AI poses to the open society — not coercive ideology but architectural confidence, the systematic production of fluent claims that look like tested knowledge and have never been tested at all.
The structural paradox at the heart of AI-assisted work — checking the tool's output requires the expertise the tool was supposed to replace, creating a loop in which the tool's utility and the user's evaluative capacity exist in inverse p…
The 2025 Stanford system that operationalizes Popperian falsification in AI — language model agents designing and executing falsification experiments for free-form hypotheses with strict Type-I error control.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Popper's 1934 masterwork establishing falsifiability as the criterion of genuine science — the book that reshaped twentieth-century philosophy of science and whose central insight now provides the missing framework for reading AI output.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
British quantum physicist and philosopher (b. 1953), Popper's most prominent contemporary heir on AI — the scholar who argues that current AI systems are not genuinely intelligent because they lack the capacity for criticism that would co…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
Austrian-British philosopher (1902–1994) whose falsification criterion reshaped the philosophy of science, whose defense of the open society became foundational to postwar liberal theory, and whose critical rationalism provides the missing …