This page lists every Orange Pill Wiki entry hyperlinked from Lisanne Bainbridge — On AI. 20 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
Bainbridge's prescriptive principle that automated systems should be designed around the conditions required for successful human intervention in rare events — not optimized solely for normal operation, with the human treated as an afterthought.
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the explore side of the ledger unmeasured.
Lisanne Bainbridge's 1983 insight that automation does not simply remove the human from a task — it transforms the human's role into monitoring, which humans do badly.
The moment in an automated system's operation when control returns to the human — who must take over with skills degraded by the very automation that is now failing, under conditions of surprise and time pressure that foreclose recovery.
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
The empirical finding, central to Bainbridge's framework, that manual and cognitive skills deteriorate when not exercised — and that automation systematically removes exactly the exercises through which expertise is maintained.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
Consciousness as a small flame in an infinite darkness — fragile, improbable, illuminating only a few inches beyond itself, and burning as the founding act of revolt.
Bainbridge's structural insight that the ironies of automation are not independent problems but a compounding system — skill decay worsens the rare event problem, which worsens the monitoring paradox, which worsens the training problem, each irony amplifying the rest.
The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effects of the division of labor.
Nakamura's empirical finding that the transmission of standards — not knowledge, not technique — is the single most important function the mentor provides, and the function AI most thoroughly fails to replicate.
Bainbridge's foundational observation that monitoring is cognitively more demanding than performing — the human attention system degrades over time when the monitored process is reliable, because sustained vigilance without engagement is a state humans cannot maintain.
The structural feature of automated systems — identified by Bainbridge — by which the situations requiring human intervention are, by definition, the ones the operator has had least opportunity to practice, producing a mismatch between when skill is most needed and when it is most available.
Edo Segal's name for the vast majority experiencing the full emotional complexity of the AI transition without a clean narrative to organize it — most accurate in perception, least audible in discourse.
Bainbridge's diagnosis of the structural impossibility of training operators for exceptional situations by exposing them only to routine ones — a mismatch that renders conventional training inadequate for the very scenarios training is supposed to prepare operators for.