Lisanne Bainbridge — On AI — Wiki Companion

Lisanne Bainbridge — On AI

A reading-companion catalog of the 22 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Lisanne Bainbridge — On AI uses as stepping stones for thinking through the AI revolution.

Click any card to open its entry. Within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words marked with the Wikipedia icon link to Wikipedia.

Concept (20)
Aesthetics of the Smooth
Concept

Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.

Ascending Friction
Concept

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Democratization of Capability (Senian Reading)
Concept

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Designing for the Exception
Concept

Bainbridge's prescriptive principle that automated systems should be designed around the conditions required for successful human intervention in rare events — not optimized solely for normal operation, with the human treated as an afterthought.

Imagination-to-Artifact Ratio
Concept

Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…

Ironies of Automation
Concept

Lisanne Bainbridge's 1983 insight that automation does not simply remove the human from a task — it transforms the human's role into monitoring, which humans do badly.

Manual Reversion
Concept

The moment in an automated system's operation when control returns to the human — who must take over with skills degraded by the very automation that is now failing, under conditions of surprise and time pressure that foreclose recovery.

Productive Addiction
Concept

The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.

Skill Decay Under Automation
Concept

The empirical finding, central to Bainbridge's framework, that manual and cognitive skills deteriorate when not exercised — and that automation systematically removes exactly the exercises through which expertise is maintained.

The Amplifier
Concept

The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.

The Beaver's Dam
Concept

The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.

The Candle in the Dark
Concept

Consciousness as a small flame in an infinite darkness — fragile, improbable, illuminating only a few inches beyond itself, and burning as the founding act of revolt.

The Compounding Loss
Concept

Bainbridge's structural insight that the ironies of automation are not independent problems but a compounding system — skill decay worsens the rare event problem, which worsens the monitoring paradox, which worsens the training problem, each amplifying the others.

The Distribution Problem
Concept

The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.

The Fishbowl
Concept

The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effects of the division of labor.

The Mentor Relationship
Concept

Nakamura's empirical finding that the transmission of standards — not knowledge, not technique — is the single most important function the mentor provides, and the function AI most thoroughly fails to replicate.

The Monitoring Paradox
Concept

Bainbridge's foundational observation that monitoring is cognitively more demanding than performing — the human attention system degrades over time when the monitored process is reliable, because sustained vigilance without engagement is a task humans perform poorly.

The Rare Event Problem
Concept

The structural feature of automated systems — identified by Bainbridge — by which the situations requiring human intervention are, by definition, the ones the operator has had least opportunity to practice, producing a mismatch between when skill is most needed and when it can be exercised.

The Silent Middle
Concept

Edo Segal's name for the vast majority experiencing the full emotional complexity of the AI transition without a clean narrative to organize it — most accurate in perception, least audible in discourse.

The Training Problem Under Automation
Concept

Bainbridge's diagnosis of the structural impossibility of training operators for exceptional situations by exposing them only to routine ones — a mismatch that renders conventional training inadequate for the very scenarios training is supposed to prepare operators for.

Person (1)
Edo Segal
Person

Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.

Event (1)
The Trivandrum Training
Event

The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.