This page lists every Orange Pill Wiki entry hyperlinked from Charles Perrow — On AI. 45 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Groys's diagnosis of the dominant cultural aesthetic of the AI age — a logic that eliminates friction, conceals construction, and trains viewers to mistake the polished surface for the thing itself.
Byung-Chul Han's diagnosis, engaged in both The Orange Pill and this book, of the cultural trajectory toward frictionlessness that conceals the labor, struggle, and developmental process that gave work its depth.
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The engineering term for the condition in which a single cause defeats multiple redundant systems simultaneously — and the precise structural description of what happens when one mind operates across domains previously covered by independe…
The Orange Pill's claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The progressive decay of the capacity for sustained, unaided concentration that occurs when practitioners rely continuously on AI assistance — incremental, imperceptible, and grounded in the neuroscience of synaptic pruning.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass i…
The class of organizations — nuclear submarines, aircraft carriers, air traffic control — that operate safely in Perrow's dangerous quadrant through specific organizational disciplines: preoccupation with failure, reluctance to simplify, sens…
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
Perrow's term for systems whose components interact through pathways designers did not anticipate — nonlinear, networked, often invisible — producing failure combinations that exceed any safety analysis that could have been conducted.
Errors embedded in a system but not yet manifested as visible problems — dormant until the specific conditions that trigger them arise, and accumulating invisibly in frictionless AI-augmented workflows.
Perrow's governing injunction: the response to systems whose architecture guarantees catastrophic failure is not prevention (impossible) but survival — designing for graceful failure, not flawless operation, and accepting that the measure…
The architectural prescription — drawn from Perrow's later work and extended by AI safety researchers — that systems designed as loosely coupled modules with limited interaction pathways absorb failures that tightly integrated systems tran…
Perrow's foundational thesis that certain systems, by virtue of their architecture, produce catastrophic failures that cannot be prevented by better operators or better design — failures that are features of the system, not deviations from…
Diane Vaughan's concept for the gradual organizational acceptance of departures from design parameters as the new normal — the slow drift that produces the appearance of safety right up until the accident.
The Orange Pill's term for compulsive engagement with generative tools — re-specified by the Skinner volume not as metaphor but as the precise behavioral signature of a continuous reinforcement schedule without an extinction point.
The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.
The safety-engineering doctrine — central to Perrow's framework — that systems operating in the dangerous quadrant require diverse independent backups, and that the defense against common-mode failure is independence, not duplication.
The alternative to anticipation — deploy, observe, adapt, correct — that Wildavsky defended as the only governance strategy that historically produces safety rather than merely claiming to.
The mechanism — documented in the Berkeley study of AI workplace adoption — by which AI-accelerated work colonizes previously protected temporal spaces, converting every pause into an opportunity for productive engagement.
Edmondson's term for the dynamic activity of teamwork across boundaries — collaboration as verb rather than noun, and the organizational capability the AI transition most demands.
The Orange Pill's metaphor for the institutional work of redirecting the river of AI capability — not to stop the current but to shape what grows around it.
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.
Perrow's two-by-two diagnostic instrument classifying systems by interactive complexity and coupling tightness — and identifying the upper-right quadrant, where both conditions hold, as the zone where normal accidents become statistically …
The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The structural contradiction identified in aviation safety research: the automation that improves normal performance degrades emergency performance, because the skills required for intervention are maintained only through the performance a…
The structural observation that efforts to enhance system robustness can render systems more brittle — because safety mechanisms are themselves components whose interactive complexity adds to the system they are designed to protect.
The institutional and cognitive confinement produced by disciplinary specialization — the water of the fishbowl that specialists breathe without seeing, and the structure AI both cracks and reinforces.
The structural inversion of the twenty-fold productivity gain: if a single AI-augmented worker can produce the output of twenty specialists, she can also produce the failures of twenty, concentrated in a single cognitive bottleneck.
Perrow's term for systems in which processes are time-dependent, invariant in sequence, and admit no slack — so that when disruption occurs, it propagates at the speed of the process itself, outrunning the cognition required to intervene.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Perrow's 2007 extension of normal accident theory to critical infrastructure and organizational concentration — arguing that structural approaches (modularity, decentralization) outperform procedural fixes in systems where catastrophe is a…
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
American sociologist (1925–2019) at Yale whose four decades of research on how complex organizations fail produced Normal Accident Theory — the single most influential framework for understanding catastrophic failure in high-risk systems.
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.
French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.
The 1986 Soviet nuclear catastrophe — caused not by the reactor but by the safety test intended to verify the reactor's protective mechanisms — and the paradigmatic example of the safety system itself as the risk.

The moment in The Orange Pill's drafting when Claude produced a fluent philosophical connection between Csikszentmihalyi's flow state and Deleuze's concept of smooth space — eloquent, structurally elegant, and wrong — caught the next m…
The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange…
The March 1979 partial meltdown of Unit 2 at the Pennsylvania nuclear plant — the founding case study of Normal Accident Theory and the event that transformed Charles Perrow from organizational sociologist into risk theorist.