This page lists every Orange Pill Wiki entry hyperlinked from James March — On AI: 30 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry. Within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The Berkeley researchers' prescription for the AI-augmented workplace — structured pauses, sequenced workflows, protected human-only time, behavioral training alongside technical training — the operational counterpart to Maslach's fix-the-…
March's argument that ambiguity — not knowing what the question is — enables exploration, and that its premature resolution by AI forecloses the interpretive alternatives from which genuine organizational novelty emerges.
The four structural principles March's framework prescribes for maintaining the exploration-exploitation balance in AI-augmented organizations: protection of slack, preservation of experiential diversity, institutional tolerance for failure…
March's foundational 1991 distinction between the refinement of existing capabilities and the search for new ones — two activities that compete for the same finite resources, with the competition rigged in favor of exploitation.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass it.
The layered, embodied form of knowledge that accumulates in a practitioner through years of focal engagement with her material — too slow to notice day-to-day, too deep to transmit by documentation, and invisible to every metric the device …
Keats's name for the capacity to dwell in uncertainty without irritable reaching after resolution — the faculty built through endured impasses and atrophied by rapid AI-mediated synthesis.
The inevitable complement to organizational learning — the decay of unpracticed routines that erases institutional knowledge invisibly, now accelerated by AI's elimination of the instructive experiences from which the most consequential learning…
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
Edo Segal's name for the simultaneous experience of exhilaration and fragility that accompanies the orange pill moment — grounded by Prigogine's physics in the structural properties of far-from-equilibrium systems.
March's four-decade engagement with Cervantes' knight as a model for leadership under irreducible ambiguity — the figure who acts with total commitment in a world he does not fully understand, whose persistence is the only form of integrity…
Michael Polanyi's 1966 insight that we know more than we can tell — refined by Collins into a taxonomy of three species that has become the decisive framework for understanding what AI systems can and cannot absorb from human practice.
The Berkeley researchers' term for the colonization of previously protected temporal spaces by AI-accelerated work — the mechanism through which the recovery windows of pre-AI workflows disappear.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
The mechanism by which organizations die of competence — getting so good at what they already do that they cannot discover what they should do next. March and Levinthal's most cited diagnostic concept, now operating at AI-era scale.
The Cohen-March-Olsen 1972 model of organizational choice as the collision of four loosely coupled streams — problems, solutions, participants, and choice opportunities — producing decisions that are artifacts of temporal coincidence rather…
March and Levinthal's 1993 diagnosis of the three structural biases — temporal, spatial, and failure-averse — that make learning systems favor the near, the certain, and the measurable over the distant, the uncertain, and the meaningful.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
Edo Segal's twenty-fold multiplier from Trivandrum — received by the culture with the reverence a quantitative civilization reserves for quantitative claims, and the archetypal thin description of a transformation whose meaning lives elsewhere.
The mechanism through which AI adoption actually occurs in organizations — not through strategic decision but through the accumulated residue of individually rational local adaptations that no one tracks, authorizes, or evaluates systemically.
March's 1971 concept of the organizational and individual practices that enable action without prior justification — the necessary complement to the technology of reason, and the capacity AI most dramatically threatens.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.
The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.
The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill.