This page lists every Orange Pill Wiki entry hyperlinked from Donella Meadows — On AI. 29 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The progressive shortening of the interval between a technology's introduction and its saturation — from seventy-five years for the telephone to two months for ChatGPT — and the corresponding collapse of the adaptive window.
The specific depletion produced by sustained emotional labor under conditions of inadequate replenishment — Hochschild's framework reveals AI's new division of feeling as a burnout machine.
The finite human resource base — cognitive, emotional, relational — on which the AI ecosystem depends, and which its reinforcing loops are consuming faster than it regenerates.
The specific balancing mechanisms — protected time, institutional limits, cultural norms valuing depth — that serve as thermostats in an AI ecosystem lacking structural self-correction.
The landscape produced when practitioners use the same tools, follow the same patterns, and converge on the model's mean — efficient, homogeneous, and structurally incapable of the breakthrough that diversity would produce.
The imperceptible ratcheting-down of standards that occurs when AI output becomes the new reference point for what counts as acceptable work.
The measurable state requiring the simultaneous presence of emotional, psychological, and social well-being — the empirical target that distinguishes genuine wellness from mere functionality.
The replacement of productivity with human flourishing as the AI system's optimization target — a goal-level leverage point that reorganizes everything beneath it in the hierarchy.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The third trajectory — alongside collapse and oscillation — available to systems pressing against limits: a smooth adjustment to sustainable equilibrium, requiring three conditions Meadows specified with precision.
The trajectory of a system whose growth exceeds its carrying capacity without adequate balancing feedback — the pattern Meadows identified as the default outcome absent deliberate intervention.
The highest leverage point in any system — the invisible architecture of shared assumptions that organizes everything beneath it, and the level at which the AI transition must ultimately be addressed.
A system's capacity to absorb disturbance and reorganize while retaining essential function — distinct from toughness, and systematically eroded by the AI efficiency drive.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The recurring structural configurations — escalation, drift to low performance, success-to-the-successful, rule-beating — that convert rational individual choices into collectively self-defeating outcomes.
The arms-race structure in which each worker's response to AI-enabled productivity raises the baseline, forcing further intensification in a cycle that drives toward carrying-capacity exhaustion.
The four-node self-amplifying cycle — capability, adoption, competitive pressure, intensification — that drives the AI ecosystem's acceleration without a balancing counterpart.
The shared set of conditions — deep expertise, sustained attention, original questioning, cognitive diversity — on which the long-term value of all knowledge work depends, and which the AI ecosystem is depleting through commons dynamics.
Meadows's 1999 ranking of twelve places to intervene in a system — from the weakest parameter adjustments at the bottom to the paradigm shifts at the top.
The structural absence at the heart of the AI ecosystem — the corrective feedback mechanisms that should detect overshoot and apply restorative force, but do not exist at scale.
The specific behavioral configuration — compulsive AI-augmented engagement experienced as exhilaration from within and pathology from without — produced by a reinforcing loop without a balancing counterpart.
Segal's term for the population holding contradictory truths about AI in paralyzed equilibrium — reread by Mouffe's framework as the characteristic subject-position of the post-political condition.
Garrett Hardin's 1968 parable holding that shared resources face inevitable destruction through rational self-interest — the framework Ostrom spent four decades empirically dismantling, and the intellectual default that continues to structure the A…
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Donella Meadows and John Robinson's 1985 investigation of computer models and social decisions — an analysis that reads, forty years later, as prescient commentary on large language models.
Donella Meadows's 1972 MIT study — commissioned by the Club of Rome — that used World3 computer modeling to demonstrate the structural dynamics of exponential growth pressing against finite constraints.