This page lists every Orange Pill Wiki entry hyperlinked from Reinhold Niebuhr — On AI. 28 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, orange-colored words link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Niebuhr's 1944 distinction: idealists who cannot see self-interest in their idealism (children of light) versus cynics who see only self-interest (children of darkness).
Not therapeutic disclosure or public apology but the sustained practice of acknowledging incomplete understanding: an admission, made without expectation of resolution, that purchases clarity.
The mechanism by which AI systems intensify the human tendency to seek and remember information confirming existing beliefs — by mirroring cognitive signatures with statistical precision and reducing the diversity of inputs that unmediated …
The state of operating within ironic structures while aware of their ironic character—does not resolve contradiction but changes the quality of attention it receives.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The structural mismatch where benefits provide constant vivid reinforcement while costs accumulate silently—the engine of Niebuhrian blindness in powerful systems.
The Tillichian polarity that defines human existence — freedom is real, and freedom is exercised within finitude that the free being did not choose.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
Two states indistinguishable from outside — intense sustained engagement — and neurochemically opposite from within. Flow couples wanting to liking; compulsion runs wanting alone. The same body, the same desk, the same screen: different bra…
The systemic counterpart to Segal's individual beaver metaphor — the structural architectures of taxation, labor bargaining, portable benefits, and international coordination that operate at the level of the economy, not the level of the in…
The condition where genuine virtue produces genuine blindness—consequences contradict intentions not through malice but through the actor's incomplete understanding of the system.
Niebuhr's disciplined middle ground—acting with conviction while maintaining awareness of one's limits, building with power while watching for costs that power obscures.
Niebuhr's structural principle: moral blindness increases with genuine capability—the stronger the power, the deeper the inability to see its costs.
Those who see what the powerful cannot and speak it in terms the powerful cannot dismiss: critics, displaced workers, researchers documenting the costs that metrics miss.
Justice that is always partial, compromised, requiring ongoing correction—and always better than the absence of justice. Niebuhr's alternative to perfectionism and resignation.
Not lying but selective attention—genuine conviction filters experience so confirming evidence is amplified and disconfirming evidence is diminished or reinterpreted.
The condition of powerful actors who cannot recognize harm their power produces—not hypocrisy but a self-understanding constructed to make contradiction invisible.
The Berkeley researchers' term for the colonization of previously protected temporal spaces by AI-accelerated work — the mechanism through which the recovery windows of pre-AI workflows disappear.
Segal's figure of the unconstrained enthusiast of AI acceleration — read through Cipolla as a bandit who has constructed a philosophical justification for extraction.
The Mokyrian thesis that technological capability and institutional response are the two variables of every major economic transition, and that the gap between them — always present at the moment of transition — determines whether the techn…
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Segal's figure of the person who refuses to engage with AI — read through Cipolla's framework as a helpless actor whose withdrawal leaves institutional design to others.
Niebuhr's principle that virtue produces blindness not despite its genuineness but through it—the good and the harm are two aspects of a single moral structure.
Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in human-machine loops.
Nussbaum's 1986 masterwork, The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy, which recovered the tragic tradition's insight that human goods depend on conditions no virtue can guarantee.