This page lists every Orange Pill Wiki entry hyperlinked from Aaron Antonovsky — On AI. 26 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis of the cultural trajectory toward frictionlessness — a smoothness that conceals the labor and struggle that gave previous work its depth.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The principle — defended by Wiener at considerable personal cost — that the creators of powerful systems bear moral responsibility for what those systems do after deployment, and that the claim of value-neutral research is a fiction that tr…
The pattern by which AI tools lower the floor of who can build — enabling production by individuals whose stock consists of an idea, a subscription, and the capacity for sustained attention.
The salutogenic critique of frictionless interfaces: when the user need not understand the system to operate it, comprehensibility is actively undermined and the loss is masked by surface ease.
A category of risk whose realization would either annihilate humanity or permanently and drastically curtail its potential. AI joined this category in mainstream academic usage in 2014.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The biological, material, cognitive, emotional, social, and cultural assets — economic security, social support, institutional anchoring, mentoring — that enable effective coping with stressors and that AI tools simultaneously provide and e…
Antonovsky's replacement for the binary of health-versus-disease: a continuum along which individuals move constantly, and on which AI-mediated work simultaneously pulls workers in both directions.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
The gap between what a person can conceive and what they can produce — a distance that has been shrinking since the Neolithic and that the language model reduced to approximately the length of a conversation.
A second-order Sense of Coherence that includes the recognition that fishbowls crack — that coherence itself must be rebuilt periodically as the conditions that supported it shift.
The extension of Antonovsky's framework to teams and institutions: collective comprehensibility, manageability, and meaningfulness as more predictive of outcomes in AI-augmented work than individual resilience alone.
The peculiar pathology of AI-augmented work: compulsive engagement with a tool that is genuinely producing valuable output — a condition for which existing therapeutic vocabularies have no good name.
The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
Antonovsky's reframing of health science: instead of asking "What makes people sick?", ask "What keeps people healthy?" — the shift that reorients the entire AI discourse from harm-prevention to flourishing-promotion.
Antonovsky's central construct: a person's enduring perception that life's stimuli are comprehensible, manageable, and meaningful — the dispositional orientation that predicts who navigates AI-mediated work toward flourishing rather than b…
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The Orange Pill's metaphor for the institutional work of redirecting the river of AI capability — not to stop the current but to shape what grows around it.
The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
Small teams whose purpose is to decide what should be built rather than to build it — an organizational form, drawn from The Orange Pill, that locates human value in judgment and exemplifies organizational salutogenesis.
The Orange Pill's reframing of the central AI question: not whether AI is dangerous or wonderful, but whether you are worth amplifying — and at the institutional level, whether we are building the conditions that make worthiness possible f…