This page lists every Orange Pill Wiki entry hyperlinked from Mary Gentile — On AI. 23 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The ethical framework that emerges when moral status is grounded in integrated information — a framework that demands measurement before moral judgment, and that reveals both the emptiness of current AI and the stakes of future architectures.
Gentile's moral distinction between silence produced by individual failure and silence produced by institutional failure — a distinction without which the analysis of ethical silence collapses into unproductive blame.
The rhetorical skill of expressing genuine ethical conviction in terms the organizational culture can hear — neither manipulation nor capitulation but the recognition that communication is a two-party act.
The thesis that lower-order cognitive friction was not only a metabolic cost — it was simultaneously a daily exercise regimen for the prefrontal circuits that support general-purpose executive function across every domain.
Mary Gentile's practice-based ethics curriculum — the methodology that replaced moral reasoning with moral performance through scripts, rehearsal, and peer coordination.
The organizational condition in which voice, once produced, is channeled into decision-making processes that can act on it — the necessary complement to psychological safety, without which safe speech becomes futile speech.
Gentile's name for the family of familiar arguments that function not as lies but as pre-authorized excuses for silence — the scripts of conformity that ethical voice must learn to counter-script.
Edmondson's foundational construct — the shared belief that a team is safe for interpersonal risk-taking — and the single strongest predictor of whether AI adoption produces learning or concealment.
The specific words, rehearsed in advance, that carry ethical conviction across the narrow window of organizational decision — the operational unit of Gentile's methodology.
The narrowing of the window between ethical recognition and ethical consequence — the condition that converts unprepared voice from inefficient to impossible.
The self-reinforcing belief that one is alone in one's ethical concern — the mechanism by which pluralistic ignorance converts widely shared private doubt into visibly unanimous organizational consensus.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
Gentile's foundational empirical finding: in the vast majority of professional ethical failures, the people involved knew what was right. The barrier is performance, not awareness.
The specific behavioral configuration — compulsive AI-augmented engagement experienced as exhilaration from within and pathology from without — produced by a reinforcing loop without a balancing counterpart.
Gentile's continuity doctrine: ethical voice is not a body of knowledge to be acquired but a practice to be maintained — a skill that degrades without use and must be continuously adapted to evolving conditions.
The Orange Pill's figure for those who hold the exhilaration and the loss simultaneously — recognized here as an intuitive formulation of Heideggerian Gelassenheit.
The strongest objection to Gentile's framework: individual voice cannot overcome systemic incentives that reward the behaviors it addresses — and the response that voice is how structures eventually change.
The empirical reversal: organizations that attend to values during building do not innovate more slowly but more durably — the product built with ethical attention lasts longer than the one built without it.
The Gentile proposition that ethical voice is a skill like surgery or music — teachable, improvable, and reliable only through rehearsal — rather than a character trait possessed by the heroic few.
Gentile and Krasniansky's 2020 Darden case study on Northpointe's criminal-justice risk algorithm — the canonical GVV exercise for AI ethics, asking not whether bias existed but what the engineer who saw it should have said.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
The skilled textile workers whose 1811–1816 destruction of wide stocking frames became the founding Luddite event — and whose ontological error, Ellul's framework suggests, was believing they faced a technology when they faced a logic.