This page lists every Orange Pill Wiki entry hyperlinked from Axel Honneth — On AI. 22 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words marked with the Wikipedia icon link to Wikipedia.
The philosophical problem opened by AI collaboration — whether functionally recognition-like responses from systems lacking subjectivity can provide the social grounding of esteem that genuine recognition requires.
The quality of subjective experience — awareness, the fact that there is something it is like to be a given creature — and the single deepest unanswered question in both philosophy of mind and AI.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The first form of recognition in Honneth's triad — the acknowledgment occurring in primary relationships of care that produces basic self-confidence, the capacity to trust one's own needs as legitimate.
The specific form of suffering that occurs when a social order violates the legitimate expectations of reciprocity underlying its recognition structure — damage not to circumstance but to the social infrastructure of identity itself.
Honneth's framework holding that human identity is a social achievement constituted through three forms of mutual acknowledgment — love, rights, and social esteem — each producing a distinct dimension of selfhood.
The insidious form of misrecognition in which a discourse acknowledges the cognitive dimension of a recognition claim — your perception is accurate, your grief is valid — while denying the practical dimension that would create institutional…
The second form of recognition — the acknowledgment by legal and political institutions that the individual is a full and equal member of the moral community, producing self-respect as the capacity for self-governance.
The largest cohort of the AI transition — suffering template deprivation because the recognition order lacks vocabularies for the compound experience of simultaneous expansion and loss.
The third form of recognition — the community's valuation of an individual's specific contributions, abilities, and achievements — producing self-worth as the sense that one's capacities matter to the collective project.
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.
The population mourning what the AI transition eliminates — senior practitioners whose recognition demand is systematically truncated: their diagnosis is acknowledged, but their claim to an institutional response is denied.
Honneth's model of how social esteem is produced — a three-station circuit running from contribution, through communal reception and valuation, back to the contributor's sense of self-worth.
Webb's thesis, extended to the AI age: that technology does not determine the conditions of work — institutions do — and that the gap between extraordinary capability and inadequate institutional response is where the suffering occurs.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The vast majority experiencing the full emotional complexity of the AI transition without a clean narrative to organize it — most accurate in perception, least audible in discourse.
The thought collective in the AI discourse whose thought style foregrounds capability expansion and backgrounds cost — producing genuine perception of real features of the transition, and genuine blindness to others.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
German social philosopher (b. 1949), Habermas's most influential successor, whose recognition theory has become the foundational framework for analyzing moral injury across labor movements, civil rights struggles, and the AI transition.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…