This page lists every Orange Pill Wiki entry hyperlinked from Edgar Schein — On AI. 26 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The deepest level of Schein's cultural model — beliefs so taken for granted that articulating them would seem absurd, and the level at which the AI transition is forcing painful revision.
Schein's metaphor for how organizational culture detects foreign elements — including AI tools — and responds with inflammation, encapsulation, rejection, or, rarely, genuine integration.
The gap between what organizations say they believe about AI — augmentation, not replacement — and what they do when the productivity gains arrive and the board asks the obvious question.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass it.
Schein's practice of asking questions one does not already know the answer to — now essential for evaluating AI output whose fluent surface invites acceptance without examination.
The dissolution of the self-structure when the competency around which professional identity was organized is economically disposed of — the psychological dimension of expertise displacement.
The fear of becoming incompetent in public that Schein identified as the primary obstacle to organizational change — now intensified by AI's challenge to professional identity rather than merely to professional skills.
Schein's category for relationships grounded in genuine mutual interest and personal openness — the relational foundation that AI tools cannot provide and that organizations must build to support AI adoption.
Schein's resolution of the leader's dilemma — cultural change cannot be mandated, but it can be facilitated through the deliberate alignment of every primary embedding mechanism with the desired direction.
The specific organizational condition — ordinary in Level Two relationships, rare in most professional cultures — that permits admissions like "I cannot evaluate this output" without professional self-harm.
Schein's catalog of the behaviors through which leaders communicate — often unconsciously — what the culture actually values: what they pay attention to, react to emotionally, and reward.
Schein's clinical methodology — helping clients see what is actually happening rather than telling them what to do — and the diagnostic approach the AI transition most urgently requires.
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
Edmondson's foundational construct — the shared belief that a team is safe for interpersonal risk-taking — and the single strongest predictor of whether AI adoption produces learning or concealment.
Michael Polanyi's term for the knowledge that lives in the hands and nervous system rather than in explicit propositions — acquired through practice, failure, and embodied pattern recognition, and dissolved by AI workflows that produce output without the practice that builds it.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effects of specialization.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
Schein's foundational framework — artifacts, espoused values, and basic underlying assumptions — that identifies where organizational culture actually lives and why AI transformations stall at the surface.
Schein's identification of the operator, engineer, and executive subcultures whose conflicting basic assumptions determine how AI adoption actually unfolds inside organizations.
American organizational behavioral scientist (b. 1959), Novartis Professor at Harvard Business School, whose concept of psychological safety has become the foundational framework for understanding team performance under uncertainty.
American organizational psychologist (1928–2023) and MIT Sloan professor whose three-level model of culture became the dominant framework for understanding how organizations actually function beneath their visible structures.
The mid-sized Austin software company whose AI adoption quadrupled lines of code and tripled defect rates within six months — the paradigmatic case of artifact-level success masking assumption-level failure.
The 1956 summer workshop at Dartmouth College where the phrase "artificial intelligence" was coined and the field was founded as a discipline.
The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange…