This page lists every Orange Pill Wiki entry hyperlinked from Gary Klein — On AI. 17 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The expert capacity to register meaningful deviations from expected patterns — the hallmark of genuine expertise and the function most endangered by AI-mediated work.
The quiet risk of comprehensive automation: not that machines dominate us, but that we lose the capabilities they replace. Asimov's Solarians are the founding fiction; contemporary work on cognitive offloading is the empirical counterpart.
The expert's capacity to abandon or restructure patterns when evidence demands — the operation that distinguishes insight from routine recognition.
Klein's empirical framework for how experts see what others miss — the three paths (connection, contradiction, creative desperation) through which new understanding arises in natural settings.
Klein's term for the structural unfairness in AI-versus-expert studies: the AI is given learning opportunities the human experts are denied.
The expert's capacity to project an action forward in time, imagining how the situation will evolve if the action is taken, watching for the moment when the projected scenario breaks down.
Pasteur's dictum — chance favors the prepared mind — operationalized by Klein as the cognitive preconditions for insight in natural settings.
Klein's 1980s framework describing how experienced practitioners decide under pressure — not by comparing options but by recognizing patterns and mentally simulating actions.
The effortful, conscious process of constructing a coherent interpretation of a situation that does not match any recognized pattern — the cognitive mode experts shift into when recognition fails.
Klein's term for the human judgment embedded in AI training data that the system then appears to have generated itself — the structural reason AI-versus-expert comparisons are methodologically unfair.
Klein's term for the accumulated repository of recognized situations and their associated features, actions, and expected outcomes — an organic cognitive structure that grows through experience and degrades through disuse.
Klein's project-planning method in which a team imagines the project has already failed and works backward to identify the causes — and the social process AI cannot reproduce.
Klein's framework for appropriate reliance on AI — not more trust or less trust, but trust calibrated to the system's actual performance in the specific situation at hand.
The 2009 paper in which two thinkers on opposite sides of the expertise debate converged — establishing the empirical conditions under which intuitive expertise can be trusted.
The 1949 Montana wildfire disaster that killed thirteen smokejumpers: for Klein, the fire itself is the paradigm case of cognitive flexibility failure, and Wagner Dodge's escape fire is the paradigm case of creative desperation insight.