Norbert Wiener — On AI — Wiki Companion

A reading-companion catalog of the 38 Orange Pill Wiki entries hyperlinked from Norbert Wiener — On AI — the people, ideas, works, events, and technologies the book treats as stepping stones for thinking through the AI revolution. Click any card to open its entry; within an entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.

Concept (26)
AI Alignment

The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.

AI Safety

The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …

Builder Responsibility

The principle — defended by Wiener at considerable personal cost — that the creators of powerful systems bear moral responsibility for what those systems do after deployment, and that the claim of value-neutral research is a fiction that tr…

Cybernetics

The mid-twentieth-century interdisciplinary science of steering — communication and control in animals, machines, and organizations — founded by Wiener in 1948 and systematically excluded from the AI field at its Dartmouth founding.

Democratization of Capability (Senian Reading)

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Emergent Capabilities

The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…

Entropy

The second law of thermodynamics' universal tendency toward disorder — Wiener's fundamental antagonist, the force against which every act of intelligence is a local and temporary resistance.

Flow State

Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.

Homeostasis

The maintenance of an organism's internal conditions within a viable range — Walter Cannon's 1932 term for the biological phenomenon that Wiener generalized into the universal principle of negative feedback.

Human-AI Collaboration

The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."

Mechanistic Interpretability

The research program of reverse-engineering what is actually happening inside a neural network — the AI equivalent of the Rama explorers' attempt to understand an alien ship not by what it does but by taking it apart and naming its parts.

Negative Feedback

The regulatory mechanism in which a system detects deviation from a target state and activates a correcting response — the engineering principle behind homeostasis, governance, and every system that sustains itself against entropy.
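The detect-and-correct loop this entry describes can be sketched in a few lines. This is an illustrative toy, not anything from the book: the function name, gain value, and targets are assumptions, and the loop simply removes a fixed fraction of the error each step.

```python
def negative_feedback(state, target, gain=0.5, steps=20):
    """Drive `state` toward `target` by correcting part of the error each step."""
    trajectory = [state]
    for _ in range(steps):
        error = target - state   # detect deviation from the target state
        state += gain * error    # activate a correcting response
        trajectory.append(state)
    return trajectory

# The deviation shrinks geometrically: each step leaves (1 - gain) of the error.
path = negative_feedback(state=10.0, target=20.0)
```

With any gain between 0 and 2 the system settles onto its target rather than running away — the defining signature that separates this loop from its positive-feedback twin.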

Positive Feedback

The runaway dynamic in which a system's output feeds back as input and amplifies — the screech of the microphone, the cascade of hemorrhage, the grinding compulsion of the AI-augmented builder who cannot stop.
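The runaway dynamic has an equally small sketch. Again a toy under stated assumptions (the loop gain and step count are arbitrary): when the output re-enters as input with a loop gain above 1, the signal grows without bound instead of settling.

```python
def positive_feedback(signal, loop_gain=1.5, steps=10):
    """Feed the output back as input; with loop gain > 1 it amplifies each pass."""
    trajectory = [signal]
    for _ in range(steps):
        signal *= loop_gain   # output re-enters the loop as amplified input
        trajectory.append(signal)
    return trajectory

# A whisper into the microphone becomes a screech: the signal grows
# geometrically until something outside the loop (saturation, a hand
# over the mic, collapse) stops it.
screech = positive_feedback(0.1)
```

The contrast with the negative-feedback loop is a single sign: there the error term shrinks the state back toward a target; here there is no target at all, only amplification.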

Purpose and Goals

Wiener, Rosenblueth, and Bigelow's 1943 redefinition of teleology as an observable feedback pattern — and the distinction between mechanical purpose (pursuing a goal) and human purpose (evaluating whether the goal is worth pursuing).

River of Intelligence

Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.

RLHF and Post-Training

The family of techniques — reinforcement learning from human feedback (RLHF), DPO, constitutional AI, and related methods — that shape a pretrained language model into a usable assistant. The stage where the model becomes the product.

Signal and Noise

Claude Shannon's 1948 distinction between the message you intend to transmit and everything that interferes with its transmission — the spine of information theory and the diagnostic framework for what an amplifier carries.
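Shannon's distinction is quantitative, and two standard formulas make it concrete: the signal-to-noise ratio in decibels, and the Shannon–Hartley capacity of a noisy channel. The numeric values below are illustrative, not from the entry.

```python
import math

def snr_db(signal_power, noise_power):
    """Ratio of intended-message power to interfering power, in decibels."""
    return 10 * math.log10(signal_power / noise_power)

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley limit: maximum reliable bit rate over a noisy channel."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# A 3 kHz channel at 30 dB SNR (the classic telephone-line example)
# tops out near 30 kbit/s, no matter how clever the encoding.
capacity = channel_capacity(3000.0, signal_power=1000.0, noise_power=1.0)
```

The capacity formula is the hard edge of the metaphor: an amplifier can raise both numerator and denominator, but only the ratio between them determines how much meaning gets through.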

Symbolic AI (the road Dreyfus contested)

The research paradigm — dominant from the 1956 Dartmouth Workshop through the 1980s — that attempted to build intelligence by manipulating symbolic representations according to formal rules, and whose failures vindicated Dreyfus's critique.

The Amplifier

The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.

The Anti-Aircraft Problem

The World War II engineering problem — how to aim a gun at an adaptive human pilot — that forced Wiener and Julian Bigelow to develop the mathematics of feedback loops that became the foundation of cybernetics and, eventually, of modern AI.
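The aiming geometry the entry describes starts from a deliberately naive idea: estimate the target's velocity from recent observations and extrapolate ahead by the shell's flight time. The sketch below shows only that baseline — function name and values are illustrative assumptions; Wiener and Bigelow's actual contribution was the statistical filtering that improves on exactly this when the pilot maneuvers.

```python
def predict_position(observations, dt, lead_time):
    """Naive lead computation: estimate velocity from the last two observed
    positions (spaced dt seconds apart) and extrapolate lead_time ahead."""
    velocity = (observations[-1] - observations[-2]) / dt
    return observations[-1] + velocity * lead_time

# A target observed at 0 m then 100 m one second later, with a shell
# flight time of 2.5 s, should be led to the 350 m mark.
aim_point = predict_position([0.0, 100.0], dt=1.0, lead_time=2.5)
```

The hard part, and the reason this became mathematics rather than gunnery, is that a human pilot is neither constant-velocity nor random: predicting him means modeling a purposive, feedback-driven adversary.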

The Burnout Society

Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.

The Extended Mind

Andy Clark and David Chalmers's 1998 thesis that cognition routinely extends beyond the skull into tools, notebooks, devices, and other people — the philosophical foundation for thinking about AI as a cognitive partner rather than a separat…

The Governor

James Watt's 1788 centrifugal device — and Wiener's paradigmatic metaphor — for the regulatory mechanism that converts raw power into sustainable capability. The small, almost laughably simple structure without which the engine destroys its…

The Luddite Response

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Steersman

From the Greek kybernetes: the figure whose hand stays on the tiller, reading the water, making continuous small corrections. Wiener's chosen image for the human role in any purposive system containing both humans and machines.

Turing Test

Alan Turing's 1950 proposal to replace the unanswerable question "can machines think?" with a testable question about conversational indistinguishability — the most-cited thought experiment in the philosophy of AI.

Technology (2)
Large Language Models

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…

Neural Networks

The class of machine-learning architectures loosely modeled on biological neurons — the substrate of the current AI revolution and the opposite of Asimov's designed-then-programmed positronic brain.

Work (3)
God & Golem, Inc.

Norbert Wiener's final book, published in 1964 and a posthumous winner of the National Book Award — the founding meditation on learning machines, self-reproducing systems, and the moral responsibilities of their creators.

The Berkeley Study

Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in human-machine loops.

The Human Use of Human Beings

Wiener's 1950 popular treatise extending the mathematics of cybernetics into a social and ethical framework — and delivering, seventy-five years early, the clearest warning yet written about the human cost of deploying powerful automated sy…

Person (6)
Byung-Chul Han

Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.

Claude Shannon

American mathematician and engineer (1916–2001) whose 1948 A Mathematical Theory of Communication founded information theory and supplied the mathematical framework within which every transmission of meaning — including human-AI collaborati…

Gilles Deleuze

French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.

John McCarthy

American computer scientist (1927–2011), coiner of the term "artificial intelligence," organizer of the Dartmouth Workshop of 1956, and one of the principal figures Dreyfus's critique targeted across four decades.

Julian Bigelow

American engineer (1913–2003) who partnered with Wiener on the wartime anti-aircraft fire control problem, co-authored the 1943 paper that founded cybernetics, and later served as chief engineer for John von Neumann's IAS computer at Prince…

Mihaly Csikszentmihalyi

Hungarian-American psychologist (1934–2021), father of flow theory, Nakamura's mentor and collaborator across four decades, whose foundational mapping of the peak experience provided the framework Nakamura extended into vital engagement.

Event (1)
The Dartmouth Workshop of 1956

The 1956 summer workshop at Dartmouth College where the phrase "artificial intelligence" was coined and the field, as a discipline, began.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.