This page lists every Orange Pill Wiki entry hyperlinked from Edsger Dijkstra — On AI. 18 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The expansion of who can produce software via AI tools — read through Dijkstra's framework not as empowerment but as the distribution of a new and particularly dangerous form of ignorance: the ability to build without the ability to verify.
Abstraction as Dijkstra meant it: the suppression of irrelevant detail for the purpose of selective attention — a window, not a wall. The detail suppressed must remain inspectable when needed.
Not an aesthetic preference but an epistemic property: a solution is elegant when its correctness is visible. Elegance is the precondition of trust.
Dijkstra's benchmark for adequate software: a system is intellectually manageable when a human being can reason about it — not by holding it all in view but by understanding each part and trusting their composition.
The 2020s interface paradigm in which the user describes desired outcomes in natural language and receives executable code — the ultimate abstraction layer in Dijkstra's sense, concealing not merely the hardware but the programming logic itself.
Dijkstra's insistence that a program's correctness should be established by proof — formal reasoning from specification to implementation — not by the accumulated evidence of tests that passed.
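A minimal sketch of the style of reasoning this entry describes (the function and the invariant wording are illustrative, not taken from the book): correctness is argued from a loop invariant that holds from specification to implementation, rather than inferred from tests that happened to pass.

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of non-negative a and b, not both zero."""
    x, y = a, b
    # Invariant: gcd(x, y) == gcd(a, b) at every iteration.
    # Established before the loop, since (x, y) starts as (a, b).
    # Preserved by each step, since gcd(x, y) == gcd(y, x % y).
    # At exit y == 0, and gcd(x, 0) == x, so x is the answer.
    while y != 0:
        x, y = y, x % y
    return x
```

The comments carry the proof obligation: once the invariant is established, preserved, and combined with the exit condition, the result follows for every input, not merely the inputs a test suite sampled.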
Dijkstra's 1974 principle — widely misunderstood as an organizational technique for code — that is in fact an epistemological discipline: the programmer addresses one concern at a time, in isolation, because the human skull cannot hold more than one at once.
Dijkstra's disciplined alternative to arbitrary go to control flow — programs composed from sequences, selections, and iterations, each with single entry and single exit, so the logic can be reasoned about hierarchically.
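The three structured forms the entry names can be sketched in a few lines (an illustrative example, not drawn from the book): sequence, selection, and iteration, each entered at one point and left at one point, so the whole can be understood by composing the parts.

```python
def classify_scores(scores: list[int]) -> tuple[int, int]:
    """Count passing and failing scores using only the three
    structured forms: sequence, selection, and iteration."""
    passed = 0              # sequence: straight-line setup
    failed = 0
    i = 0
    while i < len(scores):  # iteration: one entry, one exit
        if scores[i] >= 60:  # selection: two branches, one join point
            passed += 1
        else:
            failed += 1
        i += 1
    return passed, failed   # single exit for the whole function
```

Because each construct has a single entry and a single exit, the reader can reason about the loop body, then the loop, then the function, hierarchically, without tracing arbitrary jumps.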
Dijkstra's load-bearing distinction — "testing can show the presence of bugs, but never their absence" — applied to a world where "it passed the tests" has become the industry's stand-in for "it is correct."
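The dictum can be made concrete with a deliberately flawed function and a test suite that passes anyway (a hypothetical illustration, not an example from the book):

```python
def is_leap(year: int) -> bool:
    # Deliberately buggy: ignores the century rule
    # (years divisible by 100 but not 400 are not leap years).
    return year % 4 == 0

# A green test suite that proves nothing about the absence of bugs:
assert is_leap(2024) is True
assert is_leap(2023) is False
assert is_leap(2000) is True
# The defect these tests never touch:
# is_leap(1900) returns True, yet 1900 was not a leap year.
```

Every assertion passes, and the program is still wrong: the tests showed the presence of no bugs on three inputs, and said nothing about the other inputs.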
Byung-Chul Han's term for the contemporary cultural preference for frictionless surfaces — the iPhone's glass, the algorithmic feed, the AI-generated text — that conceals the labor and struggle that traditionally produced depth.
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
The 2026 formal result that no verification procedure can simultaneously satisfy soundness, generality, and tractability — a mathematical ceiling on Dijkstra's program of provable correctness.
The ability to read and evaluate code — to trace its logic, identify its assumptions, and determine where it will fail — even if one cannot write it. The specific form of competence the AI era requires and few curricula teach.
Dijkstra's March 1968 letter to the Communications of the ACM — a one-and-a-half-page argument that arbitrary jumps destroy the possibility of reasoning about a program, and the founding document of structured programming.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.