The democratization of programming is the claim Segal advances in The Orange Pill: that AI tools represent the most morally significant expansion of human capability since the invention of writing, because they allow people who previously lacked the infrastructure to build software to do so through conversation. The developer in Lagos, the teacher, the marketing manager — all can now build. Dijkstra's framework does not dispute the expansion; it disputes the moral calculus. The ability to build without the ability to verify is not empowerment. It distributes a new and particularly dangerous form of ignorance: it creates builders who can produce artifacts whose quality they cannot assess, who test the output against their own expectations and discover, if they discover at all, that their expectations did not exhaust the ways the system could fail.
Segal's democratization argument has moral weight: who could argue against expanding who gets to build? Dijkstra would have argued — not because he opposed human flourishing but because he understood that build and build reliably are different verbs. The new population of builders includes some who understand software well enough to evaluate what the AI generates, to identify its failure modes, to test it rigorously, and to maintain it over time. It includes many who do not. They have the imagination to describe what they want. They do not have the expertise to evaluate what they receive.
The specific failure mode the framework predicts is that when the builder does not understand the implementation, her tests reflect her understanding, and her understanding may be — will be, in many cases — incomplete. The teacher who builds a curriculum tool through conversation with Claude tests whether the tool displays the right content. She does not test whether it handles concurrent users, because she does not know what concurrent access is. She does not test whether it sanitizes input, because she does not know what injection attacks are. Her tests are a reflection of her knowledge. Her knowledge does not extend to the domains where the code is most likely to fail.
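The gap between what the builder tests and what can go wrong can be made concrete. The sketch below is illustrative, not drawn from any real tool: a lookup function of the kind an AI might generate for a curriculum app, built by string concatenation, alongside the happy-path check a non-expert would write. Her test passes; the injection she does not know to test for also "passes," returning data for a query she never intended. All names here are hypothetical.

```python
# Illustrative sketch: a vulnerable AI-generated-style lookup function
# that passes the builder's happy-path test while remaining open to
# SQL injection. Uses only the standard-library sqlite3 module.
import sqlite3

def setup_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE lessons (grade TEXT, content TEXT)")
    conn.execute("INSERT INTO lessons VALUES ('5', 'Fractions')")
    conn.commit()
    return conn

# Vulnerable: the query is assembled by concatenating user input.
def get_lesson(conn, grade):
    cursor = conn.execute(
        "SELECT content FROM lessons WHERE grade = '" + grade + "'")
    row = cursor.fetchone()
    return row[0] if row else None

conn = setup_db()

# The builder's test: does the tool display the right content? It passes.
assert get_lesson(conn, "5") == "Fractions"

# The test she does not know to write: "' OR '1'='1" rewrites the
# WHERE clause to match every row, so the query returns data anyway.
assert get_lesson(conn, "' OR '1'='1") == "Fractions"

# The fix an expert would recognize: a parameterized query, which
# treats the input as a literal value rather than as SQL.
def get_lesson_safe(conn, grade):
    cursor = conn.execute(
        "SELECT content FROM lessons WHERE grade = ?", (grade,))
    row = cursor.fetchone()
    return row[0] if row else None

# The malicious input now matches no grade at all.
assert get_lesson_safe(conn, "' OR '1'='1") is None
```

The point is not that this particular bug is hard to fix; it is that the builder's test suite, being a mirror of her knowledge, contains no test that would ever surface it.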
The typical response within the AI discourse is that the tool handles these concerns — that generated code incorporates best practices for security, concurrency, and data integrity because the training data includes such practices. This response illustrates exactly the dependency Dijkstra spent his career warning against: trust in the tool as a substitute for understanding. The AI may handle these concerns. It may not. The builder cannot tell the difference, because telling the difference requires the expertise the AI was supposed to replace. She is in the position of a patient evaluating her own surgery — she can report whether she feels better, but she cannot assess whether the procedure was performed correctly.
The humble programmer that Dijkstra described cannot arise in this environment. Humility about programming requires knowing what you do not know, and what you do not know is framed by what you do know. You cannot be humbly aware of your limited understanding of concurrency if you do not know what concurrency is. Remove the expertise, and you remove the basis for humility. What remains is not confidence. It is the absence of the knowledge required to doubt.
The framing of AI-assisted building as democratization appears throughout the technology literature of the 2022–2026 period and receives its fullest expression in Segal's The Orange Pill. The historical analogy most commonly drawn is to the printing press and its democratization of authorship; the parallel has some merit and some sharp limitations that the Dijkstrian critique exposes.
The counter-argument — that democratized production without democratized verification produces mass deployment of unverified artifacts — has been developed in the software safety literature since the 1980s and has acquired new urgency in the AI era.
Build and build reliably are different. Expanding who can build is morally different from expanding who can build reliably. The AI era has done the first without doing the second.
Tests reflect the tester's knowledge. Non-expert builders test what they can think of; what they can think of is bounded by their understanding; the failures live outside their understanding.
Trust as substitute for expertise. The defense that AI handles the concerns non-experts cannot address is a form of trust in the tool that Dijkstra's framework specifically rejects. Trust must be grounded in verification, not in the tool's reputation.
Humility requires expertise. The intellectual virtue of knowing one's limits is a function of knowing enough to see them. Democratized access without democratized expertise removes the basis for humility.
The printing press analogy is incomplete. Bad books waste the reader's time. Bad software can corrupt data, expose private information, or create security vulnerabilities that are exploited. The consequences of democratized production scale with the consequentiality of the product.
The strongest counter-case holds that the Dijkstrian critique is an elitist defense of professional gatekeeping — that the same argument was made against every previous democratization of a skilled practice and was overcome every time by the productive energies the democratization unleashed. The Dijkstrian reply is that every previous democratization was eventually accompanied by institutions that mitigated its downside risks: editorial traditions for books, regulatory frameworks for pharmaceuticals, licensing regimes for trades. The AI-era question is whether such institutions will develop for software, and in time, or whether the democratization will outrun the institutional response.