Abstraction was, for Dijkstra, the most powerful intellectual tool available to the programmer and the most dangerous. Its power lay in the capacity to suppress irrelevant detail so that the mind could focus on essential structure. Its danger lay in the fact that suppressed detail does not cease to exist — it becomes invisible, and invisible detail in a system of sufficient complexity will eventually produce consequences no one anticipated. The distinction Dijkstra drew repeatedly was between abstractions that function as windows — the programmer can see through them when necessary — and abstractions that function as walls — the underlying detail is not merely suppressed but sealed off. A well-designed abstraction enables selective attention; a poorly designed one enforces ignorance. The history of computing is in one reading the slow, largely unremarked drift from windows toward walls.
There is a parallel reading that begins not from abstraction theory but from the political economy of software maintenance. The Dijkstrian concern with inspectability assumes a world where the same entity that builds the system maintains it—where the programmer who writes the code is available when it breaks. This was true in Dijkstra's academic context and remains true in certain elite engineering cultures, but it describes perhaps five percent of global software production. The other ninety-five percent operates under different constraints: the original programmers have moved on, the documentation is incomplete or outdated, and the person debugging the system has never seen the codebase before.
In this context, the opacity of AI-generated code is not a departure from current practice but its logical extension. Most production code is already uninspectable in practice—not because the abstractions are walls but because the institutional knowledge required to understand them has dissipated. The junior developer tasked with fixing a legacy system written in an unfamiliar framework with undocumented assumptions faces the same epistemic problem whether the code was written by a human or generated by an AI. The difference is that AI generation makes this problem visible at the moment of creation rather than discovery. What Dijkstra feared—programmers who don't understand their own systems—is not a future risk but a present reality. The natural language interface simply acknowledges what the industry has been unwilling to admit: most code is already maintained by people who don't understand it, and the pretense that they do causes more problems than acknowledging they don't.
Every layer in the computing stack is an abstraction. Assembly language abstracts over binary codes. High-level languages abstract over assembly. Compilers, operating systems, libraries, frameworks, cloud infrastructure — each layer suppresses the details of the layer below. Each transition made programming faster and less error-prone. Each transition also left the programmer knowing less about what she had built. Knowledge that is optional, over time, becomes unknown.
Dijkstra accepted the trade in general and distrusted it in particular. The criterion that distinguished acceptable abstractions from dangerous ones was whether the abstraction was accompanied by a verification mechanism appropriate to its layer. A programmer could trust a compiler only if the compiler's correctness had been verified. A programmer could rely on a library only if she understood the invariants the library assumed. Each layer of concealment needed to be matched by a layer of assurance.
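The principle that each layer of concealment must be matched by a layer of assurance can be made concrete with a small sketch in Python (the function name and the example are illustrative, not drawn from Dijkstra). A caller who relies on binary search is relying on an invariant the library assumes but never checks: that the input is sorted. Stating that invariant as an explicit guard turns a concealed assumption into an inspectable one.

```python
import bisect

def index_of(items, target):
    """Locate target in a sorted sequence via binary search.

    Assurance layer: binary search is only correct on sorted input,
    an invariant the bisect library assumes but does not verify.
    Checking it here makes the concealed assumption inspectable.
    """
    assert all(a <= b for a, b in zip(items, items[1:])), \
        "binary search requires sorted input"
    i = bisect.bisect_left(items, target)
    if i < len(items) and items[i] == target:
        return i
    raise ValueError(f"{target!r} not found")
```

Calling `index_of([2, 3, 5, 7], 5)` returns 2, while passing an unsorted list fails loudly at the boundary rather than silently returning a wrong answer — the difference between a window and a wall in miniature.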
The natural language interface that Segal celebrates is, by this measure, the ultimate abstraction: it conceals not only the hardware, operating system, and programming language but the programming logic itself. The builder communicates intention and receives implementation. Everything between is hidden. The neural network that generated the code is opaque. The training data that shaped the network is unknown. The logical structure of the generated code — its assumptions, its edge cases, its failure modes — is visible only to those willing and able to read and analyze it, which is the skill the interface was designed to make unnecessary.
This produces a pathology Dijkstra's framework identifies precisely: maximum efficiency and maximum ignorance achieved simultaneously. The abstraction has become so complete that the window has been replaced not merely by a wall but by a wall without a door. The concealment is not structured, layered, or selective; it is total. The builder sees the input — her description — and the output — the code. What lies between is sealed behind a surface through which nothing can be seen, behind which the entire logical structure of the system operates without human oversight.
Dijkstra's most sustained treatment of abstraction appears in the EWDs of the 1970s, especially the manuscripts associated with A Discipline of Programming (1976) and the lectures on predicate transformers. The window/wall distinction is an informal gloss that captures a principle Dijkstra stated in several more technical forms — the requirement that abstractions preserve proof obligations across their boundaries, making it possible to reason about code at one layer without losing the ability to inspect the layer below when the reasoning requires it.
The informational quality of abstraction — how much is preserved, how much is lost — was also central to Shannon's earlier work on communication theory, and the convergence of Dijkstra's epistemological concern with Shannon's mathematical formulation is one of the deeper currents in twentieth-century computing.
Windows vs walls. Good abstractions allow selective attention and preserve the possibility of inspection; bad abstractions enforce ignorance by sealing off what they suppress.
Every layer needs a verifier. The programmer trusts a layer only to the extent that the layer's properties have been demonstrated. Each layer of concealment must be matched by a layer of assurance.
Natural language is the terminal abstraction. It conceals the programming logic itself, which is the last human checkpoint between intention and deployment.
Concealment has to be chosen. In the Dijkstrian ideal, the programmer decides which details to suppress and which to preserve. With AI generation, the concealment is imposed by the tool; the builder has no say in what is hidden.
Propagation paths become opaque. Bugs at lower layers propagate through the layers above. In a well-designed abstraction, these paths are at least theoretically traceable. In AI-generated code, the propagation paths pass through a network whose internal structure is inscrutable.
The standard defense of the natural language interface is that it is no worse than previous abstractions — that every layer of the stack has been criticized on Dijkstrian grounds and every layer has nonetheless proved itself by the productivity it enabled. The Dijkstrian reply is that previous abstractions preserved the possibility of inspection even when practitioners rarely exercised it, whereas the natural language interface removes that possibility at the critical moment. The analogy is not to the compiler but to the sealed black box — and the history of sealed black boxes in engineered systems is not reassuring.
The tension between Dijkstra's framework and the maintenance reality depends crucially on which question we are asking. If the question is theoretical correctness (can we prove the system works?), Dijkstra's position dominates: the opacity of neural generation genuinely removes our ability to construct formal proofs about system behavior. But if the question is practical debugging (can we fix it when it breaks?), the contrarian view has considerable merit: most debugging already proceeds through trial and error rather than systematic reasoning about abstractions.
The deeper synthesis emerges when we recognize that "inspectability" itself exists on a gradient. Dijkstra imagined binary states: windows or walls. The reality is more nuanced: semi-transparent materials, one-way mirrors, frosted glass. A well-commented AI-generated function with clear inputs and outputs might be more inspectable in practice than uncommented human code with clever optimizations. The question isn't whether we can see through the abstraction but how much we can see and whether it's enough for the task at hand.
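The gradient can be illustrated with a toy contrast (both functions are hypothetical, written for this example, and compute the same thing). The first is verbose in the style often produced by code generators, but every step is visible; the second is terse human cleverness that is correct yet demands prior knowledge of a bit-twiddling trick. By the page's standard, the first is the more inspectable abstraction even though neither reveals anything about who wrote it.

```python
def popcount_inspectable(n: int) -> int:
    """Count the 1-bits in a non-negative integer.

    Every step is visible: mask the lowest bit, accumulate, shift.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    count = 0
    while n:
        count += n & 1   # add the lowest bit (0 or 1)
        n >>= 1          # discard it and examine the next bit
    return count

def popcount_clever(n):
    # Correct, but the reader must already know that n & (n - 1)
    # clears the lowest set bit (Kernighan's trick).
    c = 0
    while n: n &= n - 1; c += 1
    return c
```

Both return 3 for the input 11 (binary 1011); the difference lies not in correctness but in how much a maintainer can see without outside knowledge.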
The framework that holds both views recognizes abstraction as a negotiation rather than a one-time design choice. Every abstraction involves a trade between cognitive load and epistemic access, and different contexts require different trades. In safety-critical systems, Dijkstra's windows are non-negotiable. In rapid prototyping, the efficiency gains of natural language may justify the opacity. The mistake is treating all software development as belonging to one category or the other. The real challenge is developing judgment about which abstractions can be walls, which must be windows, and, critically, who gets to make that decision. The power to determine abstraction boundaries is ultimately the power to determine what kinds of knowledge matter.