Structured programming is the discipline Dijkstra proposed in the late 1960s to rescue programs from the unanalyzable tangle that unrestricted control transfer produced. It replaced the arbitrary jump with a small set of composable control structures — sequence, selection, iteration — each of which has a single entry point and a single exit point. The constraint looked like a limitation, but it was in fact an enabler: by giving up the freedom to branch anywhere, programmers gained the ability to reason about each block independently and compose the verifications into a proof of the whole. Structured programming is the engineering realization of Dijkstra's deeper conviction that program structure and human understanding are not independent variables but two names for the same thing.
The immediate trigger for structured programming was Dijkstra's 1968 letter, but the argument had been forming for nearly two decades. Programs written with unrestricted go to statements could be perfectly correct in the sense that they computed the intended outputs for the intended inputs, yet they could not be understood. Execution might arrive at any labeled point from anywhere in the program, which meant that reasoning about a block of code required considering every possible path of arrival. For programs of any real size, this combinatorial explosion defeated human analysis. The code worked or it did not, and the difference was discovered empirically, by running it.
Dijkstra's proposed remedy was not stylistic. It was epistemological. The structured control constructs — sequence, if-then-else, while loops — share a property that unrestricted branching lacks: hierarchy. A structured block is entered in one place and exited in one place. Its internal behavior can be analyzed independently of its context. Verifications compose. If you have established what each block does, you have established what their sequence does. This is the separation of concerns applied to control flow, and it is the mechanism by which formal reasoning about programs becomes tractable at all.
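The compositional claim can be made concrete in code. The following is a minimal sketch (the function names and contracts are illustrative, not drawn from Dijkstra): each block has one entry and one exit and carries its own pre- and postconditions as runtime assertions, so verifying the sequence reduces to verifying each block against its contract.

```python
# Illustrative sketch: single-entry, single-exit blocks whose
# local verifications compose into a verification of the whole.

def clamp_to_range(x: int, lo: int, hi: int) -> int:
    """Selection block: verified locally against its own contract."""
    assert lo <= hi                       # precondition
    result = min(max(x, lo), hi)
    assert lo <= result <= hi             # postcondition
    return result

def sum_of_squares(n: int) -> int:
    """Iteration block: one entry, one exit, locally checkable."""
    assert n >= 0                         # precondition
    total = 0
    for i in range(1, n + 1):
        total += i * i
    assert total >= 0                     # postcondition
    return total

def pipeline(x: int) -> int:
    """Sequence: clamp's postcondition (0 <= result <= 10) establishes
    sum_of_squares' precondition (n >= 0), so the composition is correct
    without any global analysis of paths of arrival."""
    bounded = clamp_to_range(x, 0, 10)
    return sum_of_squares(bounded)

print(pipeline(3))   # 1 + 4 + 9 = 14
print(pipeline(-5))  # clamped to 0, so the sum is 0
```

With an unrestricted jump into the middle of either block, neither local contract could be trusted, and the reasoning above would have to consider every possible entry path.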
The resistance from working programmers was sustained and bitter. Structured programming was denounced as a straitjacket imposed by theoreticians, and even Donald Knuth argued for disciplined exceptions rather than outright elimination of go to. But the historical verdict was unambiguous: the programs written with structured methods were measurably more reliable, more maintainable, and more comprehensible. The profession eventually came to treat structured programming as background common sense, forgetting both the resistance and the radicalism of the original argument.
The forgetting matters now. The structural insight that made structured programming work — that program logic and human reasoning must travel together, or neither will arrive — applies with new force to AI-generated code. The code may obey every convention of structured programming at the surface level and still violate the deeper principle, because the process that produced it is an arbitrary jump from natural language description to executable logic that no human reasoning can trace. The structure is in the artifact. It is not in the mind that requested it. And the mind that requested it is the only party capable of verifying anything.
Structured programming as a named movement crystallized around Dijkstra's 1968 letter, but its intellectual foundations had been forming throughout the 1960s in the work of Dijkstra, C.A.R. Hoare, and others who were trying to put programming on a mathematical footing. The 1972 volume Structured Programming, co-authored by Ole-Johan Dahl, Dijkstra, and Hoare, established the framework as a coherent methodology and supplied much of the technical machinery that made it operational: invariants and stepwise refinement. Dijkstra's weakest-precondition calculus followed a few years later in A Discipline of Programming (1976).
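What that machinery means operationally can be shown with a small example of my own (not taken from the 1972 volume): a while loop annotated with an invariant, a property that holds before and after every iteration, from which the postcondition follows at exit when combined with the negated loop guard.

```python
# Illustrative sketch: a loop invariant for integer division by
# repeated subtraction, in the spirit of the invariant machinery.

def divide(a: int, b: int) -> tuple[int, int]:
    """Return (q, r) such that a == q*b + r and 0 <= r < b."""
    assert a >= 0 and b > 0               # precondition
    q, r = 0, a
    while r >= b:
        # invariant: holds on every pass through the loop body
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    # invariant plus negated guard (r < b) yield the postcondition
    assert a == q * b + r and 0 <= r < b
    return q, r

print(divide(17, 5))  # (3, 2): 17 == 3*5 + 2
```

The invariant is what makes the loop provable rather than merely testable: it reduces reasoning about arbitrarily many iterations to reasoning about one.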
By the mid-1970s, the core claims had been absorbed into mainstream programming practice, even by practitioners who had never read Dijkstra's papers. Structured programming became, in effect, the default — which is part of the reason it is now so easy to underestimate. What looks like ordinary competence in 2026 was a bitterly contested methodology fifty years earlier.
Hierarchy over arbitrary transfer. Single-entry, single-exit blocks compose; arbitrary jumps do not. Composition is what makes local reasoning possible.
Constraint as enabler. The programmer who gives up the freedom to jump anywhere gains the ability to understand everything — a compression that looks like loss and is in fact the precondition of reliable building.
Structure in the artifact is not enough. The discipline must be in the mind that produced the artifact. AI-generated code can satisfy every surface convention of structured programming while violating the epistemic principle the conventions were designed to serve.
Verifications must compose. Each block verified locally; the composition inherits the correctness of its parts. This is the only known strategy for bringing formal reasoning to bear on systems larger than a single blackboard.
The default hides the argument. Because structured programming won, practitioners no longer see it as an argument. They see it as common sense. The original claim — that structure determines comprehensibility — has faded from view, which is exactly when it needs to be restated.
The open question is whether the structural discipline can be recovered for AI-augmented building, or whether the natural language interface has rendered it inaccessible in principle. Segal's framework in The Orange Pill suggests friction relocates to higher cognitive work; this volume replies that the verification layer has been vacated without being replaced. Whether new verification practices can restore the discipline at the generation layer, or whether structured thinking becomes a skill practiced only by a shrinking minority, is the operational question of the next decade.