The engineer who believes she has produced a solution is inclined to defend it. The engineer who knows she has produced a hypothesis is inclined to test it — to look for the conditions under which it might fail, because finding those conditions before the world does is the difference between a controlled experiment and a catastrophe. Petroski developed this view most explicitly in Design Paradigms (1994), arguing that the standing bridge is not proof that the engineer's model is correct, only proof that the model has not yet encountered the conditions that would reveal its error. Every day the bridge stands is another day the hypothesis goes unrefuted, but non-refutation is not validation. The distinction produces fundamentally different engineering practices: one oriented toward defending completed designs, the other toward continuously testing them for the conditions that would expose the limits of the models they embody.
The philosophical lineage is Popperian, though Petroski did not typically frame it in those terms. Karl Popper argued that scientific theories are never verified, only corroborated through failed attempts at falsification. Petroski extended this framework to engineering: every design is a hypothesis about the behavior of a structure under specified conditions, and the test of the hypothesis is the structure's actual performance in service. Service tests the hypothesis continuously for the lifetime of the structure. When service includes conditions the hypothesis did not anticipate, the hypothesis may be refuted — and refutation, in engineering, means collapse.
The Silver Bridge collapse of 1967 is Petroski's canonical case. The bridge had stood for thirty-nine years. Its design satisfied the codes of its era. The standing bridge appeared to be proof that the design was sound. The hypothesis it embodied — that eyebar chains could support a highway bridge without inspection access to the interior of the pin connections — had never been articulated as a hypothesis, because the engineers who designed it thought in terms of solutions. The hypothesis was wrong. A single eyebar, weakened by a corrosion crack that no external inspection could detect, failed. The bridge collapsed during rush hour. Forty-six people died. The refutation was the revelation.
The implication for AI-generated design is specific and uncomfortable. AI outputs present themselves with uniform confidence. There is no epistemological marker in the output distinguishing elements that are well-supported by training data from elements that extrapolate beyond it. The design looks complete. It may satisfy every specified constraint. But the hypothesis it embodies — the prediction that this configuration will perform as modeled under the conditions it will encounter — is embedded in the output in ways the engineer reviewing it cannot easily see, because she did not construct the output. The hypothesis is invisible because the construction process that would have made it visible has been bypassed.
Petroski's response to this condition was educational rather than technical. The defense against the hypothesis-blindness that AI outputs produce is the cultivation of engineers who approach every design — especially the ones that look most complete — with the suspicion that the design is a prediction, that every prediction is wrong about something, and that the engineer's job is to find what the prediction is wrong about before the world does. This suspicion is not cynicism. It is the engineering form of intellectual humility, and it is exactly what AI's presentation of confident, comprehensive outputs works to erode.
The framing was developed across Petroski's career but articulated most clearly in Design Paradigms: Case Histories of Error and Judgment in Engineering (1994). The book examined historical engineering failures not as isolated incidents but as cases where designs were revealed, by catastrophe, to have been hypotheses all along — hypotheses the engineers who produced them had mistakenly treated as solutions. Petroski drew on the Popperian philosophy of science without citing it directly, adapting its framework to the specific context of structural engineering where hypothesis refutation means structural collapse.
The standing structure is not validated. It is merely unrefuted. The distinction matters because validation implies completion, while non-refutation implies continuing exposure to the conditions that might yet refute the structure.
The hypothesis is embedded in the design. Every element of a design encodes assumptions about how it will behave, what forces it will encounter, what conditions it will face. These assumptions are the hypothesis. They are invisible from inside the design because they constitute the framework through which the design is understood.
The engineer who constructs the design encounters the hypotheses. The engineer who receives a design from an AI system does not. The construction process is where the hypotheses become visible; the review process is not. This is not a matter of intelligence or diligence but of the specific cognitive work that construction requires.
The defense is the cultivation of suspicion. The engineer who treats every design — her own or the AI's — as a hypothesis rather than a solution is the engineer positioned to find the conditions under which the hypothesis may be refuted. The engineer who treats the design as complete is the engineer the catastrophe will surprise.
The framework has been criticized as excessively skeptical, producing engineers who distrust their own designs and are paralyzed by awareness of their designs' limitations. Petroski's response — implicit throughout his work — was that the alternative is worse. Engineers who trust their designs too completely produce the catastrophes the skeptical framework exists to prevent. The healthy engineering disposition is neither confidence nor paralysis but what Petroski called judicious suspicion — the willingness to build, combined with the alertness to watch for the conditions that might expose the limits of what has been built. Whether this disposition can be maintained in a profession that increasingly relies on AI-generated outputs that present themselves as complete is the open question of the AI era.