Engineering judgment is not a formula, calculation, or verifiable result. It is a pattern-recognition signal produced by a biological system that has been exposed to thousands of cases — not just the cases documented in any specific dataset, but the accumulated encounters with materials and systems and forces that do not always behave as models predict. The engineer who possesses this judgment cannot always articulate its basis, because the basis is not a single piece of evidence but a pattern built from thousands of pieces accumulated over a career. Each piece is too small to be decisive. Together they produce a signal that the experienced engineer reads as clearly as a physician reads a patient's color or a sailor reads the sky. The Challenger disaster of 1986 is Henry Petroski's canonical illustration: Roger Boisjoly's judgment that the O-rings would fail at thirty-six degrees Fahrenheit was based on incomplete data extrapolated through experience. The institution asked for quantitative proof. The absence of proof was interpreted as absence of risk. The launch proceeded. Seven people died. The engineer's judgment had been correct. The institutional structure had not known how to weigh it.
Petroski argued that engineering judgment is the form of intelligence that operates in the domain where calculation cannot: the domain of the unanticipated, unspecified, and untested. Every catastrophic engineering failure in the historical record occurred in this domain. The Tay Bridge's unanticipated wind loads, the Tacoma Narrows's unspecified aerodynamic forces, the Silver Bridge's untested corrosion geometry, the Challenger's untested O-ring temperature — in each case, the calculations were correct within their specified scope. In each case, the scope was insufficient. And in each case, the insufficiency was detected, or could have been detected, by engineering judgment, if that judgment had been sought, trusted, and given institutional weight.
The judgment is cultivated through specific practices. It requires exposure to failure cases, detailed and repeated. It requires designing by hand — at least some of the time — so the engineer develops the feel for forces, materials, and uncertainties that AI output conceals. It requires mentorship from experienced engineers who can transmit the tacit knowledge that no textbook conveys. And it requires time: the judgment Boisjoly brought to the Challenger teleconference was not acquired quickly but deposited over decades of encounters with materials behaving differently at the edges of their specified ranges than in the middle.
AI possesses engineering calculation. Its ability to apply formulas, evaluate constraints, and optimize configurations exceeds any human engineer's capacity. But calculation is the map, and the territory is the world of real materials, real construction, real weather, and real use. The map is valuable but always a simplification. Engineering judgment is the capacity to recognize what the map has omitted — to sense, before the calculation confirms it, that the territory contains a feature the map does not show. AI does not possess this capacity because the AI has no access to the territory beyond what the map represents. It can process data about the territory; it cannot encounter the territory directly, and the direct encounter is the mechanism through which judgment is built.
The AI era creates a specific pressure on engineering judgment: not through the AI's deficiencies but through its strengths. The AI's calculations are so comprehensive, so rigorous, that the engineer who receives them may feel there is nothing for judgment to add. The analysis has been performed at a level of detail no human could match. But the analysis operates within a scope that may be incomplete, and the question judgment asks — whether the scope is sufficient — cannot be answered by more analysis. It can only be answered by the engineer who brings to the review the accumulated experience of working in the territory the map represents, the felt knowledge of what the territory contains that the map does not show, and the willingness to trust that felt knowledge even when the map shows nothing wrong.
The concept of engineering judgment is older than Petroski, with formal articulation in the work of Eugene Ferguson, Walter Vincenti, and earlier philosophers of engineering. What Petroski contributed was the detailed case-by-case documentation of judgment operating, or failing to operate, in high-stakes situations — Boisjoly at the Challenger, LeMessurier at the Citicorp Center, the wind-load skepticism absent at the Tay Bridge — and the articulation of how judgment is cultivated and how it is eroded. His framework made explicit what engineering educators had long known implicitly: that codes and calculations alone do not produce safe engineering, and that the difference between codes-plus-calculation and safe engineering is judgment.
Judgment is cultivated, not innate. It is not a personality trait. It is a capacity developed through decades of practice, refined by the study of failures, and maintained through continuous engagement with conditions under which designs succeed and fail.
Judgment operates where calculation cannot. In the domain of the unspecified, untested, and unanticipated — where every catastrophic failure has occurred — judgment is the only form of intelligence that can detect the inadequacy of the specification itself.
AI provides calculation, not judgment. The tool's output is the map. The judgment that recognizes what the map omits requires direct encounter with the territory, which the tool does not have.
The AI era threatens the developmental conditions for judgment. When AI performs the analysis, the engineer does not perform the analysis, and the analysis is the process through which judgment is developed. Review of an output is not the same as generation of an output, because generation is where the judgment is built.
The strongest counterargument is that engineering judgment, whatever its historical importance, is being increasingly captured in expert systems, codified standards, and AI training data — that what once required individual judgment can now be systematically encoded. Petroski's response was that encoding captures the outputs of past judgment (the lessons of specific failures) but not the process that would produce judgment about future unknowns. The encoded standards protect against known failure modes. The next catastrophe will involve an unknown mode, and detecting its approach requires the kind of calibrated suspicion that only direct experience of engineering's failures can produce. Whether this suspicion can be cultivated in an age when AI increasingly mediates the experience is the unresolved question of the transition.