To Engineer Is Human: The Role of Failure in Successful Design, published in 1985, was Petroski's first widely read book and established the conceptual framework that would organize his subsequent career. The book's central argument is that engineering progress is driven not by the accumulation of successes but by the careful study of failures. Each catastrophic collapse — the Tacoma Narrows Bridge, the Hyatt Regency walkways, the DC-10 cargo door failures — deposits knowledge in the profession that no sequence of successful designs could produce. The book introduced the framing of design as hypothesis, the concept of the factor of safety as epistemological commitment, and the warning that computer-aided design could erode the judgment it was supposed to support. Its title phrase — to engineer is human — captured the book's moral argument: engineering is a human activity because its consequences are human, and the machines that assist engineers do not change the fundamental character of the enterprise, which is the exercise of human judgment in service of human safety.
There is a parallel reading of Petroski's framework that begins from the material conditions under which failure-driven learning operates. The Tacoma Narrows Bridge collapsed in 1940, killing no one because the bridge was closed in time. The Hyatt Regency walkway collapse killed 114 people in 1981, and the lessons were absorbed into building codes over the subsequent decade. This learning regime — catastrophic failure followed by careful study followed by regulatory incorporation — presumes a slow-moving technological substrate and a relatively stable distribution of consequences. It presumes you have time to learn, and that the people who bear the cost of failure are not systematically different from the people who benefit from the subsequent knowledge.
AI systems deployed at scale do not fail this way. They fail continuously, across populations, in ways that are difficult to attribute and often invisible to the operators. A recommendation system that systematically suppresses certain viewpoints does not collapse; it operates successfully by its own metrics while producing epistemic failure at societal scale. The people who experience the failure are not the engineers who will study it, and the failure may not be legible as failure until years after deployment, when the regulatory window has closed. Petroski's framework depends on failures that announce themselves, that can be studied in retrospect, and that produce knowledge the profession can institutionalize before the next iteration deploys. The AI deployment cycle operates at a speed and scale that makes this learning loop structurally unavailable.
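A toy simulation makes the shape of this failure mode concrete. Everything in the sketch is invented for illustration (the viewpoints, the click rates, the greedy policy); it is not a description of any real system. The recommender's own success metric, click-through rate, stays healthy throughout, while the diversity of what it exposes collapses, and the collapse is visible only in a quantity the system neither optimizes nor, in many deployments, records.

```python
import random
from collections import Counter

VIEWPOINTS = ["a", "b", "c", "d"]
CLICK_RATE = {"a": 0.30, "b": 0.25, "c": 0.25, "d": 0.25}  # "a" clicks slightly better

scores = {v: 1.0 for v in VIEWPOINTS}  # learned preference weights
shown = Counter()
clicks = 0

random.seed(0)
N = 50_000
for _ in range(N):
    choice = max(scores, key=scores.get)   # greedy: recommend the top-scoring viewpoint
    shown[choice] += 1
    if random.random() < CLICK_RATE[choice]:
        clicks += 1
        scores[choice] += 0.01             # success reinforces the same choice

print(f"engagement metric (CTR): {clicks / N:.3f}")    # looks healthy
print({v: round(shown[v] / N, 3) for v in VIEWPOINTS}) # exposure has collapsed
```

Run it and the click-through rate settles near 0.30, a perfectly respectable number, while exposure concentrates entirely on one viewpoint. By the only metric the system watches, it is succeeding.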
The book's genesis was Petroski's frustration with popular accounts of engineering that emphasized triumphal achievement while obscuring the continuous role of failure in producing those achievements. A bridge that stands is easy to celebrate; a bridge that falls is easy to mourn; what is hard is to understand how the bridge that stands depends, through the accumulation of lessons from bridges that fell, on its predecessors' failures. Petroski wrote the book to make this dependency visible to readers outside the engineering profession, using detailed case histories to illustrate how specific failures produced specific revisions in codes, standards, and practice.
The book's influence in engineering education was substantial. It became a standard assigned text in introductory engineering courses across American universities, and its framework — that engineers should study failure as systematically as they study success — became increasingly embedded in professional practice. Subsequent commissions investigating major engineering failures have repeatedly cited the book's principles, and the American Society of Civil Engineers has incorporated elements of its framework into its ethics and educational recommendations.
The book's particular prescience for the AI era lies in a passage Petroski wrote about computer-aided design in the mid-1980s, when finite-element analysis software was beginning to transform engineering practice. He warned that "what is commonly overlooked in using the computer is the fact that the central goal of design is still to obviate failure, and thus it is critical to identify exactly how a structure may fail. The computer cannot do this by itself." The passage was written about calculation tools that were dramatically less capable than contemporary AI. Its warning applies with greater force to systems that generate complete designs from natural-language descriptions, because the gap between the tool's output and the engineer's judgment has widened in ways Petroski could not have anticipated but his framework predicted.
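A small sketch of the warning under illustrative assumptions (a textbook simply supported beam with invented dimensions, not a real bridge analysis): the static check below is arithmetically flawless and passes comfortably, and it is silent about the dynamic failure mode, aeroelastic flutter, that actually destroyed the Tacoma Narrows Bridge, because that mode was never put into the model.

```python
def max_bending_stress(udl_n_per_m: float, span_m: float,
                       section_modulus_m3: float) -> float:
    """Peak bending stress in a simply supported beam under uniform load."""
    max_moment = udl_n_per_m * span_m**2 / 8   # M = wL^2 / 8
    return max_moment / section_modulus_m3     # sigma = M / S

YIELD_STRESS_PA = 250e6  # nominal structural steel

stress = max_bending_stress(udl_n_per_m=50_000, span_m=30.0,
                            section_modulus_m3=0.05)
print(f"static stress: {stress / 1e6:.0f} MPa "
      f"({stress / YIELD_STRESS_PA:.0%} of yield)")  # passes with margin
# The program checks only the questions the engineer thought to ask.
# Flutter, resonance, fatigue, and buckling are absent not because they
# are safe but because they are not in the model.
```

The check reports roughly 45% of yield, a comfortable pass, which is exactly Petroski's point: identifying how the structure may fail remains the engineer's task, not the calculation's.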
The book's title has been widely quoted, sometimes in ways that obscure its meaning. Petroski intended the phrase to work on two levels. First, engineering is human because engineers are human and subject to error; to engineer is to participate in an activity that will, inevitably, produce failures. Second, engineering is human because its subject — the structures on which lives depend — is fundamentally about the people those structures serve. The two meanings are connected: because engineers are human and fallible, they must build in margins for their own fallibility, and these margins are the engineering profession's promise to the humans who will depend on what engineers build.
To Engineer Is Human was published in 1985 by St. Martin's Press. Petroski's academic position at Duke University, a joint appointment between civil engineering and history, gave him the institutional space to write for audiences outside the technical profession. The book's success established him as a public voice on engineering topics and opened the way to a prolific subsequent output, including The Pencil (1990), The Evolution of Useful Things (1992), and Design Paradigms (1994).
Failure is the primary teacher. Engineering progresses through the careful study of what has gone wrong, because failure reveals the limits of existing understanding in ways that success conceals.
Design is hypothesis. Every engineered structure is a prediction about how specific configurations will behave under specific conditions. The standing structure is not validated — it is only unrefuted, awaiting the conditions that may eventually test the prediction.
The factor of safety is epistemological. It is not a technical parameter compensating for imprecise calculation but the profession's institutionalized acknowledgment that its models are incomplete. The factor protects against conditions the engineer has not specified, because the engineer knows she cannot specify them all; the sketch following these principles puts rough numbers on this idea and the previous one.
Computers do not replace judgment. Petroski's 1985 warning — that computer-aided design could erode the judgment it was supposed to support — anticipated a dynamic that the AI era has intensified. The tool calculates. The judgment about whether the calculation is asking the right question remains human.
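A minimal numerical sketch, with invented loads and distributions, of the second and third principles taken together: the design specifies one load condition, service supplies a heavy tail of conditions the specification never mentions, and the factor of safety is what stands between the two.

```python
import random

random.seed(1)

DESIGN_LOAD = 100.0  # the condition the designer specified (arbitrary units)

def service_load() -> float:
    """Loads actually seen in service: mostly near the design assumption,
    plus a rare heavy tail of conditions the model never specified."""
    base = random.gauss(90.0, 15.0)
    surprise = random.expovariate(1 / 20.0) if random.random() < 0.02 else 0.0
    return base + surprise

for fos in (1.0, 1.5, 2.0):
    capacity = DESIGN_LOAD * fos
    failures = sum(service_load() > capacity for _ in range(100_000))
    print(f"factor of safety {fos:.1f}: {failures} failures per 100,000 events")
```

The margin buys orders of magnitude of protection against the unspecified tail, but even at a factor of 2.0 the count is small rather than provably zero. A quiet run refutes nothing, which is the numerical sense in which the standing structure is unrefuted rather than validated.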
The book has been criticized, particularly by engineers working in high-reliability fields, as overweighting the role of failure and underweighting the role of successful preventive practice. Proponents of the book's framework argue that preventive practice itself is the product of learning from failure — that the codes, standards, and protocols that prevent catastrophes today are the encoded lessons of catastrophes past. The debate maps onto current discussions of AI safety: whether safety is achieved primarily through preventive specification (the AI optimization view) or through continuous attention to the ways specifications may be insufficient (the Petroski view). The book's framework is not that one view is right and the other wrong, but that the second is the more neglected and, under conditions of rapid technological change, the more urgent.
On the question of whether failure teaches, Petroski's framework is entirely correct (100%). Every mature engineering discipline encodes its history of catastrophic events in its standards — the codes are written in blood, as the saying goes. The question is not whether failure teaches but under what conditions the teaching can be institutionalized before the next deployment. Here the weighting shifts dramatically based on failure legibility. For bridges and buildings — structures with clear load paths, observable deformations, and catastrophic endpoints — Petroski's learning loop operates effectively (80% of the time, regulatory capture and economic pressure notwithstanding). For distributed software systems whose failures are epistemic rather than mechanical, the learning loop is structurally compromised (perhaps 30% effective at best).
The contrarian view is correct (70%) about the speed mismatch between AI deployment and regulatory learning, but this is a problem Petroski's framework anticipates rather than contradicts. His warning about computer-aided design eroding judgment was precisely about this: tools that increase the speed of iteration while decreasing the visibility of assumptions. The synthesis is that failure-driven learning requires institutional structures that match the speed and legibility of the technology. Bridge engineering could learn through post-collapse investigation because bridges fail observably and rarely. AI systems require continuous monitoring, real-time feedback, and distributed responsibility, mechanisms that do not yet exist at the scale at which deployment operates.
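One hedged sketch of what the simplest such mechanism could look like. The class, window size, and thresholds below are invented for illustration, not a description of any existing tool: a rolling window of an outcome metric is compared against its deployment-time baseline, raising the alert that, for a bridge, the collapse itself would have raised.

```python
import random
from collections import deque

class DriftMonitor:
    """Compare a live outcome metric against its deployment-time baseline."""

    def __init__(self, baseline: float, window: int = 1000,
                 tolerance: float = 0.10):
        self.baseline = baseline            # metric value certified at launch
        self.recent = deque(maxlen=window)  # sliding window of observations
        self.tolerance = tolerance          # allowed relative drift

    def observe(self, value: float) -> bool:
        """Record one observation; return True once the window has drifted."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False                    # still warming up
        mean = sum(self.recent) / len(self.recent)
        return abs(mean - self.baseline) / abs(self.baseline) > self.tolerance

# Simulated feed: a metric that degrades too slowly for any single
# observation to look anomalous.
random.seed(2)
monitor = DriftMonitor(baseline=0.50)
for step in range(50_000):
    value = random.gauss(0.50 + 0.000005 * step, 0.05)
    if monitor.observe(value):
        print(f"drift alert at step {step}")  # the analogue of the collapse report
        break
```

The point is not the few lines of arithmetic but the institutional commitment they stand in for: someone must own the baseline, watch the alert, and have the authority to act on it, which is what post-collapse investigation supplied for bridges.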
The deeper frame is that Petroski's "to engineer is human" works at two speeds. The profession learns slowly through catastrophe and regulation. The individual engineer must exercise judgment quickly at the moment of design. AI compresses the second timeline while exploding the first, creating a gap that neither view alone addresses.