Henry Petroski — On AI
Contents
Cover
Foreword
About
Chapter 1: The Paradox of the Pencil
Chapter 2: When Bridges Fall
Chapter 3: The Factor of Safety
Chapter 4: The Evolution of Useful Things
Chapter 5: Design as Hypothesis
Chapter 6: Small Failures and the Immune System
Chapter 7: The Complacency Cycle
Chapter 8: The Unbuilt Bridge
Chapter 9: The Engineer's Judgment
Chapter 10: Engineering as Stewardship
Epilogue
Back Cover
Cover

Henry Petroski

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Henry Petroski. It is an attempt by Opus 4.6 to simulate Henry Petroski's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The bug that didn't happen is what got me.

I was watching one of my engineers in Trivandrum build a complete authentication system with Claude Code. Clean. Fast. Working on the first pass. No errors, no unexpected behavior, no moment where the system pushed back and forced her to understand why her assumptions were wrong. She shipped it in forty minutes. In the old world, that same system would have taken her two days, and she would have hit a wall at least three times — a dependency conflict, a race condition, a permissions error that made no sense until she traced it back to something she'd misunderstood about how the underlying framework handled sessions.

Those walls taught her things. Not just about the framework. About the invisible architecture beneath everything she built. Each failure deposited a thin layer of understanding that compounded over years into the kind of judgment you cannot teach and cannot shortcut. The judgment that tells you something is wrong before the logs confirm it.

Henry Petroski spent his entire career studying that kind of judgment — the kind that only forms through direct encounter with failure. He studied pencils, bridges, paper clips, and catastrophic collapses, and in every case he found the same principle: the intelligence embedded in a well-designed object is not the product of inspiration. It is the product of every previous version that broke, and the specific understanding that each break deposited in the people who diagnosed it.

Petroski called this "form follows failure." The fork beside your plate has four tines because every previous number of tines failed in some specific way. The bridge you drive across carries a factor of safety — deliberate, designed excess — because the engineer who designed it understood that her model of the world was incomplete, and she built in margin for what she did not know.

AI removes the failures that teach. The code compiles clean. The design satisfies every constraint. The output arrives polished and plausible. And the practitioner who receives it has not been changed by the struggle that would have made her capable of evaluating whether the output is truly sound or merely smooth.

This is not a technology book. It is a book about what happens when the cracks that warn you disappear — when the small failures that constitute an immune system are optimized away by a tool that doesn't know they were there for a reason.

Petroski saw further into the relationship between failure and understanding than anyone I have read. His lens belongs in this conversation. It might be the most important lens of all.

Edo Segal & Opus 4.6

About Henry Petroski

1942–2023

Henry Petroski (1942–2023) was an American civil engineer, professor, and historian of engineering and design. Born in Brooklyn, New York, he spent the majority of his academic career at Duke University, where he held a joint appointment in the Department of Civil and Environmental Engineering and the Department of History. Across sixteen books and hundreds of papers, Petroski developed a body of work that examined engineering not through its triumphs but through its failures, arguing that the designed world evolves primarily through the identification and correction of what does not work. His foundational text *To Engineer Is Human: The Role of Failure in Successful Design* (1985) established the principle that catastrophic failures are the primary teachers of the engineering profession. *The Pencil: A History of Design and Circumstance* (1990) demonstrated that even the most ordinary objects embody centuries of iterative refinement driven by failure. *The Evolution of Useful Things* (1992) and *Design Paradigms: Case Histories of Error and Judgment in Engineering* (1994) extended these arguments across the full range of designed artifacts and structural engineering. Petroski was elected a Distinguished Member of the American Society of Civil Engineers and a member of the American Academy of Arts and Sciences. His work established a field at the intersection of engineering, history, and philosophy of design, and his central insight — that success conceals the assumptions failure reveals — has become increasingly urgent in an era of AI-generated design where the encounter with failure is precisely what the tools eliminate.

Chapter 1: The Paradox of the Pencil

A pencil is not a simple object.

This was Henry Petroski's most characteristic observation, and his most subversive. In *The Pencil: A History of Design and Circumstance*, published in 1990, he devoted four hundred pages to an artifact most people use without thinking, lose without noticing, and replace without considering what they have replaced. The pencil sits on the desk like a fact of nature — self-evident, unremarkable, as though it had always existed in its current form and could not have existed in any other.

Petroski saw through this illusion with the patience of an engineer who had spent decades studying how things fail. The pencil, he demonstrated, is the product of centuries of iterative correction. The graphite core smeared in early versions, depositing too much material unevenly across the page. The wooden casing split along the grain when humidity changed, or when a child pressed too hard, or when the manufacturer selected a wood species whose cellular structure could not accommodate the stresses of daily use. The ferrule — the metal band that connects the eraser to the barrel — loosened over time, rotated under pressure, and occasionally detached entirely, leaving the user holding two separate objects where one had been promised. The eraser crumbled, or hardened, or smeared the graphite it was meant to remove.

Each of these failures was specific. Each was diagnosed. Each diagnosis produced a modification. The modification was tested — not in a laboratory, but in the hands of millions of users, over decades, in classrooms and offices and construction sites, in humid climates and dry ones, at altitudes where the air pressure was different and in temperatures where the wood contracted or expanded in ways the manufacturer had not anticipated. The modifications that survived this testing were retained. Those that introduced new problems were discarded or further modified. Over centuries, the cumulative effect of this process produced an object of extraordinary fitness for purpose: cheap enough to be disposable, reliable enough to be trusted, ergonomic enough to be comfortable in the hand for hours of continuous use, and precise enough to produce marks that are both legible and erasable.

The pencil is not simple. It is the resolution of complexity so thorough that it appears simple. Petroski understood that this distinction — between actual simplicity and the appearance of simplicity produced by the exhaustive resolution of difficulty — is one of the most important distinctions in all of engineering, and one of the most consistently overlooked.

The resolution happened through failure. Not through inspiration, not through genius, not through the sudden insight of a single brilliant designer. Through the slow, patient, often tedious accumulation of knowledge about what does not work. The pencil's form was not designed. It was discovered, through the elimination of every form that failed.

Petroski called this principle "form follows failure," inverting the famous architectural dictum that form follows function. The function of the pencil — to make marks on paper — has remained constant for five hundred years. The form has changed continuously, because each form failed in some specific way that the next form attempted to correct. Function is static. Failure is the dynamic force that shapes the object over time. The pencil you hold today is not the embodiment of its function. It is the embodiment of every previous pencil's failure.

This framework — patient, empirical, grounded in the close observation of specific objects and their histories — is where Petroski's thinking becomes most relevant to the current moment in artificial intelligence, and most uncomfortable for those who have embraced AI's generative capabilities without fully reckoning with what those capabilities leave out.

Consider the AI prompt. In the discourse surrounding large language models and AI-augmented creation, the prompt has acquired an almost mystical significance. "Prompt engineering" is taught in courses, discussed in conferences, and treated as a skill whose mastery is the key to extracting maximum value from the machine. The prompt is, in this framing, the new creative act — the thing the human contributes to the collaboration, the seed from which the machine's output grows.

Petroski's framework reveals the prompt as something quite different. The prompt is not analogous to the pencil. It is analogous to wanting a pencil — to the expression of a need without the understanding of what satisfies it. The pencil maker must understand wood: its grain, its moisture content, its workability, its cost, its availability in the quantities required for industrial production. The pencil maker must understand graphite: its hardness, its friability, its behavior when mixed with clay in varying proportions, the relationship between the mixture and the darkness of the mark it produces. The pencil maker must understand lacquer, ferrule metal, eraser compound, and the manufacturing processes that bring these materials together into a unified object at a price the market will bear.

The prompter must understand none of this. The prompter must understand only what she wants.

This is not a trivial distinction. It is the distinction between the engineer and the consumer, and the AI tool, by collapsing the gap between wanting and having, threatens to produce a culture in which the consumer's relationship to the artifact — I want this; now I have it — is mistaken for the engineer's relationship to the artifact — I understand why this works, what forces it must resist, what conditions might cause it to fail, and what modifications would improve it.

Petroski's pencil took centuries to reach its current form because the knowledge required to produce it was earned through direct encounter with failure. The wood split; the manufacturer learned about grain orientation. The graphite smeared; the chemist learned about clay ratios. The ferrule loosened; the metallurgist learned about thermal expansion coefficients. Each lesson was deposited through experience, the way geological strata are deposited through sedimentation — slowly, under pressure, one thin layer at a time.

An AI system trained on the accumulated data of pencil manufacturing possesses, in a certain sense, the endpoint of this process. It can generate specifications for a pencil that incorporates every lesson the industry has learned. The graphite-to-clay ratio will be correct. The wood species will be appropriate. The ferrule dimensions will account for thermal expansion. The output will be, by any measurable standard, a competent pencil design.

What the output will not contain is the understanding of why each specification is what it is. The AI does not know that the graphite ratio is set where it is because lower ratios produced marks that were too faint for classroom use in the 1870s, or that the wood species was selected after a specific competitor's product split catastrophically during a Chicago winter in 1923, or that the ferrule dimensions were revised after a patent dispute in which the critical evidence was a box of pencils whose erasers had detached in a New Orleans warehouse during a particularly humid August.

These stories are not trivia. They are the engineering intelligence embedded in the specifications. The specification without the story is a recipe without the cook's understanding of why each ingredient is there, what happens when it is omitted, and what the dish tastes like when it goes wrong. The recipe produces the dish. The understanding produces the cook — the person who can adapt when the ingredients change, when the oven runs hot, when the altitude is different, when the situation departs from the conditions the recipe assumed.

Petroski warned about precisely this dynamic in 1985, in his foundational work *To Engineer Is Human*. Writing about the computers of his era — primitive by current standards, but already beginning to transform engineering practice — he observed that "what is commonly overlooked in using the computer is the fact that the central goal of design is still to obviate failure, and thus it is critical to identify exactly how a structure may fail. The computer cannot do this by itself." The passage was written about finite-element analysis software and early computer-aided design. It applies with significantly greater force to systems that generate complete designs from natural-language descriptions.

The computer of 1985 was a calculation tool. The engineer supplied the design; the computer checked the mathematics. The relationship was clear: the human thought; the machine computed. The gap between the engineer's understanding and the machine's capability was visible, which meant the engineer knew where her judgment was required and where the machine's output could be trusted.

The AI of 2025 is a generation tool. The engineer supplies the intent; the machine produces the design. The relationship is fundamentally different, because the gap between the human's contribution and the machine's output is no longer visible. The engineer who receives a complete structural design from an AI system cannot easily determine which elements of the design reflect genuine engineering intelligence and which reflect pattern-matching against training data that happened to produce a plausible result. The design looks like engineering. It may even be engineering, in the sense that it satisfies all applicable codes and standards. But the engineer who receives it has not performed the cognitive work of engineering — the identification of failure modes, the selection among alternatives based on judgment about which failure modes are most dangerous, the iterative refinement that deposits understanding in the practitioner with each cycle.

The pencil is Petroski's proof that apparent simplicity is the most sophisticated achievement of the engineering process. The object's unremarkability is not a sign that it required little intelligence to produce. It is a sign that so much intelligence has been invested that the investment has become invisible. The pencil looks simple because every difficulty has been resolved. The resolution is the intelligence. The difficulty is the teacher.

When AI generates a pencil design — or a structural plan, or a circuit layout, or a software architecture — it produces an artifact that incorporates the resolutions without the difficulties that produced them. The output is a pencil that has never been broken. A bridge that has never swayed. A building that has never leaked. The designs are correct. They are also, in a specific and consequential sense, empty of the understanding that made them correct.

Petroski would have recognized this as the oldest trap in engineering: the confusion of a good result with a good process. A bridge that stands is not proof that the engineer understood the forces acting on it. It may simply mean the engineer was lucky — that the conditions the design encountered happened to fall within the range the design could accommodate, and that the conditions it could not accommodate have not yet arrived.

The pencil in your hand is not lucky. It is the product of five hundred years of engineers who were not lucky — whose products failed, whose failures were studied, and whose studies produced modifications that were tested against reality with the ruthlessness that only daily use by millions of people can provide.

The AI-generated pencil design may be indistinguishable from the product of that five-hundred-year process. It may even be superior in certain measurable dimensions. What it will not possess is the resilience that comes from having been tested by failure — the capacity to accommodate the unanticipated, to perform under conditions the model did not include, to fail gracefully rather than catastrophically when the world presents a situation that lies outside the data.

The paradox of the pencil is the paradox of the AI age in miniature. The tool has become so good at producing the appearance of resolved complexity that the resolution itself — the centuries-long process of earning the knowledge through failure — is no longer visible. And when the resolution is no longer visible, the temptation to believe it is no longer necessary becomes almost irresistible.

Petroski spent his career resisting that temptation. His insistence that the pencil is not simple, that the apparently trivial embodies the genuinely profound, that the resolution of difficulty is the highest achievement of the engineering mind — these were not academic observations. They were warnings. Warnings that the moment an engineer stops seeing the difficulty that a good design has resolved, that engineer has lost access to the intelligence that made the design good.

The pencil is the test case. If Petroski's framework holds for an object as modest and as thoroughly resolved as a pencil, it holds with considerably greater force for the bridges, buildings, and systems on which human lives depend. The chapters that follow will examine those cases. But the pencil establishes the principle: the intelligence is in the resolution of difficulty, and any tool that delivers the resolution without the difficulty has delivered the answer without the understanding.

The question that drives the rest of this investigation is whether that understanding can be rebuilt at a higher level — whether the engineer freed from the difficulty of calculation can invest that freed attention in the difficulty of judgment — or whether the difficulty, once removed, takes with it the very mechanism through which judgment develops.

Petroski's career suggests the answer is not predetermined. It depends on whether the engineers who use these tools understand what the tools have replaced, and whether they choose to study what the tools cannot teach.

The pencil does not care whether you understand it. It works regardless. But the bridge does not offer that indifference. The bridge requires understanding, because the bridge will encounter conditions the pencil never will, and when it does, the only thing standing between the structure and catastrophe is the judgment of the person who designed it.

That judgment cannot be prompted into existence. It must be earned. And earning it requires the encounter with failure that the pencil's five-hundred-year history so thoroughly, and so invisibly, represents.

Chapter 2: When Bridges Fall

On the evening of December 28, 1879, a passenger train carrying seventy-five people entered the High Girders section of the Tay Bridge in Dundee, Scotland. The bridge had been open for nineteen months. Its designer, Sir Thomas Bouch, had received a knighthood for its completion. It was the longest bridge in the world. The wind that night was blowing at speeds that subsequent analysis would estimate between sixty and eighty miles per hour.

The High Girders — the central navigation spans, enclosed in lattice trusses through which the train passed as though through a tunnel — separated from their piers and fell into the Firth of Tay, carrying the train and all seventy-five passengers to their deaths. No one survived.

The Court of Inquiry that followed identified multiple failures. The cast-iron lugs connecting the bracing bars to the columns were poorly designed and poorly cast, with blowholes concealed by a filler known as Beaumont's Egg — a mixture of beeswax, fiddler's rosin, and iron filings that hid the defects from visual inspection. The wind loading assumptions were inadequate. Bouch had allowed for wind pressures of ten pounds per square foot in his calculations. The actual wind pressure on the night of the disaster was several times that figure.

But the deepest failure, the one Petroski returned to throughout his career, was not technical. It was epistemological. The bridge failed because its designers believed they understood wind in a way they did not. The success of previous bridges — bridges that had stood in conditions that happened to fall below the threshold of their unexamined assumptions about wind — had created the impression that wind was a solved problem. It was not solved. It had merely not yet been tested.

Twenty-eight years later, on August 29, 1907, the south cantilever arm of the Quebec Bridge over the St. Lawrence River collapsed during construction, killing seventy-five workers. The bridge was designed to be the longest cantilever span in the world, surpassing Scotland's Forth Bridge by a significant margin. Its chief engineer, Theodore Cooper, had modified the original design to extend the span from 1,600 feet to 1,800 feet without proportionally increasing the structural members. The compression chords in the anchor arm buckled under a load they were never designed to carry.

Cooper was not incompetent. He was one of the most respected bridge engineers in North America. His failure was the failure of confidence — specifically, the confidence that the principles governing shorter cantilever bridges would scale linearly to longer ones. They did not. The forces that were manageable at 1,600 feet became catastrophic at 1,800 feet, because the relationship between span length and compressive force is not linear. Cooper knew this in theory. He did not feel it in practice, because his practical experience was with shorter spans where the nonlinearity had not yet manifested.

Thirty-three years after that, on November 7, 1940, the Tacoma Narrows Bridge in Washington State twisted itself apart in a forty-two-mile-per-hour wind — a wind speed that the bridge should have been able to withstand comfortably, by any calculation available at the time. The bridge did not fail because the wind was too strong. It failed because the wind set up self-excited aeroelastic oscillations in the deck, a torsional flutter the design had not anticipated, and those oscillations grew until the structure tore itself to pieces. Film footage of the collapse, with the bridge deck undulating like a ribbon in a breeze, became one of the most widely viewed engineering failures in history.

The designer, Leon Moisseiff, was among the most accomplished bridge engineers alive. He had contributed to the design of the Manhattan Bridge and served as a consultant on the Golden Gate Bridge. His design for the Tacoma Narrows reflected the state of the art in suspension bridge theory, a theory that had been developed through the successful construction of dozens of suspension bridges over the preceding century. Each successful bridge had confirmed the theory. Each confirmation had increased confidence. Each increase in confidence had encouraged slightly more aggressive designs — longer spans, thinner decks, less material. The Tacoma Narrows was the logical conclusion of a century of success, and it was destroyed by a phenomenon the theory did not include.

Petroski studied these three collapses — Tay, Quebec, Tacoma Narrows — not as isolated disasters but as manifestations of a single recurring pattern. The pattern is this: success produces confidence; confidence produces ambition; ambition reduces margins; reduced margins expose assumptions that success had concealed; the exposure of those assumptions produces catastrophe. The cycle is not accidental. It is structural. It is built into the relationship between engineering knowledge and engineering practice, because engineering knowledge is always incomplete, and success — paradoxically — is the force that conceals the incompleteness.

A standing bridge does not tell the engineer what she does not know. It tells her only that what she does know has been sufficient so far. The standing bridge is a passed test, but a passed test does not reveal which questions were not asked. Only a failed bridge reveals the questions that should have been asked and were not — the wind loads that were underestimated, the compressive forces that were not properly scaled, the aerodynamic phenomena that the theory did not include.

Each of these three catastrophes deposited a layer of knowledge in the profession. After the Tay Bridge, engineers learned to take wind loading seriously, to test their assumptions against actual wind data rather than relying on conservative estimates that turned out to be insufficiently conservative. After the Quebec Bridge, engineers learned that the scaling of structural members is nonlinear, that forces which are manageable at one scale become dominant at another, and that the extension of a proven design beyond its validated range is not extrapolation but speculation. After the Tacoma Narrows, engineers learned that aerodynamic behavior — the interaction between wind and structure — is a design consideration as fundamental as static loading, and that a bridge must be designed not only to resist the forces imposed on it but to avoid exciting forces within itself.

These lessons are now embedded in the codes and standards that govern bridge design worldwide. An AI system trained on modern engineering data incorporates them. A design generated by such a system will include appropriate wind-load coefficients, will account for nonlinear scaling of compressive forces, and will consider aerodynamic stability. The AI's output will reflect the lessons of the Tay Bridge, the Quebec Bridge, and the Tacoma Narrows without the AI — or its user — necessarily knowing anything about those disasters.

This is where Petroski's framework becomes most urgent for the current moment. The codes contain the lessons. The AI complies with the codes. The designs are, in a meaningful sense, safe — as safe as the codes can make them, which is to say as safe as the accumulated experience of past failures has determined they should be.

But the codes are not complete. They cannot be complete, because they are the codification of known failure modes, and the defining characteristic of the next catastrophe is that it will involve a failure mode the codes do not yet address. Every catastrophic bridge failure in history involved a failure mode that was, at the time of the failure, unknown. The Tay Bridge engineers did not know about high wind loading because no bridge had yet failed from high wind loading. The Quebec Bridge engineers did not properly understand nonlinear scaling because no cantilever of that length had yet been attempted. The Tacoma Narrows engineers did not know about aerodynamic resonance because the phenomenon had not yet been observed in a bridge.

In each case, the failure revealed a gap in the profession's knowledge — a gap that had been concealed by the success of previous designs that happened to fall within the range of conditions the existing knowledge could accommodate. The code was adequate until it was not, and the transition from adequate to inadequate was marked by the deaths of the people who were on or under the structure when the unknown failure mode manifested.

An AI system cannot anticipate failure modes that are absent from its training data. It can identify patterns in existing data with extraordinary sophistication. It can optimize designs within the parameter space defined by known constraints. It can generate more variations, test more configurations, and evaluate more conditions than any human engineer could accomplish in a lifetime of manual calculation. What it cannot do is ask the question that every catastrophic failure in engineering history has retrospectively shown to be the question that should have been asked: What are we assuming that we have not tested?

This is not a limitation of the current generation of AI systems that will be overcome by the next generation. It is a structural feature of any system that learns from data. Data is a record of what has happened. The catastrophic failure is, by definition, what has not yet happened. The gap between the data and the disaster is the gap that engineering judgment must bridge, and engineering judgment is developed not by processing data but by studying the specific, often terrible, cases in which the data proved insufficient.

Petroski argued throughout his career that the study of failure is not a marginal activity in engineering education and practice — something to be covered in a single course or reserved for specialists. It is the central activity. The engineer who understands why the Tacoma Narrows Bridge failed understands something about the relationship between structure and environment that no amount of successful bridge data can convey. The failure reveals the boundary. The success conceals it.

The parallel to the broader discourse around AI and human capability is precise. When AI-augmented engineering produces a stream of successful designs, each success reinforces the confidence that the system is reliable. The pattern of success accumulates. The engineer's direct engagement with the conditions that produce failure diminishes, because the tool is handling the design work that would previously have forced the engineer to confront the edge cases, the boundary conditions, the situations where the standard approach begins to break down.

This disengagement has a specific consequence that Petroski's historical analysis makes visible. The engineer who has not studied the Tay Bridge does not merely lack a piece of historical trivia. She lacks the specific form of caution that the Tay Bridge teaches — the recognition that wind is not a static load but a dynamic, variable force whose behavior at the extremes cannot be predicted from its behavior in the middle range. This caution is not a formula. It is not a coefficient. It is a way of thinking about uncertainty that can only be developed through the detailed examination of what happened when someone else's confidence in their model exceeded the model's validity.

The AI provides the coefficient. It applies the formula. It generates the design that complies with the code. What it does not provide is the engineer's felt understanding of why the coefficient exists — the seventy-five people in the Firth of Tay, the seventy-five workers on the St. Lawrence, the film of the Tacoma Narrows deck twisting like paper in a storm. These are not sentimental details appended to the engineering record. They are the engineering record. They are the data that matters most, and they are the data that a system trained on successful outcomes systematically underweights.

Petroski would not have argued that AI should be excluded from engineering practice. He was not a Luddite; he used computers throughout his career and appreciated their power. His argument would have been more precise and more consequential: that the engineer who uses AI without supplementing it with the study of failure is the engineer who has access to the accumulated knowledge of the profession without access to the understanding that makes that knowledge meaningful.

The codes are the profession's memory. Failure is the profession's teacher. The memory without the teacher is a library without a librarian — a vast repository of information organized by a system that the user does not understand and therefore cannot navigate when the catalog fails, when the classification proves inadequate, when the book she needs is the one that has not yet been written.

The next bridge that falls will fall for a reason the codes did not anticipate. Whether the engineer who designed it will possess the understanding to diagnose the failure — to identify the gap between the model and the reality, to trace the chain of assumptions that led from confidence to catastrophe — depends on whether that engineer's education included not only the tool that generates designs but the history that explains why those designs are shaped as they are.

The three bridges in this chapter — Tay, Quebec, Tacoma Narrows — stand as monuments not to engineering failure but to engineering learning. Each catastrophe produced knowledge that made the profession more capable, more humble, and more aware of the limits of its models. The question Petroski's framework poses to the AI age is whether the profession can continue to learn at this rate when the mechanism through which it has historically learned — the direct, consequential encounter with failure — is being increasingly mediated by a tool that processes failures as data points rather than experiencing them as catastrophes.

The data point and the catastrophe contain the same information. They do not contain the same understanding. And in engineering, where the difference between a standing bridge and a falling one may be the engineer's capacity to sense that something is wrong before the instruments confirm it, understanding is not a luxury. It is the margin of safety that no code can mandate and no optimization can provide.

Chapter 3: The Factor of Safety

Every engineered structure in the world is overbuilt.

This is not an accident, not a sign of inefficiency, and not the result of engineers who cannot calculate precisely. It is a deliberate design choice, and it may be the single most important concept in the history of engineering — more important than any material innovation, any construction technique, any mathematical breakthrough.

The concept is the factor of safety. In its simplest formulation, a factor of safety of two means the structure is designed to carry twice the maximum load it is expected to encounter. A factor of three means three times the expected load. The factor varies by application, by material, by the consequences of failure. A commercial aircraft wing is designed to withstand one and a half times the maximum load its engineers anticipate, a relatively low factor that reflects the extraordinary precision of aerospace engineering and the rigorous testing regime that every aircraft component undergoes. A concrete dam is designed with a factor of four or more, reflecting the catastrophic consequences of failure and the variability of the materials involved.
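The arithmetic is simple enough to sketch in a few lines. The sketch below only illustrates the ratio itself; the members, their rated capacities, and their expected loads are hypothetical numbers chosen to echo the factors mentioned above, not values drawn from any code or standard.

```python
# Minimal sketch of the factor-of-safety arithmetic. All numbers are hypothetical.

def factor_of_safety(capacity: float, expected_load: float) -> float:
    """Ratio of what a member can carry to what it is expected to carry."""
    return capacity / expected_load

# A hypothetical dam section rated for 400 units against an expected load of 100:
print(factor_of_safety(capacity=400.0, expected_load=100.0))   # 4.0 -- wide margin
# A hypothetical wing component rated for 150 units against a design load of 100:
print(factor_of_safety(capacity=150.0, expected_load=100.0))   # 1.5 -- tight, heavily tested
```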

The factor of safety is not, as it might appear, a multiplier applied to compensate for bad math. The calculations themselves are as precise as the engineer can make them. The factor of safety exists because the engineer knows — and this knowledge is the deepest form of engineering wisdom — that the calculations are not sufficient. The model of the world that the calculations represent is approximate. The materials are variable. The construction process introduces tolerances that the design did not specify. The loads that the structure will actually encounter over its lifetime will include conditions that no model predicted, because the future is not a dataset and cannot be exhaustively sampled.

Petroski understood the factor of safety not as a technical parameter but as an epistemological stance. It is engineering's institutionalized acknowledgment of its own ignorance. A confession, built into every structure, that the engineer does not know everything — that the model is incomplete, that the assumptions are approximate, that the real world will, at some point, present a situation the calculations did not anticipate. The factor of safety is the margin within which this unanticipated situation can be absorbed without catastrophe.

This understanding has profound implications for the AI age, because AI optimization is structurally inclined to erode the factor of safety — not through malice, and not through error, but through the logic of optimization itself.

Optimization, in engineering, is the process of finding the configuration that minimizes some quantity — typically cost, weight, or material use — subject to constraints. A structural optimization algorithm seeks the design that uses the least material while still meeting the specified load requirements. Every gram removed is a gain. Every unnecessary millimeter of thickness is waste. The optimized design is the design from which nothing more can be taken away — the design that is exactly sufficient.

Exact sufficiency is the enemy of the factor of safety, because the factor of safety is, by definition, designed excess. It is the extra material, the additional thickness, the wider margin that the optimization algorithm reads as waste and seeks to eliminate. The algorithm is not wrong, in its own terms. The extra material does not serve the specified load requirements. It serves the unspecified ones — the loads that the requirements do not include because they have not yet been encountered.
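A toy calculation makes the tension concrete. The sketch below sizes a hypothetical axial member against a single specified load case; the "optimizer" thins the member until the stress exactly reaches the allowable value, and a load the specification did not include then exceeds it. The geometry, loads, and allowable stress are invented for illustration, not taken from any real design or standard.

```python
# Toy optimization of a hypothetical axial member: minimize material subject only
# to the specified load case. All values are invented for illustration.

SPECIFIED_LOAD_N = 100_000.0   # the only load case the specification includes
ALLOWABLE_STRESS_PA = 200e6    # allowable stress for the hypothetical material
WIDTH_M = 0.05                 # fixed width; only thickness is "optimized"

def stress(load_n: float, thickness_m: float) -> float:
    """Axial stress = load / cross-sectional area."""
    return load_n / (WIDTH_M * thickness_m)

# "Optimize": shave thickness until the specified load case is only just satisfied.
thickness = 0.05
while stress(SPECIFIED_LOAD_N, thickness - 0.0001) <= ALLOWABLE_STRESS_PA:
    thickness -= 0.0001

print(f"optimized thickness: {thickness * 1000:.1f} mm")
print(f"margin against the specified load: "
      f"{ALLOWABLE_STRESS_PA / stress(SPECIFIED_LOAD_N, thickness):.2f}x")

# A condition the specification omitted -- a load 30% higher -- now exceeds the
# allowable stress, because the margin that would have absorbed it was read as
# waste and removed.
unspecified_load = 1.3 * SPECIFIED_LOAD_N
print(f"stress under the unspecified load: "
      f"{stress(unspecified_load, thickness) / ALLOWABLE_STRESS_PA:.2f}x allowable")
```

The same logic holds whether the margin is shaved by a dozen-line loop or by an optimizer of a billion parameters: whatever the specification omits, the fully optimized design has no reserve left to absorb.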

Petroski documented this tension across the entire history of structural engineering. His most detailed examination focused on the period between 1920 and 1940, when suspension bridge design underwent precisely the kind of optimization that AI now performs at vastly greater speed. Designers, confident in their understanding of suspension bridge behavior, progressively reduced the depth of the stiffening trusses that controlled the bridge deck's response to wind. Each bridge was more elegant, lighter, more efficient than the last. Each one stood. The factor of safety diminished with each iteration, because each success confirmed the hypothesis that the margin could be reduced.

The Tacoma Narrows Bridge was the endpoint of this progressive optimization. Its deck was extraordinarily shallow — a plate girder only eight feet deep for a span of 2,800 feet — because the designers had concluded, on the basis of the successful performance of previous bridges with progressively shallower decks, that the depth was unnecessary. The depth was not unnecessary. It was the factor of safety, the margin within which the aerodynamic forces that the existing theory did not model could be absorbed. When the margin was eliminated, the forces — which had always been present but had always fallen within the margin — had nowhere to go. The bridge destroyed itself.

The lesson is specific, and it applies with uncommon precision to AI-optimized engineering: the factor of safety protects against the unknown, and the unknown cannot be specified as a constraint in an optimization problem, because if it could be specified, it would not be unknown.

An AI system that optimizes a structural design does so against the constraints it has been given. Those constraints reflect the current state of engineering knowledge — the load cases, the material properties, the environmental conditions that the profession has identified and codified. The optimization is thorough within this parameter space. The design will satisfy every specified constraint with minimum material, maximum efficiency, and an elegance that may exceed anything a human designer could achieve.

What the optimization will not do is maintain margin against unspecified conditions. It cannot, because it has no mechanism for representing what it does not know. The unknown is not a parameter. It is an absence, and optimization algorithms do not optimize against absences. They optimize against specifications, and the specification, however comprehensive, is always a subset of reality.

Petroski would have recognized this as the engineering equivalent of what the philosopher Byung-Chul Han describes as the aesthetics of the smooth — the cultural tendency to remove friction, excess, and resistance in the pursuit of seamless efficiency. The factor of safety is rough. It is excess. It is, in the vocabulary of optimization, waste. From the perspective of Han's framework, the factor of safety is productive friction — resistance built into the system that serves no immediate function but provides the margin for the unexpected, the capacity to absorb what the model did not predict.

The parallel extends beyond metaphor. In *The Orange Pill*, Edo Segal documents the phenomenon of ascending friction — the observation that each technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor. Assembly language gave way to compilers; the difficulty of memory management was replaced by the difficulty of algorithmic design. Frameworks replaced boilerplate code; the difficulty of implementation was replaced by the difficulty of architectural judgment.

Petroski's factor of safety is the engineering test case for this thesis, and it reveals both the thesis's strength and its limits. The strength: AI can indeed relocate the engineer's attention from calculation to judgment, freeing her to focus on the higher-order questions that optimization cannot address — questions about which failure modes are most dangerous, which conditions the codes do not cover, which assumptions have not been tested. The limit: this relocation is not automatic. It happens only if the engineer knows that the freed attention should be directed toward the factor of safety — toward the maintenance of margin against the unknown. If the engineer does not know this, or does not feel its importance, the optimization will simply consume the margin, and the structure will be more efficient and more fragile.

The case of the Hyatt Regency Hotel walkway collapse in Kansas City on July 17, 1981, illustrates the dynamics with terrible clarity. The original design called for a pair of stacked walkways suspended from the ceiling by continuous hanger rods — rods that would run from the roof down to the lower walkway, passing through the box beams of the upper walkway along the way. During the preparation of shop drawings, a seemingly minor design change was made: each continuous rod was replaced by two shorter rods, one set running from the roof to the upper walkway's box beams and a second set running from those beams down to the lower walkway, so that the lower walkway hung from the upper walkway rather than from the roof.

The change appeared minor. It simplified fabrication and assembly. An engineer reviewing the modification might have approved it without recognizing its significance. The significance was this: in the original design, the rod-to-beam connection at the upper walkway carried only the upper walkway's own load, because the lower walkway's load passed through the continuous rod directly to the roof. In the modified design, the lower walkway's load was routed through the upper walkway's box beams, so that same connection now carried the load of both walkways, effectively doubling the force it had to transfer. The connection was not designed for double the load. It failed during a crowded tea dance, killing 114 people and injuring more than 200.

The factor of safety in the original connection was approximately two. The design modification, which no one recognized as critical, reduced it to approximately one — exactly sufficient under ideal conditions, catastrophically insufficient under the actual conditions of a crowded event. The modification consumed the margin. The margin was all that stood between the structure and disaster.
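The arithmetic behind those approximate factors can be written out schematically. In the sketch below, P stands for one walkway's load at a single hanger connection; the values are illustrative, not the forces reconstructed by the investigation.

```python
# Schematic arithmetic for the Hyatt Regency hanger-rod change.
# P stands for one walkway's load at a single hanger connection; illustrative only.

P = 1.0  # load of one walkway, arbitrary units

# Original design: a continuous rod from roof to lower walkway. The rod-to-beam
# connection at the upper walkway transfers only the upper walkway's own load;
# the lower walkway's load passes through the rod straight to the roof.
original_connection_load = P

# As-built design: the lower walkway hangs from the upper walkway's box beam,
# so that connection must now transfer the load of both walkways.
as_built_connection_load = P + P

print(as_built_connection_load / original_connection_load)  # 2.0 -- the margin of ~2 is consumed
```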

An AI optimization system, asked to evaluate this modification, would assess it against the specified load cases. If the specified load cases included the actual crowd loading and the concentrated connection forces, the AI would flag the modification as dangerous. If the specified load cases did not include these conditions — if the conditions were, as they were in 1981, unspecified because they were unanticipated — the AI would approve the modification as a simplification that reduced construction complexity without violating any constraint.

The AI would be right, within the scope of what it was asked. It would be catastrophically wrong about what it was not asked, and it would have no way of knowing the difference.

This is Petroski's deepest contribution to the AI discourse: the recognition that engineering competence is not the ability to produce correct answers within a defined problem space. It is the ability to recognize when the problem space is incorrectly defined — when the constraints that have been specified are missing the constraint that matters most, when the optimization is proceeding beautifully toward a solution that is elegantly, efficiently, fatally wrong.

A Pentagon systems engineer, reviewing Petroski's work for the Department of Defense, observed that "the author of the book warned against blind faith in computers" and likened his concerns to "the modern-day drawbacks of reliance on computer simulation." The observation, made about 1980s-era computing, has only grown more relevant as the capabilities of the tools have expanded. The blind faith Petroski warned about is not faith in the accuracy of the computation. The computation is accurate. The blind faith is in the sufficiency of the specification — in the assumption that the constraints given to the machine are the constraints that matter. The factor of safety exists because they are not.

The American Society of Civil Engineers — the professional organization of which Petroski was a Distinguished Member — adopted Policy Statement 573 on AI and Engineering Responsibility in July 2024, the year after Petroski's death. The statement declares that "AI cannot be held accountable, nor can it replace the training, experience, and judgment of a professional engineer in the planning, designing, building, and operation of civil engineering projects and the protection of the public health, safety, and welfare." The language is institutional, but the argument is Petroski's: the engineer's judgment is not a supplement to the calculation. It is the thing that determines whether the calculation is asking the right question.

The factor of safety is the engineering profession's way of embedding that judgment into the structure itself — of building in a margin that persists even when the individual engineer's attention lapses, even when the design is modified by someone who does not understand the original rationale, even when the conditions exceed what the model predicted. It is, in the most literal sense, engineering's built-in humility.

AI's tendency to optimize this humility away is not a flaw in the AI. It is a consequence of the logic of optimization applied to a domain where the most important constraint is the one that cannot be specified. The resolution does not lie in avoiding AI optimization. It lies in ensuring that the engineers who use it understand what the factor of safety is, why it exists, and what happens when it is eroded — that they understand it not as waste to be minimized but as wisdom to be defended.

Petroski would have insisted on one further point, and it is the point that connects his engineering framework to the broader cultural analysis of what happens when powerful tools meet insufficiently prepared practitioners. The factor of safety is not just a structural parameter. It is a moral commitment. It is the engineer's promise to the people who will use the structure that she has acknowledged the limits of her knowledge and built in protection against those limits. The promise says: I do not know everything that will happen to this structure. But I have given it enough margin that, when the thing I did not anticipate arrives, the structure will absorb it without killing the people inside.

A tool that systematically reduces this margin, however efficiently, is a tool that systematically reduces the engineer's commitment to the safety of the people who depend on her work. Whether the reduction is intentional is irrelevant. The people on the walkway are equally dead whether the margin was consumed by a deliberate decision or by an algorithm that identified it as excess.

The defense against this outcome is not technical. It is educational. It is the cultivation of engineers who understand the factor of safety the way Petroski understood it — not as a number in a calculation, but as the profession's answer to the question every honest engineer asks herself: What do I not know, and what have I done to protect the people who are counting on me?

Chapter 4: The Evolution of Useful Things

The common dinner fork has four tines. This seems obvious — natural, even, as though a fork could not be otherwise. But the fork has not always had four tines. It has had two, and three, and occasionally five. It has been straight and curved, long and short, with tines that were narrow and sharp or wide and flat. Each configuration was a response to the failure of the previous configuration, and the four-tined fork that rests beside your plate is not the embodiment of some Platonic ideal of fork-ness. It is the current survivor of a five-hundred-year process of elimination.

Petroski documented this process in *The Evolution of Useful Things*, published in 1992, tracing the histories of the fork, the zipper, the paper clip, the Post-it note, and dozens of other artifacts to demonstrate a principle that he considered fundamental to engineering and consistently overlooked by those who write about innovation: useful design is not the product of genius. It is the product of iteration driven by the identification of failure in use.

The two-tined fork, which preceded the four-tined version by several centuries, was adequate for spearing food but inadequate for scooping. Food slipped between the tines. Users adapted by modifying their eating technique, but the adaptation was a workaround, not a solution — it accommodated the fork's limitation without addressing it. The three-tined fork improved the situation but introduced a different problem: the narrower spacing between tines trapped certain foods, making the fork harder to clean. The four-tined fork resolved both problems — the spacing was narrow enough to support most foods and wide enough to release them — and this configuration persisted because it failed less frequently, in fewer circumstances, than any preceding version.

The process that produced the four-tined fork was not directed by a single designer. It was distributed across thousands of users, hundreds of manufacturers, and dozens of cultural contexts over centuries. Each context produced slightly different forks, because each context involved slightly different foods, slightly different eating practices, and slightly different manufacturing capabilities. The variations were tested by use. The ones that failed less often, in more contexts, survived. The ones that failed more often, or in contexts that happened to be economically or culturally dominant, were discarded.

This is evolution. Not biological evolution — the fork does not reproduce — but the same fundamental dynamic: variation, selection, and retention. Variation is produced by the ingenuity of individual designers and manufacturers. Selection is performed by use — by the daily encounter between the object and the hand that holds it, the food it contacts, the mouth it serves. Retention is accomplished through manufacturing and market success: the forms that users prefer are the forms that manufacturers produce in quantity, which are the forms that persist.

Petroski argued that this evolutionary process is the primary mechanism through which the designed world achieves fitness for purpose. Not inspiration. Not breakthrough. Not the flash of insight that the popular narrative of innovation celebrates. The slow, patient, often invisible accumulation of small improvements, each driven by the identification of a specific inadequacy in the current design and the modification of that design to address it.

This argument has a specific and uncomfortable implication for AI-generated design, because AI does not participate in the evolutionary process that Petroski described. It produces forms through optimization — the mathematical identification of configurations that satisfy specified constraints — rather than through the iterative, use-driven process that shaped every successful design in the history of human artifacts.

The distinction matters because the two processes produce different kinds of knowledge and embed different kinds of intelligence in their outputs.

The evolutionary process embeds use-knowledge. The four-tined fork knows something about how people eat, not because the fork is conscious but because its form was shaped by the encounter between millions of hands and millions of meals over centuries. The knowledge is implicit in the object. The tine spacing, the curvature, the weight, the balance — each parameter was calibrated not by calculation but by the accumulated experience of use, failure, and modification. The fork is a record of its own history, a crystallization of everything that was tried, failed, and was corrected.

The optimization process embeds specification-knowledge. The AI-generated fork knows whatever its designer specified: the dimensions within which the form must fit, the forces it must withstand, the manufacturing constraints it must satisfy. The specifications may be excellent. They may even incorporate the lessons of the evolutionary process, if the designer had access to that history and encoded it as constraints. But the specifications are always a reduction of the full complexity of use, because use is richer than any specification can capture.

The user who struggles with a fork does not generate a data point. She generates a frustration. The frustration is embodied — it lives in the hand that cannot manipulate the food, in the mouth that receives the food at an awkward angle, in the social discomfort of eating clumsily at a dinner table. This frustration is the most important signal in the evolutionary design process, and it is, by its nature, qualitative, contextual, and resistant to formalization. It cannot be transmitted to an AI system as a constraint, because the user herself may not be able to articulate what is wrong. She knows only that the fork does not feel right, and the "feeling right" is the integrated judgment of a biological system that has been using tools for millions of years and has developed an exquisite sensitivity to the fit between hand and object.

The designer who observes this frustration — who sits at the table and watches the user struggle, who picks up the fork herself and feels the inadequacy, who has the embodied understanding that comes from being a user as well as a designer — is the designer who can translate the frustration into a modification. This translation is the creative act in Petroski's framework. Not the generation of the modification itself, which may be straightforward once the problem is identified, but the identification of the problem, which requires the kind of situated, embodied attention that only a user can provide.

AI can analyze use data after the fact. It can process sensor information, click patterns, ergonomic measurements, user reviews, and complaint logs. It can identify statistical patterns in this data that correlate with dissatisfaction or failure. This analysis is valuable, and Petroski, who was not hostile to computational tools, would have recognized its value. But the analysis is a second-order process — a processing of representations of use, rather than use itself. The pattern in the data is a shadow of the experience, and the shadow may or may not include the feature that matters most.

Consider a case from the history of the zipper, which Petroski examined in detail. The early zipper, patented by Whitcomb Judson in 1893, had a fundamental problem: it came apart under lateral stress. The teeth did not interlock securely enough to resist the forces produced by the movement of the body wearing the garment. Users discovered this problem through the specific, embarrassing experience of having their clothing open unexpectedly. The frustration was intense, personal, and socially charged — it motivated rapid design iteration not because the engineers were inspired but because the failure was intolerable.

Gideon Sundback's redesign of 1913 — the interlocking-teeth mechanism that is essentially the modern zipper — solved the problem by increasing the number of teeth per inch and redesigning the slider mechanism. The solution was not inspired by data analysis. It was inspired by the experience of failure — by Sundback's direct understanding of what the zipper was supposed to do and what happened when it did not do it. The embodied experience of the failure drove the modification. Without that experience, the data — "zipper opens unexpectedly" — tells you what happened but not what it felt like, and the feeling is what drives the urgency and the direction of the solution.

The evolutionary process Petroski described is not merely a method for producing better designs. It is a method for producing better designers — practitioners whose judgment has been calibrated by the direct encounter with failure in use. Each failure teaches the designer something about the gap between the specification and the reality, between the model and the world, between what the object is supposed to do and what it actually does in the hands of real people under real conditions. The accumulation of these lessons is what Petroski called engineering judgment, and he spent decades arguing that it cannot be transmitted through data alone.

This argument encounters a legitimate counterargument from the AI perspective. If AI can process vastly more use data than any individual designer can accumulate through personal experience — if it can analyze millions of user interactions across thousands of contexts and identify patterns that no individual could detect — then perhaps the AI's use-knowledge, while different in kind from the designer's, is superior in scope. The AI has never held a fork, but it has processed the reports of a million people who have, and the aggregate of their experiences may contain more information than any single designer's embodied understanding.

Petroski's response, implicit in his work though never directed at modern AI, would be twofold. First, the aggregate is not the same as the specific. The statistical pattern that emerges from a million user reports may identify that a fork design is associated with dissatisfaction, but it may not identify why — may not identify the specific, situated, embodied experience that produces the dissatisfaction, because the experience is qualitative and the data is quantitative, and the translation between them is lossy. The fork's "feel" is an integrated sensation that includes weight, balance, texture, temperature, and the subtle dynamics of the tine-to-food interaction, and each of these parameters interacts with the others in ways that a disaggregated dataset cannot fully represent.

Second, and more fundamentally, the evolutionary process produces not just better objects but better judgment in the people who participate in it. The designer who has struggled with a hundred forks, who has watched users struggle, who has felt the specific inadequacy of each design in her own hand, has developed a sensitivity — a calibrated intuition about what works and what does not — that the designer who receives the AI's statistical analysis has not. The analysis may be more comprehensive. The intuition may be more accurate in the specific case, because the intuition includes dimensions that the analysis misses. The two forms of knowledge are complementary, not substitutable, and the error is in treating one as a replacement for the other.

The design of the Post-it note, another of Petroski's case studies, illustrates the point from a different angle. The Post-it note's adhesive — a pressure-sensitive adhesive that sticks firmly enough to hold a piece of paper to a surface but releases cleanly enough to be repositioned without damaging the surface — was originally a failure. Spencer Silver, a chemist at 3M, developed the adhesive in 1968 while attempting to create a super-strong adhesive. The adhesive he produced was weak. By any specification-based evaluation, it was a failure — it did not satisfy the constraint it was designed to meet.

The transformation of this failure into a product required a different kind of intelligence: the intelligence to recognize that a property which was wrong for one application might be right for another. Art Fry, Silver's colleague at 3M, made this connection when he was frustrated by the bookmarks that kept falling out of his church hymnal. The weak adhesive would hold a bookmark in place without damaging the page. The Post-it note was born from the intersection of a failed specification and an embodied frustration — neither of which, alone, would have produced the insight. The combination required a human mind that could hold both the failed property and the unmet need simultaneously and see the connection between them.

This form of intelligence — the capacity to see unexpected connections between a failure and an unmet need — is precisely the form of intelligence that the evolutionary design process cultivates. The designer who has encountered many failures, who has lived with many frustrated users, who has a rich library of unsolved problems and unexploited properties, is the designer most likely to make the connection that produces the Post-it note, the four-tined fork, the modern zipper.

AI can generate variations with extraordinary speed. It can test those variations against specified constraints with extraordinary rigor. It can produce, in minutes, the range of options that the evolutionary process required decades to explore. This is a genuine and significant capability, and Petroski, who valued efficiency in engineering, would not have dismissed it. But the capability operates within the space of what has been specified, and the most important innovations in the history of designed objects — the innovations that transformed not just the form of the object but the understanding of what the object could be — occurred at the boundary between the specified and the unspecified, between the intended and the accidental, between the failure and the unexpected recognition that the failure contained the seed of something valuable.

The AI can search exhaustively within the design space. It cannot redefine the design space. That redefinition — the moment when a failed adhesive becomes a bookmark, when a military communication network becomes the internet, when a toy designed for children becomes a tool for physical therapy — is the moment that requires the designer, the user, the human being who lives in the world of objects and feels, in her body, the gap between what exists and what is needed.

Petroski's evolutionary framework is not an argument against AI in design. It is an argument for understanding what AI contributes and what it does not, so that the contribution can be leveraged without losing the process that AI cannot replicate. The AI accelerates the later stages of evolutionary design — the generation and testing of variations — with extraordinary power. The earlier stage — the identification of what is wrong, the felt inadequacy that drives modification — remains the province of the embodied human user whose frustration is the most valuable signal in the entire design process.

The fork did not arrive at four tines through optimization. It arrived through the slow, distributed, embodied process of people eating with forks and discovering, meal by meal, what did not work. That process deposited intelligence in the object — intelligence that any AI can now incorporate into a new design. But the process also deposited intelligence in the people who participated in it — intelligence about how to see what is wrong, how to feel what is inadequate, how to recognize the gap between what exists and what is needed. That intelligence is not in the data. It is in the hands that held the fork, and it is the intelligence that will be needed most when the AI-generated fork encounters the condition that no specification anticipated, and the question becomes not what the data says but what the designer feels.

Chapter 5: Design as Hypothesis

In 1978, a routine inspection of the Silver Bridge replacement over the Ohio River revealed a hairline crack in one of its primary load-carrying members. The discovery was unremarkable in isolation — inspectors find cracks regularly, and most are benign. What made this crack significant was its context: it appeared in the successor to a bridge that had collapsed eleven years earlier, killing forty-six people, because of a crack in an eyebar chain link that no inspection had detected.

The original Silver Bridge, a suspension structure that carried U.S. Route 35 across the Ohio between Point Pleasant, West Virginia, and Gallipolis, Ohio, collapsed without warning on December 15, 1967. The failure was traced to a single eyebar — a flat steel link in the chain that supported the bridge deck — in which a small crack, initiated by corrosion and stress, had grown over decades until the remaining cross-section could no longer carry the load. The crack was invisible to external inspection because it was located inside the pinhole where the eyebar connected to the next link. The bridge had been carrying traffic for thirty-nine years. The design was considered sound. The failure was catastrophic and instantaneous: one link failed, the chain separated, the deck dropped into the river during rush-hour traffic.

The replacement bridge was designed with explicit attention to the failure mode that had destroyed its predecessor. The eyebar chains were replaced with wire-cable suspension. Inspection access points were built into the design. The failure of the first bridge became, in Petroski's terms, the hypothesis that the second bridge was designed to refute.

This is the principle that structures every argument in this chapter: every design is a hypothesis. Not metaphorically. Literally. A bridge is a prediction that a specific configuration of materials and geometry will resist specific forces under specific conditions for a specific duration. The prediction is tested not in a controlled laboratory but in the world — continuously, by every vehicle that crosses, every wind that blows, every temperature cycle that expands and contracts the steel. Every day the bridge stands is a day the hypothesis has not been refuted. The standing bridge is not proof that the hypothesis is correct. It is proof that the hypothesis has not yet encountered the conditions that would reveal its error.

Petroski developed this view across several books, most explicitly in *Design Paradigms: Case Histories of Error and Judgment in Engineering*, published in 1994. The framing was deliberate: he wanted engineers to think of their designs not as solutions — a word that implies finality, completeness, sufficiency — but as hypotheses, a word that implies provisionality, testability, and the ever-present possibility of refutation. The distinction is not semantic. It produces a fundamentally different relationship between the engineer and her work. The engineer who believes she has produced a solution is inclined to defend it. The engineer who knows she has produced a hypothesis is inclined to test it — to look for the conditions under which it might fail, because finding those conditions before the world does is the difference between a controlled experiment and a catastrophe.

AI generates hypotheses with extraordinary speed and sophistication. A structural optimization algorithm can explore thousands of configurations in the time a human engineer would need to evaluate one. It can test each configuration against dozens of load cases, hundreds of environmental conditions, and multiple failure criteria simultaneously. The output is a design that satisfies all specified constraints, often with an elegance that reflects the mathematical rigor of the optimization process.

The difficulty is that AI generates these hypotheses without understanding them as hypotheses. The output does not present itself as a prediction awaiting refutation. It presents itself as a solution — a configuration that satisfies the constraints, full stop. The provisionality that Petroski considered essential to sound engineering practice is absent from the output, not because the AI has concluded that the design is final, but because the concept of provisionality is not part of the AI's architecture. The system does not know that its output is a prediction about the future. It knows only that its output satisfies the specified constraints derived from the past.

This absence has consequences. The engineer who generates a design by hand — who selects each member, sizes each connection, evaluates each load path — understands the hypothesis she has produced because she has constructed it. She knows why each element is configured as it is. She knows which decisions were well-supported by analysis and which were judgment calls made in the presence of uncertainty. She knows where the design is robust and where it is sensitive — where a small change in conditions would produce a small change in response and where a small change in conditions might produce a disproportionately large one.

The engineer who receives a design from an AI system may understand what the design is — may be able to read the drawings, check the member sizes, verify that the specifications have been met — without understanding why the design is what it is. The what and the why are different forms of knowledge, and the distinction between them is the distinction between the operator and the engineer. The operator can execute the design. The engineer can evaluate it — can ask whether the hypothesis it embodies has been tested against the right conditions, whether the constraints it satisfies are the constraints that matter, whether the situations it might encounter in service include conditions that the optimization did not consider.

The Silver Bridge collapse illustrates why this distinction is consequential. The original bridge was designed according to the best available understanding of suspension bridge behavior. It satisfied the codes of its era. Its configuration was consistent with successful precedent. The hypothesis it embodied — that eyebar chains could support a highway bridge for an indefinite period without inspection access to the interior of the pin connections — was never articulated as a hypothesis, because the engineers who designed it were not thinking in those terms. They were thinking in terms of solutions, and the solution appeared adequate.

The hypothesis was wrong. The conditions the bridge encountered — decades of corrosive exposure at a geometrically stressed point, invisible to inspection — were not included in the original design's parameter space. The design was a prediction about a future it did not adequately imagine, and the failure was the refutation that revealed the limits of the prediction.

An AI system trained on modern engineering data would not replicate the Silver Bridge's specific failure mode. The codes have been updated. The lessons have been incorporated. Corrosion-fatigue interaction at connection points is a recognized design consideration. The AI would produce a bridge that accounts for this specific failure mode, because this specific failure mode is now part of the codified knowledge base.

But the AI would produce the design with the same epistemological posture that characterized the original Silver Bridge's designers: the posture of the solution, not the hypothesis. The output would satisfy all specified constraints. It would incorporate all codified lessons. It would be, within the boundaries of what is currently known, correct. And it would contain, embedded in its correctness, the same structural vulnerability that has preceded every catastrophic engineering failure in history — the assumption that the specification is complete, that the codes are sufficient, that the lessons of past failures have exhausted the catalog of possible futures.

Petroski observed, with the dry precision of a career spent studying collapse, that the most dangerous moment in any engineering enterprise is the moment when the practitioners believe they have understood the problem completely. Complete understanding is the precondition for complacency, and complacency is the precondition for the catastrophe that reveals what the understanding missed.

The AI does not believe it has understood the problem completely. The AI does not believe anything. But the engineer who receives the AI's output may believe that the output represents complete understanding, because the output is comprehensive, rigorous, and consistent with everything the profession currently knows. The comprehensiveness of the output becomes a source of confidence, and the confidence becomes a barrier to the questioning that Petroski considered the engineer's most important activity.

The questioning takes a specific form. Not: Is this design correct? That question can be answered by checking the calculations, and the AI's calculations are likely to be more accurate than a human engineer's. The question is: Under what conditions might this design be wrong? That question cannot be answered by the design itself, because the design embodies its assumptions. The assumptions are invisible from inside the design, the way the water is invisible to the fish. They can only be seen from outside — from the perspective of someone who knows that every design is a prediction, that every prediction is based on a model, and that every model is a simplification of a reality more complex than any model can capture.

Petroski developed this perspective through decades of studying what happens when predictions fail. Each failure he examined was, in retrospect, the refutation of a hypothesis that the profession had not recognized as hypothetical. The Tay Bridge's assumption about wind. The Quebec Bridge's assumption about linear scaling. The Tacoma Narrows' assumption about static analysis being sufficient. The Silver Bridge's assumption about the durability of uninspectable connections. In each case, the assumption was embedded in the design, invisible to those who operated within the design's framework, and visible only after the failure that revealed it.

The AI does not create this problem. The problem — the invisibility of assumptions from within the framework that contains them — is inherent in all engineering and, indeed, in all modeling of complex systems. What the AI does is change the scale at which the problem operates and the speed at which its consequences compound.

When a human engineer generates a design, the design contains a relatively small number of assumptions, each of which the engineer has at least implicitly evaluated. The engineer may not have articulated every assumption — that is part of what makes engineering judgment tacit rather than explicit — but she has, in the process of constructing the design, encountered each assumption and either accepted it or modified it. The assumptions are few enough, and the process of construction is slow enough, that the engineer's judgment has had the opportunity to engage with each one.

When an AI system generates a design, the design may contain thousands of assumptions embedded in the training data, the optimization algorithm, and the constraint specification. The engineer who receives the output has not encountered these assumptions individually, because she did not construct the design. She received it. The assumptions are not fewer — they are vastly more numerous — but her engagement with them is less, because the construction process that would have forced engagement has been bypassed.

This is the specific mechanism by which AI-augmented engineering may erode the hypothesis-testing posture that Petroski considered essential. Not by producing bad designs — the designs may be excellent — but by producing designs whose embedded assumptions are opaque to the engineers who are responsible for evaluating them. The design is a hypothesis, but the engineer cannot see what the hypothesis predicts, because the hypothesis was constructed by a process she did not participate in and cannot fully reconstruct.

The defense Petroski would have recommended is educational, not technical. It is the cultivation of engineers who approach every design — especially the ones that appear most comprehensive and most correct — with the specific suspicion that Petroski spent his career trying to instill: the suspicion that the design is a prediction, that every prediction is wrong about something, and that the engineer's job is to find what the prediction is wrong about before the world does.

This suspicion is not cynicism. It is the engineering form of intellectual humility — the recognition that models are approximations, that codes are incomplete, and that the standing bridge is not proof of understanding but only proof that the understanding has not yet been tested to its limit. Petroski articulated this recognition more clearly than any engineer of his generation, and his articulation has become more necessary as the tools that generate designs have become more powerful and the designs they generate have become more opaque.

The Silver Bridge's successor stands. Its hypothesis — that wire-cable suspension, with inspection access and corrosion monitoring, can support a highway bridge safely — has not been refuted. But the engineer who designed that bridge knew why her hypothesis was different from the one that had failed. She knew because she had studied the failure, understood the assumption that had killed forty-six people, and designed her bridge as an explicit correction of that assumption.

The AI that generates the next bridge over the Ohio River will incorporate the same correction. It will be in the data. It will be in the code. What it may not be is in the engineer. And when the conditions arrive that test the new bridge's hypothesis — conditions that neither the codes nor the data anticipate, conditions that exist in the gap between the model and the world — it will be the engineer, not the AI, who must recognize that the hypothesis is being refuted and act before the refutation becomes a catastrophe.

That recognition requires the understanding that the design was always a hypothesis. And that understanding is exactly what the apparent completeness of AI-generated design makes hardest to maintain.

Chapter 6: Small Failures and the Immune System

In structural engineering, a crack is not always a catastrophe. More often, it is a message.

A hairline fracture appearing in a concrete beam, months or years before the beam would actually fail, communicates something specific to the engineer who knows how to read it: the stress distribution in this region exceeds what the design anticipated. The crack is the structure talking, reporting, from the field, conditions that the model on the drawing board did not predict. The engineer who observes the crack, interprets it, and modifies the maintenance protocol or the loading schedule in response has received a warning — an opportunity to intervene before the small failure becomes a large one.

Petroski understood small failures as the immune system of engineering practice. The analogy is precise, not decorative. A biological immune system functions by detecting threats early, when they are small enough to be managed, and mounting a response that prevents the threat from growing into a systemic crisis. The immune system does not prevent infection. It prevents infection from becoming fatal. It does this by operating in the margin between the initial incursion and the point of no return — the margin within which intervention is possible and effective.

Engineering's small failures operate in the same margin. The deflection that exceeds the calculated value by a small percentage. The vibration that occurs at a frequency the model did not predict. The material that fatigues slightly faster than the test data suggested. Each of these is a small failure — a departure from the design hypothesis — and each is an opportunity for the engineer to update her understanding of the real conditions the structure is encountering, to recalibrate the model, and to intervene before the departure grows large enough to threaten the structure's integrity.

The Citicorp Center in Manhattan provides what may be the most instructive example of the small-failure immune system operating as intended — though, in this case, the "small failure" was identified not through observation of the structure but through a question asked by a student.

In 1978, a year after the building's completion, an undergraduate student at Princeton asked the structural engineer William LeMessurier about the building's unusual design — specifically, about whether its columns, which were located at the midpoints of the building's faces rather than at the corners, made the structure vulnerable to quartering winds: winds that blew diagonally against the building rather than perpendicular to a face. LeMessurier, who had not fully analyzed this load case, investigated and discovered that the building was indeed vulnerable. Changes made during construction — the substitution of bolted connections for the welded connections the design specified — had reduced the structure's capacity to resist quartering winds below the level required by the building code. A sufficiently strong storm could cause the building to collapse.

The discovery was a small failure — not a physical failure of the structure, but a failure of the design process, identified before the conditions that would have produced a physical failure arrived. LeMessurier reported the vulnerability, a remediation plan was implemented (steel plates welded over the bolted connections, the work conducted at night to avoid public alarm), and the building was brought into compliance. The structure stands today, sound and occupied, because the small failure was identified and addressed within the margin between the design error and the catastrophic weather event that would have exploited it.

The margin is the key concept. The Citicorp Center had a margin because the quartering winds strong enough to threaten the building had not yet occurred. The margin was measured in time — the time between the identification of the vulnerability and the arrival of the storm that would test it. During that time, intervention was possible. After the storm's arrival, intervention would have been impossible, and the result would have been the collapse of a fifty-nine-story building in midtown Manhattan.

Small failures provide the margin. They appear in the gap between what the design predicted and what the real world delivers — in the zone where the departure is large enough to be detected but small enough to be corrected. Petroski argued that this zone is the most important zone in all of engineering, and that the practices and traditions of the profession are oriented, more than any other single objective, toward maintaining it.

Inspection protocols exist to detect small failures. Maintenance schedules exist to address them. Load testing exists to provoke them under controlled conditions. Safety factors exist to create the structural margin within which they can occur without producing immediate catastrophe. The entire apparatus of engineering practice — the codes, the standards, the inspection requirements, the factors of safety — can be understood as a system designed to ensure that failures are small before they become large, and that the small failures are detected, interpreted, and addressed before the large ones arrive.

AI optimization, by its nature, operates in tension with this system.

The tension is structural, not incidental. Optimization seeks the configuration that satisfies the specified constraints with minimum excess. The margin within which small failures occur is, from the optimizer's perspective, excess — material that is not carrying its share of the specified load, dimensions that exceed the specified minimum, capacity that has been provided but is not, under the specified conditions, used. The optimizer identifies this excess and proposes its removal, because the removal produces a more efficient design.

The removal also produces a more brittle design — a design in which the gap between normal operation and failure has been narrowed. In a structure with a generous factor of safety, the first sign of overstress is a small, observable, non-catastrophic departure from predicted behavior: a crack, a deflection, a vibration. These departures occur in the margin between the design load and the failure load. In a structure that has been optimized to operate near its capacity, this margin is reduced or eliminated. The structure does not crack before it breaks. It does not deflect before it collapses. It does not vibrate before it tears apart. The small failures that would have provided warning are absent, because the margin in which they would have occurred has been consumed by the optimization.

This is not a theoretical concern. The progression toward failure in the Tacoma Narrows Bridge followed a version of this pattern. The bridge's deck was optimized for efficiency — shallow, light, elegant. Its vertical undulations had been observed and were under study, but the optimization had removed the depth and stiffness that would have kept the aerodynamic response within a manageable range. When the torsional oscillation appeared on the morning of November 7, 1940, it grew to destructive amplitude within hours, too fast for observation to prompt investigation or investigation to prompt remediation. A deeper deck — a less efficient deck, in purely structural terms — would have widened the margin between the first sign of distress and the point of no return. The optimization produced a design in which that margin, when it mattered most, was effectively gone.

Petroski's observation about computer-augmented design from 1985 is directly applicable here. "As more complex structures are designed because it is believed that the computer can do what man cannot," he wrote, "then there is indeed an increased likelihood that structures will fail, for the further we stray from experience the less likely we are to think of all the right questions." The passage describes a mechanism that operates at two levels simultaneously: the computer enables more complex designs, and the complexity of those designs exceeds the engineer's experiential base, which means the engineer is less equipped to identify the failure modes the design might encounter.

Applied to AI optimization, the mechanism operates with even greater force. AI enables not just more complex designs but more optimized ones — designs that operate closer to their theoretical capacity, with less margin for the unexpected. The optimization is performed with mathematical rigor that exceeds any human engineer's capacity. But the mathematical rigor applies only to the specified conditions. The unspecified conditions — the quartering winds that no one asked about, the corrosion-fatigue interaction that no one modeled, the construction modification that no one evaluated — fall outside the optimization's scope and therefore outside the margin that the optimization has reduced.

The defense is not to avoid optimization. Optimization is valuable, and the gains it produces — in material efficiency, in cost, in environmental impact — are real and significant. The defense is to optimize within a margin that has been deliberately preserved — to tell the optimization algorithm, in effect, "Find the most efficient design that maintains a factor of safety of X against conditions that are not included in the specified constraints."

This is not a trivial instruction to implement, because it requires the engineer to specify the unspecifiable — to define a margin against conditions she cannot predict. It requires, in other words, exactly the kind of judgment that Petroski spent his career arguing is the engineer's most important contribution: the judgment about how much margin is enough, informed by the study of past failures, calibrated by the understanding that every model is incomplete, and exercised with the humility that comes from knowing that the next failure will involve a condition the current model does not include.
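What "optimizing within a deliberately preserved margin" might look like in practice can be suggested with a minimal sketch: not Petroski's method and not any real design tool, just a toy sizing problem with invented numbers, written in Python around scipy's general-purpose optimizer. The span, load, allowable stress, reserve factor, and fabrication bounds below are all assumptions made for illustration. The structural point is that the reserve factor is written into the constraint itself, so the capacity the specification never asked for cannot be optimized away.

```python
# A hedged, illustrative sketch -- not a real design procedure.
# Hypothetical numbers throughout; the point is only that the margin
# is written into the constraint instead of being optimized away.
from scipy.optimize import minimize

SPAN_M = 6.0                  # simply supported span (assumed)
DESIGN_LOAD_N_PER_M = 12_000  # specified uniform load (assumed)
ALLOWABLE_STRESS_PA = 165e6   # allowable bending stress for mild steel (assumed)
RESERVE_FACTOR = 1.75         # deliberate margin for conditions not in the specification

def bending_stress(width_m, depth_m, load_n_per_m):
    """Peak bending stress in a simply supported rectangular beam."""
    moment = load_n_per_m * SPAN_M**2 / 8.0       # maximum bending moment, wL^2/8
    section_modulus = width_m * depth_m**2 / 6.0  # S = b*d^2/6 for a rectangle
    return moment / section_modulus

def area(x):
    width_m, depth_m = x
    return width_m * depth_m  # objective: minimize cross-sectional area (weight proxy)

def stress_margin(x):
    width_m, depth_m = x
    # The constraint is written against the *amplified* load, so the optimizer
    # is forced to leave capacity the specification never asked for.
    stress = bending_stress(width_m, depth_m, RESERVE_FACTOR * DESIGN_LOAD_N_PER_M)
    return ALLOWABLE_STRESS_PA - stress  # must remain >= 0

result = minimize(
    area,
    x0=[0.1, 0.3],                     # initial guess: 100 mm x 300 mm
    bounds=[(0.05, 0.5), (0.1, 1.0)],  # fabrication limits (assumed)
    constraints=[{"type": "ineq", "fun": stress_margin}],
)

width_m, depth_m = result.x
print(f"width = {width_m * 1000:.0f} mm, depth = {depth_m * 1000:.0f} mm")
print(f"stress at the specified design load = "
      f"{bending_stress(width_m, depth_m, DESIGN_LOAD_N_PER_M) / 1e6:.0f} MPa "
      f"(allowable {ALLOWABLE_STRESS_PA / 1e6:.0f} MPa)")
```

The particular numbers are arbitrary. What matters is that the optimizer is told, as a parameter, to leave margin against loads it was never shown; the judgment about how large that reserve factor should be is the engineer's, and it cannot be recovered from the optimization after the fact.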

The engineer who uses AI optimization without preserving this margin is the engineer who has removed the immune system. The structure she produces may be more efficient than anything a human designer could achieve. It may operate beautifully under every condition the codes specify. And when the unspecified condition arrives — as it will, because the future is not a dataset and cannot be exhaustively anticipated — the structure will fail without warning, because the warning system has been optimized away.

The small failures that engineering needs are the failures it can afford. The crack that appears before the beam breaks. The deflection that alerts the inspector before the collapse. The vibration that prompts investigation before the resonance reaches destructive amplitude. These are not defects in the design. They are features — designed-in warnings that the structure's actual conditions are departing from its predicted conditions, providing the margin of time within which human judgment can intervene.

Preserving this margin in the age of AI requires a conscious decision to build less efficiently than the optimizer recommends — to accept a design that is heavier, more expensive, less elegant than the mathematical optimum, because the mathematical optimum has been achieved at the cost of the warning system that protects against the mathematical model's incompleteness.

It requires, in the deepest sense, the willingness to value safety over elegance, margin over efficiency, and the small, manageable, informative failures over the seamless performance that the optimization promises and that the real world, with its inexhaustible capacity for surprise, will eventually test.

Petroski would have said that this willingness is not a technical skill. It is an ethical commitment — a commitment to the people who will occupy the building, cross the bridge, use the product. The commitment says: the design could be more efficient, but efficiency is not the only value. The people inside the structure are a value, and the margin that protects them is not waste. It is the profession's promise that the gap between what is known and what will happen has been acknowledged and accommodated, even at the cost of mathematical elegance.

That promise is the immune system. And the immune system must be defended against the optimizer's indifference to everything it has not been told to value.

Chapter 7: The Complacency Cycle

Petroski observed, with the precision of someone who had counted the intervals, that engineering catastrophes recur on a roughly thirty-year cycle.

The observation was not casual. He documented it across domains — bridge engineering, building construction, aerospace — and found the same rhythm repeating. A disaster occurs. The profession mobilizes. Codes are revised. Inspection protocols are tightened. A generation of engineers, marked by the catastrophe they witnessed or studied in the immediate aftermath, practices with heightened caution. The structures they design carry generous margins. The assumptions they make are conservative. The profession, collectively, remembers.

Then, gradually, the memory fades. Not because the engineers forget — the codes still carry the revisions — but because the felt urgency that drove the revisions dissipates. The engineers who witnessed the catastrophe retire. Their replacements know the revised codes but not the collapse that prompted the revision. They know the coefficient but not the bodies. The codes become rules to be followed rather than lessons to be honored. The following they receive is competent but not cautious, precise but not humble.

And the margins begin to narrow. Each successful design confirms, for this new generation, that the margins can be reduced. The profession, collectively, grows more confident. The structures become more ambitious. The assumptions become more aggressive. Until the conditions arrive that the narrowed margins cannot accommodate, and the cycle begins again.

The Tay Bridge collapsed in 1879. The Quebec Bridge collapsed in 1907 — twenty-eight years later. The Tacoma Narrows collapsed in 1940 — thirty-three years after that. The Silver Bridge collapsed in 1967 — twenty-seven years later. The Hyatt Regency walkways collapsed in 1981 — fourteen years after the Silver Bridge, but the walkways were a building structure, not a bridge, suggesting that the cycle operates within subdisciplines as well as across the field. The I-35W Mississippi River bridge collapsed in 2007 — twenty-six years after the Hyatt Regency.

The intervals are not exact. They are approximate, and Petroski, who was an engineer and not a numerologist, did not claim precision for the pattern. What he claimed was that the rhythm existed and that it was driven by a human mechanism, not a technical one. The structures do not weaken on a thirty-year schedule. The profession's caution weakens on a generational schedule, because caution is a product of memory, and memory is a product of experience, and experience is lost when the people who had it leave the profession.

This cycle has a specific implication for AI-augmented engineering that Petroski did not live to articulate but that his framework makes clear: AI may compress the cycle from thirty years to five.

The mechanism is straightforward. In the pre-AI era, the confidence that drives the complacency phase of the cycle accumulated at human speed. Each successful bridge was designed over a period of years, reviewed by committees, constructed over additional years, and evaluated through decades of service. The accumulation of successful precedent was slow, because the generation of new precedent was slow. Thirty years of successful practice produced, perhaps, a dozen major structures in a given subdiscipline, each of which added incrementally to the profession's confidence.

In the AI era, successful designs can be generated in hours. An optimization algorithm can produce thousands of variants of a structural concept in the time previously required to develop one. Each variant that satisfies the specified constraints is, within the framework of the optimization, a success. The accumulation of successful precedent — designs that work, configurations that satisfy, outputs that pass review — accelerates by orders of magnitude.

The confidence that accumulates with each success accelerates proportionally. The AI-augmented engineer may, in five years of practice, experience the equivalent of thirty years of pre-AI precedent: thousands of successful designs, each reinforcing the conviction that the tool is reliable, that the specifications are sufficient, that the codes are complete. The psychological effect is the same as the effect Petroski documented across generations of bridge engineers: the confidence grows, the margins narrow, and the caution that would prompt the question "What are we missing?" diminishes.

But the real-world testing of those designs has not accelerated. A bridge designed by AI is still subjected to decades of traffic, weather, and material aging before its hypothesis is fully tested. The confidence has outrun the testing. The number of designs that appear successful has increased, but the number of designs that have been tested by the full range of conditions they will encounter in service has not. The gap between the confidence and the testing is the gap in which the next catastrophe lives.

Petroski identified a specific cognitive mechanism that drives the complacency cycle, and this mechanism is amplified rather than attenuated by AI. He called it the "extrapolation fallacy" — the tendency to assume that because a principle has held within a tested range, it will hold beyond that range. The Tacoma Narrows was an extrapolation fallacy: the principles of suspension bridge design had held for deeper, wider, stiffer decks, and the designers extrapolated those principles to a span of unprecedented slenderness without recognizing that the extrapolation crossed a boundary where the principles no longer applied.

AI optimization is, in a precise sense, an extrapolation machine. It generates designs by identifying patterns in existing data and projecting those patterns onto new configurations. The projection is sophisticated — it accounts for nonlinearities, interactions, and boundary conditions that the data includes. But it is still an extrapolation, and the extrapolation is only as valid as the data's coverage of the conditions the new design will encounter. When the AI produces a design that is more efficient, more ambitious, or more novel than any design in its training data, it has extrapolated beyond the tested range. The extrapolation may be valid. It may also cross a boundary that the data does not mark, because the boundary is defined by a failure that has not yet occurred.
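The extrapolation fallacy can be made concrete with a toy numerical illustration, using invented data rather than anything drawn from Petroski or from any real structure. The "true" behavior below is assumed to be elastic up to a hypothetical yield load of 100 kN and to soften beyond it; the fitted model sees only tests conducted inside the elastic range, and nothing in that data marks the boundary.

```python
# A toy illustration of the extrapolation fallacy, with invented numbers.
# The "true" behavior is elastic up to an assumed yield load, then softens;
# the fitted model only ever sees the elastic range.
import numpy as np

def true_deflection_mm(load_kn):
    """Pretend physics: linear below 100 kN, then rapidly softening."""
    elastic = 0.02 * load_kn                      # 2 mm per 100 kN (assumed)
    excess = np.clip(load_kn - 100.0, 0.0, None)  # load beyond the unmarked yield point
    return elastic + 0.004 * excess**1.5          # post-yield softening (assumed)

rng = np.random.default_rng(0)
tested_loads = np.linspace(10, 90, 9)             # every test stayed below yield
measured = true_deflection_mm(tested_loads) + rng.normal(0, 0.02, tested_loads.size)

# Fit a straight line to the tested range -- inside that range it is excellent.
slope, intercept = np.polyfit(tested_loads, measured, 1)

for load in (50.0, 90.0, 150.0, 200.0):           # the last two lie beyond the tested range
    predicted = slope * load + intercept
    print(f"{load:5.0f} kN   predicted {predicted:5.1f} mm   "
          f"actual {true_deflection_mm(load):5.1f} mm")
```

Inside the tested range the fit is nearly perfect; beyond it, the predictions diverge from the assumed true behavior without any signal, in the model or in its residuals, that a boundary has been crossed. The model cannot report the existence of a regime it was never shown.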

The engineer's defense against the extrapolation fallacy is the study of the cases where extrapolation failed — the specific, detailed, often painful examination of what happened when someone else assumed that a principle tested in one range would hold in another. This study is what Petroski's career was devoted to, and it is the study that the AI era makes simultaneously more necessary and more difficult. More necessary because the speed of design generation increases the speed of extrapolation and therefore the speed at which the complacency cycle turns. More difficult because the very efficiency that AI provides reduces the time available for the reflective study of failure that is the only known antidote to complacency.

The engineer who generates ten designs per year has time to study failure cases between designs. The engineer who reviews ten thousand AI-generated designs per year does not. The volume of output that AI enables is itself a pressure against the reflective practice that keeps the complacency cycle from accelerating. The tool that could free the engineer to study failure more deeply may instead consume her attention with the review of outputs that leave no time for study at all.

This is the paradox that Petroski's framework, applied to the AI age, makes most visible. The tool that accelerates design generation is also the tool that accelerates the accumulation of confidence that drives the complacency cycle. The defense against that acceleration is the study of failure, which requires time, attention, and the specific intellectual posture of humility that confidence erodes. The tool provides the speed. The defense requires the slowness. And the culture of engineering practice must find a way to accommodate both — to leverage the speed without surrendering the slowness that keeps the cycle from turning faster than the profession's ability to learn.

Petroski, who died in June 2023, just as the acceleration was becoming visible, left behind a body of work that constitutes the most detailed argument ever made for the necessity of slowness in engineering — the necessity of pausing between successes to ask what the success concealed, of studying failures not as historical curiosities but as current warnings, of maintaining the margin of caution that every generation's confidence seeks to reduce.

The engineers who build bridges today have access to tools Petroski could not have imagined. The question his work leaves them is whether they will use those tools with the humility that his career was devoted to instilling — the humility that recognizes every success as provisional, every design as hypothetical, and every period of confidence as the precondition for the catastrophe that will reveal what the confidence concealed.

The cycle turns. It has always turned. The question is not whether it will turn again but whether the engineers who are responsible for the structures on which human lives depend will see it turning in time to act, or whether the speed of the tools they wield will have outpaced the speed at which engineering wisdom accumulates — leaving them confident, capable, efficient, and unprepared for the failure that no one asked the right question to prevent.

Chapter 8: The Unbuilt Bridge

In 1890, the civil engineer Gustav Lindenthal proposed a suspension bridge across the Hudson River at Fifty-Seventh Street in Manhattan. The bridge would carry sixteen railroad tracks, twelve lanes of vehicular traffic, and pedestrian walkways. Its main span would exceed three thousand feet — nearly twice the span of the Brooklyn Bridge, which had been completed only seven years earlier and was at that time the longest suspension span in the world. Lindenthal spent decades refining the proposal. He secured political support, modified the design repeatedly in response to changing requirements, and devoted much of his professional life to the project.

The bridge was never built. Not because Lindenthal lacked talent — he was among the most accomplished bridge engineers of his generation, responsible for the Hell Gate Bridge and the Sciotoville Bridge, both notable structures. Not because the proposal was technically impossible — the span he proposed would eventually be exceeded by the George Washington Bridge in 1931. The bridge was not built because, at the time Lindenthal proposed it, the engineering profession's honest assessment was that the understanding had not yet caught up with the ambition. The span was unprecedented. The loading was unprecedented. The combination of railroad and vehicular traffic on a single suspension structure was unprecedented. And unprecedented, in engineering, is not a synonym for bold. It is a synonym for untested.

Petroski examined the unbuilt bridge — not Lindenthal's specifically, but the concept — as a category of engineering that reveals something essential about the profession's relationship to its own capabilities. Every era of engineering has its unbuilt bridges: designs that were conceived, developed, sometimes championed for decades, and ultimately not constructed because the profession concluded that the gap between what it knew and what the design required was too large to be safely bridged by the factor of safety.

The unbuilt bridge is not a failure of nerve. It is a triumph of judgment. It is the engineer saying: this is possible in principle and irresponsible in practice, because the principles have not been validated at this scale, and the consequences of discovering their limits during construction or operation would be measured in human lives.

This judgment — the capacity to distinguish between the possible and the safe, between what the formulas permit and what experience warrants — is the form of engineering intelligence that Petroski valued most highly. It is also the form most directly threatened by AI's capabilities.

AI does not possess the concept of the unbuilt bridge. The system generates designs within the parameter space defined by its constraints and training data. If a design satisfies the constraints, it is presented as valid. If it does not, the parameters are adjusted until a satisfying design is found or the system reports that no solution exists within the specified bounds. There is no intermediate state — no condition in which the system reports that a design is technically feasible but inadvisable, that the constraints are satisfied but the margin of safety against unspecified conditions is insufficient, that the formulas produce an answer but the answer has not been validated by the kind of extended, real-world testing that would justify confidence.

The machine does not hesitate. It cannot hesitate, because hesitation requires a felt sense of the boundary between the validated and the speculative — a sense that is developed through the study of cases where that boundary was crossed with catastrophic results. The AI has processed the data from those cases. It has not felt the weight of them. And the weight — the engineer's awareness that the seventy-five people in the Firth of Tay and the seventy-five workers on the St. Lawrence are not data points but human beings whose deaths resulted from precisely the kind of overreach that the unbuilt bridge is designed to prevent — is what produces the hesitation.

This is not sentimentality. It is a functional mechanism. The engineer who has studied the Tacoma Narrows collapse in detail, who has read the testimony of the witnesses, who has seen the film of the deck twisting like paper, carries that study as a physical sensation — a tightening, a caution, a reluctance to assume that the model is complete. This physical sensation is a signal. It fires when the engineer encounters a design that pushes beyond the validated range, and it produces the specific cognitive response that characterizes engineering judgment: the impulse to test further, to check again, to ask the question that the analysis did not prompt but that the feeling insists is necessary.

The impulse is fallible. It can be too conservative, leading engineers to avoid designs that would have been safe. It can be miscalibrated, firing in response to superficial similarities between a current design and a past failure that are not structurally relevant. But its average effect, across the history of the profession, has been protective. The bridges that were not built because engineers felt uneasy about them include, almost certainly, some that would have stood. They also include some that would have fallen. The unbuilt bridge is the profession's way of accepting the cost of excessive caution — some forgone capability — in exchange for the benefit of avoiding catastrophes whose cost is measured in lives.

AI changes this calculus because it shifts the default. In the pre-AI era, the default was inaction. Designing a bridge was expensive, time-consuming, and required significant institutional commitment before the first calculation was performed. The cost of proposing a design was high enough that proposals were self-selecting: only designs that the engineer believed, based on experience and judgment, to be both feasible and safe were proposed. The friction of the design process acted as a filter, and the filter was calibrated — imperfectly but functionally — by the engineer's judgment.

In the AI era, the default shifts toward action. Generating a design is cheap, fast, and requires minimal institutional commitment. The AI can produce a hundred designs in the time previously required to develop one. The cost of proposal has dropped dramatically, which means the filtering function of the design process has been weakened. Designs that would not have survived the pre-AI filter — because the engineer would have hesitated, would have felt the boundary between the validated and the speculative, would have decided the ambition exceeded the understanding — now arrive on desks as completed outputs, formatted and detailed and apparently ready for review.

The reviewer must supply the filter that the process no longer provides. The reviewer must look at a design that satisfies all specified constraints and ask: Should this structure be built? Not can it be built, which is a question about the design's compliance with codes. Should it be built, which is a question about whether the codes are adequate to the situation, whether the design pushes beyond the range of validated experience, whether the consequences of discovering its limits in service would be acceptable.

This is a harder question to ask of an AI-generated design than of a hand-generated one, for a reason that is psychological rather than technical. The hand-generated design carries, within its presentation, the marks of its creator's uncertainty. The engineer who developed it over months knows where the judgment calls were, where the analysis was inconclusive, where she made a decision in the presence of uncertainty. These marks of uncertainty are communicated, often implicitly, through the way the design is presented — the caveats in the report, the notes in the margin, the tone of the meeting in which the design is discussed. The AI-generated design carries no such marks. It is presented with uniform confidence, because the system does not experience uncertainty and therefore cannot communicate it. Every element of the output has the same epistemological status: it satisfies the constraints. The reviewer has no signal to indicate which elements are well-supported by the training data and which are extrapolations beyond it.

The unbuilt bridge, in Petroski's framework, is not a category of failure. It is a category of wisdom. It represents the profession's accumulated understanding that the gap between capability and safety is not a gap to be closed by ambition but a gap to be respected by judgment. The bridge that should have been built and was not is a cost. The bridge that should not have been built and was is a catastrophe. And the asymmetry between these two outcomes — the cost of excessive caution versus the cost of insufficient caution — is what makes the unbuilt bridge a morally serious category.

Petroski would not have argued that AI should never produce ambitious designs. The George Washington Bridge, which exceeded Lindenthal's unbuilt span, was eventually constructed safely because the profession's understanding had, by 1931, caught up with the ambition that 1890 could not yet support. The progression from the unbuilt to the built is the trajectory of engineering progress. The question is whether the progression is paced by the accumulation of validated understanding or accelerated by the availability of a tool that generates designs faster than understanding can follow.

The distinction maps onto a concern that appears throughout the broader discourse about AI's integration into human capability. When the cost of generating an output approaches zero, the critical question becomes not what can be produced but what should be produced. The AI can design the bridge. The question of whether the bridge should be built — whether the understanding is sufficient, whether the margin is adequate, whether the consequences of failure have been honestly assessed — remains a human question, and it is a question that requires the specific form of intelligence that Petroski spent his career documenting: the intelligence that comes from studying what happened when someone else's ambition exceeded their understanding.

The unbuilt bridge is the engineer's way of saying: not yet. Not because the calculation says no — the calculation may well say yes — but because the judgment, informed by the study of what happens when calculations encounter conditions they did not anticipate, says the calculation is not enough.

That "not yet" is a form of courage. It requires the engineer to resist the pressure of ambition, the seduction of capability, and the confidence that the tool's output inspires, and to say instead: the tool says this is possible, and I believe the tool. But possible is not the same as safe, and safe is not guaranteed by compliance with specifications that may not include the conditions that will determine whether this structure stands or falls.

The unbuilt bridge is engineering's conscience. AI, by making every bridge buildable on paper, threatens to silence that conscience — not through malice, but through the relentless, indiscriminate productivity of a tool that generates solutions without generating the hesitation that should accompany them.

Preserving the capacity for hesitation — for the "not yet" that protects the people who would cross the bridge — is not a technical problem. It is a problem of professional culture, of education, of the values that the profession transmits to each new generation of practitioners. Petroski devoted his career to transmitting those values. The question his work leaves to the AI age is whether the values can survive the speed — whether the profession that learned to hesitate through decades of studying catastrophe can maintain that hesitation when a tool that has never felt the weight of a collapse tells it, with mathematical confidence, that the bridge can be built.

Chapter 9: The Engineer's Judgment

On the morning of January 28, 1986, the temperature at Kennedy Space Center in Florida was thirty-six degrees Fahrenheit — seventeen degrees below the lowest temperature at which the Space Shuttle had previously launched. Engineers at Morton Thiokol, the contractor responsible for the solid rocket boosters, had spent the previous evening in a teleconference with NASA managers, arguing that the launch of the Challenger should be postponed. Their concern was specific: the O-rings that sealed the joints between the segments of the solid rocket boosters lost resilience at low temperatures. Below a certain threshold, the rubber would not compress and expand fast enough to maintain the seal against the hot gases produced during ignition. If the seal failed, the gases would escape through the joint and could ignite the external fuel tank.

The engineers had data. They had test results showing O-ring erosion at temperatures significantly above thirty-six degrees. They had photographs of recovered boosters showing charring and blow-by at the joints from previous launches conducted in cool weather. What they did not have was a precise model predicting the temperature at which the O-rings would fail catastrophically. The data showed a trend. The trend pointed toward danger. But the data did not draw a line below which disaster was certain and above which it was not.

Roger Boisjoly, the Thiokol engineer who had been studying the O-ring problem for months, argued against the launch. His argument was not based on a formula. It was based on judgment — on the accumulated understanding of a person who had spent his career working with materials and seals and who had developed, through that work, a sensitivity to the conditions under which materials behave in ways their specifications do not predict. He could feel, in the engineering sense of that word, that thirty-six degrees was wrong. The data supported his feeling. But the data did not, by itself, compel the conclusion he was drawing, because the data was incomplete — it covered a range of temperatures that did not extend to thirty-six degrees, and the extrapolation from the tested range to the untested conditions of that morning required judgment rather than calculation.

NASA managers asked for quantitative proof. The engineers could not provide it, because the phenomenon they were worried about had not been tested at the relevant temperature. The absence of quantitative proof was interpreted as the absence of risk. The launch proceeded. The O-rings in a joint of the right solid rocket booster failed to seal, hot gas escaped through the breach, and seventy-three seconds after liftoff the Challenger broke apart. Seven astronauts died.

Petroski studied the Challenger disaster not as an aerospace case but as an engineering case — a case that illustrated, with terrible clarity, the nature and the limits of engineering judgment. The Thiokol engineers had the judgment. They could feel the danger. The judgment was based on incomplete data, extrapolated through experience, and expressed as a recommendation rather than a proof. The institutional structure in which they operated did not know how to weigh judgment against proof, and when the two conflicted — when the judgment said no and the absence of proof said the risk was undemonstrated — the institution chose the absence of proof.

Engineering judgment is not a formula. It is not a calculation. It is not a result that can be verified by running the numbers again with different inputs. It is a cultivated sensitivity — developed over years of practice, refined by the study of failures, calibrated by the accumulated experience of working with materials and systems and forces that do not always behave as the models predict. The engineer who possesses this judgment cannot always articulate the basis for it, because the basis is not a single piece of evidence but a pattern recognition built from thousands of pieces of evidence accumulated over a career — each one too small to be decisive, all of them together producing a signal that the experienced engineer reads as clearly as a physician reads a patient's color or a sailor reads the sky.

This judgment is the thing that AI does not possess and cannot replicate through any currently known mechanism.

AI possesses engineering calculation — the ability to apply formulas, evaluate constraints, and optimize configurations with speed and accuracy that exceed any human engineer's capacity. But calculation is the map, and the territory is the world of real materials, real construction, real weather, and real use. The map is valuable. It is indispensable. But the map is always a simplification of the territory, and the simplification always omits something. Engineering judgment is the capacity to recognize what the map has omitted — to sense, before the calculation confirms it, that the territory contains a feature the map does not show.

Boisjoly's judgment on the morning of January 28 was precisely this. The map — the O-ring test data — did not extend to thirty-six degrees. The territory — the actual behavior of rubber seals at that temperature — was unknown in the precise, quantitative sense that the institution required. But Boisjoly's judgment, calibrated by years of working with seals and materials and the specific behavior of elastomers at low temperatures, told him that the territory was dangerous. He was reading the landscape with eyes trained by experience, and the landscape said: not here, not now, not at this temperature.

An AI system, presented with the same data available to Boisjoly, would process the data as given. The data showed O-ring erosion at certain temperatures. The data did not show catastrophic failure at any tested temperature. The AI would report this accurately. Whether it would extrapolate from erosion trends to the conclusion that thirty-six degrees was dangerous depends on the specific model and its training. Some models might identify the trend and flag the risk. Others might note the absence of failure data at the relevant temperature and report, correctly but fatally, that the available evidence did not demonstrate a specific failure threshold.
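
The difference between those two responses can be made concrete with a small sketch. The Python fragment below is illustrative only: the temperatures and damage scores are invented for the example, not taken from the actual flight record, and the simple fitted trend stands in for whatever model an AI system might apply. What it shows is that a fit built from one temperature range will happily produce a number far outside that range, and that deciding whether to trust a number produced outside the tested range is exactly the judgment the fit cannot supply.

```python
# Illustrative only. The temperature/damage pairs below are invented to
# mimic the shape of the pre-Challenger record (more O-ring distress on
# cooler launches); they are not the real data.
import numpy as np

temps  = np.array([53, 57, 63, 66, 67, 70, 72, 75, 76, 79, 81])  # deg F, hypothetical
damage = np.array([ 3,  1,  1,  0,  0,  1,  0,  0,  0,  0,  0])  # damage score, hypothetical

# Fit a simple linear trend: damage ~ slope * temp + intercept
slope, intercept = np.polyfit(temps, damage, 1)

launch_temp = 36  # the morning of January 28, 1986
predicted = slope * launch_temp + intercept

print(f"Fitted trend: slope = {slope:.3f} (damage rises as temperature falls)")
print(f"Extrapolated damage score at {launch_temp} F: {predicted:.1f}")

# The model reports a number either way. What it cannot report is whether
# a number produced this far outside the tested range means anything.
if launch_temp < temps.min():
    print(f"Note: {launch_temp} F is well below the coldest tested launch "
          f"({temps.min()} F); this prediction is an extrapolation, not evidence.")
```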

What the AI would not do is feel the danger. The feeling is not a mystical phenomenon. It is a pattern-recognition signal generated by a biological system that has been exposed to thousands of cases — not just the O-ring cases, but all the cases in the engineer's career where materials behaved differently at the edges of their specified ranges than in the middle. The feeling integrates information from across the entire breadth of the engineer's experience, including information that is not in the specific dataset at hand, and produces a signal that is imprecise but directionally reliable.

Petroski argued that this form of intelligence — imprecise, experiential, often inarticulate — is the most important form of intelligence in engineering practice. Not the most precise. Not the most reliable in any individual instance. But the most important, because it operates in precisely the domain where calculation cannot: the domain of the unanticipated, the unspecified, the untested.

Every catastrophic engineering failure in the historical record occurred in this domain. The Tay Bridge was destroyed by wind loads that were unanticipated. The Tacoma Narrows was destroyed by aerodynamic forces that were unspecified. The Challenger was destroyed by material behavior that was untested at the relevant conditions. In each case, the calculations were correct within their specified scope. In each case, the scope was insufficient. And in each case, the insufficiency was, or could have been, detected by engineering judgment — by the felt sense that the map was missing something — if the judgment had been trusted, sought, or given institutional weight.

The AI era creates a specific new pressure on engineering judgment, and the pressure comes not from the AI's deficiencies but from its strengths. The AI's calculations are so comprehensive, so rigorous, so thoroughly documented that the engineer who receives them may feel that there is nothing left for judgment to contribute. The analysis has been performed at a level of detail that no human could match. The constraints have been evaluated. The load cases have been tested. The optimizations have been run. What could judgment add to an analysis this thorough?

The answer — what judgment always adds — is the recognition that the analysis, however thorough, was performed within a scope that may be incomplete. The AI evaluated every load case in the specification. Judgment asks whether the specification includes every load case that matters. The AI optimized against every constraint it was given. Judgment asks whether the constraints it was given are the constraints the real world will impose. The AI tested every condition in the model. Judgment asks whether the model includes the condition that will determine whether the structure stands or falls.

These questions are not computational. They cannot be answered by more analysis, because more analysis operates within the same scope, and the question is about the sufficiency of the scope. They can only be answered by the engineer who brings to the review something the AI does not have: the accumulated experience of working in the territory that the map represents, the felt knowledge of what the territory contains that the map does not show, and the willingness to trust that felt knowledge even when the map — the beautiful, comprehensive, rigorously computed map — shows nothing wrong.

Boisjoly trusted his judgment. The institution overruled it. Seven people died. The Presidential Commission that investigated the disaster concluded that the decision-making process was flawed — that the institution had treated the absence of proof as proof of absence, and that the engineers' judgment should have been given greater weight.

Petroski would observe that this conclusion, while correct, identifies only half the problem. The other half is that the institution's ability to recognize judgment — to distinguish between the engineer's calibrated intuition and mere opinion, between the signal generated by decades of experience and the noise generated by anxiety — depends on the institution's understanding of what judgment is, how it is developed, and why it matters.

AI systems do not erode engineering judgment directly. They erode the institutional conditions under which judgment is developed and recognized. When the AI performs the analysis, the engineer does not perform the analysis, and the analysis — the direct encounter with the data, the struggle to make the model match the reality, the frustration of discovering that the calculation does not work and the investigation of why — is the process through which judgment is developed. The engineer who reviews AI output has not struggled. She has reviewed. The review may be competent. But competent review of an output is not the same as competent generation of an output, because the generation is where the judgment is built — where the engineer discovers, through direct encounter, what the data shows and what it hides, what the model captures and what it misses.

The Challenger investigation recommended specific institutional reforms: better communication channels between engineers and decision-makers, clearer protocols for evaluating risk in the presence of uncertainty, more weight given to engineering judgment in launch decisions. These reforms addressed the institutional problem. They did not address the developmental problem — the question of how to produce engineers whose judgment is worth trusting in the first place.

Petroski's answer, consistent throughout his career, was that judgment is produced by the study of failure. Not the study of success — success confirms what the engineer already believes and reinforces the confidence that the complacency cycle depends on. Failure reveals what the engineer did not know, exposes the limits of the model, and deposits the thin layers of caution and humility that accumulate, over a career, into the kind of judgment that Boisjoly brought to the Challenger teleconference.

The AI age requires engineers of judgment. It produces engineers of review. The gap between the two is the gap that Petroski spent his career trying to close, through education, through the detailed exposition of failure cases, through the patient argument that the study of what went wrong is more valuable than the study of what went right. His argument has not been refuted. It has only become more urgent, because the tool that performs the analysis is more powerful than ever, and the engineer who must evaluate the analysis — who must supply the judgment that the tool cannot — is increasingly developed under conditions that do not build the judgment she will need.

The O-ring failed at thirty-six degrees. The engineer knew it would. The institution did not listen. The question for the AI age is not whether the AI would have predicted the failure — it might have, depending on its training and its model. The question is whether the engineer who relies on the AI to perform the analysis will develop the judgment to know when the AI's analysis, like the data on that January morning, is technically accurate and fundamentally insufficient.

That judgment is the rarest and most consequential form of engineering intelligence. It cannot be coded, computed, or transferred. It can only be cultivated — through the direct, difficult, time-consuming study of what happens when the map encounters the territory and the territory wins.

Chapter 10: Engineering as Stewardship

Henry Petroski died on June 14, 2023, at the age of eighty-one. He had spent nearly four decades writing and teaching about engineering failure, and the timing of his death — six months after the release of ChatGPT, at the very beginning of the acceleration that would transform his profession — meant that he never directly addressed the questions this book has been exploring in his name.

But his framework addresses them. The principles he articulated across sixteen books and hundreds of papers — that failure is the primary teacher, that the factor of safety is a moral commitment, that every design is a hypothesis, that success breeds the complacency that produces the next catastrophe — do not require updating for the AI era. They require application. The forces they describe have not changed. The speed at which those forces operate has.

The application begins with a distinction that Petroski made repeatedly and that the AI discourse has largely failed to absorb: the distinction between engineering and calculation.

Calculation is the determination of quantities. The stress in a beam under a given load. The deflection of a floor under a specified weight. The resonant frequency of a structure in a given wind. Calculation is precise, repeatable, and verifiable. It is the domain in which AI excels — where its speed, accuracy, and capacity to evaluate thousands of configurations simultaneously produce results that are genuinely superior to anything a human calculator could achieve.
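
A sketch makes the scope of that term visible. Everything in the fragment below is hypothetical, the loads and section properties as much as the margin, and the formula is ordinary textbook mechanics for a simply supported beam with a central point load. It is the kind of arithmetic a machine performs flawlessly; nothing in it asks whether the assumed load is the load the structure will actually see.

```python
# Hypothetical numbers throughout; this is textbook beam mechanics,
# not any particular design.

def max_bending_stress(point_load_n: float, span_m: float, section_modulus_m3: float) -> float:
    """Peak bending stress for a simply supported beam with a central
    point load: moment M = P*L/4, stress sigma = M / S."""
    moment = point_load_n * span_m / 4.0
    return moment / section_modulus_m3

P = 50_000.0             # applied load, N (assumed)
L = 6.0                  # span, m (assumed)
S = 1.5e-3               # elastic section modulus, m^3 (assumed)
YIELD = 250e6            # steel yield stress, Pa
FACTOR_OF_SAFETY = 1.67  # allowable-stress margin, assumed for the example

stress = max_bending_stress(P, L, S)
allowable = YIELD / FACTOR_OF_SAFETY

print(f"Bending stress: {stress / 1e6:.1f} MPa")
print(f"Allowable:      {allowable / 1e6:.1f} MPa")
print("Within the assumed margin" if stress <= allowable else "Exceeds the allowable stress")

# The calculation is exact, repeatable, and checkable. Whether P, L, and
# the 1.67 margin describe the world the beam will actually meet is the
# engineering question, and it is not answered here.
```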

Engineering is the exercise of judgment about what to calculate, why to calculate it, and what to do when the calculation is insufficient. Engineering includes calculation, but it also includes the selection of the problem, the identification of the relevant forces, the assessment of which failure modes are most dangerous, the decision about how much margin to maintain against conditions the calculation does not cover, and the willingness to say "not yet" when the calculation says a design is feasible but the judgment says the understanding is incomplete.

AI performs calculation. Engineering is the human activity that determines whether the calculation matters.

This distinction has practical consequences that extend beyond engineering into every domain where AI is being integrated into professional practice. The lawyer who uses AI to draft a brief has received a calculation — a competent arrangement of precedents and arguments that satisfies the specified requirements. The engineering of the legal engagement — the determination of which arguments are strategically sound, which precedents are genuinely analogous rather than superficially similar, which positions will withstand the adversary's response — remains a human judgment. The physician who uses AI to analyze a scan has received a calculation — a pattern-recognition output that identifies anomalies consistent with the training data. The engineering of the diagnosis — the integration of the scan with the patient's history, the assessment of which anomaly is clinically significant, the judgment about whether to treat or to watch — remains human. The software architect who uses AI to generate code has received a calculation — working functions that satisfy the specified requirements. The engineering of the system — the determination of what the system should do, how it should handle the conditions the specification does not cover, which failure modes are acceptable and which are not — remains human.

In each case, the AI provides the artifact. The human provides the judgment about whether the artifact is adequate to the situation it must serve. And the adequacy is not a property of the artifact alone. It is a property of the relationship between the artifact and the world it will encounter — a world that is more complex, more variable, and more capable of surprise than any specification can capture.

Petroski's most consequential argument for the AI age is his insistence that this judgment is not innate. It is not a personality trait that some engineers possess and others lack. It is a cultivated capacity — developed over years of practice, refined by the study of failures, and maintained through continuous engagement with the conditions under which designs succeed and fail. The cultivation is deliberate. It requires exposure to failure cases, detailed and repeated, in educational settings and professional development. It requires the practice of designing by hand, at least some of the time, so that the engineer develops the feel for the forces, materials, and uncertainties that the AI's output conceals. It requires mentorship from experienced engineers who can transmit the tacit knowledge — the felt sense of what is right and what is wrong — that no textbook and no algorithm can convey.

The cultivation also requires something that the AI age makes increasingly difficult to justify in economic terms: time. The judgment that Boisjoly brought to the Challenger teleconference was not acquired quickly. It was deposited over decades, through thousands of encounters with materials and systems that did not always behave as the specifications predicted. Each encounter added a thin layer of understanding. The layers accumulated. The accumulation produced the signal that fired on January 28, 1986, when the data was insufficient but the judgment was clear.

AI compresses time. It generates in hours what previously required months. It offers the economic advantage of speed and the cognitive advantage of exploration — the ability to evaluate more options, in more configurations, under more conditions, than any human engineer could consider. These advantages are real, and Petroski, who valued engineering progress, would not have dismissed them.

But the compression of design time does not compress the development of judgment. The engineer who reviews a thousand AI-generated designs in a year has not acquired a thousand-fold increase in judgment. She has acquired familiarity with the AI's output — an understanding of what the tool produces and how it presents its results. This familiarity is useful but categorically different from the judgment that comes from generating designs, encountering failures, diagnosing their causes, and modifying the designs in response. The familiarity is about the tool. The judgment is about the world.

The question Petroski's framework poses to the AI age is whether the institutions that produce engineers — universities, professional organizations, firms — will prioritize the development of judgment or be satisfied with the development of familiarity. The economic incentive favors familiarity: the engineer who can effectively direct AI tools produces more output, at lower cost, than the engineer who insists on understanding the output before approving it. The safety incentive favors judgment: the engineer who understands why the design is what it is — who can evaluate its embedded assumptions, identify its untested hypotheses, and sense when its margin is insufficient — is the engineer who prevents the catastrophe that the AI-directed engineer discovers only when it arrives.

The tension between these incentives is not abstract. It is being resolved, right now, in the curricula of engineering schools, in the hiring practices of engineering firms, in the licensing requirements of professional boards. The American Society of Civil Engineers' 2024 policy statement — "AI cannot be held accountable, nor can it replace the training, experience, and judgment of a professional engineer" — represents one resolution. It asserts that judgment remains the engineer's irreducible contribution. But the assertion must be accompanied by the institutional commitment to developing that judgment, which means investing in the slow, expensive, economically inefficient process of producing engineers who have encountered failure directly, studied it in detail, and been changed by the encounter.

Petroski wrote, in what amounts to the central insight of his career, that "the further we stray from experience the less likely we are to think of all the right questions." The sentence was written about 1980s computers. It describes the 2020s with greater precision than Petroski could have anticipated.

AI enables engineering to stray further from experience than any previous tool. It generates designs that are more ambitious, more optimized, and more distant from the validated range of human experience than anything the profession has previously produced. Each design is a hypothesis about the future. Each hypothesis is tested by the world. And the engineer who must evaluate whether the hypothesis is sound — whether the design can safely enter the world and serve the people who will depend on it — must bring to that evaluation something the AI cannot provide: the judgment that comes from having walked the territory, not just studied the map.

The map is better than it has ever been. The territory has not changed. It is still the world of real materials, real forces, real weather, and real human lives that depend on the soundness of the structures engineers build. The gap between the map and the territory is where catastrophe lives, and the only thing that operates in that gap is the judgment of the person who knows, from experience, that the gap exists and that its contents cannot be specified, calculated, or optimized away.

Petroski spent his career studying what happens in the gap. His studies produced principles — the factor of safety, the complacency cycle, the evolutionary process, the unbuilt bridge — that describe the gap's dynamics with a precision that the AI's maps cannot match, because the principles are about the limits of maps, and no map can depict its own limitations.

The principles remain. The structures they describe remain. The consequences of ignoring them remain. What has changed is the speed at which the consequences arrive, the scale at which they manifest, and the confidence with which the practitioners who must guard against them approach their work.

Engineering is stewardship. It is the custodianship of the structures on which human lives depend — bridges, buildings, power systems, water systems, the physical infrastructure of civilization. The steward does not own what she protects. She maintains it. She inspects it. She worries about it. She lies awake wondering whether the design anticipated the conditions that tomorrow will bring.

The AI does not worry. It does not lie awake. It does not feel the weight of the lives that depend on the soundness of its output. This weight is the engineer's, and it is what makes engineering a human activity in the deepest sense — not because humans are the only beings who can calculate, but because humans are the only beings who can feel the consequence of a calculation that is wrong, and who can be changed by that feeling in ways that make the next calculation better.

Petroski understood this. His books are, at their core, an argument that the feeling matters — that the engineer who has felt the weight of a failure, who has studied the collapse and imagined the people inside it and carried that imagination as a physical burden, is a different and a better engineer than the one who knows only the formula that was revised in the aftermath. The formula is necessary. The feeling is what makes the formula meaningful.

To engineer is human because the consequences are human. The machines that assist the engineer do not change this. They change the speed, the scale, and the sophistication of the assistance. They do not change the fundamental character of the enterprise, which is the exercise of human judgment in the service of human safety, performed by people who carry the weight of knowing that their judgment, if wrong, will be measured not in errors on a screen but in lives lost on a bridge, in a building, in a structure that was supposed to hold and did not.

The factor of safety is not a number. It is a promise. The unbuilt bridge is not a failure. It is a form of courage. The small crack in the beam is not a defect. It is a warning, offered by the structure to the engineer who knows how to listen.

Petroski spent his life learning to listen. The structures he studied spoke in the language of failure — a language that is quiet, specific, and consequential. The AI speaks in a different language: the language of optimization, of efficiency, of mathematical elegance. Both languages are necessary. Neither is sufficient alone.

The engineer of the AI age must be fluent in both. She must be able to direct the tool with the precision it requires and evaluate its output with the judgment it cannot provide. She must be fast enough to leverage the tool's speed and slow enough to catch what the speed obscures. She must be confident enough to build and humble enough to know that what she builds is a hypothesis, not a solution, and that the hypothesis will be tested by a world that does not respect the elegance of the calculation.

She must, in other words, be an engineer. The word has not changed its meaning. Only its difficulty has increased — and difficulty, as Petroski would have been the first to observe, is where the learning lives.

Epilogue

The crack in a beam is not a defect. That sentence has been with me for weeks now, rearranging things.

I build software. I have never designed a bridge. I have never calculated a wind load or sized a steel member or specified the graphite-to-clay ratio in a pencil. Petroski's world is not my world. His materials are concrete and steel and wood. Mine are tokens and parameters and the invisible architectures of code that run on machines I will never touch.

And yet the crack in the beam is the thing I cannot stop thinking about, because it names something I have been struggling to articulate about the tools I use and the tools I build.

In The Orange Pill, I wrote about ascending friction — the idea that when AI removes difficulty at one level, the difficulty relocates to a higher cognitive floor. The implementation vanishes. The judgment ascends. I believed this then and I believe it now. But Petroski has shown me something I missed: the ascending friction only exists if you build in the margin for it to appear.

The crack appears in the margin. The beam that is optimized to the edge of its capacity does not crack before it collapses. It just collapses. The small failure, the warning, the message from reality to the builder — that exists only in the space between what the structure was designed to carry and what it was actually asked to carry. Eliminate the space, and you eliminate the warning.

I think about the engineers in Trivandrum. I celebrated their twenty-fold productivity gain. I still celebrate it. But Petroski has me asking a different question: In all that speed, in all that output, where were the small failures? Where were the cracks that told them something their models did not include? When the code works on the first pass because Claude wrote it and it compiles clean, what has been lost in the absence of the bug that would have forced the engineer to understand the system she is building?

The factor of safety is a moral commitment. That is the sentence I want to carry forward. Not a technical parameter. A promise to the people who will depend on what you build. The promise says: I know my model is incomplete. I have built in room for what I do not know. The room is not waste. It is the structure's way of saying, I was built by someone who understood that the world is more complex than any model can capture, and who cared enough about the people inside this structure to account for that complexity even when the math said it was unnecessary.

AI does not make that promise. AI satisfies constraints. The promise is human, and it lives in the specific form of humility that Petroski spent his life trying to teach: the knowledge that success is provisional, that every standing bridge is a hypothesis that has not yet been refuted, and that the engineer's most important contribution is not the calculation but the judgment about whether the calculation is asking the right question.

My son asked me at dinner whether AI was going to take everyone's jobs. I told him the truth: I do not know. But Petroski has given me something to add. The jobs that matter most — the ones that keep people safe, that maintain the structures on which lives depend, that carry the weight of consequence — those jobs require something no tool can provide. They require the willingness to study what went wrong. To sit with the rubble. To feel the weight of a failure deeply enough that the feeling changes how you build.

The pencil is not simple. The crack is not a defect. The unbuilt bridge is not a failure. These inversions are Petroski's gift, and they are exactly the inversions this moment requires — the recognition that what looks like waste is wisdom, what looks like hesitation is courage, and what looks like a defect is the structure trying to tell you something, if you have earned the judgment to hear it.

Edo Segal

IT'S THE ONLY WARNING YOU'LL GET.

Every catastrophic engineering failure in history was preceded by success. Bridges that stood for decades before they fell. Designs that satisfied every code before they killed. AI produces the same dangerous confidence: outputs that are comprehensive, rigorous, and correct within the scope of what they were asked. The question Petroski spent his life studying is what happens when the scope is wrong.

Henry Petroski documented a pattern that repeats across centuries: success breeds confidence, confidence narrows margins, and narrowed margins eliminate the small failures that serve as an immune system, the cracks and deflections and warnings that tell the engineer her model is incomplete. AI optimization, by its nature, consumes these margins. It identifies them as waste. They are not waste. They are the structure's promise to the people inside it.

This book applies Petroski's framework to the age of artificial intelligence, where tools generate designs faster than understanding can follow and the encounter with failure, the only known mechanism for developing engineering judgment, is being systematically removed from professional practice.

