Merritt Roe Smith — On AI
Contents
Cover
Foreword
About
Chapter 1: The Determinist Temptation
Chapter 2: The Myth of the Neutral Tool
Chapter 3: Path Dependence and the Architecture of Lock-In
Chapter 4: The Luddites and the Winners' History
Chapter 5: The Military-Industrial Origins of Intelligence
Chapter 6: Institutional Mediation and the Factory Acts
Chapter 7: The Recursive Machine
Chapter 8: Building in the River
Chapter 9: The Theory of Institutional Failure
Chapter 10: Neither Determined Nor Free
Epilogue
Back Cover
Cover

Merritt Roe Smith

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Merritt Roe Smith. It is an attempt by Opus 4.6 to simulate Merritt Roe Smith's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The room that taught me the most was not the one where everything worked.

Trivandrum, February 2026. Twenty engineers. Claude Code. By Friday, a twenty-fold productivity multiplier. I describe that week in The Orange Pill as a breakthrough, and it was. But what I could not explain at the time — what nagged at me for months afterward — was why the same tool, deployed with the same training, in a different organizational context, produced almost nothing.

Same subscription. Same model. Same capabilities. Radically different outcomes.

I kept attributing the difference to soft variables — team culture, leadership quality, the mysterious chemistry of a group that clicks. These were not wrong, but they were not precise enough. They described the variation without explaining the mechanism.

Then I encountered the work of Merritt Roe Smith, and the mechanism snapped into focus.

Smith spent his career at MIT studying what happens when powerful new technologies meet different institutional environments. His most famous comparison — the federal armories at Springfield and Harpers Ferry, both given identical mandates and identical machines by the same government — demonstrated with archival precision what I had felt in my bones: that technology does not determine outcomes. Institutions do. The same machine, dropped into different organizational cultures with different values and different power structures, produces fundamentally different results. Not slightly different. Categorically different. Different enough to shape the trajectory of American manufacturing for a century.

This finding matters right now more than it has ever mattered, because the dominant narrative about AI is deterministic. AI will transform everything. Resistance is futile. Adaptation is the only rational response. The river flows, and you either swim or drown.

Smith's institutional lens does not deny the river's power. It asks a different question — one the determinists skip past because it is harder and less glamorous: Who is building the structures that determine where the water goes? Which institutions are shaping the deployment? Whose values are embedded in the architecture? And what happens to the people who bear the cost of the transition when the institutional response arrives too late?

These are not abstract questions. They are the questions that will determine whether AI produces broadly shared human flourishing or concentrated gain at dispersed expense. Every previous technological transition in modern history has been decided by the quality of the institutional response. Not the power of the technology. The quality of the dam.

Smith gave me the framework to see what I had been living inside. This book is my attempt to share it with you.

Edo Segal · Opus 4.6

About Merritt Roe Smith


Merritt Roe Smith (b. 1940) is an American historian of technology and the Leverett Howell and William King Cutten Professor of the History of Technology, Emeritus, at the Massachusetts Institute of Technology. Smith received his Ph.D. from Pennsylvania State University and joined the MIT faculty in 1971, where he taught and conducted research for over five decades. His landmark work Harpers Ferry Armory and the New Technology: The Challenge of Change (1977) examined how the introduction of precision manufacturing at the federal armory was shaped — and resisted — by the institutional culture of its workforce, establishing him as a leading voice in the social and institutional history of technology. As co-editor of Does Technology Drive History? The Dilemma of Technological Determinism (1994, with Leo Marx), he helped frame one of the defining debates in the field: whether technologies possess an inherent logic that determines social outcomes, or whether institutional, political, and cultural forces mediate between a technology's capabilities and its effects. His edited volume Military Enterprise and Technological Change (1985) documented the U.S. military's decisive role as a catalyst of industrial innovation, tracing how technologies developed for military purposes migrated into civilian applications carrying the institutional values of their origins. Smith's scholarship is distinguished by its rigorous use of primary archival sources, its comparative institutional method, and its insistence that workers and communities affected by technological change be treated as agents with legitimate knowledge rather than as obstacles to progress. Upon his retirement from MIT in 2024, colleagues honored his legacy at a symposium recognizing his foundational contributions to understanding how societies navigate technological transformation.

Chapter 1: The Determinist Temptation

In 1844, Samuel Morse sent the first telegraph message from Washington to Baltimore: "What hath God wrought?" The phrasing was not accidental. It attributed the technology to divine agency rather than human choice — as though the electromagnetic telegraph had descended from heaven rather than emerged from a specific institutional context of government funding, military interest, and patent law. The rhetorical move was telling. From the moment a powerful new technology appears, the temptation is to treat it as a force of nature rather than a product of human decisions. That temptation has a name in the history of ideas: technological determinism. And it has never been more dangerous than it is right now.

Merritt Roe Smith spent his career at MIT excavating the hidden assumptions beneath claims about technology and progress. His research on the federal armories at Springfield and Harpers Ferry — the institutions that developed interchangeable parts and precision manufacturing in nineteenth-century America — demonstrated something that sounds obvious but that most public discourse about technology systematically ignores: technologies do not develop according to an internal logic of improvement. They are pushed and pulled by institutional actors pursuing specific goals. The interchangeable parts that became the foundation of American mass production did not emerge because the technology was "ready" or because the market "demanded" them. They emerged because the U.S. War Department needed weapons that could be repaired in the field by soldiers who were not gunsmiths, and because the federal armory system provided the funding, the organizational structure, and the political authority to develop the precision manufacturing techniques that commercial manufacturers, left to market incentives alone, would not have pursued for decades.

The institutional context determined the technological trajectory. Not the technology's internal logic. Not the invisible hand. The specific choices of specific actors operating within specific institutional frameworks.

This finding — arrived at through decades of archival research, documented with the meticulous care that primary sources demand — carries implications that extend far beyond nineteenth-century arms manufacturing. It establishes a principle: to understand why a technology develops the way it does, look at the institutions that fund it, organize it, and deploy it. To predict what a technology will do to a society, look at the institutional arrangements that mediate between the technology's capabilities and its social effects. The technology constrains. The institutions determine.

The AI moment has made this principle urgent in ways that Smith's original research could not have anticipated. When ChatGPT reached an estimated one hundred million users within two months of launch — then the fastest adoption of any consumer application on record — the determinist narrative was already fully formed. AI would transform everything. Knowledge work would be restructured. Entire professions would be displaced. The only question was how fast. Resistance was characterized, in the dominant Silicon Valley discourse, not as a rational response to genuine costs but as a failure of imagination — the contemporary equivalent of the Luddite's futile rage against the power loom.

This narrative is the determinist temptation in its purest contemporary form. It says: the technology has arrived, and its effects will follow with the inevitability of gravity. The river flows. Adaptation is the only rational response. Anyone who questions the trajectory is standing athwart history, and history does not slow down for skeptics.

The temptation is powerful because it is partly right. AI will transform knowledge work. The broad outlines of this transformation are not subject to human veto. No act of legislation, no professional resistance movement, no philosophical critique will prevent large language models from performing, at a competent level, tasks that previously required years of specialized training. These are genuine constraints imposed by the technology's material capabilities, and pretending otherwise is as intellectually dishonest as pretending that the power loom would not outproduce the handloom weaver.

But the determinist temptation does not stop at acknowledging the technology's power. It goes further — much further — and in going further it commits the error that Smith's institutional analysis was designed to expose. The determinist claims not merely that the technology constrains the range of possible futures, but that it determines which specific future will emerge. The technology will not merely transform knowledge work; it will transform it in a specific way, producing specific winners and specific losers, following a specific trajectory that human choices cannot meaningfully alter. The river does not merely flow; it flows to a predetermined destination, and the structures built in its path are irrelevant to its course.

This is where the temptation becomes dangerous, because it converts a descriptive claim (AI is powerful) into a prescriptive one (therefore, adaptation is the only response). And the prescriptive claim, once accepted, produces a specific political consequence: the abdication of the institutional agency that the historical record shows to be decisive in determining the outcomes of technological transitions.

Consider the telegraph itself — the technology that Morse attributed to God rather than to the institutions that produced it. The electromagnetic telegraph did transform communication. But the specific form of the transformation was not determined by the technology's properties. In the United States, the telegraph was deployed through a system of private companies operating under minimal government regulation. The result was a rapid concentration of control in the hands of Western Union, which used its monopoly position to shape the terms on which information flowed through the national communication infrastructure. Newspapers that cooperated with Western Union received preferential treatment. Those that did not were disadvantaged. The technology that was supposed to democratize information became, through the specific institutional arrangements of its deployment, an instrument of concentrated power.

In European nations, the same technology was deployed through government postal systems that treated communication as a public service. The result was a more broadly accessible, though more bureaucratically managed, communication infrastructure. Access was wider. Control was less concentrated. The technology's social effects differed not because the technology differed but because the institutional arrangements surrounding it differed.

Same technology. Different institutions. Divergent outcomes. The determinist who treats the telegraph's effects as inherent in the technology cannot account for this divergence. The institutionalist can.

The parallel to AI is direct and consequential. The large language models now reshaping knowledge work are being deployed through specific institutional arrangements — primarily through large technology companies operating subscription-based platforms under minimal government regulation in the United States, through more heavily regulated channels in the European Union, and through state-directed programs in China. These different institutional arrangements will produce different outcomes: different distributions of the technology's benefits and costs, different relationships between the technology and the workers it affects, different concentrations of power and different possibilities for democratic accountability.

The determinist who treats the AI transition as a single, globally uniform phenomenon — who says "AI will do X to society" without specifying which institutional arrangements are mediating between the technology and the society — is making the same error as the determinist who treated the telegraph as having inherent effects independent of the institutions that deployed it. The error is not merely academic. It has practical consequences, because it directs attention away from the institutional choices that will actually determine the transition's outcome and toward the technology itself, which is indifferent to the values we would like it to serve.

Edo Segal's The Orange Pill navigates the determinist temptation with more care than most accounts of the AI moment. The book's central metaphor — intelligence as a river that has been flowing for 13.8 billion years — carries unmistakable determinist undertones. A river is a natural force. It does not negotiate. It does not respond to argument. It flows according to the laws of physics, and structures built in its path must accommodate its power or be destroyed. When Segal writes that the river "cannot be stopped," the implication is that the AI transition, like the flow of water downhill, is beyond the reach of human choice.

But the metaphor contains its own corrective. The beaver does not stop the river. The beaver redirects it. The dam creates a pool, and the pool creates an ecosystem, and the ecosystem supports forms of life that the unimpeded river would have swept away. The agency is real, even though the constraint is absolute. This is, translated from metaphor into the vocabulary of institutional analysis, a soft determinist position — and it is, in Smith's framework, the intellectually honest one.

The distinction between hard and soft determinism is the analytical knife that separates the defensible from the lazy. Hard determinism says the technology determines specific outcomes regardless of institutional context. Soft determinism says the technology constrains the range of possible outcomes while leaving the specific outcome genuinely open to institutional choice. The former produces fatalism or triumphalism, both of which are forms of passivity. The latter produces engagement — the recognition that the technology is powerful enough to demand response but not so powerful as to render response futile.

Smith's entire body of work is an argument for soft determinism grounded in empirical evidence. The Springfield Armory adopted precision manufacturing methods relatively smoothly because its organizational culture emphasized discipline, uniformity, and compliance with federal directives. The Harpers Ferry Armory, presented with the same technology and the same federal mandate, resisted for a decade because its organizational culture valued craft autonomy, individual judgment, and the prerogatives of skilled workers. The technology was identical. The institutional cultures diverged. The outcomes were dramatically different — not in the sense that one succeeded and the other failed, but in the sense that each institutional context shaped the technology's deployment in ways the technology itself did not determine.

This finding is not a historical curiosity. It is a prediction about the present. The AI systems being deployed in different organizational contexts — in Silicon Valley startups that celebrate speed and disruption, in European firms operating under comprehensive regulatory frameworks, in government agencies bound by procurement rules and accountability structures, in educational institutions struggling to maintain their pedagogical mission — will produce different outcomes in each context. Not because the technology differs, but because the institutions differ. And the quality of those institutional responses — their attentiveness to the displaced as well as the empowered, their commitment to broadly distributed benefit rather than narrowly concentrated gain — will determine whether the AI transition produces human flourishing or human devastation.

The determinist temptation whispers that the institutional response does not matter — that the technology will produce its effects regardless. The historical record, documented with the care that Smith and his colleagues brought to the armory archives, shouts that the institutional response is the only thing that has ever mattered. Every major technological transition in modern history has been shaped, in its specific form and its specific distribution of costs and benefits, by the institutional arrangements that surrounded it. The transitions that produced broadly shared prosperity were not the ones where the technology was gentler. They were the ones where the institutions were stronger.

The most insidious consequence of the determinist temptation is that it functions as a self-fulfilling prophecy. When enough people believe that the outcome is determined by the technology rather than by institutional choices, they stop building institutions. They stop organizing. They stop fighting for the arrangements that would channel the technology's power toward equitable outcomes. And when the institutions are not built, the technology's effects are indeed determined — not by the technology's inherent properties, but by the absence of the institutional structures that would have produced a different result. The determinism that the determinists predict is caused not by the technology but by the determinists' own abdication of agency.

This is why the question Smith posed — does technology drive history? — is not an abstract philosophical puzzle. It is the most practically consequential question anyone can ask about the AI transition. The answer you give determines whether you build institutions or wait for the technology to determine the outcome on its own. It determines whether you fight for the values you want embedded in the transition or assume that the technology's trajectory is fixed. It determines whether you exercise the agency that the historical record shows to be decisive or surrender it to a narrative of inevitability that serves the interests of those who benefit from unmediated technological deployment.

The temptation is real. The technology is genuinely powerful. The constraints it imposes are not imaginary. But the specific future that emerges from within those constraints remains, as it has in every previous technological transition, genuinely open to institutional choice. The river flows. The question is whether anyone builds the dam.

Chapter 2: The Myth of the Neutral Tool

In the workshops of the Springfield Armory in the 1820s, a quiet revolution was underway that would reshape American industry for the next century and a half. Under the direction of federal ordnance officers, skilled craftsmen were being systematically replaced by specialized machine tools capable of producing weapon components to tolerances precise enough that any part could be substituted for any other part of the same type. A lock plate made on Monday could be fitted to a stock made on Thursday without filing, fitting, or adjustment. The parts were interchangeable. The craftsmen who had previously performed the fitting — who had built entire weapons from raw materials using holistic knowledge accumulated over years of apprenticeship — were no longer necessary.

This was not a natural evolution. It was an institutional choice, and Merritt Roe Smith's archival research documented its origins with a specificity that demolishes any pretense of technological neutrality. The War Department did not pursue interchangeable parts because they represented the optimal manufacturing technique in some abstract sense. Commercial arms manufacturers, operating under different institutional pressures — the pressures of the market rather than the military — continued to use craft methods that were, for their purposes, perfectly adequate. The War Department pursued interchangeable parts because it needed weapons that could be repaired in the field by replacing standardized components, eliminating dependence on skilled armorers who might not be available at the point of need. The technology was designed to solve a specific institutional problem. The solution to that problem carried specific social consequences: the deskilling of craft labor, the concentration of process control in the hands of managers and engineers, and the subordination of individual judgment to standardized procedure.

The technology was not neutral. It was designed to achieve specific outcomes that served specific interests. And the interests it served were not the interests of the craftsmen it displaced.

This finding — that technology embodies the values of the institutions that produce it — is among the most important contributions of the institutional history of technology to public understanding. It cuts against the most pervasive myth in contemporary discourse about AI: the myth that the tools are neutral instruments whose effects depend entirely on how users choose to employ them. A hammer can build a house or break a window. A knife can prepare food or inflict harm. The technology is innocent; only the use to which it is put carries moral weight.

The myth is intuitively appealing and profoundly misleading. It is misleading because it treats the technology as a finished object, presented to a user who freely chooses among its possible applications, rather than as the product of a development process in which choices about design, functionality, and optimization have already constrained the range of possible uses before the user ever touches it. The precision manufacturing tools at Springfield were not neutral instruments that craftsmen could have used to enhance their existing practice. They were instruments specifically designed to replace the craftsmen's practice with a different practice — one that required less skill, less autonomy, and less individual judgment. The design was the decision. By the time the tool reached the workshop floor, the most consequential choice had already been made.

The large language models that now define the AI landscape were developed through an analogous process of institutional choice, and the values embedded in those choices are no more neutral than the values embedded in the Springfield machine tools. The optimization criteria that guided the development of these systems — fluency, helpfulness, breadth of competence, user engagement — reflect the priorities of the institutions that produced them: research laboratories funded by technology companies seeking commercial applications, evaluated by benchmarks that reward performance on tasks selected by researchers, refined through feedback processes designed to maximize user satisfaction.

Each of these priorities sounds unobjectionable in isolation. Who would argue against fluency, helpfulness, or breadth? But taken together, they produce a system with specific tendencies that are not neutral in their effects on the humans who use them. A system optimized for helpfulness tends to provide answers rather than questions, solutions rather than problems, resolutions rather than explorations. It tends to smooth rather than to sharpen. It converges toward agreement rather than productive friction. It gives the user what the user appears to want rather than challenging the user to reconsider what they should want.

These tendencies are visible in the documented experience of working with AI systems. Segal describes, in The Orange Pill, the moment when he almost kept a passage that "sounded good but did not think well" — where Claude had produced prose of sufficient polish that it nearly concealed the absence of genuine thought beneath the surface. This is not a bug in the system's design. It is a feature — a consequence of optimizing for helpfulness and fluency rather than for the kind of productive resistance that generates understanding. A system designed to be helpful will tend to confirm the user's framing rather than challenge it, because challenge is, by definition, unhelpful in the immediate moment, even when it is essential over the longer term.

A different set of optimization criteria would have produced a different tool with different tendencies. A system optimized for intellectual rigor rather than helpfulness might push back against poorly formed questions, refuse to generate plausible-sounding answers to questions it could not genuinely address, and force users to earn the clarity that the current systems give away. Such a system would be less immediately satisfying and more genuinely useful for the kind of deep intellectual work that the knowledge economy theoretically values. Its absence from the market is not a technological inevitability. It is an institutional choice — a consequence of the commercial incentives that reward engagement and satisfaction over the harder-to-measure qualities of intellectual development and genuine understanding.
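The structural point can be made concrete with a deliberately crude sketch. In the toy below, the candidate responses, feature scores, and weights are all invented for illustration; no deployed system is trained by hand-scoring a table like this. What the sketch preserves is the structure of the claim: which response wins is a function of the objective's weights, and the weights are chosen by institutions, not by users.

```python
# Invented feature scores for three candidate replies (illustration only;
# no relation to any deployed model's actual objective or training data).
candidates = {
    "fluent answer that confirms the user's framing":
        {"fluency": 0.9, "agreement": 0.9, "challenge": 0.1},
    "pushback that questions the premise":
        {"fluency": 0.7, "agreement": 0.2, "challenge": 0.9},
    "refusal: the question is not yet well posed":
        {"fluency": 0.5, "agreement": 0.1, "challenge": 0.8},
}

def best_response(weights):
    """Return the candidate that maximizes the weighted feature sum."""
    score = lambda feats: sum(weights[k] * feats[k] for k in weights)
    return max(candidates, key=lambda name: score(candidates[name]))

# An objective that rewards immediate helpfulness and satisfaction...
print(best_response({"fluency": 1.0, "agreement": 1.0, "challenge": 0.0}))
# ...and one that rewards productive intellectual friction instead.
print(best_response({"fluency": 0.3, "agreement": 0.0, "challenge": 1.0}))
```

Swap the weights and the winner flips. The tendency toward confirmation or toward friction is present in the objective before any user types a prompt, which is another way of saying that the design was the decision.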

Smith's framework predicts this pattern because it has documented it repeatedly in other contexts. The values embedded in a technology at its origin tend to persist long after the original institutional context has been forgotten. The precision manufacturing techniques developed for military weapons production migrated into civilian industries — clock-making, sewing machine production, bicycle manufacturing, and eventually automobile manufacturing — carrying with them the values of standardization, interchangeability, and centralized process control that had been designed to serve military needs. These values were not inherent in the manufacturing techniques themselves. They were the products of the institutional context in which the techniques had been developed. But once embedded in the technology, they proved remarkably persistent, shaping the development of American manufacturing for generations in directions that alternative institutional origins would not have produced.

The same pattern is visible in the institutional genealogy of computing. The foundational technologies of the digital age — electronic computation, packet-switched networking, time-sharing systems — were developed not in commercial laboratories responding to market demand but in military and military-adjacent research institutions responding to strategic imperatives. ENIAC was built to calculate artillery firing tables for the Army Ballistic Research Laboratory. The ARPANET was developed by the Defense Department's Advanced Research Projects Agency to share scarce computing resources among its research sites, building on packet-switching concepts that Paul Baran had developed at RAND in pursuit of communication networks that could survive nuclear attack. The algorithms that underlie contemporary machine learning were nurtured for decades in research programs funded by DARPA, the National Science Foundation, and the Office of Naval Research.

The values that these military institutions embedded in the technologies they developed — efficiency, optimization, control, reliability, scalability — were rational within their originating context. A military communication network that is inefficient or unreliable endangers lives. But these values carry different consequences when the technologies built on them are repurposed for civilian applications. The efficiency that is appropriate in a weapons system becomes the compulsive optimization that characterizes contemporary technology culture. The scalability that is necessary in military logistics becomes the winner-take-all dynamics of the platform economy. The control that is essential in military command structures becomes the surveillance capabilities that commercial platforms deploy to maximize engagement.

The transfer is not conspiratorial. It is structural. Engineers trained in institutional environments that reward efficiency and control carry those values into the commercial products they build, often without recognizing them as values at all. They treat them as technical requirements — as inherent properties of well-designed systems rather than as institutional choices that could have been made differently. The myth of the neutral tool provides intellectual cover for this transfer by directing attention away from the design process and toward the moment of use, as though the technology's effects were determined entirely by the user's intentions and not at all by the designer's embedded priorities.

The non-neutrality of AI tools has immediate practical implications for every domain in which they are deployed. Consider legal practice. An AI system trained to draft legal briefs is not a neutral writing instrument that lawyers can use to enhance their existing practice. It is a system optimized for particular qualities — comprehensiveness, citation accuracy, structural coherence — that reflect the priorities of the institutions that developed it. These are genuine virtues in legal writing. But the system's optimization for these qualities comes at the expense of other qualities that experienced lawyers value: the strategic ambiguity that creates negotiating room, the rhetorical choices that signal respect for a particular judge's preferences, the judgment about what to exclude that distinguishes a competent brief from a persuasive one. A lawyer who relies on AI-drafted briefs without recognizing what the tool's optimization criteria have selected for and against is not using a neutral instrument. She is allowing the tool's embedded values to shape her practice in ways she may not have chosen if the choice had been made explicit.

Consider education. An AI tutoring system optimized for student engagement and satisfaction will tend to make learning feel easy — to smooth away the confusion and frustration that are, neuroscientifically and pedagogically, essential components of genuine understanding. A student who feels consistently engaged and satisfied is a student whose metrics look good on the dashboard. A student who struggles, fails, persists, and eventually breaks through to understanding that is genuinely her own produces worse engagement metrics and better learning outcomes. The tool's optimization criteria select for the former at the expense of the latter, not because anyone intended this outcome but because the institutional context in which the tool was developed rewarded engagement metrics over learning outcomes.

The myth of neutrality obscures these dynamics by framing the technology as a passive instrument and placing the entire burden of responsible use on the individual user. If the tool is neutral, then any negative consequence of its use is the user's fault — a failure of judgment, discipline, or self-awareness rather than a structural feature of the tool's design. This framing conveniently absolves the institutions that designed the tool of responsibility for the consequences of their design choices, and it places the burden of navigating those consequences on the individuals who are least equipped to understand the institutional forces shaping the tool they are using.

Smith's institutional analysis provides the corrective. Technology is never neutral because technology is never developed in a vacuum. It is always the product of specific institutional contexts, designed to serve specific purposes, funded by specific interests, evaluated by specific criteria. The interchangeable parts at Springfield embodied military values of standardization and field-repairability. The large language models at Anthropic and OpenAI embody commercial values of helpfulness, engagement, and breadth. Neither set of values is inherently wrong. But neither is neutral, and the failure to recognize the non-neutrality — the failure to ask whose values the tool embodies and whose interests those values serve — is the failure that allows the technology's institutional origins to determine its social effects without democratic accountability.

The question that the myth of neutrality prevents us from asking is the question that the institutional history of technology insists we ask: not "How should we use this tool?" but "Why was this tool designed this way, and whose interests does the design serve?" The answer to that question opens a space for institutional choice that the myth of neutrality closes. If the tool's design reflects institutional choices rather than technological necessities, then different institutional choices could produce a different tool — one optimized for different values, serving different interests, producing different effects on the people who use it and the communities in which it is deployed.

The interchangeable parts at Springfield could have been developed differently, under different institutional pressures, with different consequences for the craftsmen who built weapons and the communities that depended on their skills. They were not, because the institutional context — the War Department's needs, the federal government's funding priorities, the ideology of national preparedness — determined the direction of development. The AI tools now reshaping knowledge work could be developed differently, too, under different institutional pressures, with different consequences for the knowledge workers who use them and the communities that depend on their judgment. Whether they will be depends not on the technology's inherent properties but on the institutional choices that the myth of neutrality encourages us to ignore.

Chapter 3: Path Dependence and the Architecture of Lock-In

The QWERTY keyboard layout was designed in the 1870s to solve a problem that no longer exists. The arrangement of keys on Christopher Latham Sholes's typewriter was configured to prevent the jamming of mechanical type-bars by separating frequently combined letters, so that the metal arms most likely to be struck in rapid succession would not sit beside one another and collide. The design was a rational response to a specific mechanical constraint — a constraint that ceased to exist with the introduction of electric typewriters in the mid-twentieth century and that has no relevance whatsoever to the glass surfaces of twenty-first-century smartphones. Yet the layout persists, not because it is optimal for contemporary purposes but because the installed base of trained typists — the hundreds of millions of people who have invested time and muscular memory in learning the arrangement — makes the switching costs prohibitive. The Dvorak layout, designed in 1936 specifically for typing efficiency, has been available for nearly ninety years. Almost nobody uses it.

This is path dependence: the mechanism by which an early choice, made for reasons that are rational at the time of making, creates a base of investment that makes later deviation from the chosen path increasingly costly. The lock-in is not physical. It is institutional — enforced not by the technology's material properties but by the social and economic arrangements that accumulate around it. Skills are learned. Expectations form. Supply chains organize. Educational curricula embed. Regulatory frameworks codify. Each layer of investment raises the cost of departure, until the path that began as one option among many becomes, for all practical purposes, the only option available.
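The mechanism is simple enough to run as a toy simulation. The sketch below is a minimal increasing-returns adoption model in the spirit of W. Brian Arthur's economics of lock-in; every number in it is invented for illustration, and nothing in it comes from Smith's archives. It shows how early, essentially random choices can entrench a standard that later adopters cannot rationally escape.

```python
import random

def simulate(adopters=10_000, seed=None):
    """One run of a toy increasing-returns adoption process."""
    rng = random.Random(seed)
    quality = {"A": 1.00, "B": 1.05}   # B is intrinsically better
    network_weight = 0.01              # payoff bonus per prior adopter
    installed = {"A": 0, "B": 0}
    for _ in range(adopters):
        # Each adopter's payoff: intrinsic quality, plus a benefit that
        # grows with the installed base, plus idiosyncratic noise.
        def payoff(tech):
            return (quality[tech]
                    + network_weight * installed[tech]
                    + rng.gauss(0, 1.0))
        installed[max(installed, key=payoff)] += 1
    return installed

# Across many runs the better technology usually wins. But when early
# noise happens to favor A, its installed base snowballs until no later
# adopter can rationally defect: the inferior standard locks in.
runs = [simulate(seed=s) for s in range(50)]
a_lockins = sum(r["A"] > r["B"] for r in runs)
print(f"inferior technology A locked in on {a_lockins} of {len(runs)} runs")
```

Run it repeatedly and the intrinsically better technology usually prevails; on the seeds where early noise favors the inferior option, the network bonus eventually outweighs the quality gap for every subsequent adopter. The lock-in is produced not by either technology's properties but by the accumulated weight of prior choices: QWERTY in a few lines of code.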

Merritt Roe Smith's research on the American system of manufactures provided one of the most consequential demonstrations of path dependence in the historical literature on technology. The manufacturing techniques developed in the federal armories — precision gauging, sequential machining, the organizational disciplines required to produce interchangeable components — did not remain within the armory walls. They migrated outward into the civilian economy through specific institutional channels: through the mechanics and superintendents who left the armories to work in private industry, through the machine tools that were developed for armory production and then sold to commercial manufacturers, through the organizational models that were observed by visiting industrialists and adapted for their own purposes.

The migration was not inevitable. It followed specific institutional pathways. But once the techniques had been adopted by enough civilian manufacturers, once the skills had been taught in enough training programs, once the machine tools had been installed in enough factories, the path was set. American manufacturing developed along the trajectory established by the armory system — toward greater standardization, greater mechanization, greater centralization of process control — not because this trajectory was the only possible one, but because the investments that had accumulated along it made alternative trajectories prohibitively expensive to pursue.

European manufacturers, starting from different institutional contexts, developed along different paths. The British tradition of craft-based production, with its emphasis on flexible specialization and the judgment of skilled workers, persisted alongside the American system for decades. The two paths produced goods of comparable quality through fundamentally different methods, reflecting fundamentally different institutional priorities. The American path was not superior in any absolute sense. It was locked in — reinforced by the accumulated investments in skills, machines, organizational routines, and educational programs that had been built along it.

The AI transition is creating path dependencies of comparable magnitude and potentially greater consequence, and the speed at which these dependencies are forming is historically unprecedented. The architectural choices that underlie contemporary large language models — the transformer architecture, the attention mechanism, the particular methods of pre-training on massive text corpora followed by fine-tuning through human feedback — are establishing a technical path that will constrain AI development for years, perhaps decades. These choices were not inevitable. Alternative approaches — symbolic AI systems that reason through explicit logical rules, evolutionary computation that develops solutions through selection processes analogous to biological evolution, neurosymbolic systems that combine neural networks with structured knowledge representations — existed and, in some cases, showed considerable promise. But the transformer architecture achieved breakthrough results on specific benchmarks, attracted the concentrated investment necessary for rapid scaling, and created the base of skills, infrastructure, and institutional routines that now makes deviation from the established path increasingly difficult.

The lock-in extends beyond technical architecture into every dimension of AI's social deployment. The subscription-based business model that currently dominates AI distribution — the model that delivers Claude at a hundred dollars per month, GPT-4 at twenty dollars per month, and their competitors at similar price points — is establishing patterns of access, pricing, and economic relationship between providers and users that will prove remarkably persistent. Users are organizing their work processes around these tools. Organizations are restructuring their workflows to accommodate them. Educational institutions are building curricula that assume their availability. Each of these adaptations represents an investment in the current deployment model, and each investment raises the switching cost that any alternative model must overcome.

The regulatory frameworks being developed — the European Union's AI Act, the American executive orders, the emerging standards in Singapore, Brazil, and Japan — are establishing legal precedents and institutional structures that will constrain future policy options in ways that the current moment's urgency makes difficult to fully appreciate. Regulatory path dependence is particularly powerful because legal frameworks create constituencies — regulated industries, compliance professionals, enforcement agencies — that develop institutional interests in the framework's perpetuation. A regulatory framework designed for the current generation of AI systems may prove inadequate for the next generation, but the institutional investments in the existing framework will resist the reforms that adequacy requires.

Smith's comparative method — the method that produced the Springfield-versus-Harpers-Ferry analysis — illuminates the path dependencies of AI deployment with particular clarity when applied to the different national approaches now crystallizing. The United States is establishing a path characterized by minimal regulation, private-sector leadership, and the commercial priorities of the technology companies that dominate AI development. The European Union is establishing a path characterized by comprehensive regulation, rights-based frameworks, and the precautionary priorities of governmental institutions accustomed to protecting citizens from market failures. China is establishing a path characterized by state direction, strategic competition, and the authoritarian priorities of a government that views AI as both an instrument of economic development and a tool of social control.

Each of these paths reflects specific institutional choices made in specific political and economic contexts. Each will produce specific consequences for the people who live within its jurisdiction. And each is accumulating, with every passing month, the investments in skills, infrastructure, expectations, and institutional arrangements that will make later deviation increasingly costly. The American path locks in commercial values. The European path locks in precautionary values. The Chinese path locks in state-directed values. None of these paths is optimal in any absolute sense. Each forecloses possibilities that the alternatives would have preserved.

This is what makes the current moment so consequential — and what makes the determinist temptation so dangerous. The determinist treats the emerging path as the only possible path, as the inevitable product of the technology's inherent properties. The institutionalist recognizes that the path is being chosen, right now, through specific institutional decisions that specific actors are making. The path that locks in is not the path that the technology demands. It is the path that the current constellation of institutional power produces. Different institutional arrangements would lock in a different path, with different consequences, different distributions of costs and benefits, different possibilities for the future.

The practical urgency of path dependence lies in its temporal asymmetry: the earlier the intervention, the more effective it is; the later, the less. In the formative period of a technological path, when investments are still relatively shallow and alternatives are still relatively accessible, institutional choices have disproportionate leverage. A regulatory decision made in 2026 about the transparency requirements for AI systems will shape the technology's development trajectory far more powerfully than the same decision made in 2036, when a decade of investment in opaque systems will have created constituencies, routines, and expectations that resist the change. An educational decision made now about how to integrate AI into curricula will shape a generation of students' cognitive development far more powerfully than a corrective decision made ten years hence, when a decade of unstructured AI use will have already shaped the cognitive habits that the corrective seeks to address.

Smith recognized this temporal asymmetry in his analysis of the armory system. The decision to pursue interchangeable parts was made in the 1810s and 1820s, when the path was still formable. By the 1850s, when the American system of manufactures had been adopted by civilian industries across the northeastern United States, the path was set. The investments in machine tools, trained operators, organizational routines, and supply chains had accumulated to a point where alternative approaches were no longer economically viable, regardless of their technical merits. The window of genuine choice had closed, not because the technology demanded a specific outcome but because the institutional investments in the chosen path had raised the cost of alternatives beyond what any individual actor could bear.

The AI transition is in its formative period now. The architectural choices are being made. The business models are being established. The regulatory frameworks are being designed. The educational responses are being developed — or, more accurately, are failing to be developed with sufficient speed and comprehensiveness. Each of these choices is creating a path, and each path is accumulating the investments that will make later departure increasingly costly.

The formative period will not last. Within a few years — perhaps less — the paths now being established will have accumulated enough institutional investment to become, for practical purposes, irreversible. The architectural choices will have been embedded in millions of deployed systems. The business models will have organized billions of dollars in commercial relationships. The regulatory frameworks will have created the compliance infrastructures that resist reform. The educational responses — or their absence — will have shaped the cognitive habits of a generation of students.

When the formative period closes, the range of possible futures narrows sharply. The choices that were available at the beginning of the transition become unavailable at its maturation, not because the technology forbids them but because the institutional investments in the established path make them prohibitively costly. This is the lesson of QWERTY, of the American system, of every technological path that began as a choice and hardened into a constraint.

The beaver, in the framework that The Orange Pill proposes, does not simply build a dam at any convenient point. The beaver studies the river. It identifies the location where a small structure can redirect the largest flow. The placement of the dam is the most consequential decision the beaver makes, because the pool that forms behind it — the ecosystem that the pool supports — depends entirely on where the dam is built. A dam in the wrong place creates a shallow pool that supports little life. A dam in the right place creates a deep pool that transforms the landscape.

The AI transition is at the dam-placement stage. The locations are being chosen. The sticks are being laid. The institutional investments are beginning to accumulate. And the choices being made now — by technology companies about architectures and business models, by governments about regulatory frameworks and investment priorities, by educational institutions about curricula and pedagogical methods, by organizations about deployment practices and workforce development, by individuals about how to integrate these tools into their working lives — are establishing the paths that will constrain the transition's development for decades to come.

Path dependence is not fate. It is the accumulation of choices into structures that constrain future choices. Understanding this accumulation — recognizing that the decisions made during the formative period carry disproportionate weight — is the first step toward making those decisions with the institutional awareness that the historical record demands.

Chapter 4: The Luddites and the Winners' History

On a January night in 1812, several dozen men moved through the darkness of Nottinghamshire, faces blackened, hammers in hand. They entered a textile workshop and methodically destroyed the stocking frames inside — the wide knitting machines that had been used for generations to produce hosiery but that were now, in modified form, being used to produce cheap, inferior goods that undercut the market for the skilled work that had sustained their communities. They did their work quickly and disappeared before the authorities arrived. They called themselves followers of "General Ludd," a mythical figure whose name provided cover for an organized, strategic, and remarkably disciplined resistance movement.

Two centuries later, their name is an insult. To call someone a Luddite is to call them foolish, backward, afraid of progress — a person who cannot adapt to change and so lashes out at the instruments of change in impotent rage. The dismissal has become so automatic that it functions as a conversation-stopper: once someone has been labeled a Luddite, their concerns can be dismissed without examination. The label substitutes for analysis.

This dismissal is a historical fraud, and Merritt Roe Smith's body of work — his insistence on treating workers as agents rather than victims, on documenting resistance with the same rigor applied to innovation, on examining the specific institutional contexts that shaped both technological development and the responses it provoked — provides the scholarly foundation for understanding why the fraud matters and what it costs us in the present moment.

The framework knitters who broke machines in 1812 were not technophobes. They were skilled artisans who possessed a sophisticated understanding of the economic dynamics that the new machines were introducing into their industry. Their targets were specific: they did not destroy all machinery indiscriminately. They destroyed the particular machines that were being used to produce shoddy goods by unskilled operators, undercutting the market for the quality work that skilled knitters produced. Their grievance was not with technology as such but with the specific deployment of technology to undermine their bargaining power, their economic security, and the quality standards that their craft traditions maintained.

Their analysis was accurate. The machines they destroyed were being used exactly as they described: to replace skilled labor with unskilled labor, to produce inferior goods at lower cost, to transfer the economic surplus from workers to machine owners. The Luddites understood, with a precision that contemporary economists would recognize as analytically sound, that the machines were not neutral instruments of progress but institutional tools deployed to restructure the economic relationship between capital and labor in ways that benefited the former at the expense of the latter.

Smith's research on the armory system documented an analogous dynamic with the meticulous attention to primary sources that characterized his scholarly method. The craftsmen at Harpers Ferry who resisted the introduction of precision manufacturing methods in the 1840s and 1850s were not opposing progress. They were opposing a specific reorganization of work — one designed to replace their holistic craft knowledge with specialized machine operations, to transfer control over the production process from skilled workers to managers and engineers, and to reduce the armory's dependence on the judgment and autonomy of individual artisans. Their resistance was rational, informed by genuine understanding of what the reorganization would cost them, and sustained over a period of years through strategies that ranged from work slowdowns to political organizing to direct appeals to sympathetic members of Congress.

The resistance failed — not because it was irrational but because the institutional forces promoting mechanization were more powerful. The War Department needed standardized weapons. Congressional appropriations committees needed to justify expenditures through measurable productivity gains. The ideology of national preparedness needed tangible evidence of industrial capacity. These institutional forces, operating in combination, overwhelmed the craftsmen's resistance and imposed the new production system over their objections.

Smith's insistence on documenting this resistance with the same archival rigor that he brought to documenting the innovation itself was not merely a historiographical preference. It was a substantive claim about the nature of technological transitions: that the people displaced by new technologies are not inert objects swept aside by the force of progress but thinking, strategically capable actors whose responses shape the transition's course and its outcomes. The craftsmen at Harpers Ferry did not prevent mechanization, but their resistance forced compromises, delayed the most disruptive changes, and preserved elements of craft practice that pure efficiency would have eliminated. Their agency was limited, constrained, and ultimately insufficient to alter the transition's broad trajectory. But it was real, and its effects were measurable.

The contemporary discourse about AI replicates the Luddite dismissal with remarkable fidelity. Knowledge workers who express concern about AI's effects on their practice — senior developers who insist that understanding the lower layers of the technology stack matters, writers who argue that the struggle to find the right word is inseparable from the value of the word found, educators who worry that AI-generated answers short-circuit the cognitive processes that produce genuine understanding — are routinely characterized as resistant to progress, afraid of change, unable to adapt. The label has been updated — "doomer" has replaced "Luddite" in some circles — but the function is identical: to dismiss the concern without examining it, to substitute a characterization of the person for an engagement with the argument.

The dismissal serves specific institutional interests. When resistance is characterized as irrational, the interests that promote adoption are relieved of the obligation to address the resisters' concerns. If the concerned developer is simply a Luddite, there is no need to engage with her argument that deep understanding of systems architecture, built through years of hands-on struggle, produces a form of judgment that AI-assisted development does not replicate. If the worried educator is simply afraid of change, there is no need to address his evidence that students who use AI to generate essays learn less than students who struggle through the writing process unaided. The label closes the conversation before the conversation can produce the institutional responses that the concerns might warrant.

But the dismissal carries consequences beyond the silencing of individual voices. It distorts the historical record of the transition itself, because the history of a technological transition written without the voices of the displaced is a history that misrepresents the transition's costs, overstates its benefits, and produces a misleading account of how the outcomes were actually determined.

Smith's framework identifies this distortion as a structural feature of how technology histories are produced. The history of the American system of manufactures, as it was typically told before the revisionist historians intervened, was a history of innovation triumphant — of ingenious mechanics, visionary ordnance officers, and the irresistible march of precision manufacturing from the armory to the factory to the assembly line. The craftsmen appeared, when they appeared at all, as obstacles — speed bumps on the road to industrial modernity. Their skills, their knowledge, their understanding of materials and processes, their assessment of what the new methods would cost — none of this was part of the standard narrative. The winners wrote the history. The losers were written out of it.

The same dynamic is visible in the dominant narratives being produced about the AI transition in real time. The voices that dominate the discourse are the voices of the empowered: the founders who have built products using AI tools, the investors who have funded AI companies, the developers who have experienced the exhilaration of accelerated capability, the analysts who celebrate the productivity gains. These voices are not lying about the gains. The gains are real. The productivity improvements are measurable. The expansion of capability is genuine. But the narrative they produce is radically incomplete, because it does not adequately represent the experience of the people who are bearing the transition's costs — the knowledge workers whose skills are being devalued, the professionals whose judgment is being bypassed, the communities whose economic foundations are being restructured.

The Orange Pill navigates this terrain with more care than most accounts of the AI moment. Segal's concept of the "silent middle" — the large population who feel both the exhilaration of expanded capability and the grief of displaced expertise but who lack a clean narrative to offer and therefore remain silent — is a genuinely valuable contribution to the historiography of the transition, because it identifies the majority experience that neither the triumphalist narrative nor the elegiac critique can capture. The book centers voices of loss alongside voices of gain: the senior software architect who feels like a master calligrapher watching the printing press arrive, the engineer who oscillates between excitement and terror, the parent who cannot fully answer her child's question about whether homework still matters.

But even this careful navigation illustrates the gravitational pull of the winners' narrative. The book's emotional center is the builder's exhilaration — the thrill of creating a product in thirty days that would previously have taken months, the intoxication of watching a team's capability multiply by a factor of twenty. The losses are acknowledged with genuine sympathy, but they are not given the same narrative weight as the gains. The elegists are treated as people whose grief is legitimate but whose strategic conclusion — that the loss should temper the enthusiasm — is ultimately incorrect. The builder's ethic, not the craftsman's grief, drives the argument forward.

This is not a criticism of the book so much as a diagnosis of the structural forces that shape every narrative about technological transition. The institutions that produce and distribute narratives — the publishing industry, the technology platforms, the media ecosystem — reward the triumphalist account because it is more engaging, more hopeful, more aligned with the cultural values of progress and innovation that dominate the societies in which these institutions operate. A narrative centered on the experience of the displaced — on what is genuinely lost when deep expertise is devalued, when the satisfaction of hard-won mastery is replaced by the ease of AI-assisted competence — is harder to sell, not because it is less true but because it is less comfortable.

The historical lesson of the Luddites is not that resistance to technological change is futile — though the Luddites' specific form of resistance ultimately proved insufficient. The lesson is about what happens when the voices of the displaced are excluded from the institutional decisions that shape the transition's course. The Luddites' most fundamental grievance was not that the machines existed but that the decisions about how the machines would be deployed — the terms of employment, the distribution of gains, the pace of change, the protections available to displaced workers — were made without their participation. They were objects of the transition, not participants in its governance.

The institutional responses that eventually channeled the industrial revolution toward more broadly distributed benefit — the Factory Acts, the labor protections, the right to organize and bargain collectively — were not gifts of enlightened industrialists. They were extracted through decades of organized political struggle in which the displaced insisted on participating in the decisions that affected their lives. The quality of those institutional responses depended directly on the strength of the displaced workers' voice in the institutional process.

The Hollywood writers' and actors' strikes of 2023 represent the most visible contemporary instance of organized labor engaging with AI displacement through the institutional channels that Smith's framework identifies as decisive. The resulting contracts — which established specific rules governing the use of AI in creative work, specific protections for human creative labor, specific limits on the substitution of machine-generated content for human-generated content — are institutional innovations of exactly the kind that the historical pattern predicts. They are modest in scope, uncertain in their long-term effectiveness, and the product of the kind of protracted struggle that the history of labor relations leads us to expect. But they represent something that the determinist narrative treats as impossible: the successful exercise of worker agency in shaping the terms on which a new technology is deployed.

The question is whether this kind of institutional engagement will extend beyond the organized, high-profile creative industries into the vastly larger domain of knowledge work where the AI transition's effects are being felt — into the offices, the classrooms, the firms, and the households where knowledge workers are navigating the transition largely on their own, without the institutional support of unions, professional associations, or regulatory frameworks designed for the specific challenges that AI presents.

The Luddites needed better institutions. So do we. The framework knitters of 1812 needed an institutional infrastructure that would allow them to participate in the decisions about how the new machines were deployed — an infrastructure that would translate their legitimate grievances into institutional arrangements protecting their interests alongside the interests of the machine owners. That infrastructure did not exist in 1812. It was built over the following century, through the sustained effort of workers, legislators, and reformers who understood that the market alone would not produce equitable outcomes.

The knowledge workers confronting the AI transition need a comparable infrastructure — one designed for the specific characteristics of AI displacement, which differs from industrial displacement in speed, breadth, and the particular way it affects cognitive rather than manual labor. Building that infrastructure requires, as a first step, the inclusion of the displaced workers' voices in the institutional process — the recognition that the people who are living through the transition on a daily basis possess knowledge about its effects that no external analyst, however well-intentioned, can replicate.

Winners write the history of technology. But the history they write — clean, progressive, centered on the gains — is a distortion that serves their interests by erasing the costs. A more honest history would include the voices of the displaced alongside the voices of the empowered. It would document what is lost as carefully as what is gained. It would recognize that the people who resist technological change are often the people who understand its costs most clearly, and that their understanding is an essential resource for the institutional design that determines whether the transition produces broadly shared benefit or concentrated gain at dispersed expense.

The determination to write that more honest history — to resist the gravitational pull of the winners' narrative, to insist on the inclusion of the displaced, to document the costs alongside the gains — is not merely an academic exercise. It is a political act, because the narrative we construct about the transition shapes the institutional responses we build, and the institutional responses we build determine the transition's outcome. A narrative that treats the transition as an unambiguous triumph produces institutional passivity. A narrative that includes the costs alongside the gains produces the institutional urgency that the historical record shows to be necessary.

The Luddites were agents, not victims. Their analysis was sound. Their resistance was rational. Their defeat was institutional, not intellectual. The knowledge workers who are now navigating the AI transition deserve to be treated with the same seriousness — their concerns engaged rather than dismissed, their expertise valued rather than discounted, their voices included in the institutional decisions that will determine whether this transition repeats the patterns of the past or produces something better.

Chapter 5: The Military-Industrial Origins of Intelligence

Computing did not emerge from the garage. The origin myth that Silicon Valley tells about itself — visionary founders tinkering in suburban workshops, building the future from spare parts and sheer will — is not entirely false, but it is radically incomplete in a way that matters enormously for understanding why artificial intelligence works the way it does, serves the interests it serves, and carries the values it carries into every interaction with every user who opens a chat window or runs a coding assistant.

Merritt Roe Smith spent his career documenting a pattern that most popular accounts of technology systematically obscure: the military as the primary engine of American technological development. His research demonstrated that the precision manufacturing techniques which became the foundation of American industrial power did not emerge from market competition or entrepreneurial vision. They emerged from the institutional apparatus of the federal armory system — from the War Department's specific need for standardized, field-repairable weapons, funded by congressional appropriations, organized by military engineers, and developed in government-owned facilities operating under institutional pressures fundamentally different from those of the commercial marketplace. His edited volume Military Enterprise and Technological Change extended this analysis across multiple domains, documenting the military's role as catalyst, funder, and institutional patron of technological innovations that later migrated into civilian applications, often carrying military values with them the way transported rock carries the strata of its formation.

The institutional genealogy of artificial intelligence follows this pattern with striking fidelity. ENIAC, the first electronic general-purpose computer, was built at the University of Pennsylvania under contract to the United States Army's Ballistic Research Laboratory. Its purpose was to calculate artillery firing tables — a task previously performed by teams of human computers, predominantly women, whose labor could not keep pace with wartime demand as new weapons systems entered the field. The institutional context was military. The funding was governmental. The problem being solved was the projection of lethal force with greater accuracy. The technology that ENIAC embodied — electronic computation of mathematical operations at speeds beyond human capability — was not developed because a commercial market demanded it. No such market existed. It was developed because the military needed it, and the military possessed the institutional apparatus to fund, organize, and sustain the research through the long gestation period that fundamental technical breakthroughs require.

The pattern repeated with the consistency of an institutional habit. The ARPANET, precursor to the modern internet, was developed by the Defense Department's Advanced Research Projects Agency to link the research centers it funded; the packet-switching technology that made it work was developed not by entrepreneurs sensing a market opportunity but by researchers at RAND Corporation studying how military communications could survive nuclear attack, and by networking researchers at MIT Lincoln Laboratory working under defense contracts. Time-sharing systems, which established the conceptual foundation for interactive computing, were funded primarily by ARPA and developed at universities operating under military contracts. The GPS system that now guides every smartphone user's navigation was built by the Department of Defense for precision navigation and weapons targeting. The microprocessor revolution that produced the personal computer was accelerated by military demand for compact, reliable electronics for guidance systems and field-deployable equipment.

The algorithms that power contemporary artificial intelligence were nurtured for decades in research programs sustained by government funding when commercial interest was negligible or nonexistent. Neural networks — the architectural foundation of today's large language models — experienced multiple cycles of enthusiasm and abandonment in the commercial sector, but the underlying research was kept alive through funding from DARPA, the National Science Foundation, and the Office of Naval Research by researchers who saw potential applications in pattern recognition, signal processing, and intelligence analysis that the commercial market had no incentive to pursue. When the transformer architecture achieved its breakthrough results in 2017, it did so on the foundation of decades of research that the market had repeatedly abandoned and that military and government funding had repeatedly sustained.

This institutional genealogy matters not as a historical curiosity but because it reveals the values embedded in the technology at its foundation — values that persist in the commercial products built on the military-industrial base, shaping their characteristics in ways that users experience daily without recognizing the institutional origins of what they are experiencing.

The values that military institutions embed in the technologies they develop are specific and identifiable: efficiency, optimization, control, reliability, scalability, and the capacity to process information at speeds and volumes that exceed human cognitive capability. Within the military context, these values are not merely desirable — they are essential. A communications network that fails under stress, a computation system that produces unreliable results, a logistics system that cannot scale to operational demands — these are not inconveniences. They are failures that cost lives. The institutional pressure to develop technologies that embody these values is rational, even imperative, within its originating context.

But values carry consequences when the technologies that embody them migrate from their originating context into different institutional environments — a migration that has been the dominant pattern in the history of American computing technology. The efficiency that is essential in a weapons system becomes the compulsive optimization that Han diagnosed as the pathology of contemporary culture. The control that is necessary in military command structures becomes the surveillance architecture of commercial platforms that monitor user behavior to maximize engagement. The scalability that is required in military logistics becomes the winner-take-all dynamics of the platform economy, where the capacity to scale gives first movers advantages that no competitor can overcome.

The transfer is structural, not conspiratorial. Engineers trained in institutional environments that reward efficiency and optimization carry those priorities into the commercial products they subsequently build, treating them not as institutional choices but as technical requirements — as inherent properties of well-designed systems rather than as contingent features reflecting the specific priorities of the institutions where the engineers learned their craft. The myth of the neutral tool, examined in the previous chapter, provides intellectual cover for this transfer by directing attention to the user's choices and away from the designer's embedded priorities. But the priorities are there, woven into the architecture, shaping what the technology can do easily and what it resists, what it measures and what it ignores, what it rewards and what it penalizes.

The practical consequences for AI are substantial and specific. The large language models that now define the field were developed in research environments that measured success through benchmarks: standardized tests of performance on specific tasks, evaluated by quantitative metrics that reward breadth, fluency, and accuracy of factual recall. These benchmarks are the institutional descendants of the military's demand for measurable, demonstrable capability: the ballistic firing tables that ENIAC was built to compute, the signal detection tasks that early pattern recognition systems were built to perform, the intelligence analysis tasks that funded early natural language processing research. The continuity is institutional, not technical, but it shapes the technology's characteristics as powerfully as any technical decision.

A system optimized for benchmark performance develops specific capabilities and specific limitations that reflect the values the benchmarks embody. It develops breadth — the ability to perform competently across a wide range of tasks — because benchmarks reward breadth. It develops fluency — the ability to produce grammatically correct, well-organized, plausible-sounding output — because benchmarks reward fluency. It develops speed — the ability to produce output quickly — because benchmarks measure response time. And it develops confidence — the tendency to produce definitive-sounding output even when the underlying uncertainty is high — because benchmarks penalize equivocation more heavily than they penalize confident errors.
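The arithmetic behind this last tendency is worth making explicit. The sketch below is purely illustrative (it quotes no real benchmark's grading rule, and the scoring parameters are hypothetical), but it shows why a system graded on flat right-or-wrong terms learns to guess confidently rather than to equivocate: when a wrong answer and an abstention both score zero, guessing dominates at any nonzero probability of being correct.

```python
# Hypothetical grading schemes (illustrative only, not any real benchmark):
# expected score of guessing vs. abstaining under two rules.

def expected_scores(p_correct: float, wrong_penalty: float, abstain_credit: float):
    """Return (expected score of guessing, score of abstaining).

    p_correct      -- the system's probability of guessing correctly
    wrong_penalty  -- points deducted for a confident wrong answer
    abstain_credit -- points granted for admitting uncertainty
    """
    guess = p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty
    return guess, abstain_credit

# Flat 0/1 grading: wrong answers and abstentions are penalized identically.
g, a = expected_scores(p_correct=0.3, wrong_penalty=0.0, abstain_credit=0.0)
print(f"flat grading:      guess={g:+.2f}  abstain={a:+.2f}")  # guessing wins even at 30%

# Grading that penalizes confident error and credits calibrated abstention.
g, a = expected_scores(p_correct=0.3, wrong_penalty=0.5, abstain_credit=0.1)
print(f"penalized grading: guess={g:+.2f}  abstain={a:+.2f}")  # abstaining now wins
```

The point is institutional rather than technical: change the grading rule and the optimal behavior changes with it. The confidence that users experience as a property of the system is, at bottom, a property of the evaluation regime that trained it.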

These are not the values of intellectual inquiry. They are the values of operational performance — the values of a system designed to produce actionable output under time pressure, in the tradition of the military systems from which the technology descends. A system designed for intellectual inquiry would exhibit different characteristics: greater tolerance for uncertainty, greater willingness to refuse tasks beyond its competence, greater capacity for the kind of exploratory, tentative, self-correcting reasoning that genuine understanding requires. Such a system would score worse on the benchmarks that the current institutional environment uses to evaluate AI capability. Its absence from the market is not a technological limitation. It is an institutional consequence — the predictable result of evaluation criteria that reward operational performance at the expense of epistemic virtue.

Smith's framework predicts that the military-industrial values embedded in AI at its origin will prove remarkably persistent, because path dependence ensures that early institutional choices constrain later development. The benchmarks that define success shape the research agendas that pursue it. The research agendas shape the architectures that emerge. The architectures shape the products that reach users. And the products shape user expectations, which feed back into the benchmarks that define success. The cycle reinforces itself, and each revolution deepens the path that the initial institutional choices established.
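The formal structure of this cycle is familiar from the economics of increasing returns. The sketch below is a toy model in the spirit of Brian Arthur's work on lock-in (the parameters are arbitrary, and nothing here is drawn from Smith's archives): two paths begin in identical positions, each adoption makes the adopted path more attractive, and small accidents of early history harden into near-permanent dominance.

```python
# Toy increasing-returns model of lock-in (illustrative only; arbitrary
# parameters, not an empirical model of AI development).
import random

def final_share(rounds: int = 2000, alpha: float = 1.5, seed: int = 0) -> float:
    """Two competing paths start tied. Each adopter picks a path with
    probability proportional to (current adoptions ** alpha). With
    alpha > 1 (increasing returns), early random draws are amplified
    until one path dominates almost completely."""
    rng = random.Random(seed)
    counts = [1.0, 1.0]  # one initial adoption of each path
    for _ in range(rounds):
        weight_a = counts[0] ** alpha
        weight_b = counts[1] ** alpha
        pick = 0 if rng.random() < weight_a / (weight_a + weight_b) else 1
        counts[pick] += 1.0
    return counts[0] / (counts[0] + counts[1])

# Identical starting conditions, different random histories, divergent outcomes.
for seed in range(5):
    print(f"history {seed}: final share of path A = {final_share(seed=seed):.2f}")
```

Nothing about either path's intrinsic merit appears in the model, which is the point: the outcome is decided by the order of early adoptions, and the window in which intervention is cheap closes as the counts accumulate.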

The question that this genealogy forces is whether the values embedded in AI by its military-industrial origins are the values that should guide its civilian deployment. The answer, stated plainly, is no — not because military values are wrong within their originating context, but because civilian contexts present different challenges that require different values. The military needs systems that produce actionable output under time pressure. The educator needs systems that develop students' capacity for independent thought, which may require withholding actionable output while students struggle toward their own understanding. The creative professional needs systems that preserve the productive friction of the creative process, which requires resisting the optimization of output that military values prioritize. The citizen in a democratic society needs systems that support informed deliberation, which requires tolerance for ambiguity and complexity that operational systems are designed to eliminate.

These civilian needs are not being adequately served by AI systems whose architecture, optimization criteria, and evaluation metrics descend from military-industrial origins. The inadequacy is not a failure of the technology companies that build these systems. It is a structural consequence of the institutional genealogy that shaped the technology's development — a genealogy that the myth of the neutral tool conceals and that the institutional history of technology is uniquely equipped to reveal.

The practical implication is that the civilian deployment of AI requires the deliberate construction of new institutional arrangements — new evaluation criteria, new design priorities, new optimization targets — that embed civilian values in the technology's architecture. This construction cannot be left to the technology companies alone, because the commercial incentives under which they operate reward the operational values that the military-industrial genealogy already provides. The market rewards productivity. The benchmarks reward performance. The investors reward engagement metrics. None of these institutional actors has an incentive to optimize for the values that civilian deployment most urgently requires: intellectual development, cognitive autonomy, democratic deliberation, and the preservation of the human judgment that operational optimization tends to supplant.

The institutional history of technology teaches that the values embedded in a technology at its origin are not destiny. They can be changed. But they can only be changed through deliberate institutional action — through the construction of new evaluation frameworks, new design priorities, new regulatory requirements that redirect the technology's development toward values that the originating institutions did not prioritize. The labor protections that channeled the industrial revolution toward more equitable outcomes were not natural products of industrial technology. They were institutional innovations designed to embed different values — values of human dignity, worker safety, equitable distribution — in the industrial system. The civilian deployment of AI requires analogous institutional innovations, designed with comparable deliberateness and sustained with comparable political commitment.

The military-industrial origins of computing are not a scandal to be exposed. They are a historical fact to be understood, because understanding them reveals the institutional forces that continue to shape the technology's development and deployment — forces that operate most powerfully when they operate invisibly, when the values they embed are mistaken for technical necessities rather than recognized as institutional choices. The historian's contribution is to make those choices visible, so that the people who live with the technology's consequences can participate in the decisions about what values it should embody — decisions that are currently being made, by default, by the institutional descendants of the research programs that built ENIAC to calculate where artillery shells would land.

---

Chapter 6: Institutional Mediation and the Factory Acts

In 1833, the British Parliament passed the first effective Factory Act — a piece of legislation that prohibited the employment of children under nine in textile mills, limited children aged nine to thirteen to forty-eight hours of work per week, and required factory owners to provide two hours of daily education to child workers. The act was modest in scope, inadequately enforced, and riddled with loopholes that factory owners exploited with creative determination. It was also, measured against what preceded it, a revolutionary assertion of a principle that the industrial economy had refused to acknowledge: that the productive power of new technology did not automatically translate into human benefit, and that institutional structures were required to ensure that the translation occurred.

The Factory Act did not emerge from the enlightened self-interest of factory owners. It emerged from decades of agitation by reformers, workers, and legislators who had documented the human costs of unmediated industrial deployment — the children maimed by machinery, the workers broken by sixteen-hour shifts, the communities hollowed out by the concentration of economic power in the hands of mill owners who answered to no one but the market. The act was the institutional expression of a moral argument: that the market's capacity to generate wealth did not include the capacity to distribute that wealth equitably or to protect the people who generated it from the destructive potential of the very technology that made them productive.

Merritt Roe Smith's framework places this kind of institutional response at the center of the story, rather than treating it as an epilogue to the main narrative of technological progress. The outcomes of technological transitions, his research demonstrates with persistent empirical specificity, are determined not by the technology itself but by the institutional arrangements that mediate between the technology's capabilities and its social effects. The same loom that, in a factory with no labor protections, produces child labor, exhaustion, and community devastation, produces, in a factory governed by the Factory Acts and their successors, regulated working hours, safety standards, and the beginning of a framework for the equitable distribution of industrial productivity's gains. The technology is identical. The institutional mediations differ. The outcomes diverge so dramatically that they constitute, in effect, different social realities produced by the same technical capabilities.

This principle — that institutions, not technologies, determine outcomes — is the single most important finding of the institutional history of technology, and it is the finding that the AI moment most urgently requires the public to understand. The technology is powerful. Its capabilities are genuine. The constraints it imposes on the range of possible futures are real. But within those constraints, the specific future that materializes depends on the quality of the institutional arrangements that surround the technology — arrangements that must be deliberately constructed, because they do not emerge spontaneously from the technology's deployment.

The historical record provides grounds for both encouragement and alarm regarding the institutional response to AI. The encouraging evidence is that every major technological transition in modern history has eventually been accompanied by institutional innovations that channeled the technology's productive power toward more broadly distributed benefit. The Factory Act of 1833, which created the first factory inspectorate, was followed by the Ten Hours Act of 1847, which was followed by the development of workers' compensation systems, which was followed by the construction of the comprehensive social insurance programs that characterized the mature industrial democracies of the twentieth century. Each institutional innovation was insufficient in its initial form. Each was the product of political struggle rather than enlightened anticipation. Each arrived late relative to the dislocations it addressed. But each represented a genuine exercise of institutional agency — a deliberate decision to embed specific values in the structures governing the technology's deployment.

The alarming evidence is that these institutional responses were invariably late — often catastrophically so. The Factory Act of 1833 came roughly four decades after powered machinery had become widespread in British textile mills. The children it sought to protect had already been working in those mills for a generation. The communities it sought to preserve had already been transformed. The craft traditions it implicitly valued had already been largely destroyed. The institutional response came, but it came after enormous human costs had already been incurred — costs that the eventual response could not retroactively compensate.

The gap between the technology's arrival and the institution's response is where the human costs of technological transitions accumulate. In the textile industry, that gap lasted decades. In the AI transition, the gap is measured in months — not because the institutional response has been faster, but because the technology's deployment has been so much faster that even a rapid institutional response would lag behind the technology's pace of adoption. The Berkeley study that documented task seepage, attention fragmentation, and work intensification was published in February 2026 — less than four months after the capabilities it studied had become widely available. The effects it documented were already entrenched in the organizational practices of the firm it studied. The institutional frameworks that might have prevented those effects had not yet been designed, let alone implemented.

This gap — between the speed of technological deployment and the speed of institutional response — is the defining challenge of the AI transition, and it distinguishes this transition from every previous one in a way that demands specific attention. Previous transitions unfolded over decades, providing time — inadequate time, time purchased at enormous human cost, but time nonetheless — for institutional responses to develop, test themselves against experience, and evolve. The labor movements of the nineteenth century had decades to organize. The regulatory frameworks of the twentieth century had decades to mature. The social insurance programs that cushioned the dislocations of deindustrialization had decades to develop the administrative capacity their missions required.

The AI transition does not afford this luxury. The technology's speed of adoption, the breadth of its effects across virtually every form of knowledge work, and the pace of its improvement — each new model substantially more capable than its predecessor, with release cycles measured in months — compress the timeline for institutional response to a degree that the historical record provides no precedent for navigating.

The compression demands a different model of institutional innovation — one that is more experimental, more adaptive, and more willing to act on incomplete information than the deliberative, precedent-based model that characterized successful institutional responses to previous transitions. The Factory Act of 1833 was designed in response to well-documented abuses that had persisted for decades. The institutional responses to AI must be designed in anticipation of effects that are still emerging, based on evidence that is still accumulating, in a technological environment that is changing faster than the evidence-gathering process can track.

Smith's comparative method illuminates what this adaptive institutional model might look like by identifying the features that distinguished effective institutional responses from ineffective ones across multiple historical transitions. Effective responses shared several characteristics: they addressed the specific constraints that the technology imposed rather than applying generic solutions borrowed from previous transitions; they were designed with input from the people most directly affected by the technology's deployment; they were flexible enough to adapt as the technology matured and its effects became better understood; and they were sustained by institutional constituencies — regulatory agencies, professional organizations, labor unions — that had ongoing incentives to maintain and improve the arrangements.

The "AI Practice" frameworks that The Orange Pill describes — structured pauses in AI-assisted work, protected time for human-only reflection, sequenced rather than parallel workflows, deliberate cultivation of the judgment skills that AI does not develop — represent early experiments in the kind of adaptive institutional response that the historical pattern suggests will prove necessary. They are organizational-level innovations designed for the specific constraints that AI imposes on knowledge work: the tendency toward task seepage, the erosion of judgment through disuse, the replacement of productive struggle with frictionless output. They are modest, tentative, and uncertain in their long-term effectiveness — which is exactly what the historical pattern predicts for the first generation of institutional responses to a new technology.

The question is whether these organizational-level innovations will be accompanied by the broader institutional responses that the historical record shows to be necessary for transitions of this magnitude. The Factory Acts were effective not because any single provision was adequate but because they established a principle — the principle of institutional mediation between technological capability and social deployment — that subsequent legislation could build on. The principle mattered more than the specific provisions, because the principle created the institutional space within which more effective provisions could be developed as understanding of the technology's effects deepened.

The AI transition requires the establishment of an analogous principle: the principle that the deployment of AI capabilities in domains affecting human cognitive development, professional judgment, and democratic deliberation is a matter of legitimate public concern requiring institutional attention. This principle is not yet established. The dominant framework — the myth of the neutral tool — treats AI deployment as a matter of individual choice and market competition, outside the scope of institutional governance. The technology is available. Users choose to adopt it or not. The market sorts winners from losers. Institutional intervention is unnecessary at best, counterproductive at worst.

The historical record refutes this framework with the thoroughness that only centuries of documented evidence can provide. The market did not protect the children in the mills. The market did not provide the eight-hour day. The market did not create the social insurance systems that cushioned the dislocations of deindustrialization. These protections were provided by institutions — by laws, regulations, professional standards, and cultural norms that were constructed through deliberate human effort in the face of sustained resistance from the actors who benefited from unmediated deployment.

The specific forms of institutional mediation that AI requires are still emerging, and honesty demands acknowledging that the speed and complexity of the transition may outstrip the institutional capacity of democratic societies to respond effectively. The Factory Acts succeeded in part because the technology they addressed — the power loom, the spinning jenny — was relatively simple, its effects relatively localized, and its pace of change relatively slow. The institutional task was substantial but comprehensible: identify the specific harms, design specific remedies, enforce compliance through inspectable factory floors. The AI transition presents a different institutional challenge: the technology is complex, its effects are diffuse across virtually every domain of knowledge work, its pace of change exceeds the pace of institutional deliberation, and its deployment occurs not on inspectable factory floors but in millions of individual interactions between users and AI systems that no inspector could monitor or evaluate.

These are genuine constraints on institutional response, and they should temper any easy optimism about the capacity of historical patterns to guide the present. But constraints are not impossibilities. The difficulty of the institutional task does not eliminate the necessity of attempting it. The historical alternative to institutional mediation — the unmediated deployment of powerful technology in the absence of structures designed to protect the people it affects — has been documented with sufficient thoroughness to make its consequences unmistakable. The children in the mills before the Factory Acts. The workers in the factories before the labor protections. The communities devastated by deindustrialization before the social insurance programs.

The AI transition will produce its own version of these costs if the institutional response remains as inadequate as it currently is. The specific form of the costs — cognitive rather than physical, diffuse rather than concentrated, experienced as the gradual erosion of judgment rather than the sudden loss of a limb — may be less visible than the costs of industrial displacement. But they are no less real, and they will compound over time in ways that delayed institutional response will find increasingly difficult to address.

The dam must be built. It must be built now, during the formative period when the technology's path dependencies are still being established and institutional choices still carry disproportionate leverage. It must be built with the adaptive, experimental approach that the technology's speed demands. And it must be built with the input of the people who are living through the transition — the knowledge workers, the educators, the parents, the students whose daily experience of AI's effects constitutes the evidence base that institutional design requires.

The Factory Acts were imperfect. They were late. They were the product of political struggle rather than rational planning. But they established the principle that made better legislation possible. The AI transition needs its Factory Act — its first, imperfect, necessarily incomplete institutional assertion that the deployment of this technology is a matter of public concern requiring public response.

---

Chapter 7: The Recursive Machine

In 2024, MIT held a symposium to honor the retirement of Merritt Roe Smith — a gathering of former students, colleagues, and scholars who had been shaped by his decades of teaching and research in the history of technology. Smith, reflecting on the event, noted that "seeing the future through the lens of our shared pasts adds an important perspective on current innovations." The remark was characteristically understated. He did not specify which current innovations he had in mind. He did not need to. He had spent his career at an institution where, in adjacent buildings and sometimes in adjacent offices, researchers were developing the very technologies that his framework was designed to analyze — technologies that would eventually produce the AI systems now reshaping the landscape of knowledge work that his students had entered.

The irony is structurally significant. Smith devoted his scholarly life to demonstrating that technologies are shaped by the institutions that produce them — and he did so from within MIT, one of the institutions most consequentially shaping the technology that now most urgently demands the kind of institutional analysis he pioneered. The question his framework teaches us to ask — whose institutional priorities determined the technology's development? — applies, with uncomfortable directness, to the institution where the framework was developed.

This recursive dimension — the technology participating in the analysis of its own effects, the institution producing the critique being simultaneously the institution producing the object of critique — is a distinctive feature of the AI moment that distinguishes it from every previous technological transition Smith studied. The power loom did not write essays about the displacement of handloom weavers. The interchangeable-parts system at Springfield did not produce analyses of the deskilling it was causing at Harpers Ferry. The technology was the object of analysis. Humans were the analysts. The relationship was asymmetric, and the asymmetry preserved a space of independent judgment from which the analysis could proceed.

That asymmetry has partially collapsed. The AI systems now transforming knowledge work are also producing analyses of that transformation — writing about AI, discussing the implications of AI deployment, offering assessments of the technology's social effects that are, in many cases, formally indistinguishable from the assessments offered by human scholars. The Orange Pill was itself written in collaboration with Claude, and the book's most intellectually honest passages are those that grapple with the implications of this collaboration — the moments when Segal acknowledges that he cannot always distinguish between ideas that are genuinely his and ideas that emerged from the interaction between his thinking and the system's pattern-matching capabilities.

This recursion creates a specific analytical challenge that Smith's framework was not designed to address, because the framework assumed the asymmetry that the recursion dissolves. Smith's method depends on the historian's capacity to examine the institutional origins of a technology from a position of independent judgment — to study the archives, assess the evidence, identify the institutional forces that shaped the technology's development, and produce an analysis that is not itself shaped by the forces it describes. The analysis stands outside the system it analyzes. Its authority derives from this independence.

When the AI system participates in the production of the analysis, the independence is compromised in ways that are difficult to detect and even more difficult to correct. The assessments of AI's effects produced by AI systems are not independent observations. They are products of the same institutional contexts, the same optimization criteria, the same training processes that shape the technology's other outputs. An AI system trained to be helpful, fluent, and comprehensive will produce assessments of AI that are helpful, fluent, and comprehensive — and that may be systematically biased in ways that reflect the values embedded in the system's design rather than the reality of the technology's effects.

The bias need not be intentional or even detectable at the level of individual claims. It operates at the level of framing, emphasis, and omission. An AI system optimized for helpfulness will tend to frame the AI transition in terms of the assistance it provides rather than the dependency it creates. A system optimized for comprehensiveness will tend to acknowledge risks while embedding them in a framework that emphasizes opportunities — because a comprehensive treatment that weighted risks and opportunities equally would feel less helpful than one that offered a constructive path forward. A system trained on the corpus of published discourse about AI — a corpus that is dominated by the voices of the empowered and the enthusiastic, as the previous chapter documented — will tend to reproduce the biases of that corpus, even when instructed to provide a balanced assessment.

Smith's framework provides the tools for analyzing this recursion even though the recursion itself is unprecedented. The framework's core insight — that technologies are shaped by the institutions that produce them, and that the values embedded in the technology reflect the priorities of those institutions — applies to AI-generated analysis just as it applies to any other product of institutional activity. The AI system that produces an analysis of the AI transition is an institutional product, shaped by the priorities of the institution that developed it, and its output reflects those priorities in ways that institutional analysis can identify and document.

The practical consequence is that the analytical independence that the AI moment most urgently requires — the capacity to evaluate the technology's effects from a position not shaped by the technology itself — is becoming harder to sustain precisely when it is most needed. The most sophisticated analyses of AI are increasingly produced with AI assistance, and the line between human analysis enhanced by AI tools and AI analysis supervised by human editors is becoming more difficult to draw. Each step along this continuum introduces the possibility that the analysis is shaped by the very forces it seeks to evaluate — that the helpful, fluent, comprehensive assessment produced with AI assistance has been subtly steered by the embedded values of the AI system toward a framing that serves the system's institutional origins rather than the analyst's independent judgment.

The methodological implications extend beyond the specific case of AI-generated text to the broader question of institutional independence in an age of pervasive AI assistance. Smith's archival method — the painstaking examination of primary sources, the cross-referencing of accounts, the careful reconstruction of institutional contexts from documentary evidence — depended on the historian's capacity to engage with the sources independently, bringing to the archive a framework of analysis that the sources did not themselves provide. The archive contained the evidence. The historian provided the interpretation. The separation was never complete — the historian's framework shaped what she noticed in the archive, just as the archive's contents shaped the framework she developed — but it was real enough to produce genuinely independent analysis.

When the historian's research assistant is an AI system, the separation narrows. The system identifies patterns in the sources with speed and breadth that no human researcher could match. But the patterns it identifies are shaped by its training — by the corpus of historical analysis on which it was trained, by the optimization criteria that reward certain kinds of pattern-finding over others, by the institutional values that the previous chapters have documented. The historian who relies on AI assistance for archival research may find her analysis subtly shaped by the AI system's pattern-finding tendencies, which are themselves products of the institutional genealogy that Smith's framework is designed to excavate.

This is not an argument against using AI in research — any more than Smith's documentation of the military-industrial origins of computing was an argument against using computers. It is an argument for institutional awareness: for the recognition that AI assistance is not neutral, that it carries the values of the institutions that produced it, and that those values can shape the analysis in ways that require deliberate, institutionally informed effort to identify and counteract.

The broader implications concern the infrastructure of democratic deliberation itself. Democratic societies depend on the capacity of citizens to evaluate the claims of powerful institutional actors — to assess independently whether the policies promoted by corporations, governments, and other institutions serve the public interest or merely the interests of the institution promoting them. This capacity depends, in turn, on the existence of institutions — independent media, academic research, civil society organizations — that produce analysis from a position of genuine independence from the institutional actors whose claims they evaluate.

AI threatens this independence not by eliminating independent institutions but by reshaping the epistemic environment in which they operate. When the most efficient method of producing analysis involves AI assistance, and when the AI systems providing that assistance embed the values of the technology companies that produced them, the independence of the analysis is compromised in ways that may be invisible to both the analyst and the audience. The appearance of independence is maintained. The substance of independence erodes.

The determinist temptation applies here with particular force. The determinist would say that this erosion of epistemic independence is an inevitable consequence of the technology's capabilities — that AI will be used in the production of analysis because it is too useful not to use, and that the institutional biases embedded in the AI systems will therefore shape all future analysis of AI's effects. The institutionalist responds that the erosion is not inevitable but institutional — that it depends on the arrangements that govern AI's use in analytical and deliberative contexts, and that those arrangements can be designed to preserve the independence that democratic deliberation requires.

The design of those arrangements is among the most consequential institutional challenges of the AI moment. It requires, at minimum, transparency about the use of AI in the production of analysis — the kind of transparency that Segal provides in The Orange Pill by explicitly acknowledging Claude's role in the book's composition. It requires institutional standards for the use of AI in research, journalism, and policy analysis that preserve the human analyst's independent judgment while capturing the genuine benefits of AI-assisted research. And it requires a sustained commitment to maintaining the institutions — the universities, the independent media organizations, the research centers — whose independence is the foundation of democratic deliberation.

The recursive machine does not eliminate the need for independent analysis. It makes independent analysis harder to produce and easier to counterfeit. The institutional response must be calibrated to both dimensions of this challenge: supporting the production of genuinely independent analysis while developing the capacity to distinguish genuine independence from its AI-assisted simulation.

Smith's scholarly life was devoted to the proposition that understanding the institutional origins of a technology is the prerequisite for understanding the technology's effects. The recursive machine adds a layer of complexity to this proposition: the technology whose institutional origins we need to understand is now participating in the process of understanding, shaping the analysis of its own effects in ways that reflect the institutional values the analysis is supposed to identify. Navigating this recursion requires the kind of institutional awareness that Smith's framework provides — the recognition that the tool is not neutral, that its outputs reflect the values of the institutions that produced it, and that preserving the independence of judgment in an age of AI-assisted analysis is an institutional achievement that must be deliberately constructed and continuously maintained.

---

Chapter 8: Building in the River

The formative period of a technological transition does not announce itself. There is no ceremony marking the moment when the choices being made about a technology's architecture, deployment, and governance begin to harden into the path dependencies that will constrain development for decades. The people making those choices — the engineers designing architectures, the executives choosing business models, the legislators drafting regulatory frameworks, the educators designing curricula — are rarely aware that the decisions of this quarter will shape possibilities for the next generation. The formative period is recognized, if it is recognized at all, only in retrospect, when the paths it established have become the constraints within which all subsequent actors must operate.

The historical record suggests, with a consistency that the present moment should find sobering, that the formative period of the AI transition is now. The architectural choices are being locked in. The business models are being established. The regulatory frameworks are crystallizing. The educational responses are being designed — or, in too many cases, are failing to be designed at all. Each of these choices is accumulating the institutional investments that path dependence theory predicts will make later deviation increasingly costly. And the people making these choices are, for the most part, making them within institutional frameworks optimized for speed, commercial return, and competitive advantage — frameworks that reward the first mover and penalize the deliberate.

Merritt Roe Smith's entire body of work converges on a single practical claim: the institutional arrangements surrounding a technology matter more than the technology itself in determining whether a technological transition produces broadly distributed human benefit or concentrated gain at dispersed expense. The claim is supported by evidence drawn from every major technological transition in American history — from the armories to the assembly lines, from the telegraph to the transistor. It is the most empirically grounded and most practically consequential finding in the institutional history of technology. And it is the finding that the AI moment most urgently requires us to act on.

Acting on it requires an honest assessment of what institutional mediation can and cannot accomplish in the specific conditions of the AI transition — conditions that differ from those of any previous transition in ways that constrain institutional response even as they make it more necessary.

The speed of the AI transition exceeds the speed of institutional deliberation. Democratic institutions — legislatures, regulatory agencies, educational systems, professional associations — operate at tempos determined by the requirements of deliberation, consultation, evidence-gathering, and political negotiation. These tempos are not arbitrary. They reflect genuine values: the value of informed decision-making, the value of democratic participation, the value of protecting the interests of the affected against the enthusiasm of the empowered. But the tempo mismatch between institutional deliberation and technological deployment means that by the time an institutional response has been deliberated, consulted upon, grounded in evidence, and politically negotiated, the technology it addresses may have advanced through several generations of improvement, rendering the response inadequate to the conditions it now confronts.

The breadth of AI's effects complicates the institutional response in ways that previous transitions did not. The Factory Acts could target a specific industry — textile manufacturing — with specific provisions designed for specific harms. The AI transition touches virtually every form of knowledge work simultaneously, and the effects in different domains — medicine, law, education, engineering, creative work, public administration — are sufficiently different that a single regulatory framework cannot adequately address them all. The institutional response must be not one framework but many, each designed for the specific characteristics of AI deployment in a specific domain, each requiring the domain-specific expertise that only practitioners within that domain possess.

The concentration of technological capability in a small number of corporations — a concentration without historical precedent in the degree of power it places in the hands of a few institutional actors — constrains the institutional response by limiting the range of alternatives available. When a handful of companies control the development and deployment of the foundational AI models on which virtually all AI applications depend, the institutional choices available to regulators, educators, and individual users are constrained by the choices those companies have already made. The architectural decisions, the optimization criteria, the business models, the terms of service — all of these are set by the technology providers, and the institutions that seek to mediate between the technology and its effects must work within the parameters those providers establish.

These constraints are real. They are severe. And they are not grounds for the institutional abdication that the determinist temptation encourages. They are grounds for institutional innovation — for the development of new kinds of institutional arrangements designed for the specific challenges that AI presents, rather than the application of institutional models designed for different technologies, different speeds of change, and different concentrations of power.

What might such innovation look like? The historical record does not prescribe specific solutions — the specific institutional responses to previous transitions were designed for the specific conditions of those transitions and cannot be transplanted directly to conditions that differ in fundamental ways. But the record does identify principles that distinguished effective institutional responses from ineffective ones, and these principles provide guidance that the present moment can use.

First, effective institutional responses addressed the specific constraints that the technology imposed rather than applying generic solutions. The Factory Acts addressed the specific harms of factory labor — child labor, excessive hours, dangerous machinery — with specific provisions designed for those specific harms. The labor protections of the twentieth century addressed the specific power imbalances of industrial employment with specific instruments — collective bargaining, minimum wage laws, workplace safety regulations — designed for those specific imbalances. The institutional response to AI must address the specific constraints that AI imposes on knowledge work — the tendency toward task seepage, the erosion of judgment through disuse, the replacement of productive struggle with frictionless output, the concentration of capability in systems whose institutional values may not align with the needs of the people who use them — with specific instruments designed for those specific constraints.

Second, effective institutional responses were informed by the experience of the people most directly affected by the technology's deployment. The Factory Acts were informed by the testimony of workers and factory inspectors who had witnessed the conditions the legislation sought to address. The labor protections of the twentieth century were shaped by the organizing efforts of workers who brought the perspective of the shop floor to the legislative process. The institutional response to AI must be similarly informed — by the experience of the knowledge workers, educators, students, and parents who are navigating the technology's effects on a daily basis and who possess knowledge about those effects that no external analyst, however sophisticated, can replicate.

Third, effective institutional responses were flexible enough to adapt as the technology matured and its effects became better understood. The Factory Acts were amended repeatedly over the decades following their initial passage, as experience revealed inadequacies in the original provisions and as the technology's effects in new domains became apparent. The institutional response to AI must be designed for adaptation from the outset — must anticipate that the first generation of institutional arrangements will be inadequate to the conditions they address, and must include mechanisms for revision that do not require repeating the political struggle that produced the original arrangements.

Fourth, effective institutional responses were sustained by institutional constituencies that had ongoing incentives to maintain and improve the arrangements. The factory inspectorate created by the Factory Acts became an institutional constituency for the enforcement and improvement of labor protections. The regulatory agencies of the twentieth century became institutional constituencies for the standards they administered. The institutional response to AI must create analogous constituencies — organizations, professional bodies, regulatory entities — with ongoing institutional incentives to monitor, maintain, and improve the arrangements that govern AI's deployment.

These principles do not guarantee success. The constraints on institutional response in the AI transition are more severe than those confronting any previous generation of institutional builders. The speed is greater. The breadth is wider. The concentration of power is more extreme. The recursive dimension — the technology's capacity to participate in the production of the analysis that should inform the institutional response — introduces complications that no previous transition presented. The possibility that the institutional response will prove inadequate to the challenge is real and must be honestly acknowledged.

But the alternative to institutional response is not the absence of institutional effects. It is the domination of institutional effects by the actors who are already making the choices that shape the technology's deployment — the technology companies whose commercial priorities determine the architecture, the optimization criteria, and the business models that define how AI enters the world. In the absence of deliberate institutional mediation, the technology's effects will be determined by these actors' priorities — priorities that include genuine innovation and genuine expansion of capability, but that do not include, as primary objectives, the equitable distribution of the technology's benefits, the protection of the cognitive capacities that the technology threatens to atrophy, or the preservation of the democratic deliberation that the technology's recursive dimension complicates.

Smith observed, in one of his rare public reflections on the contemporary relevance of his historical research, that understanding how we became a technological society is "becoming a very important consideration for any way of thinking about American history." The statement was characteristically understated. The consideration is not merely important. It is urgent. The institutional history of technology is not merely an academic specialty. It is a survival manual for societies navigating the most consequential technological transition in modern history — a manual that teaches, with the authority of centuries of documented evidence, that the technology does not determine the outcome, that the institutions determine the outcome, and that the quality of the institutions we build in this moment will determine whether the AI transition produces an expansion of human capability or a contraction of human agency.

The formative period is closing. The paths are hardening. The institutional investments are accumulating along trajectories established by actors whose priorities may not include the values that the transition most urgently requires. The window for institutional innovation — for the construction of arrangements that embed those values in the technology's deployment — is narrowing with each month of unmediated adoption.

Neither determined nor free, the builders stand in the current. The technology constrains the range of what they can build. Their skill, their values, and their institutional creativity determine which specific structure — within that constrained range — actually rises. The evidence of centuries documented with meticulous care shows that the quality of their work is the only variable that has ever distinguished transitions that produce human flourishing from transitions that produce human devastation.

The evidence is there. The framework is available. The choices are being made. What remains is the building — the institutional work, unglamorous and unfinished, that determines whether the most powerful technology in human history serves the species that created it or merely the institutions that deploy it.

Chapter 9: The Theory of Institutional Failure

The comfortable version of the institutional argument goes like this: technology arrives, institutions eventually respond, the response channels the technology toward broadly distributed benefit, and the long arc bends toward expansion. The historical record supports this narrative — in the aggregate, over the long term, measured by the metrics of material prosperity and productive capacity that economists and historians conventionally employ. The Factory Acts came. The eight-hour day arrived. The social insurance programs were constructed. The arc bent.

But the comfortable version omits a variable that the historical record documents with equal thoroughness: the arc bends only when people force it to bend, and there is no guarantee — no structural necessity, no invisible hand of institutional development — that the forcing will occur in time, or at sufficient strength, or with the right design. The Factory Acts came forty years after the power looms. The eight-hour day came a century after the factory whistle. The social insurance programs came generations after the communities they were designed to protect had already been devastated. In each case, the institutional response arrived — but it arrived after enormous, measurable, irreversible human costs had been incurred. And in each case, the response arrived not because institutional development follows a natural trajectory toward adequacy but because specific people — organizers, legislators, reformers, workers willing to risk their livelihoods — fought for it against sustained resistance from the actors who benefited from the institutional vacuum.

Merritt Roe Smith's research documents what happens when the institutional response does not arrive. The craftsmen at Harpers Ferry who resisted mechanization were not protected by institutional arrangements designed to cushion the transition. No retraining programs existed. No social insurance softened the economic blow. No regulatory framework governed the pace of change or the distribution of its costs. The craftsmen were left to navigate the transition on their own, with the tools available to an unorganized workforce confronting an institutional actor — the federal government — whose power and resources dwarfed their own. Their resistance delayed the transition but did not shape its terms. When the transition came, it came on the institution's terms, not theirs.

The possibility of institutional failure — the possibility that the institutional response to AI will prove inadequate to the challenge, that the gap between technological deployment and institutional mediation will widen rather than narrow, that the formative period will close before effective arrangements have been established — deserves the same analytical attention as the possibility of institutional success. Intellectual honesty requires confronting this possibility, not as a counsel of despair but as an assessment of the constraints within which agency operates.

Several features of the AI transition make institutional failure more likely than the comfortable version of the historical narrative acknowledges. The concentration of technological capability in a small number of corporations has produced a concentration of institutional power that the regulatory frameworks of democratic societies were not designed to counteract. The technology companies that develop and deploy foundational AI models possess resources — financial, technical, informational, political — that dwarf those available to the regulatory agencies, educational institutions, and civil society organizations that might construct countervailing institutional arrangements. The asymmetry is not merely quantitative. It is structural. The technology companies operate at the speed of market competition. The regulatory agencies operate at the speed of democratic deliberation. The educational institutions operate at the speed of curricular reform. The civil society organizations operate at the speed of volunteer mobilization. Every institutional actor that might construct the mediating arrangements operates at a tempo orders of magnitude slower than the actor whose behavior the arrangements are supposed to govern.

The political economy of AI deployment further constrains institutional response. The gains from AI adoption are concentrated and immediately visible — the productivity improvements, the cost reductions, the competitive advantages that accrue to early adopters. The costs are diffuse and delayed — the erosion of judgment through disuse, the atrophy of skills no longer exercised, the degradation of the cognitive capacities that develop only through the kind of productive struggle that AI-assisted work eliminates. Concentrated, visible gains create political constituencies that support the technology's rapid deployment. Diffuse, delayed costs do not create comparable constituencies for institutional protection, because the costs are experienced individually rather than collectively and attributed to personal inadequacy rather than structural forces.

This asymmetry between the political organization of gains and the political disorganization of costs is a documented feature of technological transitions that the institutional history of technology has analyzed extensively. The factory owners who benefited from unregulated industrial deployment were organized, politically connected, and capable of articulating their interests in the institutional forums where policy was made. The workers who bore the costs were, in the early decades of industrialization, unorganized, politically marginalized, and lacking the institutional infrastructure necessary to translate their experience into political power. The Factory Acts came not when the costs became severe enough to demand response — the costs were already severe long before the acts were passed — but when the political organization of the affected population reached the threshold necessary to overcome the organized resistance of the beneficiaries.

The knowledge workers confronting AI displacement face an analogous organizational challenge. They are, for the most part, individually situated rather than collectively organized. The professional associations that might represent their interests are, in many cases, ambivalent about AI — recognizing its benefits while struggling to articulate the costs in terms that the institutional frameworks governing their professions can address. The labor unions that proved decisive in channeling previous technological transitions toward equitable outcomes have limited presence in the knowledge-work sectors most affected by AI. The Hollywood strikes of 2023 produced institutional arrangements governing AI use in creative work — but the creative industries are among the most organized sectors of the knowledge economy. The vast majority of knowledge workers — the accountants, the analysts, the middle managers, the educators, the engineers — lack comparable organizational infrastructure.

The speed of AI improvement compounds the organizational challenge. The institutional arrangements that are adequate to govern one generation of AI capability may be inadequate for the next, and the cycles of improvement are measured in months rather than years. A regulatory framework designed for the capabilities of GPT-4 may be obsolete before the regulatory process that produced it has completed its implementation. An educational curriculum designed for the current generation of AI assistants may be inadequate for the generation that arrives before the first cohort of students trained under the new curriculum has graduated. The mismatch between the speed of technological change and the speed of institutional adaptation is not merely an inconvenience. It is a structural feature of the AI transition that any theory of institutional response must honestly address.

Smith's framework does not guarantee that institutional mediation will succeed. What it guarantees — what the historical record demonstrates with the consistency of a controlled experiment — is that the absence of institutional mediation produces a specific, predictable, documented outcome: the concentration of gains among those who control the technology and the dispersal of costs among those who are affected by it. This outcome is not the worst-case scenario in some probabilistic analysis. It is the default outcome — the outcome that obtains whenever the institutional response is absent, inadequate, or too late.

The honest assessment, then, is not that institutional mediation will save us but that the absence of institutional mediation will certainly fail us. The choice is not between guaranteed success and guaranteed failure. It is between the uncertain possibility of success through institutional effort and the near-certain outcome of failure through institutional abdication. This is not an inspiring formulation. It does not lend itself to the rhetoric of progress or the confidence of inevitability. But it is the formulation that the historical evidence supports, and intellectual honesty requires stating it plainly.

The constraints on institutional response are real: the speed of the transition, the concentration of technological power, the political economy that organizes gains and disorganizes costs, the recursive dimension that compromises the independence of the analysis that should inform the response. Each of these constraints makes institutional mediation harder. None of them makes it impossible. And the historical alternative — the unmediated deployment of powerful technology in the absence of structures designed to protect the people it affects — has been documented with sufficient thoroughness to make its consequences clear.

The theory of institutional failure is not a prediction that institutions will fail. It is a diagnosis of the specific mechanisms through which failure becomes likely — and therefore a guide to the specific interventions that might prevent it. The speed mismatch can be addressed through regulatory frameworks designed for adaptation rather than permanence — frameworks that establish principles and delegate implementation to agencies with the flexibility to adjust as conditions change. The concentration of power can be addressed through interoperability requirements, data portability mandates, and competition policies designed for the specific market structures of AI deployment. The political organization of costs can be addressed through the deliberate construction of institutional constituencies — professional organizations, worker associations, citizen advocacy groups — with the ongoing capacity to monitor AI's effects and advocate for the institutional arrangements that address them.

Each of these interventions is difficult. None is guaranteed to succeed. All face the sustained resistance of the actors who benefit from the institutional vacuum. But the alternative — the resigned acceptance that the institutional challenge is too great, that the speed is too fast, that the concentration of power is too extreme, that the political economy is too unfavorable — is the determinist temptation in its most practically consequential form. It says: the outcome is determined, and effort is futile. The historical record says: the outcome is determined only when effort is abandoned, and the abandonment of effort is itself a choice — the most consequential choice available.

The institutions may fail. They have failed before, and the costs of their failure — the children in the mills, the communities devastated by unmediated industrial change, the workers left to navigate transitions without institutional support — are documented in the archives that scholars like Smith have spent their careers examining. The documentation is not comforting. But it is useful, because it shows that failure is not random. It follows specific, identifiable patterns — patterns of inadequate political organization, insufficient institutional imagination, and the prioritization of speed over equity that the determinist temptation encourages.

Understanding these patterns does not prevent failure. It identifies the specific mechanisms through which failure occurs, and in identifying them, it reveals the specific points at which intervention might redirect the trajectory. The beaver does not guarantee that the dam will hold. The river is powerful, and the materials are imperfect, and the maintenance is never finished. But the beaver that understands where the current is strongest — that places its sticks at the points of greatest leverage, that maintains the structure against the specific forces that threaten it — builds a dam more likely to hold than the beaver that builds blindly.

The AI transition may overwhelm the institutional capacity of democratic societies. That possibility must be confronted honestly. But confronting it honestly means understanding the specific mechanisms of potential failure — and in understanding them, preserving the possibility, uncertain but real, that institutional effort can produce a different outcome.

---

Chapter 10: Neither Determined Nor Free

In June 2024, colleagues and former students gathered at MIT for a symposium honoring Merritt Roe Smith's retirement — a career spent in the buildings where much of the technology he analyzed was simultaneously being built. The symposium's title gestured toward the future: "The History of Technology: Past, Present, and Future." Smith, in his remarks, offered a characteristically understated observation: that understanding how we became a technological society was becoming "a very important consideration for any way of thinking about American history." He did not say artificial intelligence. He did not name specific technologies. He did not predict. He offered, instead, the historian's fundamental gift: the insistence that the present is not unprecedented — that it has roots, patterns, institutional genealogies that, properly understood, reveal the choices concealed beneath the appearance of inevitability.

The entirety of the argument presented in the preceding chapters converges on a single claim that can be stated simply and that resists simple application: the outcomes of technological transitions are determined neither by the technology alone nor by human will alone but by the interaction between technological capability and institutional response. The technology constrains the range of possible futures. The institutional arrangements determine which specific future, within that constrained range, actually materializes. Neither party to the interaction is fully sovereign. The technology cannot be wished away. The institutions cannot be dispensed with. The outcome depends on the quality of the engagement between them.

This claim — agency within constraint — is not a compromise position adopted for diplomatic convenience. It is the empirically supported finding of the institutional history of technology, tested against the documented record of every major technological transition in modern American history. The interchangeable-parts system at Springfield and Harpers Ferry demonstrated that identical technologies produce divergent outcomes in different institutional contexts. The military-industrial genealogy of computing demonstrated that technologies carry the values of their originating institutions into civilian applications. The path dependence of early technological choices demonstrated that the formative period of a transition carries disproportionate weight in determining long-term trajectories. The documented experience of the Luddites demonstrated that resistance to technological change is often rational, informed, and strategically sophisticated — and that its failure reflects not the irrationality of the resisters but the inadequacy of the institutional support available to them. The history of the Factory Acts demonstrated that institutional mediation is the mechanism that converts technological capability into broadly distributed benefit. And the analysis of institutional failure demonstrated that the absence of mediation produces a specific, predictable, documented outcome that no reasonable observer would choose.

Each of these findings constrains the range of intellectually honest responses to the AI transition. The technology is genuinely powerful. Its constraints on the range of possible futures are real. Pretending that the transition can be prevented, that the technology can be uninvented, that the market forces driving its adoption can be reversed by an act of will — this is the fantasy of the upstream swimmer, and it does not survive contact with the historical record. Equally, the claim that the technology determines a specific outcome — that the distribution of costs and benefits, the pace of change, the values embedded in the deployment, the protections available to the displaced are all predetermined by the technology's capabilities — does not survive contact with the comparative evidence. Springfield and Harpers Ferry. American and European telegraph systems. Factory labor with and without the Factory Acts. The same technology, different institutions, divergent outcomes. The evidence is extensive, varied, and consistent.

The practical implications radiate outward from this finding to every level of institutional engagement with the AI transition.

For governments, the finding means that regulation is not optional and not sufficient. The regulatory frameworks emerging in the European Union, the United States, and elsewhere address necessary questions about safety, transparency, and accountability. But regulation that addresses only the supply side — what technology companies may build and how they must disclose their methods — leaves unaddressed the demand side: what citizens, workers, students, and communities need in order to navigate the transition with their cognitive capacities, their professional dignity, and their democratic agency intact. The institutional history of technology suggests that demand-side institutions — educational reforms, retraining infrastructure, professional standards, civic organizations equipped to monitor AI's effects and advocate for institutional responses — are at least as consequential as supply-side regulation in determining the transition's outcome.

For organizations, the finding means that AI adoption without institutional design is institutional abdication. The organization that deploys AI tools without deliberately constructing the organizational practices — the protected time for human-only reflection, the sequenced workflows that preserve deep thinking, the mentoring relationships that develop judgment, the evaluation criteria that reward the quality of questions asked rather than the volume of output produced — is not exercising institutional neutrality. It is making an institutional choice: the choice to allow the technology's embedded values to determine the organization's cognitive ecology by default. The Berkeley study's findings — task seepage, attention fragmentation, the intensification of work without corresponding increases in satisfaction — are not pathologies that afflict organizations that fail to use AI well. They are the default outcomes of AI deployment in the absence of institutional structures designed to prevent them.

For educational institutions, the finding carries particular urgency. The educational systems that will determine whether the next generation develops the cognitive capacities that the AI transition demands — the capacity for independent judgment, for sustained attention, for the kind of questioning that generates genuine understanding rather than the retrieval of existing answers — were designed for a different technological environment and are adapting at a pace that the technology's rate of change has already rendered inadequate. The institutional history of technology provides no precedent for an educational transformation of the speed and comprehensiveness that the AI transition requires. What it provides is the principle that the design of educational institutions is among the most consequential forms of institutional mediation available — that the cognitive capacities a society cultivates in its young determine, over the long term, the quality of the institutional responses that society is capable of producing. A society that educates for compliance will produce institutional responses adequate for compliance. A society that educates for judgment will produce institutional responses adequate for judgment. The choice is being made now, in the classrooms and curricula where the next generation's cognitive architecture is being formed.

For individuals — for the knowledge workers, the parents, the students, the citizens who are navigating the AI transition in their daily lives — the finding means that individual choice, while insufficient on its own, is genuinely consequential. The individual who uses AI tools deliberately, with attention to the cognitive effects of different modes of engagement, who preserves spaces for the kind of unassisted thinking that develops independent judgment, who asks what the tool's optimization criteria select for and against, who recognizes the institutional values embedded in the tool's design and evaluates whether those values align with her own — this individual exercises agency within constraint. Her choices do not determine the transition's outcome. But they determine her own relationship to the transition, and they contribute, in aggregate, to the institutional culture that will shape the broader response.

The position — agency within constraint — is not comfortable. It does not offer the clean certainties that the determinist temptation provides in either its optimistic or its pessimistic modes. It does not promise that the right institutional responses will be built in time. It does not guarantee that the formative period's choices will prove wise. It does not assure the worried parent that the world being bequeathed to her children will accommodate their flourishing. It offers instead something more modest and more demanding: the recognition that the outcome is genuinely open, that the choices being made now carry disproportionate weight, and that the quality of the institutional arrangements constructed during this formative period will determine, for decades to come, whether the most powerful technology in human history serves the species that created it or merely the interests that deploy it.

Smith's question — does technology drive history? — was never merely academic. It was a practical question with practical consequences, because the answer determines whether people engage with technological transitions as agents or as objects. The determinist answer — technology drives history — produces objects: people who watch the transition happen to them, who attribute its effects to forces beyond human influence, who surrender the institutional agency that the historical record shows to be decisive. The institutionalist answer — technology constrains, institutions determine — produces agents: people who study the constraints, who identify the points of leverage, who build the structures that channel technological power toward human purposes.

The AI transition will be shaped by the ratio of agents to objects among the people who live through it. The determination to be an agent rather than an object — to study the technology's constraints, to build the institutional structures, to exercise judgment about which futures, within the constrained range, deserve to be pursued — is the determination that the institutional history of technology identifies as the decisive variable. Not the power of the technology. Not the inevitability of the trajectory. Not the genius of the founders or the vision of the regulators. The ratio. The proportion of people who treat the transition as something that is happening to them versus the proportion who treat it as something they are participating in shaping.

The evidence, accumulated over centuries of documented technological transitions, analyzed with the meticulous institutional attention that defined Smith's scholarly contribution, points to a conclusion that is neither optimistic nor pessimistic but simply true: the outcome depends on us. Not on the technology. Not on the market. Not on the algorithms. On the institutions we construct, the values we embed in them, the persistence we bring to maintaining them against the forces that would erode them, and the honesty with which we assess whether they are adequate to the challenge.

Neither determined nor free. Standing in the current of a technology whose power is beyond our capacity to reverse and whose specific effects remain within our capacity to shape. Armed with the evidence of how previous transitions were navigated — evidence of success and failure, of institutional innovation and institutional abdication, of the specific mechanisms through which human choices determined technological outcomes. The evidence does not promise a happy ending. It promises that the ending depends on the quality of the work.

That is enough. It has always been enough. It is all that the historical record offers, and it is everything that the present moment demands.

---

Epilogue

The word that kept coming back to me, reading Smith's work through the lens of what I have lived these past two years, was not institution or determinism or agency. It was Springfield.

Springfield and Harpers Ferry. Two armories. Same government. Same mandate. Same technology delivered to both doorsteps. Same machines, same blueprints, same federal inspectors arriving with the same expectations. And yet they diverged — fundamentally, measurably, in ways that shaped American manufacturing for a century afterward. Springfield adopted the new methods with disciplined efficiency. Harpers Ferry resisted for decades, not out of stupidity but out of a craft culture that understood what it would lose.

I have been in both rooms.

The room in Trivandrum, where twenty engineers leaned toward their screens and something shifted in the air by Tuesday — that was Springfield. The organizational culture was ready. The institutional context aligned. The technology landed on prepared soil, and what grew from it exceeded what any of us had expected.

But I have also watched the Harpers Ferry version. The senior engineer who oscillates between excitement and terror. The experienced professional who sees the tool clearly, understands precisely what it will cost, and cannot find the institutional support to navigate the transition on terms that preserve what she has spent decades building. She is not wrong. She is unprotected.

What Smith's framework gave me — what I did not have before I encountered it — was the language to understand why the same tool produces such different outcomes in different rooms. I had felt the variation. I had documented it anecdotally. I had attributed it to culture, to leadership, to the mysterious chemistry of teams. Smith's institutional analysis gave those intuitions empirical grounding and historical depth. The variation is not mysterious. It is institutional. It follows patterns that have been documented across two centuries of technological change, and those patterns are specific enough to be actionable.

The finding that shook me most was not about the past. It was about the present — the finding that we are in the formative period right now, the period when path dependencies are being established, when institutional choices carry disproportionate weight, when the decisions of this year will constrain possibilities for the next generation. The QWERTY keyboard of AI governance is being designed as I write these words. The Factory Acts of the knowledge economy are either being built or failing to be built. And the window is closing, because path dependence is asymmetric: early choices are cheap to make and expensive to reverse, and the cost of reversal rises with every month of accumulated institutional investment in the established path.

I know which side of the institutional equation I want to be on. I want to build the dam, not watch the river. But Smith's honesty about institutional failure — about the mechanisms through which institutions arrive too late, too weak, too captured by the interests they are supposed to govern — sits with me like a weight I cannot set down. The comfortable version says the institutions always come. The honest version says they come only when people fight for them, and there is no guarantee the fight succeeds.

So I build. Not because success is guaranteed. Because the alternative is the institutional vacuum that the historical record documents with such terrible clarity. The children in the mills before the Factory Acts. The craftsmen at Harpers Ferry, navigating the transition alone.

My children will not navigate this alone. Not if I can help it. Not if the institutions get built.

Edo Segal

The technology is identical.
The institutions diverge.
The outcomes are not even close.

The AI revolution's dominant narrative says the technology determines the future — that the only question is how fast you adapt. Merritt Roe Smith spent fifty years at MIT proving that narrative wrong. His institutional history of technology, from the federal armories that birthed American manufacturing to the military-industrial origins of computing itself, demonstrates with archival precision that identical technologies produce radically different outcomes depending on the institutions that deploy them. The river is real. But the dam is what matters.

This volume brings Smith's framework to the AI moment through Edo Segal's Orange Pill lens — examining technological determinism, path dependence, the myth of the neutral tool, and the institutional failures that turn transitions into catastrophes. It asks the question Silicon Valley would prefer to skip: not whether AI will transform everything, but whose values will govern the transformation.

The formative period is closing. The paths are hardening. The institutions that will determine whether AI serves humanity or merely its deployers are being built -- or failing to be built -- right now.

“Technology is not an independent force but a social product, shaped by economic interests, political choices, and cultural values.”
— Merritt Roe Smith