By Edo Segal
The decision that haunts me is never the one I agonized over.
It's the one I made in passing. The architecture choice at 2 a.m. because we needed to ship. The API standard we adopted because it was there and worked and nobody had time to debate alternatives. The workflow we designed for a team of five that quietly became the workflow for a team of fifty, then the template every new hire inherited, then the thing we could not change without rebuilding everything from scratch.
I have spent my career making these decisions. Small, pragmatic, under deadline. Each one felt temporary. Almost none of them were.
Thomas Hughes spent fifty years studying what happens when those small decisions accumulate into systems that no one can redirect. He was a historian of technology, not a futurist, and he never saw a large language model. But he understood something that the AI discourse is catastrophically failing to grasp: the technology is not the system. The technology is the filament. The system is everything else — the infrastructure, the institutions, the regulations, the pricing models, the workforce assumptions, the cultural narratives — and it is the system, not the technology, that determines who benefits and who bears the cost.
Hughes showed that every large technical system passes through a formative period when human choices still matter, followed by a phase of technological momentum when the system's accumulated weight makes redirection enormously expensive. The electrical grid. The telephone network. The highway system. In every case, the configuration that crystallized during the formative period persisted for generations, and the people who made the formative choices rarely understood how permanent those choices would become.
We are in the formative period of the AI sociotechnical system right now. The conduits are being laid. The standards are being set. The infrastructure is being built with thirty-year operational lifetimes. The regulatory precedents are being established. And the window during which these choices remain plastic is closing faster than any previous technological transition Hughes documented.
This book applies Hughes's framework to our moment with the rigor his scholarship demands. It is not comfortable reading. It does not tell you the future will be fine if we build responsibly. It tells you the future is being built right now, by every architecture decision and regulatory precedent and organizational norm that crystallizes this quarter, and that the cost of changing course increases with every month of inattention.
The conduit outlasts the cable. The pathway outlasts the conduit. And the assumptions we embed today will constrain builders who haven't been born yet.
That is why this lens matters. Not because Hughes predicted AI. Because he mapped the dynamics that will determine what AI becomes.
-- Edo Segal ^ Opus 4.6
Thomas Parke Hughes (1930–2014) was an American historian of technology whose work fundamentally reshaped how scholars understand the relationship between technological systems and the societies that build them. Born in Richmond, Virginia, Hughes earned both his undergraduate degree and his doctorate at the University of Virginia before spending the majority of his career at the University of Pennsylvania, where he held the Mellon Professorship in the History and Sociology of Science. His landmark work, *Networks of Power: Electrification in Western Society, 1880–1930* (1983), offered a sweeping comparative history of electrical systems in the United States, Germany, and Britain, introducing concepts that became foundational to the field: the system builder, the reverse salient, and technological momentum — the idea that young technological systems are shaped by society while mature systems increasingly shape the societies around them. His subsequent works, including *American Genesis: A Century of Invention and Technological Enthusiasm, 1870–1970* (1989) and *Rescuing Prometheus: Four Monumental Projects That Changed the Modern World* (1998), extended his analysis to military-industrial systems, large-scale engineering projects, and the broader relationship between technological ambition and democratic governance. Hughes received numerous honors, including the Leonardo da Vinci Medal from the Society for the History of Technology, and his framework for understanding large technical systems remains among the most widely cited analytical tools in science and technology studies.
Thomas Alva Edison did not invent the light bulb. This is the first thing Thomas P. Hughes's scholarship requires anyone thinking about technology to understand, and it is the claim from which everything else in this volume follows. What Edison did was simultaneously more mundane and more consequential than the mythology suggests. He designed a system. Between 1878 and 1882, working out of his Menlo Park laboratory and then the Pearl Street station in lower Manhattan, Edison and his expanding team developed not merely an incandescent lamp but the entire sociotechnical apparatus required to make that lamp functional, economical, and culturally legible: the dynamo that generated the current, the distribution network that carried it through underground conduits, the junction boxes and safety fuses that prevented it from burning down buildings, the metering device that measured consumption so customers could be billed, the business model that made the whole enterprise financially viable, and the political strategy that persuaded New York's Board of Aldermen to grant permission for tearing up the streets of the financial district to lay copper mains.
Hughes spent years in the Edison archives reconstructing this process, and what he found was not a story of genius striking like lightning. It was a story of relentless, unglamorous system integration — the coordination of dozens of technical, financial, institutional, and political components into a functioning whole. Edison's notebooks from this period reveal a mind that moved constantly between the molecular properties of bamboo filaments and the question of whether the city's insurance underwriters would approve his wiring standards. The system builder, in Hughes's formulation, is defined precisely by this capacity to hold the entire system in view — to see not the artifact but the network of relationships that makes the artifact useful.
The distinction matters because it determines what counts as innovation. In the popular imagination, innovation is the flash of insight: the eureka moment, the breakthrough, the lone genius in the laboratory. In Hughes's framework, innovation is the construction of a system that works. The flash of insight is necessary but radically insufficient. Edison's bamboo filament was a technical achievement. Pearl Street Station was a system achievement. The filament without the station is a curiosity in a glass bulb. The station without the filament has nothing to illuminate. The system builder's genius lies in understanding that neither component has value without the other, and that the work of connecting them — the institutional design, the financial architecture, the regulatory negotiation, the customer education — is not secondary to the technical work. It is the technical work, understood at the proper scale.
Hughes traced this figure across the history of electrification with the patience of a historian who understood that patterns visible across decades are invisible within any single year. Edison gave way to Samuel Insull, who transformed the Chicago Edison Company into Commonwealth Edison by solving problems Edison had never confronted: load management across an entire urban grid, rate structures that incentivized off-peak consumption, regulatory relationships with state utility commissions that had not existed when Edison was laying copper under Fulton Street. In Berlin, Emil Rathenau built AEG (Allgemeine Elektricitäts-Gesellschaft) into a vertically integrated electrical empire by solving a different set of problems shaped by German municipal traditions, engineering education, and industrial policy. Each system builder operated within a specific institutional and cultural context. Each built a system adapted to that context. And each system, once built, developed a momentum that constrained the choices of everyone who came after.
The AI moment has its own system builders, though the term is not yet applied to them with the analytical precision Hughes would demand. The founders of the major AI laboratories — Dario Amodei at Anthropic, Sam Altman at OpenAI, Demis Hassabis at Google DeepMind — are building not merely models but systems. The model is the filament. The system is everything else: the cloud computing infrastructure that trains and serves the model, the API layer that makes the model accessible to developers, the safety and alignment research that determines the model's behavioral constraints, the pricing structure that determines who can afford to use it, the terms of service that determine what it may be used for, the lobbying apparatus that shapes the regulatory environment, the partnership agreements with hardware manufacturers that secure access to the GPUs required for training, and the cultural narrative — AI as tool, as partner, as threat, as salvation — that shapes public perception and political will.
Hughes's framework reveals something that the contemporary discourse about AI routinely obscures: the model is the least interesting part of the system. Not because the technical achievements are trivial — they are extraordinary — but because the model, like Edison's filament, has no function outside the system that deploys it. A large language model sitting on a server with no API, no interface, no pricing model, no terms of service, no regulatory framework, and no cultural narrative is a mathematical object of great sophistication and zero social consequence. What transforms it from a mathematical object into a force that reshapes industries, displaces workers, redefines creativity, and restructures the relationship between human beings and their tools is the system that surrounds it.
This is why the question "What can AI do?" is, from a Hughesian perspective, malformed. The correct question is: "What can the AI sociotechnical system do?" And the answer to that question depends not on the capabilities of the model but on the configuration of the system — on the institutional, economic, regulatory, and cultural components that determine how the model's capabilities are channeled, constrained, distributed, and experienced.
Consider the difference between two configurations of the same underlying technology. In one configuration, a large language model is deployed through an open API with minimal usage restrictions, priced at a level that makes it accessible to individual developers worldwide, embedded in a regulatory environment that imposes few constraints on its application, and surrounded by a cultural narrative that celebrates speed, disruption, and the democratization of capability. In another configuration, the same model is deployed through a controlled enterprise platform with extensive usage logging, priced at a level that restricts access to well-funded organizations, embedded in a regulatory environment that requires impact assessments and algorithmic audits, and surrounded by a cultural narrative that emphasizes caution, accountability, and the protection of existing institutions.
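The contrast can be made concrete with a sketch. The schema below is hypothetical (no vendor exposes a configuration object like this), but it makes the Hughesian point mechanically visible: the filament is one field; the system is all the others.

```python
from dataclasses import dataclass

@dataclass
class SystemConfig:
    """Hypothetical description of everything around a model except the model."""
    model: str                  # the "filament": identical in both configurations
    access: str                 # who can reach the capability
    price_per_1k_tokens: float  # who can afford it
    usage_logging: bool
    audits_required: bool
    narrative: str              # the cultural story the deployment tells

open_config = SystemConfig(
    model="frontier-lm-v1",
    access="public API, minimal restrictions",
    price_per_1k_tokens=0.002,
    usage_logging=False,
    audits_required=False,
    narrative="democratization of capability",
)

enterprise_config = SystemConfig(
    model="frontier-lm-v1",     # the same filament...
    access="controlled enterprise platform",
    price_per_1k_tokens=0.05,
    usage_logging=True,
    audits_required=True,
    narrative="caution and accountability",
)

# The technical core is identical; every social consequence differs.
assert open_config.model == enterprise_config.model
```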
The model is identical. The systems are radically different. And the social consequences — who benefits, who bears the cost, what is created, what is destroyed — diverge as completely as the consequences of American and German electrification diverged despite using the same fundamental technology. Hughes demonstrated this divergence in exhaustive historical detail. The lesson has not been absorbed by the people currently building AI systems, most of whom remain focused on the filament and largely indifferent to, or unaware of, the system forming around it.
Segal's *The Orange Pill* captures something essential about the system builder's experience from the inside — the vertigo of operating at the frontier, the simultaneous exhilaration and terror of watching a system take shape in real time, the awareness that the choices being made in a room in Trivandrum or on a trade-show floor in Las Vegas will ripple outward in ways that cannot be fully anticipated. What Hughes's framework adds is the historical evidence that this experience is not new. Edison felt it. Insull felt it. Rathenau felt it. Every system builder at the formative stage of a large sociotechnical system has felt the specific combination of creative power and dawning awareness that the system is beginning to develop its own logic, its own momentum, its own trajectory that may or may not align with the builder's original intentions.
The transition from builder-as-architect to builder-as-component is the transition that Hughes documented across every large technical system he studied. Edison designed Pearl Street Station. Insull managed a system that no single person could design. By the time the American electrical grid reached maturity in the mid-twentieth century, no individual — no matter how brilliant, how visionary, how systemically minded — could redirect its fundamental configuration. The installed infrastructure, the trained workforce, the regulatory commissions, the rate structures, the consumer expectations, the manufacturing supply chains, the political constituencies organized around cheap electricity — all of these components had developed a collective inertia that exceeded any individual's capacity to redirect.
This is not a story about loss of control to machines. It is a story about the accumulation of social weight. The system becomes difficult to change not because the technology resists change but because the institutions, economic interests, and cultural assumptions that have formed around the technology resist change. The utility company that has invested billions in generation capacity resists the transition to distributed solar. The workforce trained on legacy systems resists the transition to new platforms. The regulator whose expertise is calibrated to existing technologies resists the frameworks required by new ones. None of these actors is irrational. Each is responding to genuine interests and real constraints. But their collective response produces a system-level inertia that no individual actor can overcome.
The AI sociotechnical system is forming now, and the Hughesian analysis suggests that the window for fundamental shaping is measured in years, not decades. The infrastructure is crystallizing: the data centers are being built, the API standards are being established, the pricing models are being tested, the regulatory frameworks are being drafted, the organizational practices are being normalized, the cultural narratives are being set. Each of these components is still malleable — still responsive to the choices of system builders, regulators, users, and citizens. But each is gaining weight. Each is developing the institutional density that will eventually make it resistant to fundamental change.
The system builders of the AI moment are, whether they recognize it or not, in the position Edison occupied in 1880: designing a system whose configuration will constrain the choices of everyone who comes after. Edison chose direct current. The system eventually chose alternating current, but only after a painful, costly, and socially destructive transition that Hughes documented in meticulous detail. The choices being made now about AI — about who controls the infrastructure, who has access to the capabilities, how the benefits are distributed, what safeguards are built into the system, what values the system optimizes for — are the equivalent of Edison's choice of direct current. They are consequential. They are path-dependent. And they are being made, in many cases, by actors who do not fully understand the systemic implications of their decisions.
Hughes's scholarship does not prescribe outcomes. It describes dynamics. The dynamic it describes most powerfully is this: the formative period of a large sociotechnical system is the period of maximum human agency. Before the momentum sets, before the institutions crystallize, before the economic interests calcify, before the cultural assumptions harden into common sense — in that brief window, the system can be shaped. After the window closes, the system shapes. The distinction between these two states is the most consequential distinction in the history of technology.
The AI system is in its formative period. The system builders are at work. The question is not whether they will build a system — they already are — but whether the system they build will serve broadly or narrowly, whether its momentum will carry human civilization toward expansion or extraction, whether the choices being made in conference rooms and server farms and legislative chambers today will be remembered as the choices that opened a new chapter of human capability or the choices that locked in a configuration that subsequent generations could not escape.
Hughes would not presume to answer that question. He would insist, with the historian's characteristic sobriety, that the answer is not determined. It is being determined, now, by the people who are building it and by what they choose to build.
---
In 1886, four years after Pearl Street Station began operation, Thomas Edison controlled the American electrical system. His patents, his companies, his engineering standards, his business relationships defined the technology's trajectory. Direct current flowed through the streets of Manhattan because Edison had decided it would, and the system he had built — the generators calibrated for DC, the distribution networks designed for DC's limited range, the motors and appliances manufactured to DC specifications — constituted a sociotechnical reality that appeared, to anyone operating within it, as settled as the grid of streets above the copper mains.
By 1893, Edison had lost control of the system he had created. Alternating current, championed by George Westinghouse and technically enabled by Nikola Tesla's polyphase motor, had demonstrated decisive advantages in long-distance transmission that DC could not match. The "War of Currents" was not merely a technical dispute. It was a systemic crisis — a moment when the accumulated momentum of one configuration of the electrical system collided with the superior technical characteristics of another, and the outcome was determined not by the technical merits alone but by the full weight of institutional, financial, and political forces arrayed on both sides.
Thomas P. Hughes analyzed the War of Currents not as a story of good technology defeating bad technology, which is how it is typically narrated in popular history, but as a case study in technological momentum. Edison's DC system had developed significant momentum by the mid-1880s. Hundreds of local utilities had been established around DC generation. Thousands of workers had been trained in DC installation and maintenance. Millions of dollars of capital had been invested in DC-compatible equipment. Edison's personal reputation, political relationships, and financial interests were bound up in the continuation of the DC standard. The system resisted change not because DC was technically superior — it was not — but because the sociotechnical apparatus that had formed around DC had developed its own inertia.
The concept of technological momentum, which Hughes refined over decades of historical research, occupies a deliberate middle ground between two positions that dominate thinking about technology and society. On one side stands technological determinism — the view that technology drives social change, that the internal logic of technical development determines outcomes, that society must adapt to whatever technology produces. On the other side stands social constructivism — the view that society shapes technology, that technical artifacts are the products of social choices, and that there is nothing inevitable about any particular technological configuration. Hughes rejected both positions as incomplete. His alternative — technological momentum — holds that the relationship between technology and society changes over time. A young technology is socially constructed: human choices determine its configuration. A mature technology approaches technological determinism: its accumulated sociotechnical weight constrains the choices available to the humans operating within it.
The temporal dimension is what makes the concept analytically powerful and what distinguishes it from the static positions it mediates between. When Edison was designing Pearl Street Station, the electrical system was maximally plastic. Every decision — the voltage, the current type, the distribution topology, the business model, the regulatory approach — was open. Edison and his team made choices, and those choices reflected their technical knowledge, their economic interests, their cultural assumptions, and their assessment of what was politically feasible. The system, at this stage, was being shaped by its social context.
But each choice, once implemented, reduced the range of subsequent choices. The DC generators constrained the distribution network. The distribution network constrained the service area. The service area constrained the customer base. The customer base constrained the revenue model. The revenue model constrained the investment in infrastructure. The investment in infrastructure constrained the willingness to consider alternatives. Each component of the system interlocked with every other component, and the interlocking produced a collective resistance to change that exceeded the resistance of any individual component.
Hughes was careful to distinguish technological momentum from technological determinism. A system with momentum is not a system that cannot be changed. It is a system whose change requires effort proportional to its accumulated sociotechnical weight. Edison's DC system was eventually replaced — but the replacement required a new technical paradigm (Tesla's polyphase motor), a new corporate champion (Westinghouse), a new financial structure, a new set of regulatory relationships, and a public demonstration of AC's superiority dramatic enough to overcome the institutional inertia of the DC system. The transition was not impossible. It was expensive, slow, socially disruptive, and bitterly contested by those whose interests were bound up in the existing configuration.
This dynamic — the gradual accumulation of sociotechnical weight that transforms a system from plastic to resistant — is precisely what the AI moment is undergoing in real time. Scholars who have taken up Hughes's framework and applied it directly to artificial intelligence have arrived at a finding that should give pause to anyone who believes the current moment of creative experimentation will last indefinitely. Researchers adopting the Large Technical Systems framework to study AI have concluded that while the AI industry is still in the making, it is "rapidly ossifying, together with its specific problems, power imbalances, and development scenarios." The language of ossification is precise: it denotes the hardening of what was once fluid, the transformation of choices into structures, of decisions into dependencies, of experiments into standards.
The evidence for this ossification is visible across every component of the emerging AI sociotechnical system. Consider the infrastructure layer. The training of frontier AI models requires computational resources concentrated in a small number of hyperscale data centers, owned and operated by a small number of companies — Microsoft, Google, Amazon, and a handful of others. This concentration is not the result of a deliberate policy choice. It is the result of the enormous capital requirements of building and operating the GPU clusters required for frontier model training, combined with the economies of scale that make large data centers more cost-efficient than small ones. But the result, regardless of its origin, is a structural dependency: every AI company that does not own its own hyperscale compute infrastructure depends on a company that does. This dependency is hardening into the system's permanent architecture. The paths not taken — distributed training, federated compute, alternative hardware architectures — are becoming more expensive to pursue with every quarter that passes, not because they are technically infeasible but because the sociotechnical system is organizing itself around the existing configuration.
Consider the model layer. The dominant architecture for frontier AI — the transformer — has achieved a position analogous to Edison's DC system in the mid-1880s. Enormous investments have been made in transformer-based training pipelines, inference optimization, deployment frameworks, and workforce skills. Alternative architectures exist and may offer advantages for specific applications. But the sociotechnical weight of the transformer ecosystem — the tools, the expertise, the infrastructure, the benchmarks, the academic publication incentives, the investor expectations — creates a momentum that makes the exploration of alternatives increasingly costly relative to the continuation of the established approach.
Consider the market structure. Venture capital has concentrated its AI investments in a small number of companies, creating a financial ecosystem in which the largest AI labs have access to resources that smaller competitors cannot match. This concentration is self-reinforcing: the companies with the most capital can afford the most compute, which produces the most capable models, which attracts the most users, which generates the most revenue, which attracts the most capital. The positive feedback loop is a textbook example of the kind of system dynamic that Hughes documented in the consolidation of the American electrical industry under companies like General Electric and Westinghouse.
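The loop can be written down as a toy difference equation. Everything below is invented for illustration, including the mildly superlinear returns to scale that produce the winner-take-most outcome, but the structure is the one the paragraph describes.

```python
# Toy model of the capital -> compute -> capability -> users -> revenue -> capital
# loop. All coefficients are invented; the superlinear exponent is an assumption
# (with sublinear returns the loop grows both firms but compresses the ratio).

def step(capital: float) -> float:
    compute = 0.5 * capital         # capital buys compute
    capability = compute ** 1.1     # assumed mild economies of scale
    users = 10.0 * capability       # capability attracts users
    revenue = 0.3 * users           # users generate revenue
    return capital + revenue        # revenue attracts more capital

big, small = 100.0, 10.0            # a 10x initial funding gap
for year in range(1, 6):
    big, small = step(big), step(small)
    print(f"year {year}: big/small = {big / small:.1f}")  # the ratio widens every year
```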
Consider the regulatory layer. The EU AI Act, finalized in 2024, represents the most comprehensive attempt to regulate AI to date. But it was designed to address the AI landscape of 2022–2023 — a landscape that has already been transformed by subsequent developments. The regulatory framework is entering the system as a component that will co-evolve with the technology, but it is entering at a specific moment that reflects a specific understanding of what AI is and what it can do. As the technology continues to evolve faster than the regulatory process, the gap between the system's capabilities and the regulatory framework's assumptions will widen — creating precisely the kind of reverse salient that Hughes identified as the driver of systemic tension.
The Hughesian analysis produces a prediction that is uncomfortable for those who believe the current moment of creative possibility will endure: the window for fundamental shaping of the AI sociotechnical system is closing, and it is closing faster than the windows that closed around previous large technical systems. The reason is straightforward. Previous systems — electrical grids, telephone networks, highway systems — developed momentum over decades. The physical infrastructure was expensive and slow to build. The institutional structures formed gradually. The cultural assumptions shifted incrementally. AI infrastructure is digital rather than physical, which means it can be built, replicated, and scaled at speeds that physical infrastructure cannot match. The institutional structures are forming in an environment already saturated with the organizational templates of the digital economy. The cultural assumptions are being shaped by social media, which propagates narratives at speeds that previous communication technologies could not approach.
The compression is not metaphorical. The electrical industry took roughly four decades to transition from its inventive phase to its mature, momentum-laden configuration. The telephone industry took perhaps three. The internet took roughly two. If the pattern holds — and Hughes's framework provides strong theoretical reasons to expect that it will — the AI sociotechnical system may complete its formative period and achieve stable momentum within a single decade of reaching widespread deployment.
This means that the choices being made now — in 2025, 2026, 2027 — are not preliminary. They are foundational. They are the equivalent of Edison's choice of direct current, or the American decision to structure electrical utilities as regulated private monopolies, or the German decision to structure them as municipal enterprises. These were choices that appeared, at the time, to be pragmatic responses to immediate circumstances. They were, in fact, structural decisions that determined the configuration of sociotechnical systems for generations.
The people making the equivalent choices for AI — the executives deciding how to deploy AI in their organizations, the policymakers drafting regulatory frameworks, the educators designing curricula, the investors allocating capital, the builders like Segal deciding whether to convert productivity gains into headcount reduction or capability expansion — may or may not understand the systemic weight of their decisions. Hughes's historical research suggests that most system builders at the formative stage do not. Edison did not understand that his choice of direct current would precipitate a decade-long industrial war. Insull did not anticipate that his consolidation of the Chicago utility market would produce the regulatory framework that governed American electricity for a century. The system builders of the AI moment are operating in the same condition of consequential uncertainty: making decisions whose full implications will not be visible for years, but whose effects will be structurally embedded in the system long after the decision-makers have moved on.
The historical record offers no comfort to those who believe that mistakes made during the formative period can be easily corrected later. They cannot. The War of Currents cost years of development time, destroyed companies, consumed fortunes, and produced a transition that was far more painful than it would have been if the AC standard had been adopted from the beginning. The QWERTY keyboard layout, adopted to solve a typebar-jamming problem that ceased to exist decades ago, persists because the sociotechnical system — the trained workforce, the manufacturing standards, the institutional expectations — has developed momentum that exceeds any individual's capacity or incentive to redirect.
The momentum is not fate. Hughes was emphatic on this point. Technological momentum is not technological determinism. Systems with momentum can be redirected. But redirection requires effort proportional to the system's accumulated weight, and the weight increases with every passing month. The window is open. It will not stay open. And the people who understand the dynamics of large sociotechnical systems have an obligation — not merely a professional interest but, Hughes might have argued, a civic obligation — to act while action is still affordable.
---
A common error in thinking about technology — an error so pervasive that it functions as the default assumption of most public discourse — is the treatment of a technology as a discrete object that enters society from outside, produces effects, and can be evaluated independently of the social context in which it operates. The smartphone is good or bad. Social media is beneficial or harmful. Artificial intelligence will save or destroy us. In each case, the technology is isolated from the system that gives it meaning, and the analysis proceeds as though the artifact itself, abstracted from its institutional, economic, regulatory, and cultural surroundings, is the proper unit of analysis.
Thomas P. Hughes spent his career dismantling this error. His analytical unit was never the artifact. It was always the system — the sociotechnical system, a term he deployed with deliberate precision to denote a complex of technical components, organizational structures, legislative artifacts, scientific programs, natural resources, and human practices that function as an integrated whole. The electrical grid is not wires and generators. It is wires and generators and utility companies and regulatory commissions and rate structures and consumer expectations and manufacturing processes and workforce training programs and fuel supply chains and environmental regulations and the cultural assumption that electricity should be available at the flip of a switch, twenty-four hours a day, at a price that most households can afford. Remove any component and the system does not merely degrade. It transforms into a different system, with different capabilities, different constraints, and different social consequences.
The concept of the sociotechnical system is not merely taxonomic — a way of listing the components of a technological enterprise. It is analytical. It explains why technological systems behave the way they do: why they resist change, why they produce unintended consequences, why they evolve in directions that their original designers did not anticipate and might not have chosen. The explanation lies in the interdependence of the components. Each component of a sociotechnical system is adapted to every other component. The generator is designed for a specific distribution network. The distribution network is designed for a specific customer base. The customer base generates revenue that supports a specific investment pattern. The investment pattern sustains a specific workforce. The workforce possesses skills calibrated to the existing technical configuration. Change any component — introduce a new generation technology, alter the rate structure, retrain the workforce for a different technical standard — and every other component must adjust. The cost of adjustment is the source of the system's inertia.
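The source of that inertia can be made concrete with a toy calculation. The components, couplings, and costs below are invented, but the structure follows the paragraph: changing one component forces adjustments in everything coupled to it.

```python
# Toy model: the cost of changing one component of a sociotechnical system
# includes forced adjustments in every component coupled to it.
# Components and cost figures are invented for illustration.

couplings = {  # who must adjust when a component changes
    "generator": ["distribution", "workforce"],
    "distribution": ["customers", "rate_structure"],
    "rate_structure": ["revenue_model"],
    "revenue_model": ["investment"],
    "customers": [],
    "workforce": [],
    "investment": [],
}
own_cost = {c: 1.0 for c in couplings}  # direct cost of changing any one component

def total_cost(component: str, seen=None) -> float:
    """Direct cost plus the ripple of adjustments through coupled components."""
    seen = seen or set()
    if component in seen:
        return 0.0
    seen.add(component)
    return own_cost[component] + sum(total_cost(c, seen) for c in couplings[component])

# Changing the generator in isolation would cost 1.0;
# changing it inside the system costs the entire downstream chain.
print(total_cost("generator"))  # 7.0: the generator plus six forced adjustments
```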
The AI sociotechnical system is forming now, in real time, and its components are becoming visible to anyone willing to look beyond the model. The technical core consists of the machine learning models — the large language models, the vision models, the multimodal systems — that constitute the capability frontier. This is the component that receives the most attention, the component that generates the headlines, the component that produces the sense of vertigo that Segal describes in *The Orange Pill* when he watches Claude Code produce in hours what would have taken his team months to build. But the technical core is, in Hughesian terms, the filament: necessary but insufficient, impressive but inert without the system that surrounds it.
The infrastructure layer consists of the physical and digital substrate on which the technical core operates: the data centers that house the GPU clusters required for training, the cloud computing platforms that serve inference requests, the networking equipment that connects data centers to users, the cooling systems that prevent the hardware from melting, the electrical generation capacity that powers the entire apparatus. This layer is crystallizing rapidly. The capital expenditures of the major cloud providers on AI infrastructure are measured in tens of billions of dollars per quarter — investments that, once made, create physical assets with decades-long operational lifetimes and depreciation schedules that enforce continuity. A data center built in 2025 will still be operating in 2040, and its technical specifications — its power capacity, its cooling infrastructure, its networking topology — will constrain what can be run on it for its entire operational life.
The institutional layer consists of the organizations and practices through which AI is deployed: the corporate AI teams that integrate models into business processes, the consulting firms that advise on AI strategy, the training programs that teach workers to use AI tools, the IT departments that manage AI infrastructure, the procurement processes that select AI vendors, the internal governance frameworks that determine what AI may and may not be used for within an organization. This layer is where the Berkeley researchers whose work Segal discusses in *The Orange Pill* conducted their eight-month study — inside a functioning organization, watching the institutional practices form in real time around the new technology. What Xingqi Maggie Ye and Aruna Ranganathan documented was the formation of institutional components of a sociotechnical system: the expansion of job scope, the colonization of rest periods by AI-assisted work, the normalization of multitasking as the default mode of cognitive operation. These are not individual behavioral choices. They are institutional patterns crystallizing into components of the system.
The economic layer consists of the financial structures that sustain the system: the venture capital that funds AI startups, the public market valuations that determine the cost of capital for established AI companies, the pricing models that determine who can afford to use AI tools, the business models that determine what AI companies optimize for, the labor market dynamics that determine how the economic gains of AI are distributed between capital and labor. Segal's analysis of the SaaS Death Cross in *The Orange Pill* is, in Hughesian terms, an analysis of economic-layer disruption: the repricing of an entire sector of the economy as the market's model of where value resides shifts from the technical core (code) to the institutional layer (ecosystem, data, customer relationships).
The regulatory layer consists of the legal and policy frameworks that govern the system: the EU AI Act, the American executive orders on AI safety, the emerging national AI strategies in countries from Singapore to Brazil, the intellectual property frameworks that determine who owns AI-generated content, the liability frameworks that determine who is responsible when AI systems cause harm, the data protection regulations that determine what training data may be used and how. This layer co-evolves with the others. The EU AI Act was designed to address the AI system as it existed in 2022–2023. The system has already evolved beyond the Act's assumptions, creating a regulatory lag that will persist and deepen as the technology continues to accelerate.
The cultural layer consists of the narratives, assumptions, and expectations that shape how people understand and relate to AI: the utopian narrative of AI as the great equalizer, the dystopian narrative of AI as existential threat, the productivity narrative of AI as the ultimate efficiency tool, the creative narrative of AI as a new form of collaboration, the anxiety narrative of AI as job destroyer. These narratives are not decorative. They are functional components of the sociotechnical system. They determine what people demand from AI, what they will accept from AI, what risks they will tolerate, what regulations they will support, and what political leaders they will elect. The cultural layer shapes the regulatory layer, which shapes the economic layer, which shapes the institutional layer, which shapes the infrastructure layer, which constrains the technical core.
The interdependence of these layers is what makes the AI sociotechnical system behave as a system rather than as a collection of independent components. When the technical core advances — when a new model demonstrates capabilities that exceed what previous models could achieve — the advancement propagates through every other layer. New capabilities create new deployment possibilities, which require new institutional practices, which generate new economic dynamics, which demand new regulatory responses, which stimulate new cultural narratives, which reshape public expectations, which create new demands on the technical core. The system is a network of feedback loops, and the behavior of the whole cannot be predicted from the behavior of any individual component.
Scholars who have taken up Hughes's framework and applied it to contemporary AI have recognized this systemic quality with increasing analytical precision. A 2024 paper in *Minds and Machines* argues that viewing AI through a sociotechnical system lens is essential "not just to better understand how AI functions, but also to see how AI might transform existing sociotechnical systems, like the energy system or the transportation system, but also ultimately society and crucial institutions like the law and democracy." The paper also identifies something that distinguishes the AI sociotechnical system from its historical predecessors: traditional sociotechnical systems are self-learning and adaptive through their human components, but AI systems "also have technical elements that allow learning from the environment." The technical core of the AI system is not static infrastructure, like electrical wires or telephone cables. It learns. It adapts. It evolves in response to data and usage patterns in ways that the human components of the system may not fully understand or anticipate. This adaptive technical core interacting with adaptive institutional, economic, regulatory, and cultural components creates a system of unprecedented complexity — a system whose emergent behavior is genuinely difficult to predict even for those who understand its individual components well.
A 2025 paper in *AI & Society* extends the analysis further, arguing for "shifting the focus in AI ethics from artifacts and their intrinsic properties — what I refer to as the robotic conception of AI — to the relationships among elements embedded in AI-involving sociotechnological systems." The paper directly cites Hughes's 1994 formulation, in which "technical" refers to artifacts (hardware and software) and "social" refers to "institutions, values, interest groups, social classes, and political and economic forces." The argument is that AI ethics has been systematically misdirected by treating AI as an artifact — a thing with intrinsic properties that can be evaluated in isolation — rather than as a system. The safety of an AI system cannot be determined by examining the model alone, any more than the safety of an electrical system can be determined by examining a single wire. Safety is a property of the system, not the component.
This reframing has immediate practical consequences. It means that the question "Is AI safe?" cannot be answered by technical evaluation of the model. It can only be answered by examining the sociotechnical system in which the model is embedded: the institutional practices that govern its deployment, the economic incentives that shape what it is used for, the regulatory frameworks that constrain its application, the cultural assumptions that determine what risks are considered acceptable, and the feedback loops that connect all of these components to each other and to the evolving capabilities of the technical core.
It also means that the question "What should we do about AI?" cannot be answered by intervening at any single layer of the system. Technical solutions — better alignment, more robust safety testing, improved interpretability — are necessary but insufficient, because the behavior of the system is determined by the interaction of all its components, not by the properties of any individual one. Regulatory solutions — new laws, new oversight bodies, new reporting requirements — are necessary but insufficient, because the regulatory layer is only one component of a system that evolves faster than the regulatory process can adapt. Institutional solutions — new organizational practices, new training programs, new governance frameworks — are necessary but insufficient, because institutional change within any single organization cannot reshape the dynamics of the system as a whole.
What is required is systemic intervention: coordinated action across multiple layers of the sociotechnical system, informed by an understanding of how the layers interact and how interventions at one layer propagate to others. Hughes's historical research demonstrates that effective systemic intervention has occurred in the past — the construction of the regulatory framework for American electricity, the establishment of universal telephone service as a policy goal, the creation of the interstate highway system — but that it requires a degree of institutional coordination, political will, and long-term vision that is not currently in evidence in the AI domain.
The AI sociotechnical system is still forming. Its components are still being assembled. Its interdependencies are still being negotiated. The system is still plastic — still responsive to deliberate shaping by the actors within it. But the plasticity is diminishing with every quarter, as investments are made, institutions are established, practices are normalized, and cultural assumptions harden. The time to understand the system — to see it as a system, not as a technology — is now.
---
Hughes borrowed the term "reverse salient" from military history, where it denotes a section of an advancing front that has fallen behind the rest of the line, creating a bulge that is vulnerable to counterattack and that constrains the overall advance. In the context of large technical systems, a reverse salient is a component that lags behind the rest of the system, creating a bottleneck that limits the system's overall performance and that attracts concentrated innovative effort as system builders attempt to bring the lagging component into alignment with the rest.
The concept is diagnostic rather than prescriptive. It identifies where the system's development is constrained. It does not specify how the constraint should be resolved. But the identification itself is powerful, because it directs attention away from the components that are advancing most rapidly — which tend to attract the most public attention and the most investment — and toward the components whose lag is actually determining the system's trajectory. Hughes demonstrated, across case after case, that the direction of a large technical system's evolution is determined not by its most advanced component but by its most lagging one. The reverse salient is where the action is, whether or not the actors within the system recognize it.
In Edison's electrical system, the early reverse salient was the incandescent lamp itself. The generators, the distribution network, the metering system, and the business model were all functional. But the lamp burned out too quickly to make the system economically viable. Edison's famous search for a durable filament material — testing thousands of substances, from platinum to carbonized cardboard to bamboo fiber from a specific Japanese variety — was, in Hughes's analysis, an effort to resolve the reverse salient that was constraining the system's performance. When the bamboo filament extended lamp life sufficiently, the reverse salient shifted: the lamp was no longer the bottleneck, and the system's development was constrained by other components — the limited range of DC distribution, the absence of an effective motor for industrial use, the regulatory uncertainty surrounding the installation of electrical equipment in buildings.
The concept illuminates a dynamic that is otherwise invisible: the migration of bottlenecks through a system as each successive reverse salient is resolved. The resolution of one bottleneck does not eliminate constraint. It moves the constraint to the next-weakest component. The system advances not smoothly but in a series of lurches, each lurch following the resolution of the reverse salient that was holding the system back, each lurch revealing the next reverse salient that will constrain the system until it, too, is resolved.
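Stated almost algorithmically: if system performance is capped by its weakest component, then resolving the current minimum simply hands the constraint to the next-weakest component. A minimal sketch, with invented components and numbers:

```python
# Sketch of reverse-salient dynamics: system performance is capped by the weakest
# component, effort concentrates there, and resolving it migrates the constraint.
# Component names and numbers are invented for illustration.

levels = {"lamp": 2, "generator": 7, "distribution": 5, "regulation": 6}

for t in range(3):
    salient = min(levels, key=levels.get)    # where the whole system is constrained
    print(f"round {t}: performance={levels[salient]}, reverse salient={salient}")
    levels[salient] += 5                     # concentrated innovative effort

# round 0: performance=2, reverse salient=lamp
# round 1: performance=5, reverse salient=distribution
# round 2: performance=6, reverse salient=regulation
```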
The AI sociotechnical system, viewed through this lens, presents a striking picture. The component that has advanced most rapidly and that receives the most attention — the technical core, the models themselves — is not the reverse salient. The models have advanced far beyond what most users, organizations, and institutions can absorb. A frontier language model in 2026 can generate code, analyze documents, produce creative writing, conduct research, and engage in multi-step reasoning at a level that exceeds the capacity of most human specialists working without AI assistance. The technical capability frontier is not the bottleneck.
The reverse salients are elsewhere. They are in the human and institutional components of the system that lag behind the technical core. And their identification reveals where the system's actual development is being determined — and where intervention would have the most leverage.
The first reverse salient is organizational absorption. Large organizations — the enterprises, government agencies, and educational institutions that employ most of the workforce and shape most of the economy — are structured around workflows, hierarchies, and job descriptions that were designed for a world in which the tasks AI now performs were performed by humans. The organizational chart of a typical enterprise in 2026 still reflects the assumption that software development requires a team of specialized engineers, that legal analysis requires a team of specialized lawyers, that financial modeling requires a team of specialized analysts. AI has made it possible for a single person with appropriate judgment to perform work that previously required teams. But the organizational structures have not adapted.
The lag is not a failure of awareness. Most large organizations are acutely aware that AI is changing the nature of work. The lag is structural: organizational restructuring is slow, politically fraught, and culturally difficult. It requires renegotiating job descriptions, redefining performance metrics, restructuring reporting relationships, retraining managers, and confronting the uncomfortable reality that some roles have been rendered redundant while others have become dramatically more valuable. Each of these changes meets resistance — not irrational resistance, but the rational resistance of people whose interests, identities, and career trajectories are bound up in the existing organizational configuration.
The Berkeley researchers documented this reverse salient in granular detail. They observed that AI tools expanded workers' job scope — designers started writing code, individual contributors absorbed tasks previously delegated to others — but that the organizational structures did not adapt to reflect the expanded scope. Workers were doing more and different work, but their job titles, their compensation, their performance reviews, and their career paths remained calibrated to the old organizational model. The result was a mismatch between what workers were actually doing and what the organization recognized them as doing — a mismatch that produced frustration, burnout, and the specific form of intensification that the researchers documented.
The second reverse salient is workforce reskilling. The AI sociotechnical system requires a workforce with skills fundamentally different from those that the existing educational and training infrastructure produces. The system needs people who can direct AI tools — who can formulate problems, evaluate outputs, exercise judgment about what should be built and for whom. The existing workforce was trained to execute: to write code, to draft documents, to perform analyses, to complete tasks that AI now handles. The gap between the skills the system requires and the skills the workforce possesses is the reverse salient that *The Orange Pill* identifies as the "retraining gap" — the most dangerous failure of the current moment.
The severity of this reverse salient is compounded by the speed mismatch between the system's technical advancement and the human capacity for adaptation. Models improve on timelines measured in months. Human skill development operates on timelines measured in years. Educational curricula are redesigned on timelines measured in decades. The reverse salient is not closing. It is widening, because the technical core is advancing faster than the human components can adapt.
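The widening can be put in stylized numbers. The growth rates below are assumptions chosen to show the shape of the gap, not measurements: a capability curve that compounds against a skill curve that accrues in fixed increments.

```python
# Stylized numbers only: compounding model capability vs. linear human reskilling.
# Both rates are assumptions made to illustrate the shape of the gap.

model_capability = 1.0
workforce_skill = 1.0

for year in range(1, 6):
    model_capability *= 2.0   # assumed: capability compounds on a roughly yearly cadence
    workforce_skill += 0.2    # assumed: skills grow by a fixed increment per year
    print(f"year {year}: gap = {model_capability - workforce_skill:.1f}")

# The gap grows every year even though both curves rise: the reverse salient
# is not closing, it is widening.
```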
Hughes's historical research suggests that reverse salients of this kind — where the lagging component is human rather than technical — are the most difficult to resolve, because human and institutional adaptation does not respond to the same incentives that drive technical innovation. An engineering problem can be attacked with concentrated resources: more researchers, more compute, more capital. A workforce-adaptation problem cannot be solved by throwing resources at it, because adaptation requires time, because human learning is embodied and experiential in ways that cannot be accelerated beyond certain biological limits, and because the institutional structures that would need to deliver reskilling at scale — educational institutions, corporate training programs, government workforce development agencies — are themselves components of a sociotechnical system with its own momentum and its own resistance to rapid change.
The third reverse salient is regulatory capacity. The regulatory frameworks being designed for AI are, almost by definition, calibrated to the AI system as it existed at the time of their design, not as it will exist at the time of their implementation. The EU AI Act, the most comprehensive regulatory effort to date, was negotiated over a period during which the AI system's capabilities underwent multiple step-function advances. By the time the Act's provisions are fully implemented, the system they were designed to govern will have evolved significantly beyond the assumptions embedded in the legislation. This is not a criticism of the regulators. It is a structural feature of the relationship between regulatory processes and fast-moving technological systems. Legislative processes operate on timelines of years. They require consultation, drafting, negotiation, revision, ratification, and implementation — each stage consuming time during which the technology continues to evolve. The result is a permanent lag between the system's capabilities and the regulatory framework's assumptions.
The fourth reverse salient — and arguably the most fundamental — is cultural comprehension. The cultural layer of the AI sociotechnical system, the narratives and assumptions through which people understand what AI is and what it means, lags behind the reality of what the system is becoming. Public discourse about AI remains organized around narratives forged in the early stages of the technology's development: AI as job destroyer, AI as creative threat, AI as existential risk, AI as productivity miracle. Each of these narratives captures a fragment of the reality. None captures the systemic complexity that Hughes's framework reveals. The gap between the public's mental model of AI — shaped by science fiction, by headline-driven journalism, by the promotional narratives of AI companies, by the counter-narratives of AI critics — and the actual sociotechnical system that is forming around the technology is a reverse salient that constrains the quality of every decision made about AI, from individual career choices to national policy.
Hughes's framework predicts that the system's evolution will be determined by the resolution of these reverse salients — not by the next breakthrough in model capabilities. The model is not the bottleneck. The bottleneck is everything else: the organizations that cannot restructure fast enough, the workers who cannot reskill in time, the regulators who cannot keep pace, the public that cannot comprehend the system forming around it.
This prediction has a corollary that is both hopeful and demanding. If the reverse salients are human and institutional rather than technical, then the system's trajectory is determined by human choices, not by technical inevitabilities. The direction of the AI system is not written in the mathematics of transformer architectures or the economics of GPU manufacturing. It is written in the organizational decisions, the educational investments, the regulatory designs, and the cultural narratives that human beings are producing right now.
The resolution of each reverse salient is a choice point — a moment when the system's trajectory can be influenced by deliberate human action. The organizational reverse salient can be addressed by leaders willing to restructure their organizations around the capabilities that AI makes possible rather than the workflows that AI has rendered obsolete. The workforce reverse salient can be addressed by educational institutions willing to transform their curricula, by employers willing to invest in reskilling, by governments willing to fund the infrastructure of human adaptation. The regulatory reverse salient can be addressed by designing adaptive frameworks that evolve with the technology rather than attempting to freeze it in a particular configuration. The cultural reverse salient can be addressed by thinkers, writers, and communicators willing to develop and disseminate mental models that capture the systemic complexity of what is actually happening.
Each of these interventions is difficult. None is sufficient on its own. Together, they constitute the kind of coordinated systemic intervention that Hughes's research shows is required to direct a large sociotechnical system's development during its formative period.
The reverse salients are where the leverage is. Not in the next model release, which will arrive on schedule regardless of what anyone does about the institutional, workforce, regulatory, and cultural components of the system. The model will advance. The question is whether everything else will advance with it — whether the human components of the system will keep pace with the technical core, or whether the reverse salients will widen into fractures that constrain the system's capacity to serve broad human interests rather than narrow ones.
Hughes's research provides no guarantee that the reverse salients will be resolved in time. It provides only the diagnostic framework that makes resolution possible: the identification of where the constraint actually lies, which is the necessary first step toward addressing it. The constraint does not lie in the model. It lies in us — in our organizations, our educational systems, our regulatory institutions, our cultural comprehension. That is where the work must be done. That is where the system's future will be determined.
Edison kept notebooks. Not the kind that accumulate on a desk and are consulted occasionally — the kind that constitute a continuous externalized record of a mind moving between problems at radically different scales within a single working day. Hughes spent years inside those notebooks at the Edison National Historic Site in West Orange, New Jersey, and what he found there was not the record of an inventor but the record of a system builder: a mind that could not think about a filament without simultaneously thinking about the distribution network that would carry current to it, the meter that would measure consumption, the pricing structure that would make the service economically viable, and the political strategy required to secure permission to dig up the streets of lower Manhattan.
The notebooks reveal a characteristic that Hughes identified across every system builder he studied: the capacity to hold the entire system in view. Not sequentially — not first the filament, then the generator, then the distribution network, then the business model — but simultaneously. Edison's entries from 1878 to 1882 move fluidly between the chemistry of carbonization and the question of how many feet of copper main would be required to serve a square mile of the financial district at an acceptable voltage drop. The molecular and the municipal coexist on the same page, connected by an implicit understanding that neither has meaning without the other.
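The copper-main arithmetic that recurs in those entries can be made concrete with a minimal sketch. The physics is just Ohm's law applied to a two-wire DC main; the load, distance, and conductor sizes below are invented for illustration, not taken from the notebooks.

```python
# Voltage drop across a two-wire DC main: V = I * rho * (2L) / A.
# All load figures and dimensions are illustrative, not Edison's.

RHO_COPPER = 1.68e-8  # resistivity of copper, ohm-meters (modern handbook value)

def voltage_drop(current_a: float, one_way_m: float, cross_section_mm2: float) -> float:
    """Drop over the out-and-back run of a two-wire main."""
    area_m2 = cross_section_mm2 * 1e-6
    resistance_ohm = RHO_COPPER * (2 * one_way_m) / area_m2
    return current_a * resistance_ohm

# How thick must the main be to keep the drop tolerable (say, about 5 V
# on a 110 V system) at a given distance and load?
for mm2 in (100, 300, 1000):
    drop = voltage_drop(current_a=200, one_way_m=400, cross_section_mm2=mm2)
    print(f"{mm2:>5} mm^2 main, 400 m, 200 A -> {drop:5.1f} V drop")
```

The copper requirement grows with distance and load in exactly the way the sketch suggests, which is why a low-voltage DC system could profitably serve only a radius on the order of a mile around a station, and why the question of feet of copper main was inseparable from the question of where to put Pearl Street Station.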
This capacity is not merely intellectual breadth, though breadth is part of it. It is a specific cognitive orientation: the habit of seeing relationships between components rather than the components themselves. Hughes called it "system sight" — though the term appears more in commentaries on his work than in his own prose, which tended toward the empirical rather than the aphoristic. The system builder sees not a lamp but a lamp-in-a-system. Not a generator but a generator-whose-output-must-match-the-load-profile-of-a-customer-base-that-does-not-yet-exist. The artifact is always already embedded in the network of relationships that give it function.
Samuel Insull exemplified a different dimension of system sight. Where Edison saw the technical system — the physical apparatus of generation, distribution, and consumption — Insull saw the institutional system. Taking control of the Chicago Edison Company in 1892, Insull spent three decades assembling not merely an electrical utility but an institutional architecture of extraordinary sophistication: a rate structure that incentivized off-peak consumption (thereby improving the load factor, the ratio of average demand to peak demand, which determined the economics of the entire enterprise), a regulatory strategy that positioned the utility as a natural monopoly deserving of exclusive franchise in exchange for rate regulation, a financial structure that leveraged the predictable revenue stream of a regulated monopoly to fund continuous infrastructure expansion, and a political network that protected the enterprise from competitors and reformers alike.
Insull's vision was not technical. It was sociotechnical. He understood that the electrical system's performance was determined not by the efficiency of its generators but by the configuration of the entire apparatus — technical, institutional, financial, regulatory, political — and that optimizing any single component without optimizing the whole was futile. A more efficient generator connected to a distribution network that could not handle its output, serving a customer base whose consumption pattern produced an unfavorable load factor, regulated by a commission that set rates below the cost of service, was not a better system. It was a more expensively dysfunctional one.
Hughes's comparative history revealed that system sight took different forms in different cultural contexts. Emil Rathenau, building AEG in Berlin, operated within a German institutional tradition that valued systematic engineering, municipal planning, and vertical integration. His system sight was shaped by this tradition: he saw the electrical system as an integrated industrial enterprise that should control everything from the manufacture of generators to the installation of wiring in individual buildings, coordinated through a central engineering office that imposed systematic standards across the entire operation. The result was a system that was more technically coherent than its American counterpart — more standardized, more efficiently engineered, more amenable to central planning — but less commercially dynamic, less responsive to local market conditions, less tolerant of the improvisational entrepreneurship that characterized American electrification.
The contrast illuminates something about system sight that pure technical analysis obscures: the system builder's vision is shaped by the cultural context in which the builder operates. Edison's vision was shaped by American entrepreneurial culture — competitive, improvisational, market-driven. Rathenau's was shaped by German engineering culture — systematic, hierarchical, planning-driven. Neither vision was inherently superior. Each produced a system adapted to its context. And each, once implemented, constrained the subsequent evolution of the system in ways that reflected the original builder's culturally shaped vision long after the builder himself had departed the scene.
The AI moment has its own system builders, and their visions are diverging along lines that Hughes's comparative framework would predict. The major American AI laboratories are building systems that reflect the cultural context of Silicon Valley: venture-capital funded, growth-oriented, speed-privileging, disruption-celebrating. The system they are assembling — open APIs, competitive pricing, rapid iteration, minimal regulatory engagement — is the Edison model updated for the digital age: entrepreneurial, market-driven, improvisational, optimized for speed of deployment rather than systematic coherence.
The emerging Chinese AI system is being built by system builders operating within a different institutional tradition: state-directed, strategically planned, integrated into national industrial policy, optimized for technological sovereignty rather than commercial competition. The European AI system, still largely in the specification stage, reflects a regulatory tradition that prioritizes individual rights, precautionary principles, and institutional accountability. Each system will deploy functionally similar technical capabilities — large language models, vision systems, multimodal architectures — within radically different sociotechnical configurations, and the social consequences will diverge as completely as the consequences of American, German, and British electrification.
But there is a dimension of system sight that the current generation of AI system builders may be neglecting, and it is the dimension that Hughes's historical research identifies as most consequential. The most effective system builders in Hughes's account were not the ones who built the most impressive technical systems. They were the ones who built systems that aligned technical capability with institutional structure, economic viability, regulatory feasibility, and cultural acceptance simultaneously. Edison's genius was not the filament. It was Pearl Street Station — the integrated demonstration that electric lighting was technically feasible, economically viable, institutionally manageable, and culturally desirable, all at once.
The current AI discourse is overwhelmingly focused on the technical core. Model capabilities, benchmark scores, parameter counts, training data volumes — these are the metrics that dominate the conversation. The institutional, economic, regulatory, and cultural components of the system receive comparatively little attention, and what attention they receive is often reactive rather than proactive: regulators responding to capabilities that have already been deployed, organizations restructuring in response to disruptions that have already occurred, cultural narratives forming in response to experiences that have already been had.
Hughes's framework suggests that this technical focus is not merely incomplete but actively dangerous, because it produces a system whose technical capabilities outstrip the institutional capacity to deploy them wisely, the economic structures to distribute their benefits fairly, the regulatory frameworks to constrain their harms effectively, and the cultural comprehension to integrate them meaningfully into human life. The result is precisely the pattern of reverse salients described in the previous chapter: a system whose most advanced component — the technical core — races ahead while the other components lag behind, creating tensions that manifest as the organizational dysfunction, workforce displacement, regulatory confusion, and cultural anxiety that characterize the current moment.
The system builder's vision, in Hughes's account, is the antidote to this pattern. It is the capacity to see and build across all the components of the system simultaneously — to design not just the model but the organizational practices that will govern its deployment, the economic structures that will distribute its benefits, the regulatory frameworks that will constrain its harms, the educational programs that will prepare the workforce, and the cultural narratives that will help people understand what the technology means for their lives.
This vision is rare. It has always been rare. Hughes documented only a handful of system builders across the entire history of electrification who possessed it in full measure. Most actors in the system saw one or two components clearly and were blind to the rest. The inventors saw the technology. The financiers saw the economics. The regulators saw the legal framework. The workers saw the labor implications. The rare system builder saw all of these simultaneously and understood their interdependence.
The rarity of system sight is not a failure of individual intelligence. It is a structural consequence of specialization. The modern economy rewards deep expertise in narrow domains. The institutions that produce AI leaders — computer science departments, technology companies, venture capital firms — are optimized to develop technical and commercial competence. They are not optimized to develop the kind of broad, integrative vision that Hughes identified as the system builder's defining characteristic. A person trained to optimize model performance is not thereby trained to understand how model deployment will interact with organizational structures, labor markets, regulatory frameworks, and cultural assumptions.
What follows from this analysis is not a call for better-rounded AI executives, though that would help. It is a recognition that system building in the AI age requires collaboration across domains that do not naturally communicate. The technical builders must work with the institutional designers. The institutional designers must work with the regulatory architects. The regulatory architects must work with the educators. The educators must work with the cultural interpreters. And all of them must work together within a shared understanding of the system as a whole — the interdependence of its components, the dynamics of its evolution, and the consequences of its momentum.
The history of electrification provides both a model and a warning. The model is the system builder who sees the whole: who designs not a filament or a generator or a rate structure but a sociotechnical system that integrates all of these into a functioning whole. The warning is what happens when system sight is absent: when the technical capability outruns the institutional capacity to absorb it, when the economic benefits concentrate rather than distribute, when the regulatory framework lags behind the reality it governs, and when the cultural narrative fails to capture the complexity of what is actually happening.
Hughes documented both outcomes. The system builders who possessed system sight — Edison at Pearl Street, Insull in Chicago, Rathenau in Berlin — produced systems that, whatever their specific flaws, functioned as integrated wholes and delivered broadly distributed benefits. The system builders who lacked it — who built impressive technical components without attending to the institutional, economic, and cultural components required to make them functional — produced systems that were technically brilliant and socially dysfunctional.
The AI moment will produce both kinds of system. The question is which kind will achieve momentum first, because the system that gains momentum first is the system that will shape the subsequent trajectory. Hughes's research offers no guarantee that the better system will prevail. Edison's technically inferior DC system gained momentum first, and displacing it required a decade-long industrial war. The stakes of the current moment are not measured in the relative merits of competing technical standards. They are measured in the sociotechnical configurations that will crystallize around whatever technical capabilities achieve early dominance — configurations that will determine, for decades or longer, who benefits from AI, who bears its costs, and what kind of society it produces.
The system builder who sees the whole — who builds not just the model but the habitat — is the figure the moment requires. Whether that figure will emerge from the current landscape of specialized expertise and siloed institutions is the open question that Hughes's historical research frames but cannot answer.
---
The photograph is famous in the history of technology, though it circulates more in academic seminars than in popular culture: Thomas Edison, sometime in the early 1890s, standing in his West Orange laboratory surrounded by the apparatus of invention — chemicals, wires, prototype devices in various states of completion, the physical evidence of a mind that worked by trying everything and discarding what failed. Within a decade of when that photograph was taken, the American electrical industry would be controlled not by Edison but by Samuel Insull, a man who had never invented anything, who could not have designed a generator or calculated a voltage drop, but who understood something that Edison did not: that the creative phase of a large technical system gives way, inevitably and structurally, to the managerial phase, and that the skills required to manage a mature system are fundamentally different from the skills required to invent one.
Hughes traced this transition with the care of a historian who understood that the pattern was not incidental but structural — a recurring feature of every large technical system he studied. The inventive phase is characterized by creativity, improvisation, tolerance for failure, and the dominance of individuals whose personal vision drives the system's development. Edison could try thousands of filament materials because the system was small enough, young enough, and plastic enough that individual experimentation was the appropriate mode of development. The financial stakes were manageable. The institutional structures were minimal. The regulatory framework was nonexistent. The system was, in the fullest sense, a startup.
The managerial phase is characterized by systematization, standardization, optimization, and the dominance of organizational routines over individual vision. Insull did not experiment. He organized. He took the scattered, locally adapted, individually managed electrical utilities that Edison and his imitators had created and consolidated them into a systematically managed, centrally controlled, financially optimized enterprise that operated across an entire metropolitan region. He standardized equipment specifications so that components from different manufacturers could be integrated into a single system. He developed rate structures based on rigorous analysis of load curves — the patterns of electrical consumption across hours, days, and seasons — that maximized the utilization of expensive generating equipment. He designed the regulatory relationships that would govern the American electrical industry for a century: the natural-monopoly franchise, the rate-of-return regulation, the obligation to serve all customers within a defined territory.
The transition from Edison to Insull was not a personnel change. It was a phase change in the system itself. The skills that made Edison effective in the inventive phase — his restless experimentation, his tolerance for disorder, his instinct for the technically elegant solution — were not merely unnecessary in the managerial phase. They were counterproductive. A mature electrical system serving millions of customers across a metropolitan region could not tolerate the kind of creative disruption that had characterized Edison's laboratory. It required reliability, predictability, and the subordination of individual creativity to organizational discipline.
Hughes was careful to note that this transition was not a decline. Popular narratives tend to romanticize the inventive phase — the era of the lone genius, the breakthrough, the garage startup — and to treat the managerial phase as a falling-off, a bureaucratization, a loss of creative energy. Hughes argued that this framing is sentimental rather than analytical. The managerial phase is not a corruption of the inventive phase. It is its necessary successor. A system that remained permanently in the inventive phase — permanently experimental, permanently improvised, permanently dependent on the vision of a single individual — would never achieve the scale, reliability, and institutional embeddedness required to deliver its benefits broadly. Edison could light a few blocks of lower Manhattan. Insull lit Chicago. The difference was not merely quantitative. It was systemic.
The AI industry is entering this transition now, and the signs are unmistakable to anyone familiar with Hughes's framework. The inventive phase of AI — the period from roughly 2017, when the transformer architecture was published, through 2025, when the capabilities of frontier models crossed the threshold that *The Orange Pill* describes — was characterized by exactly the features that Hughes associated with early-phase system development. Small teams with outsized impact. Rapid iteration. Tolerance for failure. The dominance of individual vision — Dario Amodei's constitutional AI, Sam Altman's scaling bets, Demis Hassabis's neuroscience-inspired architectures. The culture was, and in some quarters still is, the culture of the laboratory: creative, improvisational, oriented toward breakthrough rather than reliability.
The managerial phase is arriving not because anyone has decided it should, but because the system's growth demands it. When an AI model serves hundreds of millions of users, reliability becomes non-negotiable. When an AI tool is integrated into enterprise workflows that process billions of dollars in transactions, predictability matters more than novelty. When an AI system handles sensitive personal data across dozens of jurisdictions, compliance cannot be an afterthought. When an AI company's valuation depends on quarterly revenue growth that satisfies institutional investors, the tolerance for creative experimentation that does not immediately contribute to revenue declines sharply.
The symptoms are visible across the industry. The AI labs that began as small research organizations are growing into large corporations with the institutional apparatus that large corporations require: human resources departments, compliance teams, government relations offices, enterprise sales forces, customer success organizations. The founders who built the first models are being joined — and in some cases displaced — by executives recruited from established technology companies, executives whose expertise is not in building new systems but in managing existing ones at scale. The organizational charts are becoming more hierarchical. The decision-making processes are becoming more structured. The culture is shifting, perceptibly and in some cases painfully, from the laboratory to the corporation.
This transition carries specific consequences for the values that the system optimizes for. Hughes demonstrated that the inventive phase and the managerial phase optimize for different things, and that the transition between them is a value transition as much as an organizational one. Edison optimized for technical elegance and functional novelty. Insull optimized for load factor and return on capital. Neither set of values was inherently superior. But they were different, and the difference determined what kind of system was built.
The inventive phase of AI has optimized for capability: the frontier of what models can do, measured by benchmarks, by surprise, by the gap between what was possible last year and what is possible now. The managerial phase will optimize for deployment: the reliable, predictable, scalable integration of AI into institutional workflows, measured by uptime, by compliance, by customer satisfaction, by revenue. The shift from capability to deployment is not a betrayal of the inventive phase's values. It is the structural consequence of a system that has grown beyond the scale at which inventive-phase values can govern.
But the transition is not neutral. It favors certain actors and certain interests over others. The inventors — the researchers, the creative builders, the people who thrive in environments of high uncertainty and rapid change — find their influence diminishing as the organizational structures that sustained their work are replaced by structures optimized for managerial efficiency. The managers — the executives, the operators, the people who thrive in environments that reward reliability and scale — find their influence increasing. The power shifts. The culture shifts. And the system that emerges from the transition reflects the values of the people who hold power within it.
Hughes documented this power shift with particular attention to what was lost. The inventive phase produced technical diversity — multiple competing approaches, multiple architectures, multiple business models, multiple visions of what the technology could become. This diversity was inefficient by managerial standards. Multiple competing standards meant incompatible equipment, fragmented markets, duplicated effort. But the diversity also maintained optionality — the possibility that the system could evolve in multiple directions, that paths not yet taken remained open.
The managerial phase eliminates this diversity in the name of efficiency. Standardization replaces experimentation. Dominant designs emerge and are reinforced by economies of scale. Competing approaches are abandoned as resources concentrate around the configuration that has achieved the most momentum. The system becomes more efficient and more brittle simultaneously — more capable of delivering its current configuration reliably and less capable of adapting to circumstances that the current configuration was not designed to handle.
The AI industry is already exhibiting this narrowing. The transformer architecture has achieved a dominance that resembles, in Hughes's terms, a dominant design: the configuration around which the industry standardizes, and which subsequent development elaborates rather than challenges. The organizational model of the large, well-funded AI laboratory — training enormous models on enormous compute clusters funded by enormous capital investments — has become the assumed mode of frontier development, squeezing out alternative approaches that might produce different capabilities but cannot compete for the resources required to operate at the established scale.
The Hughesian analysis does not condemn this narrowing. It describes it as a structural feature of system maturation and identifies its consequences. The consequence that matters most is the closing of options. Each path not taken during the managerial transition becomes more expensive to pursue as the system gains momentum around the dominant configuration. Alternative architectures that might have produced different capabilities — different strengths, different weaknesses, different social consequences — become increasingly difficult to explore as the resources, expertise, and institutional structures of the industry organize around the established approach.
This is not a counsel of despair. It is a counsel of attention. The transition from inventor to manager is happening now. The values the system will optimize for are being determined now. The paths that will be foreclosed are being foreclosed now. The people who understand this transition — who can see it happening in the organizational changes, the hiring patterns, the investment flows, the cultural shifts within the AI industry — are the people best positioned to influence it. Not to prevent it, because the transition is structural and cannot be prevented. But to shape it — to ensure that the managerial phase preserves enough of the inventive phase's diversity, enough of its optionality, enough of its willingness to explore paths that do not immediately contribute to quarterly revenue, to maintain the system's capacity for adaptation in a future whose contours cannot be predicted.
The window is the transition itself. Once the managerial phase stabilizes — once the organizational routines crystallize, the dominant designs lock in, the workforce skills calibrate to the established configuration, and the institutional structures achieve their own momentum — the capacity for fundamental redirection diminishes sharply. Hughes documented this in system after system. The time to shape the managerial phase is while it is forming. After it has formed, the historian can only document what was built and what was lost.
---
The electrical generator operates according to the same physical principles in New York, Berlin, and London. Faraday's law of electromagnetic induction does not vary with cultural context. The relationship between magnetic flux and induced voltage is invariant across political systems, economic traditions, and national temperaments. If technology were merely applied physics, electrification would have produced the same sociotechnical system everywhere it was deployed.
It did not.
Hughes's comparative history of electrification in the United States, Germany, and Britain revealed that the same fundamental technology, deployed in different institutional and cultural contexts, produced sociotechnical systems so different that they can barely be described in the same vocabulary. The differences were not superficial — not matters of branding or aesthetics or national preference in the color of insulation. They were structural, extending to the ownership model, the regulatory framework, the relationship between the electrical system and the state, the distribution of costs and benefits across social classes, the pace and pattern of electrification, and the cultural meaning of electricity itself.
American electrification was driven by private enterprise operating in competitive markets, at least initially. Edison, Westinghouse, and their competitors built systems wherever they saw commercial opportunity, serving customers who could pay, expanding incrementally as revenue justified investment. The result was rapid electrification of commercially attractive areas — urban centers, industrial districts, wealthy residential neighborhoods — and slow or nonexistent electrification of commercially unattractive ones. Rural America remained largely without electricity until the New Deal's Rural Electrification Administration brought the federal government into the system as a direct investor and provider, decades after urban America had been fully electrified.
The American system's regulatory framework reflected its market origins. State utility commissions, established in the early twentieth century largely at the instigation of utility executives like Insull who preferred regulated monopoly to unregulated competition, governed the industry through rate-of-return regulation: utilities were granted exclusive franchises in defined territories, in exchange for which they submitted to commission oversight of their rates, their investment plans, and their service standards. The regulatory relationship was adversarial in form but collaborative in substance. Utilities and regulators developed a shared institutional culture, a common vocabulary, and a set of mutual expectations that governed the industry's development for decades.
German electrification followed a radically different path. Municipal governments played a central role from the beginning, establishing municipally owned utilities (Stadtwerke) that served entire cities under unified management. The engineering tradition was systematic rather than improvisational: German electrical engineers, trained in the rigorous engineering education system that had made Germany a world leader in industrial chemistry and precision manufacturing, approached electrification as a planning problem rather than a market opportunity. Rathenau's AEG exemplified the approach: vertical integration from generator manufacture to building wiring, coordinated through a central engineering office that imposed systematic standards across the entire enterprise.
The result was a system that was technically more coherent than the American one — more standardized, more efficiently engineered, more rationally planned — but commercially less dynamic. German electrification proceeded according to plan rather than according to market opportunity, which meant that it was more equitable in its initial distribution (municipalities served all residents, not just commercially attractive ones) but slower to respond to unexpected demand and less tolerant of the entrepreneurial improvisation that characterized American electrification.
British electrification, caught between the American and German models, developed more slowly than either. The British system lacked both the entrepreneurial energy of the American market model and the systematic planning of the German municipal model. A patchwork of local authorities, private companies, and parliamentary committees produced a system that was neither systematically planned nor commercially dynamic — fragmented, inconsistently standardized, and chronically underinvested. Britain did not reach the level of electrification that the United States and Germany had achieved by the 1920s until after the creation of the National Grid in 1926 and, ultimately, the nationalization of the electricity industry after World War II.
Hughes's comparative analysis demonstrated that these differences were not accidental. They were expressions of what he called "regional style" — the characteristic ways in which different societies configure their sociotechnical systems, shaped by their institutional traditions, political structures, economic ideologies, engineering cultures, and social values. Regional style is not a decorative overlay on a universal technology. It is the mechanism through which universal technology becomes specific social reality. The same generator, embedded in American entrepreneurial culture, produces a competitive market with rapid but unequal electrification. Embedded in German municipal culture, it produces a planned system with equitable but methodical electrification. Embedded in British parliamentary culture, it produces a compromise that achieves neither speed nor equity with particular distinction.
The concept of regional style has immediate and concrete relevance to the AI moment. The same underlying technology — large language models, transformer architectures, deep learning systems — is being deployed in sociotechnical systems that are already diverging along lines that Hughes's framework would predict.
American AI development is being shaped by the same institutional forces that shaped American electrification: private enterprise, venture capital, competitive markets, a regulatory environment that historically privileges innovation speed over precautionary restraint. The resulting system prizes rapid capability development, aggressive deployment, and commercial metrics — user growth, revenue, market share. The pace is extraordinary. The distribution of benefits is, so far, heavily concentrated: the AI companies and their investors capture the lion's share of the economic value, while the costs — workforce displacement, attention colonization, the erosion of practices that depend on productive friction — are broadly distributed across the population.
Chinese AI development is being shaped by state industrial policy, strategic competition with the United States, and institutional traditions that integrate technology development into national planning in ways that have no American equivalent. The Chinese system prioritizes technological sovereignty — the capacity to develop and deploy AI without dependence on foreign technology, foreign data, or foreign infrastructure. The state plays a coordinating role that would be structurally impossible in the American system: directing capital, setting research priorities, managing data resources, and integrating AI into surveillance and governance systems in ways that reflect Chinese political culture and institutional structure.
European AI development is being shaped by the regulatory tradition embodied in the EU AI Act — the most comprehensive attempt to govern AI through explicit legal frameworks. The European system prioritizes individual rights, transparency, accountability, and precautionary risk assessment. The result is a system that is, by design, slower to deploy than the American system and less strategically coordinated than the Chinese system, but potentially more protective of the values that European societies have identified as worth preserving.
Indian AI is emerging within a context that combines massive demographic scale, a highly educated technical workforce, and institutional and infrastructure challenges that prevent straightforward adoption of either the American or Chinese model. The Indian system will be shaped by its specific institutional context — the relationship between the central government and the states, the coexistence of a cutting-edge technology sector and vast populations with limited digital access, the linguistic diversity that English-trained models imperfectly serve.
The developer in Lagos, invoked by Segal in *The Orange Pill* as a figure whose access to AI tools represents the democratization of capability, will build within a sociotechnical system shaped by the specific conditions of West African economies: limited infrastructure, constrained capital, immense creative energy, institutional frameworks that are still forming, and a relationship to Western technology that carries the weight of colonial history. The tools she uses may be the same tools used by a developer in San Francisco. The system in which she uses them is radically different.
Hughes's regional-style framework issues a warning against the universalist narratives that dominate the AI discourse — narratives that treat AI as a single phenomenon producing a single set of consequences everywhere it is deployed. These narratives are, from a Hughesian perspective, analytically empty. They describe the technology while ignoring the system. They talk about the filament while ignoring Pearl Street Station. The question "What will AI do to society?" has no single answer, because there is no single society and no single AI sociotechnical system. There are multiple systems, forming in multiple cultural contexts, shaped by multiple institutional traditions, optimizing for multiple values.
The comparative analysis also reveals something less obvious but equally important: the regional styles are not merely parallel developments. They interact. The American system's rapid deployment creates competitive pressure on the European system to accelerate its own deployment, potentially at the cost of the regulatory caution that European societies have chosen. The Chinese system's strategic integration of AI into state capacity creates security concerns that reshape the American system's relationship with government. The global competition for AI talent, AI compute, and AI data creates interdependencies that link the regional systems into a global meta-system whose dynamics cannot be understood by examining any single regional system in isolation.
Hughes did not live to analyze this global dimension of the AI transition. But his framework provides the tools. Regional style explains why the same technology produces different systems in different contexts. Technological momentum explains why each system, once established, becomes resistant to change. Reverse salients explain why the gap between the technology's capabilities and the institutional capacity to absorb them creates tension that drives the system's evolution. And the concept of the system builder — the figure who sees the whole — explains why the quality of leadership during the formative period determines the configuration that will eventually gain momentum and lock in.
The AI moment is not one moment. It is many moments, unfolding simultaneously in different cultural contexts, producing different sociotechnical systems that will coexist, compete, and interact for decades. The question is not what AI will do. The question is what each society will build around AI — what institutional structures, what regulatory frameworks, what cultural narratives, what economic arrangements — and whether the system that gains momentum in each context will serve the broad interests of the people who live within it or the narrow interests of the actors who built it.
Hughes's comparative history suggests that both outcomes are possible, that neither is predetermined, and that the determining factor is the quality of the system building during the formative period. The formative period is now.
---
Beneath the streets of lower Manhattan, the copper mains that Edison's workers laid in 1882 are gone. But the conduits that held them — the underground pathways that determined where electrical cables could run, which buildings could be connected, and how the distribution network would grow — shaped the development of Manhattan's electrical infrastructure for decades after the original cables were replaced. The conduits outlasted the cables. The pathways outlasted the conduits. The assumptions embedded in the pathways — about where power should flow, who should receive it first, what density of service was desirable in which neighborhoods — outlasted everything.
Infrastructure is the most persistent component of a sociotechnical system. It is also the most invisible, which is why its persistence is so consequential. Roads outlast the vehicles that drive on them. Railway gauges outlast the trains that run on them. Electrical grids outlast the appliances they power. Telephone networks outlast the conversations they carry. In each case, the infrastructure — the physical substrate on which the system operates — was designed to serve a specific set of requirements at a specific historical moment. And in each case, the infrastructure persisted long after the requirements had changed and the historical moment had passed, constraining the system's evolution to configurations compatible with the existing substrate.
Hughes analyzed infrastructure persistence with the patience of a historian who understood that the most consequential decisions are often the least visible. The decisions that attracted public attention — the choice of AC over DC, the introduction of new generating technologies, the creation of regulatory commissions — were dramatic, contested, and well-documented. The decisions about infrastructure — where to lay the conduits, what gauge of wire to install, what voltage standards to establish, what frequency to adopt — were technical, unglamorous, and made by engineers whose names do not appear in popular histories. But the infrastructure decisions were more durable. The policy decisions could be revised. The regulatory frameworks could be reformed. The business models could be restructured. The infrastructure, once built, was the system's skeleton — the structure around which everything else organized and which could not be replaced without rebuilding the system from the ground up.
The American electrical grid's adoption of 60-hertz alternating current is a case in point. The choice of 60 Hz, rather than the 50 Hz adopted in Europe, was made early in the system's development for reasons that were technically defensible but not technically compelled — either frequency could have served. Once adopted, 60 Hz was embedded in every generator, every motor, every transformer, every appliance connected to the American grid. The cost of switching to 50 Hz — or to any other frequency — exceeded any conceivable benefit, not because 60 Hz was optimal but because the entire sociotechnical system had organized itself around the assumption of 60-hertz current. The infrastructure had locked in a choice that would persist for as long as the system existed.
The AI sociotechnical system is building its infrastructure now, and the choices being made at the infrastructure layer will be the most durable choices of the entire transition. The physical infrastructure — the data centers, the networking equipment, the cooling systems, the electrical generation capacity — represents capital investments measured in tens of billions of dollars per quarter, with operational lifetimes measured in decades. A hyperscale data center commissioned in 2025 will be operating in 2040 or beyond. Its technical specifications — its power capacity, its cooling infrastructure, its networking topology, its physical location — will constrain what can be run on it for its entire operational life.
The concentration of this physical infrastructure in a small number of companies is the most structurally consequential feature of the emerging AI system. As of 2026, the vast majority of frontier AI training and inference occurs on infrastructure owned or controlled by three companies: Microsoft (Azure), Google (Google Cloud), and Amazon (Amazon Web Services). This concentration is the result of the enormous capital requirements of building hyperscale compute facilities — requirements that create barriers to entry so high that even well-funded AI companies cannot afford to build their own infrastructure and must instead rent capacity from the hyperscalers.
The result is a dependency structure that will persist long after the current generation of AI models has been superseded. The models are ephemeral — each generation supersedes the last within months or years. The infrastructure is durable — each data center operates for decades. The companies that control the infrastructure occupy a structural position in the AI sociotechnical system analogous to the position that the utility companies occupied in the electrical system: they are the substrate on which everything else operates, and their decisions about pricing, access, technical standards, and capacity allocation constrain the choices available to every other actor in the system.
Hughes would recognize this pattern immediately. The consolidation of the American electrical industry in the early twentieth century produced a similar structural dependency. A small number of vertically integrated enterprises — the equipment manufacturers General Electric and Westinghouse, and the utility holding companies that Insull and his imitators assembled — controlled the infrastructure on which the entire electrical economy depended. Their decisions about where to build generating stations, what transmission standards to adopt, which customers to serve first, and how to price their service determined the shape of American electrification as completely as any technological breakthrough.
The infrastructure dependency in the AI system extends beyond the physical layer. The software infrastructure — the API standards, the deployment frameworks, the model-serving platforms, the development tools — is also crystallizing into a configuration that will constrain the system's evolution. The APIs through which developers access AI capabilities are becoming standardized around a small number of providers, each with its own conventions, limitations, and pricing structures. The deployment frameworks that organizations use to integrate AI into their workflows are creating institutional dependencies that will persist long after better alternatives might become available, because the cost of switching — retraining the workforce, rebuilding the integration layer, renegotiating the vendor relationships — exceeds the benefit of the improvement.
The training data infrastructure presents a particularly consequential case of lock-in. The datasets on which frontier models are trained are enormous, expensive to assemble, and legally complex to acquire. The companies that have invested in assembling large, high-quality training datasets possess an asset whose value compounds over time, because each generation of models trained on the dataset produces insights that improve the assembly and curation of the next dataset. The result is a data flywheel that reinforces the advantage of incumbents and raises the barrier to entry for competitors.
The infrastructure choices also embed values, whether or not the builders intend them to. A data center located in a region with cheap electricity from fossil fuels embeds a particular relationship between AI capability and environmental cost. A training dataset assembled primarily from English-language internet text embeds a linguistic and cultural bias that will propagate through every model trained on it. An API pricing structure that charges per token embeds an economic model that favors short, efficient interactions over the long, exploratory conversations that might produce the most valuable human-AI collaboration. Each of these choices is being made now, during the infrastructure's formative period, and each will be extraordinarily difficult to reverse once the infrastructure is built and the system has organized itself around its specifications.
The invisibility of infrastructure is what makes it politically and culturally dangerous. Public discourse about AI focuses on the visible components: the models, the applications, the user experiences, the dramatic capabilities and equally dramatic failures. The infrastructure receives attention only when it fails — when a data center loses power, when an API goes down, when a cloud provider raises prices. The rest of the time, it operates in the background, invisible and therefore unexamined, making choices that will constrain the system's evolution for decades.
Hughes's analysis of electrical infrastructure provides a framework for understanding what is at stake. The infrastructure decisions that shaped the American electrical grid — the choice of frequency, the adoption of specific voltage standards, the geographic distribution of generating capacity, the ownership structure of the transmission network — were made by engineers and executives operating under immediate commercial pressures, with limited awareness of the long-term systemic consequences of their choices. The 60-hertz standard was not chosen because anyone believed it would be optimal for a century. It was chosen because it worked well enough with the available equipment and the existing customer base. The concentrated ownership of generating capacity was not chosen because anyone believed that private monopoly was the ideal governance structure for a universal service. It was chosen because the economics of large-scale generation favored concentration, and the regulatory framework that emerged to govern the concentrated industry was a pragmatic response to an accomplished fact rather than a deliberate design choice.
The same pattern is visible in the AI infrastructure decisions being made now. The concentration of compute in hyperscale data centers is not the result of a deliberate decision that concentrated infrastructure is the optimal configuration for the AI system. It is the result of economic dynamics — economies of scale, capital requirements, network effects — that favor concentration, and the policy response to the accomplished fact of concentration has been minimal.
The Hughesian prescription is not to prevent concentration — Hughes was not a prescriptive thinker, and he understood that economic dynamics cannot be wished away by policy preferences. The prescription is to recognize that infrastructure choices are systemic choices, that their consequences will persist long after the immediate circumstances that produced them have changed, and that the formative period — the period during which infrastructure is being built and standards are being established — is the period of maximum leverage for shaping those consequences.
Segal's analysis of the Death Cross captures one dimension of infrastructure's significance: the companies whose value survived the SaaS repricing were the ones whose position in the system was infrastructural rather than applicational. Their value resided not in the code they had written — which AI could now reproduce at trivial cost — but in the ecosystem they had built: the customer relationships, the data assets, the integration frameworks, the institutional dependencies that constituted the infrastructure on which their users' operations depended.
But the infrastructure analysis extends beyond the survival of individual companies. It extends to the configuration of the entire AI sociotechnical system — the substrate on which everything else will be built, the skeleton around which everything else will organize. The physical infrastructure, the software infrastructure, the data infrastructure, the financial infrastructure — each is being constructed now, each embeds choices that will persist for decades, and each will constrain the system's evolution in ways that are not yet fully visible to the people making the choices.
The people building AI infrastructure are, in Hughes's terms, laying the conduits under the streets of a city that does not yet exist. They are making decisions that will determine where power flows, who receives it, and at what cost, for a system whose eventual shape they cannot predict. The conduits will outlast the cables. The pathways will outlast the conduits. And the assumptions embedded in the pathways — about who should have access, how much it should cost, what standards should govern interoperability, who should control the bottlenecks — will outlast everything else, shaping the AI system long after the current generation of builders has moved on.
---

The New York Board of Aldermen did not understand electricity. This is not a criticism. In 1881, when Edison petitioned the Board for permission to lay electrical cables beneath the streets of the financial district, no municipal body on earth understood electricity in any operationally meaningful sense. The Board understood gas lighting, which it had regulated for decades. It understood the franchise model, in which private companies received exclusive rights to provide a service within defined territories in exchange for submitting to public oversight of rates and service standards. It understood the political dynamics of infrastructure construction — who would profit, who would be inconvenienced, which aldermen would face constituent complaints about torn-up streets.
What the Board did when confronted with Edison's petition was not to evaluate the technology. It was to assimilate the technology into the regulatory framework it already possessed. Electricity was treated, for regulatory purposes, as a variant of gas lighting — a service provided by a private company under a municipal franchise, subject to rate oversight, with the understanding that the technology would be confined to the specific geographic area covered by the franchise. The regulatory framework was not designed for electricity. It was designed for gas. But it was the framework that existed, and the pressure to act — Edison was politically connected, commercially motivated, and not inclined to wait — exceeded the capacity to design something new.
Hughes analyzed this process not as a failure of regulatory intelligence but as a structural feature of the relationship between regulation and technological systems. Regulatory frameworks are components of sociotechnical systems, not external constraints upon them. They co-evolve with the technology, the institutions, and the economic interests that make up the rest of the system. The Board of Aldermen's decision to regulate electricity as though it were gas was not ignorance. It was path dependence — the tendency of institutional actors to respond to new phenomena by extending existing frameworks rather than designing new ones, because existing frameworks have institutional support, legal precedent, and political legitimacy that new frameworks lack.
The co-evolutionary relationship between regulation and technology is one of Hughes's most analytically productive insights, and its relevance to the AI moment is immediate and concrete. Every regulatory framework being designed for AI in 2025 and 2026 is being designed by institutions whose expertise, precedents, and political dynamics were shaped by previous technologies. The EU AI Act, the most comprehensive regulatory framework to date, draws on the EU's experience regulating product safety, data protection, and consumer rights — domains with well-established legal traditions and institutional infrastructure. The American approach, less codified and more fragmented, draws on the Federal Trade Commission's experience with consumer protection, the Securities and Exchange Commission's experience with disclosure requirements, and the Department of Commerce's experience with export controls. In each case, the regulatory framework for AI is an extension of frameworks designed for other purposes, adapted to a technology that those frameworks were not designed to govern.
Hughes would predict exactly the tensions that are already visible in the AI regulatory landscape. The EU AI Act classifies AI systems by risk level — unacceptable, high, limited, minimal — and imposes obligations proportional to the classification. The framework is logical, principled, and already lagging behind the technology's evolution. The Act was negotiated between 2021 and 2024, a period during which AI capabilities underwent multiple step-function advances. By the time its provisions are fully implemented, the AI landscape will have evolved significantly beyond the categories the Act was designed to address. The regulatory framework, like the Board of Aldermen's franchise model, will be governing a technology that has outgrown the assumptions embedded in the regulation.
The American regulatory approach presents a different version of the same structural problem. Rather than comprehensive legislation, the United States has relied on executive orders, agency guidance, and voluntary industry commitments — instruments that are more flexible than legislation but less durable, less enforceable, and more vulnerable to political change. The result is a regulatory environment that is adaptable in theory but unstable in practice: each administration can revise its predecessor's approach, creating a policy oscillation that makes long-term planning difficult for both AI companies and the institutions affected by AI deployment.
Hughes's analysis of electrical regulation in the United States reveals a pattern that should concern anyone thinking about AI governance. The regulatory framework for American electricity did not emerge from deliberate design. It emerged from a series of pragmatic responses to accomplished facts. Edison built Pearl Street Station before any regulatory framework existed. The Board of Aldermen granted a franchise after the station was already under construction. State utility commissions were established after the utility industry had already consolidated into regional monopolies. Rate-of-return regulation was adopted after the utilities had already achieved the scale and political influence to ensure that the regulation would protect their market position as much as it would protect consumers.
In each case, the regulation was reactive rather than proactive — designed to govern a system that already existed rather than to shape a system that was still forming. The result was a regulatory framework that was well-adapted to the system as it existed at the moment of the framework's creation but poorly adapted to the system as it would evolve. The framework locked in assumptions about the technology (centralized generation, one-way distribution), the market structure (natural monopoly, regulated franchise), and the relationship between provider and consumer (passive consumption of a standardized service) that persisted long after the assumptions were no longer valid.
The AI regulatory frameworks being designed now are following the same reactive pattern. The technology is being deployed. The applications are being adopted. The market structure is consolidating. The social consequences are becoming visible. And the regulatory response is arriving after each of these developments, shaped by the accomplished facts it is attempting to govern rather than by a proactive vision of the system it wants to create.
Hughes's framework suggests that this reactive pattern is not a policy failure that could be corrected by more foresighted regulators. It is a structural feature of the relationship between regulation and rapidly evolving technological systems. Regulatory processes are inherently slower than technological development. They require consultation, deliberation, drafting, review, revision, and political negotiation — each step consuming time during which the technology continues to evolve. The gap between the technology's capability frontier and the regulatory framework's assumptions is not a temporary condition that will be resolved when the regulators catch up. It is a permanent feature of the system, because the technology will continue to evolve faster than the regulatory process can adapt.
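The shape of that permanent gap can be made concrete with a toy model. The figures below are illustrative assumptions, not measurements: a regulatory process of a given length, a technology whose capability compounds at a given annual rate. The conclusion follows from the shape of the dynamic, not from the particular numbers.

```python
# Toy model of the structural lag described above. The 3-year process
# length and 50% annual capability growth are illustrative assumptions.

def regulatory_gap(lag_years: float = 3.0, growth: float = 0.5) -> float:
    """Factor by which capability advances while a framework moves from
    drafting to enforcement: the widening assumption gap in the text."""
    return (1 + growth) ** lag_years

# A 3-year process against 50% annual growth means the framework takes
# effect against a technology roughly 3.4x beyond its drafting-era
# assumptions. Re-drafting restarts the clock, so the gap recurs
# rather than closes.
print(f"{regulatory_gap():.2f}")  # prints 3.38
```

Under any positive growth rate and any nonzero process length, the gap at enforcement is greater than one, and amending the framework restarts the clock. That is the arithmetic behind the claim that the gap is structural rather than temporary.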
This structural gap has a specific consequence that Hughes documented across the history of electrification: the regulation that eventually crystallizes around a technology tends to serve the interests of the actors who were most powerful at the moment of crystallization. The American regulatory framework for electricity served the interests of the utility companies, because the utility companies were the most powerful actors in the system at the moment the regulatory framework was established. The utility companies were not passive recipients of regulation. They were active participants in its design — lobbying for the specific provisions that would protect their market position, staffing the regulatory commissions with people who shared their assumptions about how the industry should operate, and shaping the regulatory culture in ways that ensured the framework would evolve along lines favorable to their interests.
The AI industry is engaged in exactly the same process. The major AI companies are the most active participants in the regulatory conversation — not because they are more civically minded than other actors, but because they have the most at stake and the most resources to deploy. They employ lobbyists, fund research, participate in government advisory committees, publish safety frameworks, and engage in the kind of sustained regulatory relationship-building that Hughes documented in Insull's dealings with the Illinois Commerce Commission. The result, if the historical pattern holds, will be a regulatory framework that reflects the interests and assumptions of the AI industry as much as it reflects the interests of the public the regulation is ostensibly designed to protect.
This is not a counsel of cynicism. Hughes was not a cynic. He was a historian who documented patterns. The pattern he documented is that effective regulation requires not merely good intentions but structural conditions that support the regulation's capacity to evolve with the system it governs. The most effective regulatory frameworks in the history of large technical systems were not the most restrictive or the most permissive. They were the most adaptive — frameworks designed with explicit mechanisms for revision, with institutional capacity for ongoing learning, with feedback loops that connected the regulatory process to the evolving reality of the system.
The Federal Communications Commission, established in 1934 to regulate radio and telephone, demonstrated adaptive capacity when it extended its framework to television, then to cable, then to satellite, then to the internet — each extension requiring significant revision of the existing framework but enabled by the institutional capacity for ongoing adaptation. The framework was far from perfect. It lagged behind technological change at every transition. But it adapted, because it was designed with enough institutional flexibility to accommodate technologies that did not exist at the time of its creation.
The AI regulatory frameworks being designed now face a more severe challenge than any previous regulatory effort, because the technology is evolving faster and the gap between the technology's capabilities and the regulatory framework's assumptions is widening more rapidly. The challenge is not to design the right regulation for AI as it exists today. It is to design regulatory institutions with the capacity to co-evolve with a technology that will continue to change in ways that cannot be predicted.
Hughes's research does not provide a regulatory blueprint. It provides something more valuable: the understanding that regulation is not an external constraint on a sociotechnical system but an integral component of it, and that the quality of the regulatory component — its adaptiveness, its institutional capacity, its alignment with broad social interests rather than narrow industrial ones — will determine the system's trajectory as surely as the quality of the technical component. The regulatory choices being made now are system-building choices. They will crystallize into the system's permanent architecture. And they are being made in the same condition of consequential uncertainty that characterized every previous regulatory transition: by actors who cannot predict the consequences of their choices but whose choices will constrain the system for decades.
The regulatory component of the AI sociotechnical system is still forming. It is still plastic. The question is whether it will be shaped by the broadest possible set of interests — including the interests of workers, students, parents, citizens, and communities that will be affected by AI but that lack the resources and the political access to participate in the regulatory conversation on equal terms with the AI industry — or whether it will be shaped, as the electrical regulatory framework was shaped, primarily by the actors who were most powerful at the moment of crystallization.
Hughes would not predict the outcome. He would observe that the outcome is being determined now, by the quality of the institutional design, the breadth of participation in the regulatory process, and the willingness of the people who understand the system's dynamics to insist that the regulatory component serve the system's broadest constituents rather than its most powerful ones.
---
The argument of this book reduces to a single claim, supported by a century of historical evidence: the formative period of a large sociotechnical system is the period of maximum human agency, and the AI system's formative period is now.
Everything that has been analyzed in the preceding chapters — the system builders who are designing the system's architecture, the technological momentum that is accumulating around their designs, the sociotechnical components that are interlocking into an integrated whole, the reverse salients that identify where the system is bottlenecked, the transition from inventive to managerial phase that is changing what values the system optimizes for, the regional styles that are producing different systems in different cultural contexts, the infrastructure that is crystallizing into the system's permanent skeleton, the regulatory frameworks that are co-evolving with the technology — all of it converges on a single conclusion: the choices being made now will determine the configuration of the AI sociotechnical system for decades, and the cost of changing that configuration will increase with every month that passes.
Hughes arrived at this conclusion through decades of comparative historical research, not through theoretical speculation. The evidence is specific, archival, and cumulative. Edison's choice of direct current constrained the American electrical system for a decade and cost millions of dollars and years of development time to reverse. Insull's regulatory strategy — the natural-monopoly franchise, the rate-of-return regulation — governed the American electrical industry for nearly a century. The British failure to adopt a unified national grid before 1926 meant that Britain's electrification lagged behind America's and Germany's for decades, with consequences for industrial productivity, economic growth, and quality of life that persisted into the postwar period. In each case, the formative-period choices were consequential far beyond what the actors who made them could have anticipated, and the cost of reversing or modifying those choices increased with every year that the system operated in the established configuration.
The AI system's formative period is compressed relative to its historical predecessors. Hughes documented electrical systems that took four decades to transition from their inventive phase to stable momentum. The telephone system took roughly three. The internet took roughly two. The AI system, operating in an environment already saturated with the institutional templates, communication technologies, and capital deployment mechanisms of the digital economy, appears to be completing its formative period even faster. The infrastructure is being built. The market structure is consolidating. The regulatory frameworks are taking shape. The organizational practices are normalizing. The cultural narratives are crystallizing. Each of these processes is occurring simultaneously, each is reinforcing the others, and the resulting system is gaining momentum at a pace that exceeds any historical precedent.
The compression of the formative period means that the window for shaping is correspondingly compressed. The people who will determine the AI system's configuration — the system builders, the regulators, the educators, the organizational leaders, the citizens who participate in the political processes that govern the system — have less time to act than their counterparts had in any previous technological transition. The electrical system's formative period lasted long enough for a generation of system builders to learn from their mistakes. The AI system's formative period may not last long enough for the mistakes to become visible before the momentum sets.
This urgency is not rhetorical. It is structural. Hughes's concept of technological momentum is not a metaphor for difficulty. It is an analytical description of a specific dynamic: the accumulation of installed infrastructure, trained workforces, institutional routines, regulatory frameworks, economic interests, and cultural assumptions that resist change not because anyone decides to resist but because the collective weight of the system's components exceeds any individual actor's capacity to redirect. The dynamic is not mysterious. It is mechanical. It is the same dynamic that makes it easier to keep driving on a highway than to exit and find an alternative route, multiplied by the complexity of a system with millions of components and billions of interdependencies.
The historical record offers specific guidance about what effective shaping looks like during a formative period. It is not the imposition of a single design, however brilliant. Edison tried that with direct current, and the system eventually rejected his design at enormous cost. Effective shaping is the creation of conditions that allow the system to evolve toward configurations that serve broad interests rather than narrow ones. It is the construction of what might be called systemic affordances — institutional structures, regulatory frameworks, educational programs, cultural practices — that make certain trajectories more likely and others less so, without attempting to predetermine the system's final configuration.
The most effective systemic affordances in the history of large technical systems share several characteristics. They are adaptive rather than prescriptive: they evolve with the system rather than attempting to freeze it. They are inclusive rather than exclusive: they incorporate the interests of the broadest possible set of stakeholders rather than the narrowest set of powerful actors. They are institutional rather than individual: they create durable structures rather than depending on the continued attention of specific people. And they are designed with explicit awareness of the system's dynamics — its tendency toward momentum, its vulnerability to lock-in, its characteristic pattern of reverse salients, its inclination to optimize for the values of the actors who held power during the formative period.
The reverse salients identified in Chapter 4 — organizational absorption, workforce reskilling, regulatory capacity, cultural comprehension — are the points of maximum leverage for systemic intervention. Each reverse salient represents a gap between the system's technical capabilities and the human capacity to absorb them. Each gap, if left unaddressed, will constrain the system's development in ways that serve narrow interests: the organizations that cannot restructure will lose their best people to organizations that can, concentrating talent and capability. The workers who cannot reskill will be displaced, concentrating the costs of the transition on the people least equipped to bear them. The regulators who cannot keep pace will produce frameworks that reflect the industry's interests rather than the public's, concentrating power. The cultural narratives that cannot capture the system's complexity will produce political responses — panic, euphoria, paralysis — that serve no one.
Addressing these reverse salients requires the kind of coordinated systemic intervention that Hughes documented in the most successful technological transitions. The creation of universal electrical service in the United States required coordination across federal, state, and local governments, private utilities, rural cooperatives, and manufacturing industries. The creation of the interstate highway system required coordination across federal and state transportation agencies, construction industries, automobile manufacturers, petroleum companies, and urban planners. Neither of these interventions was perfect. Both produced unintended consequences that subsequent generations have had to address. But both created systemic affordances that shaped the trajectory of their respective sociotechnical systems in directions that served broad rather than narrow interests.
The AI transition requires an equivalent effort. Not a single policy, not a single institution, not a single cultural initiative, but a coordinated set of interventions across the organizational, workforce, regulatory, and cultural components of the system, designed with explicit awareness of the system's dynamics and the temporal constraint imposed by the closing of the formative window.
Hughes's scholarship does not guarantee that this effort will be made. It does not guarantee that the system builders, regulators, educators, and citizens who have the capacity to shape the AI system will choose to exercise that capacity, or that their efforts, if made, will succeed. The historical record includes examples of successful systemic intervention and examples of failed intervention and examples of no intervention at all, and the outcomes in each case were determined by the specific circumstances — the quality of the leadership, the breadth of the coalition, the adequacy of the institutional design, the alignment of political will with systemic need.
What Hughes's scholarship does guarantee is that the outcome is not predetermined. The AI sociotechnical system is not destined to serve any particular set of interests or to produce any particular distribution of benefits and costs. It will serve the interests and produce the distribution that correspond to the choices made during its formative period — choices that are being made now, by the people who are reading these pages and by the millions of others who are participating in the AI transition whether they know it or not.
The formative period is a gift. It is the moment when human agency is at its maximum within a technological transition — when the choices of individuals, organizations, and societies can still shape the system's trajectory, before the momentum sets and the system begins to shape them. The gift will not be offered twice. The window will close, the choices will crystallize, and the system that emerges will reflect, with the stubborn persistence that Hughes documented across a century of technological history, the quality of the attention, the breadth of the vision, and the courage of the action that the formative period received.
Thomas P. Hughes died in February 2014, three years before the publication of the transformer architecture paper that launched the current AI revolution. He never saw ChatGPT. He never used Claude Code. He never experienced the vertigo that millions of people felt in 2025 when the machines learned to speak their language. But he spent fifty years studying the dynamics that govern what happens when a new technology enters the world and assembles a sociotechnical system around itself. His conclusions are as relevant to the AI moment as they were to the electrical moment that gave them empirical foundation.
The system is forming. The momentum is building. The window is open.
The question is not what the technology will do.
The question is what the system builders will choose to build around it, and whether those choices will serve the broadest possible definition of human flourishing or the narrowest possible definition of institutional interest.
The history of technology provides no assurance. It provides only the tools to understand what is at stake, and the evidence that the stakes are as high as they have ever been.
---
The conduit beneath the street was the image I could not shake.
Not the filament, not the generator, not Edison in his laboratory — those are the images everyone remembers, the ones that make the story feel like a triumph of individual genius. The conduit is different. The conduit is the unglamorous trench dug into Manhattan bedrock in 1882, the underground pathway that would determine where electrical cables could run for the next half-century, designed by engineers whose names appear in no popular history, under time pressure from a commercially impatient Edison, with no awareness that the specific angles and dimensions of a hole in the ground would constrain the development of a city's electrical infrastructure for generations after the original copper was melted down and replaced.
That is what Hughes saw that almost no one else sees. Not the spectacular choices — AC versus DC, Edison versus Westinghouse — but the quiet ones. The infrastructure decisions made under deadline by people solving immediate problems, embedding assumptions into physical and institutional structures that would outlast every person in the room.
I think about conduits now when I watch my team build. When we make an API choice, an architecture decision, a workflow assumption. These feel like engineering decisions. Hughes showed me they are system decisions — choices that will constrain everything built on top of them long after we have forgotten why we made them.
In *The Orange Pill* I wrote about the window — the brief period when the river can still be channeled, before the current carves its own path. Hughes gave that intuition a name and a century of evidence. Technological momentum. The transition from a system that can be shaped to a system that shapes. The most important thing I learned from his work is that this transition is not a moment. It is a gradient. The system gets heavier every day. The window does not slam shut. It narrows, imperceptibly, until one morning you push against a wall that was a door last quarter.
What unsettles me most is the reverse salient analysis. The bottleneck is not the model. It was never the model. The bottleneck is us — our organizations, our schools, our regulatory institutions, our capacity to understand what is forming around us. The technical core races ahead. Everything human lags behind. And Hughes's history shows, with archival precision, that systems whose human components cannot keep pace with their technical components do not slow down to wait. They simply produce outcomes that serve whoever was powerful enough to build during the gap.
I keep returning to a specific passage, the one about regional style. The same technology, embedded in different cultures, producing different systems. American electricity was entrepreneurial, fast, unequal. German electricity was systematic, planned, more equitable. British electricity was compromised, slow, satisfying no one fully. The technology was identical. The societies were different. The outcomes diverged completely.
That is the real argument against anyone who says AI will do some single, deterministic thing to the world. It will not. It will do what the sociotechnical systems built around it cause it to do. And those systems are being assembled right now, by people making conduit decisions — infrastructure choices, regulatory precedents, organizational norms, cultural narratives — that feel temporary and will prove permanent.
My children will live inside whatever system we build during this formative period. Not the system we intended. The system that crystallized from a thousand decisions made under pressure by people solving today's problems, each choice narrowing the corridor that future builders will walk through.
Hughes never offered comfort. He offered clarity. The clarity is this: the window is real, the momentum is building, and the quality of attention we bring to these years — not these decades, these years — will echo through the system for as long as the system stands.
I find that clarifying enough to be useful, and sobering enough to be honest.
-- Edo Segal
The choices being made this quarter about AI infrastructure, regulation, and deployment will constrain your children's options for decades. Thomas Hughes proved why -- a century before anyone wrote a prompt.
Every conversation about AI focuses on what the models can do. Thomas Hughes spent fifty years proving that the technology is never the point. What matters is the system that forms around it -- the infrastructure, the institutions, the regulations, the economic structures, the cultural assumptions -- and that system crystallizes during a brief formative window after which redirection becomes extraordinarily costly. Hughes mapped this dynamic across the history of electrification, showing how decisions made by engineers and executives under commercial pressure in the 1880s locked in configurations that governed entire societies for generations.
This book applies Hughes's framework to the AI sociotechnical system assembling around us right now. It identifies where momentum is building, where the reverse salients -- the human bottlenecks -- are constraining the system's trajectory, and why the window for shaping AI's impact on society is measured in years, not decades.
The formative period is closing. The conduits are being laid. The question is whether we understand what we are embedding before the concrete sets.

A reading-companion catalog of the 15 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Thomas Hughes — On AI* uses as stepping stones for thinking through the AI revolution.