The most dangerous condition in any society is not bad rules. It is no rules — or, more precisely, a condition in which existing rules have ceased to describe the reality they were designed to govern. Actors operate in a space where behavior is neither sanctioned nor prohibited, where the boundaries of acceptable conduct are undefined, and where the powerful exploit the absence of constraint not through deliberate transgression but because there is nothing to transgress against. North identified this condition with analytical precision. Previous technological transitions produced sectoral voids; the AI transition produces a systemic void, disrupting employment law, educational systems, professional licensing, intellectual property, quality assurance, social welfare, and democratic governance simultaneously. In the absence of defined rules, the actors with the most resources, information, and organizational capacity shape the emerging framework — not necessarily through malice, but through the natural operation of competitive pressure in a ruleless environment.
There is a parallel reading that begins not with the absence of rules but with their superabundance — the proliferation of technical standards, API specifications, model architectures, and platform dependencies that constitute the material substrate of AI deployment. The void Segal identifies exists only at the level of formal institutional response. At the infrastructural level, the rules are dense, specific, and already locked in. Every AI application depends on cloud computing resources controlled by three companies. Every model trains on datasets scraped according to terms of service written by platform monopolies. Every deployment operates through APIs whose documentation constitutes de facto law. The institutional framework already exists; it was simply written by engineers rather than legislators.
This infrastructure determines what is possible before any formal institution can respond. A small company cannot train a foundation model — not because regulations forbid it but because the computational resources required exist only within the budgets of corporations and nation-states. A developer cannot build outside the prescribed interaction patterns — not because law constrains them but because the technical architecture enforces specific modes of engagement. The twenty-fold productivity multiplier Segal celebrates operates only within the channels these infrastructures permit. The question is not whether we fill the void quickly enough to prevent extractive lock-in. The lock-in already occurred at the moment the technology stack crystallized. What remains is not institution-building but negotiation within an already-determined technical regime whose fundamental power relations were established by the capital requirements of transformer architecture and the economies of scale in data center operation.
Previous technological transitions — mechanization, electrification, computerization — disrupted specific sectors and their adjacent institutional domains. The power loom disrupted textile production. Electrification disrupted manufacturing. Computerization disrupted information processing. In each case, the institutional void was sectoral, confined to the domains directly affected, and the institutional response could be developed within the existing framework of adjacent institutions that remained functional.
The AI transition is different. It disrupts the entire category of knowledge work. Every institutional domain — employment, education, professional licensing, intellectual property, quality assurance, welfare, democratic governance — is simultaneously inadequate to the reality the technology has created. The void is systemic. And a systemic void produces qualitatively different dynamics from a sectoral one, because the adjacent institutions that might have provided a framework for response are themselves in flux.
The consequences are visible in every domain The Orange Pill describes. An employer facing the twenty-fold productivity multiplier must decide: reduce headcount to capture gains as profit, or maintain headcount and expand scope? The formal rules pull in different directions. The informal norms are unsettled. Segal himself describes making this decision in the void — choosing stewardship over extraction under conditions the rules did not compel. It was a good decision. It was also dependent on the leader's character rather than on institutional structure.
The void is not neutral. It is being filled. The question is by whom and in whose interest. In any period of institutional uncertainty, the actors with the most resources shape the emerging framework to their advantage. The technology companies building AI tools are, through product decisions, terms of service, pricing structures, and cultural narratives, actively constructing the informal institutional framework within which AI is used. When Anthropic designs Claude's interaction patterns, it is establishing norms. When a company prices its tool at one hundred dollars per month, it is establishing accessibility norms. These are institutional acts, performed by organizations whose primary accountability is to shareholders rather than to the broader public.
The concept of the institutional void emerged from North's comparative work on institutional transitions — particularly his analysis of the English enclosure movement, which converted common land to private property and whose institutional void was filled by landowners possessing the political resources to reshape land tenure rules in their favor. The result was economically efficient in aggregate but devastating in distribution.
North and collaborators extended the analysis to limited access orders in Violence and Social Orders (2009), examining how dominant coalitions fill institutional voids with arrangements that generate rents for insiders — stable because those rents give the coalition an incentive to maintain restricted access.
The void is dangerous because it is invisible. Bad rules can be criticized and changed. The absence of rules operates through silence.
Sectoral versus systemic: previous voids were contained; the AI void spans every institutional domain simultaneously, with no functional adjacent institutions to provide scaffolding.
Individual ethics operating in a void produce inconsistent outcomes. Good decisions here, bad decisions there, with distribution determined by decision-maker character rather than system structure.
The filling is happening now. Product decisions, terms of service, and pricing structures are crystallizing into informal institutional frameworks with the durability of formal law.
Participation determines direction. Voids filled by the powerful produce extractive frameworks. Voids filled through inclusive institutional entrepreneurship produce frameworks that serve the broad population.
Critics argue the concept is a rhetorical device that overstates institutional absence — real societies always have some rules operating, however imperfectly. Supporters respond that the point is not the absence of all rules but the absence of rules adequate to the current reality, and that the functional consequences of inadequate rules approach those of no rules at all. The AI-era test will be whether formal responses catch up quickly enough to prevent extractive lock-in.
The right frame depends on which layer of the system we examine. At the level of formal law and policy — employment regulations, professional licensing, educational credentials — Segal's void thesis holds in full. These institutions genuinely have not caught up to AI's implications. When a court must decide whether AI-assisted work constitutes practicing law without a license, it operates in true institutional absence. But at the infrastructural level, the contrarian view largely dominates. The technical standards, platform dependencies, and computational requirements that govern AI deployment constitute a dense regulatory framework more binding than formal law.
The synthesis emerges when we recognize that different institutional layers operate at different speeds and with different mechanisms of enforcement. Technical infrastructure crystallizes rapidly through network effects and capital concentration — this is where the contrarian's "already-determined regime" exists. Social norms and professional practices evolve at medium speed through trial and error — this is Segal's space of "stewardship over extraction," where individual choices still matter. Formal legal frameworks move slowest, arriving after the technical and social layers have largely solidified. The void is thus selective: absent in formal rules, contested in social norms, already filled in technical architecture.
The productive question becomes not whether a void exists but at which layers it remains open. Where technical lock-in has occurred (compute infrastructure, model architectures), the task is negotiating within constraints. Where social norms remain fluid (productivity sharing, skill valuation), the task is active institutional entrepreneurship. Where formal rules lag (employment law, credentialing), the task is preventing the crystallization of extractive practices into law. The AI transition requires operating simultaneously across all three temporal layers, recognizing both where possibility remains and where the infrastructure of control has already consolidated.