By Edo Segal
The rule I enforced least was the one that mattered most.
I had a hundred rules for my engineering team. Code review standards. Deployment checklists. Security protocols. Testing requirements before anything touched production. These rules were real. They had teeth. Violate them and you heard about it. They worked because someone — me, a lead, a system — would catch you if you cut corners.
But the rule I cared about most deeply — build things that genuinely serve the people who use them — had no enforcement mechanism at all. It lived in my head. Sometimes I said it in meetings. Nobody checked. Nobody measured it. Nobody got called into a room for shipping something that technically worked but quietly made a user's life worse. The rule existed the way a wish exists. Sincerely held. Completely weightless.
Douglass North would have predicted this. He spent his career studying something that sounds dry until you realize it explains nearly everything: institutions. Not buildings with columns. The rules of the game. Formal laws, informal norms, and — this is the part that rearranged my thinking — the enforcement mechanisms that determine whether rules are real or decorative.
North's core insight is brutal in its simplicity. The same technology, operating inside different institutional frameworks, produces opposite outcomes. The steam engine built prosperity in one country and extracted misery in another. Not because the engine was different. Because the rules were.
In *The Orange Pill*, I wrote about the river of intelligence and the dams we need to build. I meant it. I still mean it. But North's framework asks a question my metaphor did not contain: *Whose dam? Built by whom? Protecting whose territory?*
A dam is never neutral. It creates a pool that nourishes some and floods others. The placement is a distributional choice dressed up as an engineering decision. I had been thinking about whether to build. North forced me to think about who decides where.
This matters right now because the institutional framework of the AI economy is being written as you read this sentence. Not in a single legislature. In product decisions, pricing structures, terms of service, corporate strategies, and the thousand informal norms crystallizing around how this technology gets used. The rules are forming. Path dependence will lock them in. And the question North spent his life answering — who writes the rules, and whose interests do they serve — is the question that will determine whether AI's extraordinary gains flow broadly or concentrate narrowly.
The river is real. The beaver is real. But the rules under which the building occurs matter more than the building itself. North showed me that. He might show you something I still cannot see.
-- Edo Segal ^ Opus 4.6
1920–2015
Douglass Cecil North (1920–2015) was an American economist who fundamentally reshaped how scholars understand the relationship between institutions and economic performance. Born in Cambridge, Massachusetts, and educated at the University of California, Berkeley, North spent much of his academic career at Washington University in St. Louis, where he co-founded the Center in Political Economy. His major works include *The Rise of the Western World: A New Economic History* (1973, with Robert Paul Thomas), *Structure and Change in Economic History* (1981), *Institutions, Institutional Change and Economic Performance* (1990), and *Violence and Social Orders* (2009, with John Joseph Wallis and Barry R. Weingast). North's central contribution was demonstrating that institutions — the formal rules, informal norms, and enforcement mechanisms that structure human interaction — are the primary determinant of long-term economic outcomes, operating through their effect on transaction costs. He shared the Nobel Memorial Prize in Economic Sciences in 1993 with Robert Fogel for their work applying economic theory and quantitative methods to explain economic and institutional change. His concepts of path dependence, institutional entrepreneurship, and the distinction between limited-access and open-access social orders continue to shape economics, political science, and development studies worldwide.
Every society plays a game whose rules most of its members cannot see. The rules are not posted on walls or distributed in pamphlets. They are embedded in the fabric of daily life so deeply that they feel less like rules than like gravity — forces that constrain behavior without announcing themselves as constraints. A merchant in fifteenth-century Venice did not wake each morning and consult the Venetian commercial code before deciding whether to invest in a trading voyage. He invested because he knew, with the confidence born of long experience, that the institutional framework of his city would protect his property, enforce his contracts, and adjudicate his disputes. The rules were invisible precisely because they worked.
Douglass North spent a lifetime making those invisible rules visible. His central proposition, stated with the economy of a man who had spent decades refining it, was that institutions are the rules of the game in a society. The proposition sounds simple. It is not. Understanding what it means requires distinguishing between three categories of constraint that most people collapse into a single blur.
The first category is formal rules. Constitutions, statutes, regulations, contracts, corporate charters — the written instruments through which societies codify their agreements about how economic and political life should be conducted. Formal rules are explicit, deliberate, and subject to conscious modification through legislative or judicial action. When the United States passed the Sherman Antitrust Act in 1890, it established a formal rule constraining the concentration of market power. When the European Union's General Data Protection Regulation took effect in 2018, it established formal rules governing the use of personal data. These instruments are the visible architecture of institutional life. Their visibility is both their strength and their limitation — their strength because they can be identified, debated, and changed through political processes; their limitation because they constitute only a fraction of the institutional framework that actually governs behavior.
The second category is informal norms. Customs, traditions, codes of conduct, conventions of behavior — the unwritten rules that societies develop through long processes of cultural evolution and that govern the vast majority of human interaction. Informal norms determine whether a handshake is binding. Whether a promise is kept. Whether a professional cuts corners when no one is watching. Whether a community ostracizes a member who violates shared expectations. North argued throughout his career that informal norms were at least as important as formal rules in determining economic performance. A society could possess the most elegant formal rules in the world and still fail economically if the informal norms governing everyday behavior undermined the formal structure. A formal rule against corruption is meaningless in a society where the informal norm is that public officials are expected to use their positions for private gain. A formal rule protecting property rights is hollow in a society where the informal norm is that the powerful take what they want.
The third category is enforcement mechanisms. Courts, police, regulatory agencies, social sanctions, reputational consequences — the apparatus through which rules, both formal and informal, are made effective. Rules without enforcement are suggestions. North was emphatic on this point. The elegance of the rule is irrelevant if the mechanism for enforcing it is absent, corrupt, or inadequate.
Together, these three categories constitute the institutional framework of a society. And the quality of that framework, North argued, is the primary determinant of economic performance. Not technology. Not resources. Not geography. Institutions. The analytical mechanism connecting institutions to economic outcomes is transaction costs — the costs of defining, protecting, and exchanging property rights. Every exchange between human beings involves costs beyond the direct cost of the good or service being exchanged: the costs of finding a trading partner, negotiating terms, drafting a contract, monitoring performance, enforcing the agreement. These are not incidental to economic life. North, working with John Joseph Wallis, estimated that by 1970 the transaction sector accounted for roughly forty-five percent of the gross national product of the United States — a proportion reflecting the enormous resources devoted to the infrastructure of exchange rather than to production itself.
Good institutions reduce transaction costs. When a merchant knows that contracts will be enforced and property will not be seized, the transaction costs of every exchange fall, more exchange occurs, specialization increases, and productivity rises. Bad institutions increase transaction costs. When behavior is unpredictable, when property rights are insecure, when contracts are unenforceable, the costs of exchange rise to prohibitive levels. Trade contracts. Investment declines. The economy stagnates. The difference between Venice's prosperity and the economic stagnation of societies that possessed identical technologies but inferior institutions was not a difference in the tools available. It was a difference in the rules under which the tools were used.
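North's mechanism — exchange happens only when the gains from trade exceed the cost of arranging and enforcing the trade — is simple enough to sketch as a toy model. Everything below is illustrative: the surplus distribution and the two cost levels are invented for the sketch, not drawn from North's data.

```python
import random

random.seed(7)

# Toy model: each potential trade creates some surplus, but the trade is
# executed only if that surplus covers the transaction cost of search,
# negotiation, and enforcement. Surplus values here are invented.
potential_trades = [random.uniform(0, 100) for _ in range(10_000)]

def realized_welfare(surpluses, transaction_cost):
    """Total surplus actually captured: a trade clears only when its
    surplus exceeds the cost of arranging and enforcing it."""
    return sum(s - transaction_cost for s in surpluses if s > transaction_cost)

good_institutions = realized_welfare(potential_trades, transaction_cost=10)
bad_institutions = realized_welfare(potential_trades, transaction_cost=60)

# Lower transaction costs let more exchanges clear, and each exchange
# retains more of its surplus, so welfare falls steeply as costs rise.
print(f"welfare when institutions keep costs low:  {good_institutions:,.0f}")
print(f"welfare when institutions keep costs high: {bad_institutions:,.0f}")
```

The compounding is the point: a higher cost threshold both blocks marginal trades entirely and taxes the trades that survive, which is why the welfare gap in the sketch is much larger than the cost gap alone would suggest.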
This framework bears directly on the moment described in *The Orange Pill*. The author documents a technological transformation of extraordinary power — the collapse of the imagination-to-artifact ratio, the twenty-fold productivity multiplier, the language interface that eliminated the translation overhead between human intention and machine execution. The documentation is vivid and experientially rich. What it does not adequately address is the institutional dimension: the question of which rules, norms, and enforcement mechanisms will determine whether this technological power produces broadly shared prosperity or concentrated extraction.
The distinction matters because the historical record is unambiguous. The same technology, operating within different institutional frameworks, produces opposite distributional outcomes. The steam engine operated in England and in the Congo. In England, where institutions had evolved over centuries to constrain arbitrary power, protect property rights, and distribute gains through labor markets reinforced by emerging collective bargaining norms, the steam engine eventually — after decades of institutional struggle — produced broadly shared prosperity. In the Congo, where institutions had been designed by colonial powers precisely to extract resources and concentrate gains, the same class of technology produced extractive misery. The technology did not determine the distribution. The institutions did.
The AI transition will follow the same logic. The language interface is an amplifier, as the author correctly observes. But an amplifier amplifies whatever signal it receives, and the signal is institutional. A society with inclusive institutions — institutions that distribute opportunity broadly, protect the rights of the displaced, invest in the human capital that complements the new technology, and maintain the competitive markets that prevent monopolistic capture — will amplify inclusion. A society with extractive institutions — institutions that concentrate power in the hands of those who control the technology, permit the capture of productivity gains by a narrow elite, and fail to invest in the transitions that the displaced require — will amplify extraction.
The author's call for dams is, at its core, a call for institutional construction. The beaver metaphor captures something real about the relationship between human agency and the forces that technology unleashes. But the question North's framework brings to the beaver's enterprise is not whether the dam is well-constructed. It is who decides where the dam goes, whose territory it protects, and whose territory it floods. That is the institutional question, and it is the question upon which everything else depends.
The AI transition has created what institutional economists recognize as an institutional void — a gap between the existing rules of the game and the new reality that the technology has inaugurated. The formal rules were designed for a pre-AI economy. Employment law assumes that productivity is roughly proportional to hours worked and that workers are broadly interchangeable within skill categories. When a single worker with an AI tool produces the output that previously required twenty, these assumptions collapse. Intellectual property law assumes that creation is attributable to identifiable human authors. When a book is written in collaboration with a machine, when code is generated through conversation rather than composition, the attribution assumptions of intellectual property law cease to describe reality. Educational institutions assume that the purpose of training is to develop skills that will retain their market value over the course of a career. When the market value of skills can shift in months rather than decades, this assumption becomes actively harmful — producing graduates credentialed in competencies that the technology has already commoditized.
The informal norms are equally disrupted. What counts as expertise is being renegotiated in real time. The senior engineer whose identity was built on decades of implementation knowledge confronts a world in which that knowledge can be replicated by a tool costing one hundred dollars per month. The informal norms governing professional identity, educational aspiration, and the relationship between effort and reward are all under pressure from a technology that makes competent performance cheap and deep expertise harder to monetize. The discourse that the author describes — the camps, the confusion, the inability to process the change through existing frameworks — is a symptom of the informal norms breaking down. Without shared norms for evaluating the change, the society defaults to tribal signaling.
The enforcement mechanisms face perhaps the most acute challenge. How does one enforce quality standards when the output is generated by a machine whose reasoning process is opaque? How does one enforce educational standards when AI-assisted work is indistinguishable from human-only work? How does one enforce professional licensing requirements when the licensed competencies can be replicated by a tool that requires no license? The enforcement mechanisms were designed for a world in which human behavior was the primary object of enforcement. The AI world requires enforcement of human-machine systems, and the conceptual frameworks, legal precedents, and institutional capacities for this enforcement do not yet exist.
The void is not neutral. In the absence of defined rules, the actors with the most power shape the emerging framework to their advantage. This is not necessarily malicious. It is the natural operation of competitive pressure in a ruleless environment. The technology companies building AI tools are also, inevitably, shaping the informal norms around AI use, the expectations about what AI-assisted work looks like, and the practical standards that will eventually calcify into formal rules. They are doing this not through conspiracy but through the accumulated force of product decisions, pricing structures, terms of service, and the cultural narratives they promote about what their technology means.
The rules of the game are being written. The question is who holds the pen.
---
Ronald Coase asked a question in 1937 that economics had been ignoring for a century and a half: if markets are efficient mechanisms for coordinating economic activity, why do firms exist? His answer was transaction costs. Markets coordinate through the price mechanism, but using the price mechanism is itself costly — the costs of discovering relevant prices, of negotiating and concluding separate contracts for each exchange transaction, of monitoring compliance and enforcing agreements. Firms exist because, under certain conditions, organizing production within a hierarchical structure reduces transaction costs below what the market would impose. The boundary of the firm is the boundary where the cost of organizing one more transaction internally equals the cost of conducting that transaction through the market.
North extended this insight from the theory of the firm to the theory of the economy. If institutions exist because transaction costs exist, and if the quality of institutions determines the magnitude of transaction costs, then the institutional framework is the primary lever of economic performance. Reduce transaction costs through better institutions, and you produce the conditions for productive exchange, specialization, and growth. Fail to reduce them, and you produce the stagnation that characterizes most of human economic history.
The language interface that the author of *The Orange Pill* describes is, in North's analytical terms, a transaction cost revolution. It did not merely reduce the costs of software production. It eliminated an entire category of costs that had structured the software industry since its inception — the communication overhead, the specification friction, the coordination expense that traditional team-based development required.
The transaction cost structure of pre-AI software development was elaborate. A person with an idea needed to translate that idea into a specification a programmer could implement. This translation was a transaction: the cost of converting intuitive knowledge into technical specification. The specification was then interpreted by a programmer — another transaction, involving the cost of communication between two people with different cognitive frameworks. The programmer's interpretation was implemented in code, tested, and the gap between what the code did and what the originator intended became visible only at the testing stage — when the cost of correcting the gap was highest. Each stage involved handoffs, and each handoff was a transaction with associated costs: the cost of scheduling, the cost of context-switching, the cost of information loss in transmission.
Large organizations developed elaborate institutional structures to manage these transaction costs. Project management methodologies. Agile sprints and standups. Code review processes. Integration testing frameworks. These were institutional innovations — formal and informal rules designed to reduce the transaction costs of collaborative software development. They worked, in the sense that they made complex software production possible. They also consumed enormous resources. A significant fraction of the cost of any software project was not the cost of writing code but the cost of coordinating the people who wrote code.
The language interface collapsed these costs. A person with an idea could describe it in natural language and receive a working implementation. The translation transaction — from intuitive knowledge to technical specification — was eliminated. The interpretation transaction — from specification to code — was eliminated. The coordination transactions — scheduling, handoffs, review cycles — were eliminated or radically compressed. What remained was a conversation between a single human and a machine, conducted in the human's native language, producing functional output in real time.
The magnitude of this reduction is economically extraordinary. The author's account of the Trivandrum training, where twenty engineers each achieved the leverage of a full team, is a concrete measurement of transaction cost collapse. The work that previously required elaborate institutional infrastructure — teams, managers, processes, coordination mechanisms — could now be accomplished by individuals working in direct conversation with a machine. The institutional overhead of collaborative production had been stripped away.
But North's framework provides a warning that the celebratory account of transaction cost reduction does not adequately address. Transaction costs do not disappear from an economic system. They shift. The reduction of one category of costs creates or reveals other categories that were previously masked by the dominant friction. The net effect on total transaction costs — and therefore on economic welfare — depends on whether the new costs are lower or higher than the ones they replaced. And that determination depends on the institutional framework governing the transition.
Three categories of new transaction cost deserve specific identification.
The first is quality evaluation cost. When a human programmer wrote code, the process of writing was itself a form of quality assurance. The programmer understood the code because the programmer had created it. Bugs could be identified because the programmer knew where decisions had been made and where those decisions might have been wrong. When the machine writes the code, the human's relationship to the output changes fundamentally. The human described what the code should do. The machine produced an implementation. The quality of that implementation is no longer transparent to the human, because the human did not experience the process of creating it. The author acknowledged this risk — the senior engineer who found herself making architectural decisions with diminishing confidence because she had lost the incidental learning of manual implementation. This is a transaction cost: the cost of evaluating output that one did not produce and does not fully understand. It is a new cost, created by the same technology that eliminated the old costs, and its magnitude depends on institutional factors — the quality of testing frameworks, the norms around code review, the organizational structures that maintain human understanding of AI-generated systems.
The second is human capital maintenance cost. The language interface reduced the cost of producing software, but it did not reduce the cost of producing the human judgment required to direct the machine effectively. The twenty percent of the engineer's work that remained — the judgment, the architectural instinct, the taste — is not a natural byproduct of using the tool. It is the product of years of experience, much of it gained through the very friction that the tool eliminated. The traditional pathway for developing these capabilities — years of hands-on implementation work that deposited layers of understanding through struggle — has been disrupted. The institutional question is how to replace this pathway. What training programs, what mentorship structures, what educational reforms will produce the human capital that the AI-augmented economy requires? The transaction cost of maintaining this human capital is institutional: it depends on educational systems, professional development norms, and organizational investment in the capabilities that complement rather than compete with AI.
The third is social adjustment cost. The twenty-fold productivity multiplier has distributional consequences that extend beyond the room where the productivity was measured. If twenty engineers can do the work of four hundred, the institutional question is what happens to the other three hundred and eighty. The labor market adjustment, the retraining, the social safety net required to support displaced workers, the political management of communities that lose their economic base — these are transaction costs of the AI transition. They do not appear on any company's balance sheet. They do not figure in the calculation of the productivity multiplier. But they are real costs, borne by real people, and their magnitude depends on the institutional framework for managing displacement — unemployment insurance, retraining programs, portability of benefits, the informal norms that determine whether displaced workers are treated as failures or as people caught in a structural transition beyond their control.
The net effect of the language interface on economic welfare depends on the relative magnitude of these new costs compared to the old ones. In a society with strong institutions for quality evaluation — robust testing frameworks, professional norms around understanding the systems one deploys, organizational structures that maintain human oversight of AI-generated output — the quality evaluation costs will be manageable. In a society with weak institutions for quality evaluation, the costs could be severe: systems deployed without adequate understanding, failures that propagate because no human in the loop understood the code well enough to anticipate them.
In a society with strong institutions for human capital development — educational systems that adapt quickly to changing skill requirements, professional development norms that invest in judgment and creative capacity, organizational cultures that protect mentoring time — the human capital maintenance costs will be manageable. In a society with path-dependent educational institutions that continue to credential the old competencies while the market demands new ones, the costs will compound over time as the gap between institutional output and economic need widens.
In a society with strong institutions for social adjustment — portable benefits, effective retraining programs, safety nets that preserve dignity — the social adjustment costs will be absorbed as part of the normal functioning of a dynamic economy. In a society whose safety net was designed for an era of gradual occupational change rather than rapid technological displacement, the costs will accumulate as social strain, political instability, and the erosion of the social contract that sustains productive exchange.
The technology creates the possibility. The institutions determine the reality. The language interface has collapsed an old category of transaction costs with breathtaking speed. Whether the new categories it has created will be managed effectively is not a technological question. It is an institutional one.
---
In 1985, the economist Paul David published a short paper on the QWERTY keyboard. The paper's argument was narrow — it demonstrated that the layout of keys on the standard typewriter had been determined by an early engineering constraint that no longer applied, and that the layout persisted not because it was optimal but because the switching costs of adopting a superior layout exceeded the benefits for any individual typist. The paper's implication was vast. It demonstrated that the outcomes of historical processes could be inefficient, locked in by the accumulated weight of past decisions, and resistant to change even when superior alternatives were available.
North extended the analysis from technology to institutions with a rigor that transformed the field. His central proposition was that institutional frameworks, once established, develop self-reinforcing properties that make them extraordinarily persistent. The mechanism was increasing returns: once an institutional arrangement was in place, the organizations and individuals who operated within it invested in skills, relationships, and strategies specific to that arrangement. These investments created constituencies with a vested interest in persistence, because changing the arrangement would devalue their investments. The more time that passed, the more investment accumulated, the stronger the constituencies became, and the more resistant the institutional framework grew to modification.
This was not a conspiracy theory. It was a structural observation about how institutional systems function. The QWERTY keyboard persisted not because anyone wanted to keep it but because the cost of coordinated switching exceeded the benefit. Path dependence in institutions operated through the same logic at vastly greater scale and with far greater consequences.
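The lock-in logic is mechanical enough to simulate. Here is a minimal coordination sketch with invented payoffs: each agent adopts the superior arrangement only if doing so pays off given how many others have already adopted it, which is exactly the condition that keeps a population stuck on the inferior one.

```python
# Toy coordination model of QWERTY-style lock-in. All payoffs invented.
# Each agent gains a per-period benefit from the superior arrangement but
# pays a compatibility cost proportional to the share of others still on
# the old one. Agents respond to current adoption, not to the collective
# optimum -- which is the path-dependence mechanism in miniature.

SUPERIOR_BENEFIT = 5.0      # per-period gain of the better arrangement
COMPATIBILITY_COST = 20.0   # cost of being out of step with everyone else

def step(adoption_share):
    """Next period's adoption share: the holdouts switch only when the
    individual payoff of switching is positive at the current share."""
    payoff = SUPERIOR_BENEFIT - COMPATIBILITY_COST * (1 - adoption_share)
    return 1.0 if payoff > 0 else adoption_share

def run(initial_share, periods=50):
    share = initial_share
    for _ in range(periods):
        share = step(share)
    return share

# From near-zero adoption, the superior arrangement never spreads: the
# first movers would bear the entire compatibility cost themselves.
locked_in = run(initial_share=0.0)

# Past the tipping point (the payoff flips sign at 75% adoption under
# these invented numbers), the same dynamics complete the switch.
coordinated = run(initial_share=0.8)

print(f"from  0% initial adoption: {locked_in:.0%}")
print(f"from 80% initial adoption: {coordinated:.0%}")
```

The model is deliberately crude, but it captures North's structural observation: the inferior equilibrium persists without anyone preferring it, because no individual's switching calculation ever clears until a coordinated mass has already moved.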
The AI transition interacts with institutional path dependence in ways that are both predictable from North's framework and deeply troubling in their specific manifestations. The existing institutional framework governing knowledge work — employment law, educational systems, professional licensing, intellectual property — was designed for a world that the technology has fundamentally altered. Each element of this framework is path-dependent, reinforced by decades of investment in skills, organizations, and political coalitions adapted to the pre-AI environment. The resistance to institutional change is not the result of stupidity or malice. It is the structural consequence of rational actors protecting investments that the existing framework made rational to undertake.
Employment law provides the clearest illustration. The modern framework of employment regulation was constructed over more than a century in response to the conditions of industrial and post-industrial labor. Minimum wage laws assumed that workers were paid for time. Overtime regulations assumed that productivity tracked with hours worked. Anti-discrimination statutes assumed that workers within broad categories were roughly interchangeable in their productive capacity. Workers' compensation systems assumed that injury was the primary occupational risk. The entire framework was predicated on a model in which labor was exchanged for wages in a relationship where time served as a reasonable proxy for productive contribution.
The twenty-fold productivity multiplier described in *The Orange Pill* obliterates this model. When one worker with an AI tool produces the output that previously required twenty, the relationship between time and productivity dissolves. The eight-hour day, the cornerstone of industrial labor regulation, becomes an artifact of a production function that no longer applies. An engineer who produces in two hours with AI what used to require a week has not worked an eight-hour day in any meaningful sense, regardless of how the timesheet reads. The compensation structures built on time-based productivity — hourly wages, salaried positions calibrated to expected output per week, overtime premiums — cease to describe the actual economics of AI-augmented labor.
Yet employment law remains largely unchanged. The formal rules persist because the switching costs of redesigning them are enormous and because the organizations adapted to the existing framework — employment law firms, human resources departments, labor unions organized around occupational categories, government agencies staffed with experts in pre-AI labor markets — resist changes that would devalue their institutional investments. A law firm that has spent decades building expertise in overtime regulation does not welcome a world in which overtime is a meaningless concept. A human resources department structured around job descriptions and competency frameworks does not welcome a world in which job descriptions change monthly and competency frameworks are obsolete before they are published.
Educational systems exhibit the same path dependence with arguably greater consequences. The modern educational system was designed to produce workers for an industrial economy — workers capable of following instructions, performing standardized cognitive operations, and functioning within hierarchical organizational structures. The system has been modified incrementally over a century and a half, but the fundamental architecture remains: age-graded classrooms, standardized curricula, sequential specialization from general to specific, credential-based certification that signals competency to employers.
The AI transition makes this architecture not merely suboptimal but counterproductive. An educational system designed to develop the capacity for standardized cognitive operations produces workers whose skills are precisely the skills that AI replicates most easily. The capabilities that the AI economy rewards — judgment, creative direction, the capacity to ask questions that reframe problems, the ability to integrate across domains — are precisely the capabilities that the existing educational system is least equipped to develop, because the system was designed to develop different capabilities.
The path dependence is formidable. The curriculum committees, testing regimes, teacher training programs, accreditation systems, and ranking methodologies are all invested in the existing structure. Tens of millions of students are currently enrolled in programs designed to produce competencies that the market will not reward by the time the credentials are conferred. Hundreds of thousands of educators have built careers around pedagogical approaches calibrated to the transmission of standardized knowledge rather than the development of judgment. A proposal to redesign the system around judgment, integration, and creative questioning encounters resistance not only from the organizations adapted to the existing system but from the parents and students who have invested in the existing credential structure — investments that would be devalued by the change.
Professional licensing represents perhaps the sharpest case. The legal profession requires three years of law school and passage of a bar examination testing knowledge of legal doctrine. The medical profession requires four years of medical school, years of residency, and licensing examinations testing diagnostic and procedural knowledge. The accounting profession requires examinations testing knowledge of accounting standards. In each case, the licensing structure was designed to ensure that practitioners possessed the specific knowledge and skills necessary to perform competently.
The AI transition disrupts the relationship between licensing and competence. When an AI tool can draft a competent legal brief, cite the correct cases, and organize the analysis in the expected structure, three years of law school devoted to developing the ability to perform these functions becomes less relevant to actual professional competence. The competence that matters in an AI-augmented legal practice is not the ability to draft the brief but the ability to evaluate the machine's draft — to identify the cases it missed, to recognize the arguments it assembled incorrectly, to exercise the judgment that determines whether the brief serves the client's actual interests. These are capabilities that require deep legal knowledge, but they are different capabilities from the ones the licensing examination tests.
The licensing structure persists because it is reinforced by the institutions built around it. Law schools depend on the three-year JD requirement for their economic model. Bar associations depend on licensing examinations for their gatekeeping function. Practicing attorneys depend on the licensing barrier for the restriction of competition that supports their fees. Each of these constituencies has a rational interest in the persistence of the existing structure, even as the structure becomes less and less aligned with the competencies the profession actually requires.
The technology trap, as North's framework reveals it, is not a trap set by the technology. It is a trap set by the institutional legacy of a previous technological era. The institutions were rational when they were designed. They served real purposes. They reduced real transaction costs. But they were designed for conditions that no longer exist, and their persistence through path dependence creates a growing misalignment between the institutional framework and the economic reality it is supposed to govern.
In *The Orange Pill*, I recognized the urgency of institutional reform. The call for educational institutions to teach questioning over answering, integration over specialization, judgment over execution is precisely the kind of institutional redesign that the AI transition demands. But that call underestimated the difficulty of the change, because it did not adequately account for the structural forces that hold the existing institutions in place. Path dependence is not overcome by good arguments. It is overcome by changing the incentive structures that make the existing path self-reinforcing — a task that requires not just vision but institutional entrepreneurship, political coalition-building, and the patient, often unglamorous work of redesigning systems while the systems are running.
The rules of the game are resistant to change. But they are not immutable. North's later work identified the conditions under which institutional change occurs: shifts in relative prices or relative bargaining power that alter the incentives of the actors within the existing framework, creating opportunities for institutional entrepreneurs to propose and implement new arrangements. The AI transition is producing exactly such a shift. The question is whether the institutional entrepreneurs — the policymakers, educators, professional leaders, and citizens who see the misalignment — will act with sufficient speed and sufficient skill to redirect the path before the existing institutions lock in arrangements that serve the legacy rather than the future.
---
North's distinction between formal rules and informal norms was not a taxonomic convenience. It was an analytical proposition about how societies actually function, and its implications for the AI transition are both precise and insufficiently appreciated.
Formal rules can be changed overnight. A legislature can pass a statute. A regulator can issue a rule. A court can hand down a decision. The formal institutional framework can be modified through deliberate action, and the modification takes effect at the moment it is enacted. This is the strength of formal rules: they are responsive to deliberate design.
Informal norms cannot be changed overnight. They cannot be enacted by legislation or imposed by regulation. They are the product of long processes of cultural evolution — generational shifts in values, practices, and expectations that unfold over decades rather than legislative sessions. A society's informal norms about what constitutes honest dealing, about what professionals owe their clients, about how elders are treated, about whether a person's word is reliable — these are not the products of deliberate design. They are the accumulated residue of millions of interactions, transmitted through families and communities and professional cultures, enforced not by courts but by the far more pervasive mechanism of social approval and social sanction.
The relationship between formal rules and informal norms is one of the most consequential dynamics in institutional economics, and North was precise about its structure. Formal rules operate within a matrix of informal norms. The effectiveness of any formal rule depends on whether the informal norms of the society support compliance with that rule. A formal prohibition on bribery is effective in a society where the informal norm is that public officials serve the public interest. The same prohibition is a dead letter in a society where the informal norm is that public officials are expected to supplement their income through their position. The formal rule is identical. The institutional outcome is opposite. The difference is the informal norm.
The converse is also true: informal norms that support productive behavior can function effectively even in the absence of formal rules. North pointed to numerous historical cases in which trade flourished not because of formal legal frameworks but because of informal networks of trust, reputation, and reciprocity. The Maghribi traders of the eleventh-century Mediterranean — a case studied in detail by the economist Avner Greif — maintained a trading network that spanned thousands of miles without the benefit of a formal legal system capable of enforcing contracts across jurisdictions. They relied instead on an informal institution: a coalition in which members shared information about the conduct of their trading agents, and in which the sanction for cheating was exclusion from the coalition — a penalty that, given the coalition's commercial dominance, was economically devastating.
The AI transition is disrupting both formal rules and informal norms simultaneously, but it is disrupting them at different speeds. This asymmetry is one of the most dangerous features of the current moment.
Formal rules are being addressed, however inadequately. The European Union's AI Act, the American executive orders, the emerging regulatory frameworks in Singapore, Japan, and Brazil — these are formal institutional responses to the AI transition. They are insufficient, as I argued in *The Orange Pill*. They are also real. They represent deliberate attempts to construct formal rules for a new technological environment. The attempts are constrained by path dependence — the new regulations bear the imprint of the institutional frameworks from which they evolved — but they are occurring. Legislatures are legislating. Regulators are regulating. Courts are beginning to adjudicate disputes that arise from AI-generated output.
Informal norms are being disrupted without any comparable deliberate response. And the disruption is occurring across every dimension of professional and personal life.
Consider the informal norms governing expertise. For centuries, expertise has been understood as the product of sustained engagement with a domain — years of study, practice, and incremental mastery that produced a depth of knowledge unavailable to the casual observer. The informal norm was that expertise commanded respect, that the expert's judgment was entitled to deference, and that the credential certifying expertise was a reliable signal of the underlying competence. This norm was not merely a social convention. It was a functional institution that reduced transaction costs: when a client trusted a lawyer's judgment because of the bar credential, the transaction cost of evaluating the lawyer's competence on a case-by-case basis was eliminated.
The AI transition has destabilized this norm. When a tool costing one hundred dollars per month can produce output that is competitive with the output of a credentialed professional, the informal relationship between credential, expertise, and trust breaks down. The credential no longer reliably signals the competency gap between the credentialed and the uncredentialed, because the tool has narrowed the gap. The respect that expertise commanded was partly a respect for scarcity — the recognition that the expert possessed something rare and hard-won. When the scarcity diminishes, the respect diminishes with it, regardless of whether the underlying depth of knowledge retains its value.
This is not an argument that expertise has become worthless. North's framework is more precise than that. The argument is that the informal norms governing the social recognition and economic valuation of expertise are changing, and the change is occurring faster than the norms can adapt. The senior engineer whose architectural judgment is more valuable than ever — because the tool handles implementation but cannot evaluate whether the architecture serves the purpose — nevertheless experiences a decline in professional standing because the informal norms of her community have not yet adjusted to the new value hierarchy. The market has not yet learned to price judgment separately from implementation. The informal norms that would support this pricing — norms about what constitutes valuable work, what deserves compensation, what earns professional respect — are still forming.
Consider the informal norms governing authorship and attribution. I confronted this directly: *The Orange Pill* was written in collaboration with Claude, an artificial intelligence. The transparency about that collaboration was deliberate and, in the context of the book's argument, essential. But the informal norms governing authorship — who deserves credit for a piece of writing, what constitutes original work, whether AI-assisted output should be disclosed — are in flux. There is no formal rule requiring disclosure of AI assistance in book writing. The informal norms are unsettled. Some communities treat AI assistance as analogous to research assistance — a legitimate support that does not compromise authorship. Others treat it as analogous to ghostwriting — a collaboration that should be acknowledged but does not disqualify authorship. Still others treat it as a form of fraud — a substitution of machine output for human thought that misrepresents the nature of the work.
The unsettled state of these norms creates transaction costs that formal rules alone cannot address. A publisher evaluating a manuscript does not know whether the prose was generated by the author, by an AI, or by some collaboration between them. The cost of determining the answer — and the cost of deciding whether the answer matters — is a new transaction cost that did not exist before AI writing tools became capable of producing publication-quality prose. The informal norms that will eventually govern this domain — norms about disclosure, about the relationship between AI assistance and authorship, about what counts as original work in an age of machine-assisted creation — will determine the magnitude of this transaction cost. Clear norms will reduce it. Unsettled norms will sustain it.
Consider the informal norms governing the relationship between effort and reward. One of the deepest informal norms in most professional cultures is the assumption that difficult work is more valuable than easy work — that the effort required to produce an output is a signal of its worth. This norm has deep evolutionary roots and strong cultural reinforcement. The craftsman who spends years mastering a skill is respected precisely because the mastery was difficult. The difficulty is the signal.
The AI transition disrupts this norm with particular force. When a tool makes previously difficult output easy to produce, the informal equation between difficulty and value breaks down. In *The Orange Pill*, I documented the emotional resonance of this disruption: the engineer who felt both relief and grief when the tedious parts of his work were automated, because the tedious parts had been, in ways he was only then recognizing, a source of identity and a signal of value. The philosopher Byung-Chul Han articulated the cultural dimension of the same disruption: the concern that smoothness — the elimination of friction, resistance, and difficulty — produces not liberation but hollowness.
North's framework reframes this concern in institutional terms. The norm equating effort with value was not merely a cultural preference. It was a functional institution that reduced transaction costs. When effort reliably signaled quality, the transaction cost of evaluating quality directly was reduced — you could trust that the thing produced with great effort was more likely to be good than the thing produced easily. When AI disrupts the correlation between effort and quality — when the easy output is as good as the difficult output, or nearly so — the transaction cost of quality evaluation rises, because the effort signal is no longer reliable. The society must develop new signals, new norms for evaluating quality that do not depend on the assumption that quality requires struggle. This development takes time — generational time, in many cases — and the gap between the disruption of the old norm and the establishment of the new one is an institutional void with real economic and social costs.
The asymmetry between formal and informal institutional change is the deepest structural challenge of the AI transition. The technology is changing at a pace measured in months. The formal institutional response is measured in years. The informal institutional response is measured in decades. The gap between the speed of technological change and the speed of informal institutional adaptation is widening, and within that gap the costs accumulate: the confusion about what expertise means, the uncertainty about what constitutes authorship, the erosion of the effort-value signal that organized professional life for centuries.
*The Orange Pill* is, in North's analytical terms, an exercise in informal institutional construction. The book seeks to change how people think about AI, about expertise, about depth and speed. It cannot change formal rules. What it can do — and what the most valuable intellectual work in any period of institutional disruption always does — is contribute to the formation of the informal norms that will eventually shape how formal rules are designed, interpreted, and enforced. The quality of the informal norms that emerge from this period of disruption will determine the quality of the formal institutions that follow. And the quality of the informal norms depends on the quality of the thinking that shapes them — thinking that is honest about costs, rigorous about trade-offs, and attentive to the distributional consequences that market dynamics alone will not address.
Formal rules are the skeleton of institutional life. Informal norms are the musculature. A skeleton without muscle cannot move. And in the current moment, the skeleton is being hastily assembled while the musculature has barely begun to form. The institutional body that will govern the AI era is taking shape with bones but without the connective tissue that makes coordinated movement possible. The work of building that tissue — the norms, the expectations, the shared understandings about what this technology means and how it should be used — is the work that will matter most in the years immediately ahead. It is also the work that is most difficult to see, most difficult to measure, and most easily neglected in a culture that rewards the visible architecture of formal rules while ignoring the invisible infrastructure of informal norms upon which all formal rules ultimately depend.
The most dangerous condition in any society is not bad rules. It is no rules. A society with bad rules can identify the rules, criticize them, organize to change them. A society with no rules — or, more precisely, a society in which the existing rules have ceased to describe the reality they were designed to govern — faces a different and more insidious problem. The actors within the system do not know what the rules are. They cannot comply with rules that do not exist. They cannot violate rules that have not been articulated. They operate in a space where behavior is neither sanctioned nor prohibited, where the boundaries of acceptable conduct are undefined, and where the powerful exploit the absence of constraint not through deliberate transgression but through the simple fact that there is nothing to transgress against.
North's framework identifies this condition with analytical precision. The institutional void is the gap between the existing rules of the game and the reality that the rules were designed to govern. It arises when the environment changes faster than the institutional framework can adapt — when the formal rules, the informal norms, and the enforcement mechanisms that structure human interaction are calibrated for conditions that no longer obtain.
The AI transition has produced an institutional void of a breadth and depth that has no precise historical analogue. Previous technological transitions — mechanization, electrification, computerization — disrupted specific sectors and specific institutional domains. The power loom disrupted textile production and the labor institutions that governed it. Electrification disrupted manufacturing and the workplace safety institutions that had developed around steam and water power. Computerization disrupted information processing and the clerical employment institutions that had organized around paper-based workflows. In each case, the institutional void was sectoral — confined to the domains directly affected by the technology — and the institutional response could be developed within the existing framework of adjacent institutions that remained functional.
The AI transition is different. It disrupts not a single sector but the entire category of knowledge work. Employment law, educational systems, professional licensing, intellectual property, quality assurance, social welfare, democratic governance — each of these institutional domains is simultaneously inadequate to the reality the technology has created. The void is not sectoral. It is systemic. And a systemic void produces qualitatively different dynamics than a sectoral one, because the adjacent institutions that might have provided a framework for response are themselves in flux.
The practical consequences of the void are visible in every domain I described in *The Orange Pill*. Consider the employment relationship. An employer in early 2026 faces a set of decisions for which the existing institutional framework provides no guidance. The twenty-fold productivity multiplier means that a team of five can produce the output previously requiring a hundred. The employer must decide: reduce headcount and capture the productivity gains as profit? Maintain headcount and expand the scope of what the team produces? Some combination? The formal rules — employment contracts, labor regulations, fiduciary obligations to shareholders — pull in different directions. The informal norms — expectations about loyalty, about the social contract between employer and employee, about what constitutes responsible corporate behavior — are unsettled. The enforcement mechanisms — what happens if an employer lays off ninety-five workers to capture a productivity gain? — are untested in the specific context of AI-driven displacement.
I have described this dilemma from the inside: the boardroom conversation about headcount reduction, the arithmetic that was "clean and seductive," the choice to keep the team and grow it rather than convert the productivity gain into margin. This was a decision made in the institutional void — a choice that the existing rules did not compel, the existing norms did not clearly prescribe, and the existing enforcement mechanisms did not constrain. I believe it was a good decision, arguably in broader social terms as well as my own. But it was a decision that depended on the values and circumstances of a particular leader in a particular organization. The next employer, facing identical arithmetic, may make the opposite choice. The institutional void does not prevent good decisions. It fails to make good decisions systematic.
This is the critical distinction that separates institutional analysis from moral exhortation. In *The Orange Pill*, I called for ethical leadership, for builders who choose stewardship over extraction. North's framework does not dismiss this call. It contextualizes it. Individual ethics matter. But individual ethics operating in an institutional void produce inconsistent outcomes — good decisions here, bad decisions there, with the distribution determined by the character of the decision-maker rather than by the structure of the system. Institutions exist precisely to make good decisions systematic: to create the framework within which even self-interested actors produce socially beneficial outcomes, because the rules align private incentives with public welfare.
The void is not static. It is being filled. The question is by whom and in whose interest. North's analysis of institutional change emphasizes that in any period of institutional uncertainty, the actors with the most resources, the most information, and the most organizational capacity shape the emerging framework to their advantage. This is not necessarily the result of malicious intent. It is the structural consequence of the fact that institutional design requires resources — the resources to identify what rules are needed, to draft those rules, to build the coalitions necessary to enact them, and to construct the enforcement mechanisms that make them effective. The actors who possess these resources are, in the context of the AI transition, the technology companies that build and deploy AI tools.
The technology companies are not merely participants in the institutional void. They are, through their product decisions, their terms of service, their pricing structures, and the cultural narratives they promote, actively constructing the informal institutional framework within which AI is used. When Anthropic designs Claude's interaction patterns, it is not only making a product decision. It is establishing norms about how humans and AI systems should interact — norms about transparency, about the appropriate level of deference the machine should show to the human, about the boundaries of the machine's role. When a company prices its AI tool at one hundred dollars per month, it is not only setting a price. It is establishing an accessibility norm — a determination about who can participate in the AI economy and who is excluded. When a technology company publishes research about AI safety, it is not only contributing to scientific knowledge. It is shaping the informal norms around responsible AI development — norms that may eventually calcify into formal regulatory requirements.
These are institutional acts, performed by organizations whose primary accountability is to their shareholders rather than to the broader public. The norms being established through product design and corporate strategy may well serve the public interest — Anthropic's explicit commitment to responsible AI development suggests genuine institutional concern. But the structural reality is that the informal institutional framework of the AI era is being constructed primarily by the actors who benefit most from the technology, because those actors possess the resources and the organizational capacity that institutional construction requires.
The historical pattern is not encouraging. North documented numerous cases in which institutional voids were filled by the powerful to the detriment of the broad population. The enclosure movement in England, which converted common land to private property, was an institutional change driven by landowners who possessed the political resources to reshape the rules of land tenure in their favor. The result was economically efficient in aggregate — enclosed land was more productive than common land — but the distributional consequences were devastating for the commoners who lost their traditional access. The institutional void created by the dissolution of common rights was filled by the actors with the most resources, and the framework they constructed served their interests.
The AI transition presents an analogous risk. The institutional void created by the technology's disruption of existing rules is being filled by the actors with the most resources — the technology companies, the early adopters, the investors who are positioned to capture the productivity gains. The framework they are constructing may be efficient in aggregate. The distributional consequences are a separate question, and that question is not being asked with sufficient urgency by the people who have the most to lose.
My insistence that the displaced must stay in the room — the lesson drawn from the Luddites, who removed themselves from the conversation about how the transition would unfold — is, in institutional terms, a call for inclusive institutional design. The quality of any institutional framework depends on the breadth of participation in its design. When the framework is designed only by the powerful, it serves the powerful. When it is designed with the participation of the affected — the displaced workers, the anxious parents, the educators struggling to adapt, the professionals whose expertise is being commoditized — it has a better chance of serving the broad population.
But participation requires capacity, and capacity requires institutions. The displaced worker who has lost income and is struggling to retrain does not have the time, the energy, or the organizational resources to participate in institutional design. The parent who lies awake wondering what the world will look like for her children does not have access to the policy forums where AI governance is discussed. The teacher whose students are using AI to produce essays she cannot evaluate does not have a seat at the table where educational standards are being reconsidered. The institutional void is self-reinforcing: the absence of institutions that would empower broad participation means that the institutional framework is designed by the narrow few who do not need institutional support to participate.
Breaking this cycle requires what North called institutional entrepreneurship — the deliberate creation of new institutional arrangements by actors who see the misalignment between existing rules and current reality and who possess the vision, the authority, and the political skill to construct alternatives. Institutional entrepreneurs are not merely reformers. They are architects of the rules of the game — people who understand that the rules matter more than any individual play, and who invest their energy in redesigning the rules rather than in winning under the existing ones.
The institutional void of the AI transition is an invitation to institutional entrepreneurship on a scale that matches the technology's transformative power. The entrepreneurs may come from government, from industry, from civil society, from academia, or from some combination of these domains. But they will share a common understanding: that the rules are being written, that the window for influencing them is finite, that path dependence will lock in whatever arrangements emerge from this period, and that the quality of the arrangements depends on the quality of the participation in their design.
The void will be filled. The question is whether it will be filled by design or by default — by the deliberate construction of institutions that serve the broad population, or by the gradual accretion of norms and practices that serve the powerful. The historical record suggests that default produces extraction. Design produces inclusion. And the difference between the two is the difference between a society that the AI transition enriches and a society that the AI transition divides.
---
Property rights are the bedrock of institutional economics. North demonstrated across decades of historical analysis that the definition and enforcement of property rights — who owns what, what ownership entails, how disputes about ownership are resolved — is the single most important function that institutions perform. When property rights are clear, secure, and efficiently transferable, the transaction costs of economic exchange are minimized, investment is encouraged, and the conditions for sustained economic growth are established. When property rights are ambiguous, insecure, or costly to transfer, exchange is inhibited, investment is discouraged, and economic stagnation follows.
The proposition seems straightforward when applied to physical property. A farmer who owns his land has an incentive to invest in irrigation, because the returns on the investment accrue to the farmer. A farmer who does not own his land — who occupies it at the sufferance of a landlord who may evict him at any time — has no such incentive. The security of the property right determines the investment, and the investment determines the productivity. This logic, demonstrated by North across centuries of European and American economic history, was central to his explanation of the divergent development paths of nations with strong versus weak property rights institutions.
The application to intellectual property is more complex, because intellectual property is fundamentally different from physical property. A physical object is rival — my use of it excludes your use of it. An idea is nonrival — my use of it does not diminish yours. A physical object is excludable — I can fence my land and prevent you from entering. An idea is imperfectly excludable — once it exists, preventing others from using it requires elaborate legal and technological mechanisms. Intellectual property law exists to create artificial scarcity in a domain where natural scarcity does not obtain — to give creators sufficient incentive to produce by granting them temporary exclusive rights to the products of their intellectual effort.
The AI transition destabilizes intellectual property rights at every level of the framework. The destabilization begins with the most fundamental question: who is the creator?
When a human writes a novel, the property right is clear. The author created the work. The work is the author's property, subject to the terms of any publishing contract. When a human writes a novel with the assistance of a human editor, the property right is still clear — the editor's contribution, however significant, does not rise to the level of co-authorship under established legal norms and contractual conventions. When a human writes a novel with the assistance of an AI that generates prose, suggests structural revisions, identifies connections the author missed, and produces passages that the author incorporates into the final text, the property right is no longer clear. The author of *The Orange Pill* confronted this question directly and answered it with transparency: the ideas were his, the collaboration was genuine, the authorship was a new form of creation that neither he nor the machine could have produced alone. The answer is honest. It is also not a legal framework. It is one individual's resolution of a question that the existing intellectual property regime has not addressed.
The copyright system was designed for identifiable human authors producing original works through their own creative effort. Every element of this design is challenged by AI-assisted creation. The requirement of human authorship excludes works generated entirely by AI — the U.S. Copyright Office has been explicit on this point. But the requirement provides no clear guidance for the vast middle ground of human-AI collaboration, where the human contribution ranges from minimal (a brief prompt that generates a complete work) to substantial (the author's judgment, taste, and editorial control shaping the AI's output into something that could not have existed without both contributors).
The ambiguity creates transaction costs. A publisher acquiring a manuscript does not know with certainty whether the copyright is valid, because the validity depends on the degree of human creative contribution, and that degree is difficult to verify and may be contested. An investor funding a startup built on AI-generated code does not know with certainty whether the code is protectable intellectual property. A musician sampling AI-generated compositions does not know whether the compositions are in the public domain, under copyright, or in some legally undefined category. Each uncertainty is a transaction cost — a cost that inhibits the exchange that would otherwise occur if the property rights were clear.
The problem extends beyond copyright to the training data that makes AI systems possible. Large language models are trained on vast corpora of text, much of it produced by human authors who did not consent to its use as training data and who receive no compensation for it. The property rights question is whether the use of copyrighted material to train an AI model constitutes infringement or fair use — a question that courts in multiple jurisdictions are currently adjudicating with no clear resolution in sight. The uncertainty is itself a transaction cost, inhibiting both the development of AI systems (which face potential liability for training on copyrighted material) and the compensation of creators (who have no established mechanism for capturing value from the use of their work in training).
North's framework suggests that the resolution of these property rights questions will be among the most consequential institutional decisions of the AI era. The resolution will determine who captures the value created by AI systems. If the property rights framework grants broad rights to the creators of training data, the value will flow partly to the millions of writers, artists, and creators whose work made the systems possible. If the framework grants broad fair use protection to AI developers, the value will be concentrated in the companies that build and deploy the systems. If the framework creates new categories of property — a right to compensation for training data use, analogous to the mechanical license in music — the value will be distributed through a new institutional mechanism that does not yet exist.
The stakes are not merely economic. Property rights in knowledge define who participates in the knowledge economy. The framework knitter whose skill was his property watched that property become worthless when the power loom replicated it mechanically. The software developer whose expertise is her property faces an analogous threat when the AI tool replicates her capability at marginal cost. The property right in expertise — the informal but economically real right to command a premium for knowledge that is scarce and difficult to acquire — is being eroded by a technology that makes knowledge production cheap and knowledge itself abundant.
North's analysis of property rights in developing economies is instructive here. He documented how ambiguous property rights in land — where formal ownership was unclear, customary rights conflicted with statutory law, and enforcement was unreliable — produced precisely the conditions that inhibited economic development. Farmers would not invest in land they might lose. Entrepreneurs would not build businesses on property whose legal status was uncertain. The ambiguity itself was the constraint: not the absence of resources or the absence of capability, but the absence of the institutional clarity that would have made productive use of resources and capability possible.
The AI transition is producing analogous ambiguity in the domain of intellectual property. The property rights in AI-generated output are unclear. The property rights in training data are contested. The property rights in expertise — the most economically significant form of intellectual property for most knowledge workers — are being eroded by a technology that the existing framework was not designed to address. The ambiguity inhibits investment, creates uncertainty, and produces the transaction costs that North's framework identifies as the primary obstacle to productive exchange.
Resolving the ambiguity requires institutional innovation — new categories of property rights, new mechanisms for compensation, new frameworks for attribution that reflect the reality of human-AI collaboration. The innovation will not emerge from the technology itself. Technologies do not create property rights. Institutions do. And the quality of the property rights institutions that emerge from this period of disruption will determine, to a significant extent, who benefits from the AI transition and who bears its costs.
---
In 1989, North published a paper with Barry Weingast that transformed the study of institutional economics. The paper examined the Glorious Revolution of 1688 — the constitutional settlement in which the English Parliament established its supremacy over the Crown — and demonstrated that the institutional change produced measurable economic consequences. By constraining the Crown's ability to arbitrarily alter property rights, repudiate debts, or change the rules of the economic game without Parliamentary consent, the Glorious Revolution made the English state's commitments credible. Credible commitments reduced the risk premium that lenders charged the government, reduced the transaction costs of economic exchange, and produced the stable institutional environment within which the economic growth of the eighteenth century became possible.
The paper's insight was that institutional arrangements that constrain the arbitrary exercise of power produce better economic outcomes than arrangements that concentrate power, even when the concentrated power is exercised by benevolent actors. The reason is credibility. A benevolent ruler who can change the rules at will cannot credibly commit to maintaining any particular set of rules, because the commitment is only as durable as the ruler's benevolence. A constrained ruler who cannot change the rules without institutional approval can credibly commit, because the commitment is enforced by the institutional structure rather than by the ruler's character. The credibility of the commitment — not the benevolence of the ruler — determines the incentive to invest.
Daron Acemoglu and James Robinson extended this analysis into a comprehensive framework for understanding economic development, distinguishing between extractive institutions, which concentrate power and economic opportunity in the hands of a narrow elite, and inclusive institutions, which distribute power and opportunity broadly. Their analysis, explicitly built on North's foundations, demonstrated that the same technologies, the same resources, and the same geographic endowments produce dramatically different economic outcomes depending on whether the institutional framework is extractive or inclusive. The divergence between North and South Korea, between Botswana and Zimbabwe, between the United States and Mexico was explained not by differences in technology, culture, or geography, but by differences in institutional quality.
The AI transition is a test case for this framework, and the test is being administered in real time. The technology is identical across institutional contexts. The same large language models are available to firms in the European Union, the United States, China, India, and Nigeria. The same productivity multipliers are technically achievable in Stockholm and in São Paulo. The same language interface collapses the same transaction costs regardless of the jurisdiction in which it is deployed. If the extractive-inclusive framework is correct, the distributional outcomes of the AI transition should diverge systematically across institutional contexts — with inclusive institutions producing broadly shared gains and extractive institutions producing concentrated extraction.
The early evidence supports the framework's prediction. In jurisdictions with strong labor market institutions — portable benefits, effective retraining programs, robust social safety nets — the displacement costs of AI adoption are being partially absorbed by institutional mechanisms designed to manage economic transitions. In jurisdictions where these institutions are weak or absent, the displacement costs are borne entirely by the displaced, with no institutional mediation between the worker and the market.
But the extractive-inclusive distinction applies not only across national jurisdictions. It applies within firms, within industries, and within the emerging platform structures through which AI is deployed. A firm that captures the entire productivity gain of AI adoption as profit — reducing headcount to convert the twenty-fold multiplier into margin — is operating through extractive internal institutions, regardless of the jurisdiction in which it is located. A firm that distributes the productivity gain across the organization — expanding the scope of what each worker can accomplish, investing in human capital development, sharing the value creation with the workforce — is operating through inclusive internal institutions.
The author of *The Orange Pill* describes this choice from the inside: the decision to keep the team and grow it rather than reduce headcount and capture margin. In North's analytical terms, this was a choice between extractive and inclusive internal institutions. The choice was made by a specific leader in specific circumstances. The institutional question is whether such choices can be made systematic — whether the formal rules, informal norms, and enforcement mechanisms governing AI adoption can be designed to favor inclusive outcomes over extractive ones.
The historical record is instructive but not comforting. The first Industrial Revolution produced extractive outcomes for decades before inclusive institutional reforms — factory legislation, labor unions, public education, social insurance — redirected the gains toward the broad population. The reforms were not automatic. They were the product of sustained political struggle by the displaced and their allies, against the resistance of incumbents whose interests were served by the extractive arrangements. The struggle took generations, and its outcomes were never guaranteed.
Acemoglu's recent work classifies AI as a "critical juncture" — an episode during which even small institutional differences can produce divergent long-term outcomes. At a critical juncture, the path dependence that normally constrains institutional change loosens. The existing arrangements are destabilized. New arrangements become possible. But the new arrangements are not predetermined — they can be either more inclusive or more extractive than the ones they replace, depending on the political dynamics of the moment.
The AI transition exhibits the characteristics of a critical juncture. The existing institutional arrangements governing knowledge work are destabilized. New arrangements are emerging. The direction of the new arrangements — toward inclusion or toward extraction — is being determined now, through the accumulated force of product decisions, corporate strategies, regulatory choices, and the political mobilization or demobilization of the affected populations.
The concentration of AI capability in a small number of firms raises specific institutional concerns. When the means of intelligence production — the models, the training data, the computational infrastructure — are controlled by a handful of organizations, the market structure resembles what North, in collaboration with John Wallis and Barry Weingast, called a limited access order: a social arrangement in which a dominant coalition controls access to valuable resources and uses that control to generate rents. Limited access orders are stable because the rents generated by restricted access give the dominant coalition an incentive to maintain the restriction. They are also economically inferior to open access orders, in which competition is broad, entry is unrestricted, and the creative destruction that drives long-term growth is permitted to operate.
The risk that the AI economy becomes a limited access order is not hypothetical. The computational costs of training frontier models, the data requirements, and the engineering expertise necessary to build and deploy them create natural barriers to entry that restrict competition. The platform dynamics of AI deployment — where the value of a service increases with the number of users, creating winner-take-most outcomes — reinforce the concentration. The informal norms of the AI industry — the prestige hierarchy that places frontier model builders at the apex and everyone else below — further concentrate talent and resources.
Inclusive institutional design in the AI era requires deliberate countermeasures against these concentrating forces. Antitrust enforcement calibrated to the specific dynamics of AI markets. Open-source requirements that prevent the complete privatization of AI capability. Interoperability standards that reduce switching costs and prevent platform lock-in. Public investment in AI research that maintains a competitive alternative to corporate development. Educational institutions that distribute AI literacy broadly rather than concentrating it in the graduates of elite programs.
None of these countermeasures will emerge automatically from the market. Markets, as North demonstrated throughout his career, operate within institutional frameworks. The quality of the institutional framework determines whether the market produces inclusive or extractive outcomes. A market operating within inclusive institutions — competitive markets, broad access, strong property rights, effective enforcement — produces broadly shared prosperity. The same market operating within extractive institutions — concentrated market power, restricted access, weak enforcement — produces concentrated wealth.
The institutional framework of the AI economy is being constructed now. The construction is occurring partly through formal regulatory action and partly through the informal norms being established by the technology companies themselves. The question is whether the framework that emerges will be inclusive or extractive — whether it will distribute the extraordinary gains of AI broadly or concentrate them in the hands of those who control the technology. The answer depends on institutional design, and institutional design depends on participation. The rules are being written. The breadth of participation in the writing determines the breadth of the benefit.
---
North was fond of a formulation that had the quality of an aphorism but functioned as an analytical proposition: rules without enforcement are suggestions. The statement captures a reality that legal theorists and political scientists have long recognized but that popular discourse consistently underestimates. The existence of a rule — even a well-designed rule, enacted through legitimate processes, widely known and broadly supported — does not guarantee compliance. Compliance requires enforcement, and enforcement requires mechanisms: institutions dedicated to monitoring behavior, detecting violations, imposing sanctions, and creating the expectation that violations will be detected and sanctioned with sufficient reliability to deter the rational actor from violating them.
The quality of enforcement mechanisms determines the quality of the institutional framework. A society with excellent rules and poor enforcement is functionally equivalent to a society with no rules at all, because the rules exist only on paper. The Soviet Union had an elaborate constitutional framework guaranteeing individual rights. The enforcement mechanisms — an independent judiciary, a free press, political accountability — were absent. The constitutional guarantees were, in North's formulation, suggestions.
The AI transition creates enforcement problems of a qualitatively new character. Previous enforcement challenges involved monitoring human behavior — determining whether a person had complied with a rule, violated a contractual obligation, or failed to meet a professional standard. The monitoring was difficult but conceptually straightforward: the object of enforcement was a human actor whose behavior could be observed, whose intentions could be inferred, and whose compliance could be evaluated against clearly defined standards.
The AI transition introduces a new object of enforcement: the human-machine system. The relevant behavior is no longer solely the human's. It is the product of interaction between a human and a machine, where the human's contribution may range from minimal to substantial, where the machine's reasoning process is opaque even to its creators, and where the output that must be evaluated against quality, safety, or ethical standards is the joint product of both contributors. Enforcing rules against this composite actor requires conceptual and institutional innovations that the existing enforcement apparatus does not possess.
Consider the enforcement of professional standards. A lawyer is bound by rules of professional conduct that require competence, diligence, and candor toward the tribunal. When a lawyer drafts a brief personally, the enforcement of these standards is conceptually straightforward: the lawyer's work product is evaluated against the standards, and deviations are sanctioned through disciplinary proceedings. When a lawyer uses an AI tool to draft a brief, the enforcement problem changes. The brief may cite cases that do not exist — a well-documented failure mode of large language models. The analysis may contain errors that would be obvious to a human who had researched the question personally but that are invisible to a human who is reviewing machine-generated output without independent verification. The question that enforcement must answer — did the lawyer exercise competence and diligence? — now requires evaluating not just the output but the process by which the output was generated, a process that involves the interaction between a human's judgment and a machine's capabilities and limitations.
The existing disciplinary mechanisms are not designed for this evaluation. Bar associations have begun issuing guidance on AI use — requiring disclosure, mandating human review, cautioning against reliance on AI-generated citations without verification. But guidance is not enforcement. The enforcement mechanisms — the investigation of complaints, the disciplinary hearings, the sanctions for violations — are staffed by people trained to evaluate human professional conduct, not human-machine collaboration. The gap between the guidance and the enforcement capacity is itself an institutional void, one that creates uncertainty about whether the standards will actually be applied and, if applied, whether they will be applied consistently and fairly.
The enforcement problem extends beyond professional regulation to every domain in which AI-generated output must meet standards of quality, safety, or truthfulness. In education, the standard is that the student's work reflects the student's learning. When AI can produce work indistinguishable from a student's own, the enforcement of academic integrity requires detecting AI-generated content — a detection problem that current technology cannot reliably solve and that may be fundamentally insoluble as the models improve. The institutional response — honor codes, AI detection software, modified assignment design — addresses the problem at the surface while the underlying enforcement challenge deepens.
In healthcare, the standard is that diagnostic and treatment decisions meet the applicable standard of care. When AI systems assist in diagnosis, the enforcement of the standard of care requires evaluating whether the physician appropriately incorporated, modified, or overrode the machine's recommendations. A physician who follows an AI recommendation that turns out to be wrong faces a different liability analysis than a physician who makes the same error through independent judgment. The institutional framework — malpractice law, hospital credentialing, clinical practice guidelines — has not been updated to address this distinction. The enforcement mechanisms for medical quality were designed for human decision-makers. The human-machine system requires a different enforcement architecture.
In financial markets, the standard is that investment recommendations are suitable for the client and that market participants do not engage in manipulation. When AI systems generate trading strategies, portfolio recommendations, or market analyses, the enforcement of suitability and anti-manipulation rules requires attributing decisions to the human, the machine, or the interaction between them. A trading algorithm that discovers and exploits a market inefficiency on its own — with no human having instructed it to do so — operates in a regulatory gray zone that the existing enforcement mechanisms — the SEC, FINRA, their equivalents in other jurisdictions — are struggling to address with tools designed for human market participants.
In each of these domains, the enforcement problem has the same structure. The standards were designed for human actors. The enforcement mechanisms were designed to monitor human behavior. The AI transition has introduced a new kind of actor — the human-machine system — whose behavior does not map cleanly onto the categories that the existing enforcement apparatus was built to address. The machine's reasoning is opaque. The human's contribution is variable. The output is a joint product whose quality depends on the interaction rather than on either contributor alone.
North's framework suggests that the enforcement problem is not merely a technical challenge to be solved by better AI detection tools or more detailed regulatory guidance. It is an institutional challenge that requires the construction of new enforcement mechanisms — mechanisms designed for the human-machine system rather than for the human alone. These mechanisms will require new forms of monitoring (audit trails that document the interaction between human and machine), new standards of competence (the ability to evaluate and override machine-generated output rather than merely to produce output independently), new liability frameworks (that allocate responsibility between the human and the machine's developer based on the nature of the interaction), and new institutional capacity (regulators, judges, and professional disciplinary bodies who understand AI well enough to evaluate the human-machine system rather than just the human).
The construction of these mechanisms is occurring, but slowly and unevenly. Some jurisdictions are further ahead than others. Some professional communities are adapting faster than others. The unevenness is itself a consequence of institutional path dependence — the jurisdictions and professions with the most entrenched enforcement traditions face the highest switching costs in redesigning their enforcement mechanisms for the AI era.
The gap between the need for new enforcement mechanisms and the capacity to build them is an institutional void with specific and measurable costs. Professionals who want to use AI responsibly lack clear standards against which to evaluate their own conduct. Clients and patients and students who are affected by AI-assisted decisions lack reliable mechanisms for identifying when the AI contributed to an adverse outcome. Regulators who are responsible for maintaining standards of quality and safety lack the conceptual frameworks and technical expertise to evaluate the human-machine systems that are increasingly the objects of their jurisdiction.
North's proposition that rules without enforcement are suggestions applies with particular force to the AI transition. The formal rules governing AI use — the EU AI Act, the American executive orders, the professional guidance being issued by bar associations and medical boards — are necessary. They are also insufficient without enforcement mechanisms that can actually monitor, evaluate, and sanction the human-machine systems to which the rules apply. The construction of those mechanisms — the institutional infrastructure of AI enforcement — is among the most urgent and least visible tasks of the current moment. It is not glamorous work. It does not generate headlines or attract venture capital. But it is the work upon which the entire formal institutional framework of the AI era depends, because a framework without enforcement capacity is not a framework at all. It is a collection of well-intentioned suggestions, and the history of well-intentioned suggestions in the face of powerful economic incentives is not a history that inspires confidence.
The question that separates institutional economics from most other forms of social analysis is not whether change is needed but why change is so difficult. The diagnosis of misalignment between existing institutions and current reality is, in most cases, the easy part. Employment law was designed for an economy in which productivity scaled with hours worked. Educational systems were designed to produce standardized cognitive workers. Professional licensing certifies competencies that machines can replicate. The diagnosis is available to anyone who looks. The difficulty is in the treatment.
North spent the second half of his career grappling with this difficulty. His early work had demonstrated that institutions mattered — that they were the primary determinant of economic performance. His later work confronted the harder question: if institutions matter, and if existing institutions are often suboptimal, why do they persist? And when they do change, what determines the direction of the change?
The answer was not reassuring to anyone who hoped that good arguments would be sufficient.
Institutional change, in North's framework, is driven not by the recognition that change is needed but by shifts in relative prices or relative bargaining power that alter the incentives of actors within the existing framework. When the cost of transacting under the existing rules rises relative to the cost of transacting under alternative rules, the actors who bear the rising costs have an incentive to invest in changing the rules. The investment takes the form of political activity — lobbying, coalition-building, the organization of collective action — directed at modifying the formal rules or, more gradually, at shifting the informal norms that govern behavior.
The mechanism is not automatic. The fact that existing institutions are suboptimal does not produce institutional change, because the costs of changing institutions — the political costs of overcoming resistance, the switching costs of adapting to new arrangements, the uncertainty costs of operating under rules whose consequences have not been tested — may exceed the benefits. North documented numerous cases in which suboptimal institutions persisted for centuries because the costs of changing them exceeded the benefits for any individual actor, even when collective welfare would have been improved by the change. This is the institutional analogue of the collective action problem: everyone would benefit from better rules, but no one has a sufficient private incentive to bear the costs of producing them.
The AI transition is altering relative prices and relative bargaining power with extraordinary speed, creating both the incentive and the opportunity for institutional change on a scale that matches the technology's transformative power. The relative price of cognitive labor has collapsed. Work that commanded premium wages because it required years of specialized training can now be performed by a tool that costs a fraction of one worker's salary. This price change alters the incentives of every actor in the system: employers, who can now substitute capital for labor at an unprecedented rate; workers, whose bargaining power diminishes as their skills are replicated by machines; educational institutions, whose product — the credentialed graduate — is depreciating in market value; governments, whose tax base shifts as labor income declines relative to capital income.
Each of these shifts creates pressure for institutional change. Employers facing a twenty-fold productivity multiplier need employment contracts, liability frameworks, and organizational structures adapted to AI-augmented work. Workers facing skill commoditization need retraining programs, portable benefits, and social insurance redesigned for rapid occupational transition. Educational institutions facing declining demand for their traditional product need curricula, pedagogies, and credentialing systems redesigned for an economy that rewards judgment and integration over standardized competence. Governments facing a shifting tax base need fiscal frameworks that capture value from AI-generated productivity rather than solely from labor income.
The pressure is real. But pressure does not produce change without actors who translate pressure into institutional innovation. North called these actors institutional entrepreneurs — individuals and organizations that perceive the opportunity created by the misalignment between existing institutions and current reality, and that invest in designing and implementing alternative institutional arrangements.
Institutional entrepreneurs are not reformers in the conventional sense. Reformers work within the existing institutional framework, seeking to improve its functioning without altering its fundamental structure. Institutional entrepreneurs work on the framework itself. They propose new rules, new norms, new enforcement mechanisms that restructure the incentive environment. They are, in the language of *The Orange Pill*, builders — but they build not products or technologies but the rules under which products and technologies are developed, deployed, and governed.
The author's description of the beaver captures something essential about institutional entrepreneurship. The beaver does not control the river. The beaver studies the river — its currents, its leverage points, the places where a small structure can redirect enormous flows — and builds accordingly. The building is not a one-time act. It is continuous maintenance, daily repair, constant adaptation to a river that pushes against every structure and exploits every weakness.
But the beaver metaphor, for all its appeal, omits the dimension that institutional economics insists upon. The beaver builds for the ecosystem. The institutional question is: which ecosystem? Whose ecosystem? When the beaver places the dam, the pool that forms behind it benefits some species and floods others. The trout that spawn in still water flourish. The species that required the rapids are displaced. The dam is not neutral. It is a distributional choice disguised as an engineering decision.
Every institutional innovation is a distributional choice. The eight-hour day redistributed from employers to workers. Public education redistributed from the privately tutored to the general population. Social insurance redistributed from the currently productive to the currently vulnerable. The distributional consequences of institutional design are not side effects to be managed. They are the primary effects. They are what institutional design is for.
The AI transition demands institutional entrepreneurs who understand this. Not builders who design elegant systems and assume the distribution will work itself out. Builders who ask, at every stage of the design, who benefits and who bears the cost. Builders who recognize that the same institutional arrangement can serve the many or the few, depending on who participates in the design and whose interests are represented.
The author chose to keep the team and grow it. That choice was an act of institutional entrepreneurship at the organizational level — a decision to design the internal institutional framework of a company around inclusion rather than extraction. The choice was admirable and, in the author's telling, principled. It was also, as the author honestly acknowledged, made under conditions that not every leader shares — conditions of sufficient resources, sufficient autonomy, and sufficient conviction to resist the clean arithmetic of headcount reduction.
The institutional economist's question is how to make such choices systematic rather than dependent on the character of individual leaders. How to design the formal rules, informal norms, and enforcement mechanisms that make inclusive outcomes the default rather than the exception. How to build the dams — not as individual acts of conscience but as structural features of the economic landscape that channel the river's power toward the broad population regardless of who happens to be standing at the riverbank.
This is the work of institutional design, and it is work that cannot be deferred. Path dependence means that the institutional arrangements being established now — through product decisions, corporate strategies, regulatory choices, and the accumulated weight of emerging norms — will develop self-reinforcing properties that make them progressively harder to change. The window for institutional entrepreneurship is not indefinite. It is measured in years, not decades. And the quality of what is built within that window will determine the institutional framework of the AI economy for a generation or more.
The entrepreneurs are needed now. Not entrepreneurs in the familiar sense of the word — not the entrepreneurs who build companies and capture markets. The entrepreneurs who build institutions — the formal rules, the informal norms, the enforcement mechanisms that will determine whether the AI transition produces an economy that serves the broad population or one that concentrates its extraordinary gains in the hands of those who happened to be closest to the technology when the rules were being written.
---
The most consequential institutional decisions in history were made during periods of uncertainty by people who could not see the long-term consequences of their choices. The framers of the American Constitution could not have foreseen the industrial economy, the administrative state, or the digital age, yet the institutional framework they established shaped the trajectory of all three. The architects of the Bretton Woods system could not have predicted the collapse of the gold standard, the rise of financial derivatives, or the globalization of capital markets, yet the institutional arrangements they designed governed international economic relations for decades. In each case, the institutional framework was constructed under conditions of radical uncertainty by people who were, in a meaningful sense, building the rules for a game they did not yet fully understand.
The AI transition presents an analogous challenge. The institutional framework that will govern the AI economy — the formal rules, the informal norms, the enforcement mechanisms — is being constructed now, under conditions of extraordinary uncertainty about the technology's trajectory, its economic consequences, and its social implications. The people designing the framework, whether they are legislators, regulators, corporate leaders, educators, or the participants in the broader social conversation about AI, cannot see the long-term consequences of the choices they are making. They are, like the Constitutional framers, building for a future they cannot fully imagine.
North's framework provides specific guidance for institutional design under uncertainty. The guidance does not resolve the uncertainty. It identifies the principles that distinguish institutional frameworks that adapt well to unforeseen circumstances from frameworks that lock in arrangements that become progressively more harmful as conditions change.
The first principle is adaptive efficiency. North distinguished between allocative efficiency — the static optimization of resource allocation given current conditions — and adaptive efficiency — the capacity of an institutional framework to evolve in response to changing conditions. A framework that is allocatively efficient at a given moment may be adaptively inefficient if it lacks the mechanisms for self-correction when conditions change. The AI transition demands adaptive efficiency above all, because the technology is changing faster than any specific institutional arrangement can anticipate. Formal rules that specify how AI must be used will become obsolete before the ink is dry. What is needed instead are institutional mechanisms that enable continuous adaptation — regulatory sandboxes that permit experimentation, sunset provisions that force periodic review, feedback mechanisms that transmit information about institutional performance to the actors responsible for institutional maintenance.
The European Union's AI Act represents one approach to institutional design for AI governance. The Act establishes a risk-based classification system, imposes requirements for high-risk AI systems, and creates enforcement mechanisms through national competent authorities. The framework is comprehensive in scope and precautionary in orientation. It is also, from the perspective of adaptive efficiency, potentially brittle. The risk classifications are defined by current understanding of AI capabilities. The requirements are calibrated to current technology. The enforcement mechanisms are designed for current institutional capacities. To the extent that the framework lacks mechanisms for rapid adaptation — the capacity to reclassify risks, modify requirements, and update enforcement approaches as the technology evolves — it risks becoming a path-dependent structure that governs the AI of 2024 in perpetuity while the AI of 2027 operates in the institutional void that surrounds the framework's boundaries.
The American approach — a patchwork of executive orders, agency guidance, and industry self-regulation — represents the opposite risk. The absence of a comprehensive formal framework creates flexibility but also creates the institutional void that North's analysis identifies as the condition most favorable to capture by the powerful. In the absence of formal rules, the informal norms established by the technology industry become the de facto institutional framework, and the quality of those norms depends on the values and incentives of the companies that establish them rather than on the deliberate design of institutions accountable to the broad public.
Neither approach, taken alone, is adequate. The institutional challenge of the AI transition requires both formal structure and adaptive capacity — rules that constrain behavior within bounds that protect the broad population, combined with mechanisms that permit the rules to evolve as the technology and its consequences become better understood. The construction of such a framework is the central institutional task of the current moment.
The second principle is credible commitment. North and Weingast demonstrated that institutional arrangements that constrain the arbitrary exercise of power produce better economic outcomes than arrangements that leave power unconstrained, because the constraint makes commitments credible and credibility reduces transaction costs. The AI transition raises the credible commitment problem in acute form. Governments that commit to supporting displaced workers must make that commitment credible — through funded programs, through institutional capacity, through enforcement mechanisms that prevent the commitment from being abandoned when fiscal pressures mount. Technology companies that commit to responsible AI development must make that commitment credible — through governance structures, through external oversight, through mechanisms that prevent the commitment from being sacrificed when competitive pressures intensify. Educational institutions that commit to curricula redesigned for the AI economy must make that commitment credible — through faculty investment, through assessment reform, through institutional structures that prevent the commitment from reverting to the familiar when the difficulty of the change becomes apparent.
Credible commitment is expensive. It requires the construction of enforcement mechanisms — the institutional infrastructure that makes the commitment binding rather than aspirational. But the cost of credible commitment is lower than the cost of its absence, because absent credible commitment, the actors who must rely on the commitment — the displaced workers, the users of AI systems, the students investing in education — cannot trust the institutional framework, and the transaction costs of operating in a low-trust environment are vastly higher than the costs of building the enforcement capacity that makes trust rational.
The third principle is inclusive design. North's comparative analysis of economic development demonstrated that institutional frameworks designed with broad participation produce better outcomes than frameworks designed by narrow elites. The reason is informational: the displaced worker knows something about the experience of displacement that the policymaker does not. The teacher in the classroom knows something about the reality of AI-assisted learning that the educational administrator does not. The parent lying awake at three in the morning knows something about the stakes of the transition that the technology executive does not. Institutional design that incorporates this distributed knowledge produces frameworks that are better adapted to the actual conditions they are meant to govern.
Inclusive design is costly. It is slower than expert-driven design. It produces messier compromises. It requires the construction of forums, processes, and deliberative mechanisms that enable meaningful participation by people whose time and attention are already consumed by the demands of navigating the transition. But the cost of inclusive design is lower than the cost of exclusive design, because exclusive design produces frameworks that serve the designers at the expense of the designed-upon, and the institutional arrangements that result are eventually destabilized by the resistance of those whose interests were not represented — the dynamic that North documented in every extractive institutional arrangement he studied.
What would adaptive, credible, inclusive institutional design look like in the context of the AI transition?
It would look like educational reform that replaces standardized competency development with the cultivation of judgment, integration, and the capacity for creative questioning — reform designed not by educational administrators alone but with the participation of teachers, students, parents, employers, and the technology developers whose products are reshaping the demand for human capability. It would look like employment law redesigned for an economy in which the relationship between time and productivity has dissolved — law designed not by legislators and lobbyists alone but with the participation of workers, employers, and the communities affected by displacement. It would look like professional licensing reformed to credential the competencies that AI cannot replicate rather than the competencies that it can — reform designed not by licensing bodies alone but with the participation of practitioners, clients, and the public that the licensing system is meant to protect.
It would look like property rights frameworks that clarify the ownership of AI-generated output, the rights of creators whose work is used in training, and the economic claims of workers whose expertise is being commoditized. It would look like enforcement mechanisms designed for the human-machine system rather than the human alone — mechanisms that can evaluate the quality of AI-assisted professional work, that can attribute responsibility when AI-assisted decisions produce harm, and that can maintain the standards of quality and safety that the existing enforcement apparatus was designed to protect.
It would look like social insurance redesigned for rapid occupational transition rather than long-term unemployment — portable benefits that follow the worker rather than the job, retraining programs that develop the capabilities the new economy rewards, safety nets that preserve dignity during the transition and that are funded by the productivity gains that the technology makes possible.
North's framework does not prescribe the specific content of these reforms. It prescribes the principles under which they should be designed: adaptive efficiency, credible commitment, inclusive participation. The specific content will be determined by the political processes through which the reforms are negotiated — processes whose outcomes depend on the participation of the affected populations.
The historical pattern is that institutional frameworks constructed during periods of technological disruption persist for decades, shaped by path dependence into structures that resist modification. The choices made now — in legislatures, in regulatory agencies, in boardrooms, in classrooms, in the broader social conversation about what AI means and who it serves — will determine the institutional framework within which the AI economy operates for a generation. The framework will either distribute the extraordinary gains of AI broadly, through inclusive institutions designed with the participation of the broad population, or concentrate them narrowly, through extractive institutions designed by the powerful in the absence of broader participation.
The rules of the game are being written. The game has already begun. The window for influencing the rules is finite, and it is closing.
The pen is not held by one hand. It never has been. The question is how many hands reach for it, and whether the hands that need the rules most are among those that write them.
---
The phrase I cannot stop turning over is "rules without enforcement are suggestions."
It arrived midway through a working session, embedded in the logic of a man I never met, and it landed with the weight of something I had known in my bones for thirty years without ever finding the sentence for it. I have built products. I have built teams. I have sat in rooms where people drafted codes of conduct, ethics charters, responsible-AI frameworks, and then walked back to their desks and shipped whatever the sprint demanded. I have been the person who did that. The gap between the stated rule and the lived behavior was so familiar I had stopped seeing it as a gap. North's sentence made me see it again.
The entire journey through institutional economics reorganized something in how I think about the work I described in *The Orange Pill*. I wrote about dams. I wrote about beavers. I meant it — the metaphor still holds for me, still describes the relationship between the builder and the forces the builder is trying to channel. But North's contribution was to ask a question my metaphor did not contain: Whose dam? Built where? Protecting what territory at the expense of what other territory?
The dam is never neutral. I knew this in some recess of experience. I confessed in *The Orange Pill* that I had built addictive products knowing they were addictive — that the engagement loops, the dopamine mechanics, the variable reward schedules were not accidents but designs, and that the designs served my company's metrics while externalizing costs onto the people who used the product. What North gave me was the analytical frame for understanding why that happened. Not because I was uniquely reckless, but because the institutional environment in which I operated had no enforcement mechanism that would have made the externality visible as a cost to the people creating it. The rule — "don't build things that harm users" — existed as an informal norm. Nobody enforced it. It was, in North's devastating formulation, a suggestion.
I think now about the institutional void that surrounded every decision I described in that book. The choice to keep the team in Trivandrum rather than convert the twenty-fold multiplier into headcount reduction — I was proud of that choice, and I stand by it. But North forces me to ask: what would have happened if I had been a different kind of leader, with different values, facing the same arithmetic? The institutional framework offered no constraint. No rule required me to share the productivity gain. No norm, in the technology industry of 2026, clearly demanded it. The choice was mine because the void was there, and the void meant that the outcome depended on my character rather than on the structure of the system.
That is a terrifying place to leave the outcome of something this consequential.
I wrote *The Orange Pill* to help people navigate the AI revolution. I still believe in that aim. But North's framework has made me understand that individual navigation, however wise, is not sufficient. The rules of the game determine who wins, not the skill of the players. A parent who teaches her child to ask great questions, a teacher who redesigns her classroom around judgment, a leader who chooses stewardship over extraction — each of these is necessary. None of them is sufficient without the institutional framework that makes their choices sustainable, replicable, and enforced.
The rules are being written now. Not in a single room, not by a single hand, but through the accumulated weight of product decisions, regulatory choices, corporate strategies, educational reforms, and the millions of individual acts through which a society constructs the norms that govern its members' behavior. The window is finite. Path dependence will lock in whatever emerges. And the breadth of participation in the design — who holds the pen, whose experience informs the rules, whose interests are represented — will determine whether the framework that crystallizes from this moment serves the many or the few.
I do not have a garden in Berlin. I am not going to tend one. But I understand, now, that the dams I build must be held to a standard that I had not previously articulated: not just well-constructed, but justly placed. Not just strong enough to channel the river, but designed with the participation of everyone downstream.
That is what North's institutional lens added to the view from my tower. The river is real. The beaver is real. The building matters. But the rules under which the building occurs matter more.
— Edo Segal
*The Orange Pill* argued that AI is an amplifier — that the quality of what you feed it determines the quality of what comes out. Douglass North's institutional economics asks the harder question: who writes the rules that determine what gets amplified, and whose interests do those rules protect?
Through ten chapters grounded in North's framework, this volume examines the invisible architecture shaping the AI revolution — the formal laws lagging years behind the technology, the informal norms dissolving faster than new ones can form, and the enforcement mechanisms designed for a world that no longer exists. From property rights in AI-generated work to the path dependence trapping education and employment law in obsolete structures, North's lens reveals that the outcome of this transition depends not on the technology but on the institutional choices being locked in right now.
The rules of the AI economy are being written. The question is whether you are in the room where they are written, or downstream of someone else's dam.

A reading-companion catalog of the 19 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Douglass North — On AI* uses as stepping stones for thinking through the AI revolution.