North distinguished between two forms of institutional efficiency. Allocative efficiency refers to the static optimization of resource allocation given current conditions — the standard efficiency measure of neoclassical economics. Adaptive efficiency refers to the capacity of an institutional framework to evolve in response to changing conditions. A framework that is allocatively efficient at a given moment may be adaptively inefficient if it lacks mechanisms for self-correction when conditions change. The AI transition demands adaptive efficiency above all, because the technology is evolving faster than any specific institutional arrangement can anticipate. Formal rules specifying how AI must be used will become obsolete before the ink is dry. What is needed instead are institutional mechanisms enabling continuous adaptation — regulatory sandboxes that permit experimentation, sunset provisions that force periodic review, feedback mechanisms that transmit information about institutional performance to the actors responsible for institutional maintenance.
The European Union's AI Act represents one approach to institutional design for AI governance. The Act establishes a risk-based classification system, imposes requirements for high-risk AI systems, and creates enforcement mechanisms through national competent authorities. The framework is comprehensive in scope and precautionary in orientation. It is also, from the perspective of adaptive efficiency, potentially brittle. The risk classifications are defined by current understanding of AI capabilities. The requirements are calibrated to current technology. The enforcement mechanisms are designed for current institutional capacities. To the extent the framework lacks mechanisms for rapid adaptation, it risks becoming a path-dependent structure that governs the AI of 2024 in perpetuity while the AI of 2027 operates in the institutional void that surrounds the framework's boundaries.
The American approach — a patchwork of executive orders, agency guidance, and industry self-regulation — represents the opposite risk. The absence of a comprehensive formal framework creates flexibility, but it also creates the institutional void that North's analysis identifies as most favorable to capture by the powerful. In the absence of formal rules, the informal norms established by the technology industry become the de facto institutional framework, and the quality of those norms depends on corporate values and incentives rather than on deliberate design accountable to the broad public.
Neither approach, taken alone, is adequate. The institutional challenge requires both formal structure and adaptive capacity — rules that constrain behavior within bounds protecting the broad population, combined with mechanisms permitting the rules to evolve as the technology and its consequences become better understood. Institutional mechanisms that have historically produced adaptive efficiency include regulatory sandboxes that permit controlled experimentation before general rules are established; sunset provisions that force periodic review of regulations whose continued applicability cannot be assumed; standing advisory bodies with mandates to monitor institutional performance and recommend modifications; and dedicated institutional capacity for tracking the gap between rules and reality.
The adaptive efficiency frame reframes the choice between formal regulation and informal norms. The question is not which to prefer but how to combine them in ways that produce both the constraint and the capacity for evolution. Formal rules provide the skeleton. Informal norms provide the musculature. Adaptive mechanisms provide the nervous system that senses changing conditions and coordinates the body's response.
North developed the distinction in Institutions, Institutional Change and Economic Performance (1990) and refined it in Understanding the Process of Economic Change (2005). The framework drew on his observation that societies with similar allocative efficiency at specific moments exhibited radically different capacities for sustained growth, with the difference traceable to their capacity for institutional adaptation.
The concept has been extended by scholars including Joel Mokyr (on the institutional foundations of the Industrial Revolution) and Philippe Aghion (on innovation-driven growth). Its application to AI governance is recent but draws on an established tradition in regulatory theory and complex systems analysis.
Static efficiency is not enough. A framework perfectly optimized for current conditions can become catastrophically inefficient when conditions change if it lacks adaptation mechanisms.
Rules age quickly. In rapidly evolving technology environments, detailed prescriptive rules become obsolete before implementation is complete.
Adaptation requires infrastructure. Continuous institutional evolution does not happen automatically; it requires deliberate mechanisms — sandboxes, sunset provisions, standing review bodies.
The EU-US contrast is diagnostic. Comprehensive formal regulation risks brittleness; informal self-regulation risks capture. Adaptive efficiency requires elements of both.
Feedback is structural. Frameworks that cannot sense their own performance cannot adapt; adaptive efficiency requires information flows from the governed to the governors.
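The contrast running through these points can be made concrete with a toy model (an illustrative sketch of my own, not anything from North's work): a "technology frontier" drifts over time, a rule is effective in proportion to how closely it tracks the frontier, and a sunset-style review periodically re-aligns the rule with reality. The names, parameters, and numbers here are all hypothetical.

```python
import random

def simulate(review_interval, steps=50, drift=1.0, seed=0):
    """Toy model of rule obsolescence under technological drift.

    The frontier advances a little each step. review_interval=None
    models a static regime whose rule is never updated; a finite
    interval models a sunset provision that forces periodic review,
    re-aligning the rule with current conditions.
    Returns the mean gap between the rule and the frontier.
    """
    rng = random.Random(seed)
    frontier, rule = 0.0, 0.0
    gaps = []
    for t in range(1, steps + 1):
        frontier += drift * rng.random()      # technology moves on
        if review_interval and t % review_interval == 0:
            rule = frontier                   # review: rule catches up
        gaps.append(frontier - rule)          # how far the rule lags
    return sum(gaps) / len(gaps)

static_gap = simulate(review_interval=None)
adaptive_gap = simulate(review_interval=5)
print(f"static regime mean gap:   {static_gap:.2f}")
print(f"adaptive regime mean gap: {adaptive_gap:.2f}")
```

The point of the sketch is only structural: the static regime's gap grows without bound, while any finite review interval keeps the gap bounded, and shorter intervals keep it smaller. The adaptation mechanism, not the initial quality of the rule, determines long-run performance.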
Debates focus on whether adaptive mechanisms can be designed with sufficient speed and political legitimacy to keep pace with AI technology. Critics argue regulatory agencies inevitably lag behind technological development and that adaptive efficiency is a worthy goal but practically unachievable. Proponents point to regulatory innovations in financial technology, biotechnology, and other fast-moving domains as evidence that adaptive frameworks can be built if institutional commitment is sustained.