The EU AI Act (Regulation (EU) 2024/1689), adopted in 2024 and entering into force in August of that year, with obligations applying in stages through 2025–2027, represents the most comprehensive formal institutional response to the AI transition of any major jurisdiction. The Act establishes a risk-based classification system distinguishing unacceptable-risk AI (prohibited outright), high-risk AI (subject to extensive requirements), limited-risk AI (subject to transparency obligations), and minimal-risk AI (largely unregulated). It imposes requirements on high-risk systems including risk management, data governance, documentation, human oversight, accuracy, and cybersecurity. It creates enforcement mechanisms through national competent authorities coordinated by an AI Office at the European level. From the perspective of North's institutional economics, the Act represents both the strengths and the characteristic risks of comprehensive formal regulation: it provides the structural constraint that inclusive institutional design requires while simultaneously risking the path-dependent brittleness that rapid technological change punishes.
The Act's risk-based classification is analytically sophisticated. Rather than attempting to regulate AI uniformly, it tailors requirements to the severity of potential harm. Social scoring systems and manipulative AI are prohibited outright. AI in employment, education, law enforcement, and critical infrastructure is subject to extensive high-risk requirements. AI that interacts with humans (such as chatbots) must disclose its non-human status. AI used in low-stakes applications faces minimal regulation. The structure reflects proportionality principles familiar from EU regulatory tradition and represents a genuine attempt at comprehensive institutional design.
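The tiered logic described above can be sketched as a simple lookup. This is a purely illustrative model, not the Act's actual legal test: the use-case labels and the `classify` helper below are hypothetical simplifications, since the real classification turns on detailed statutory criteria and annexes rather than keywords.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping of example use cases to tiers, following the
# examples in the text; not an exhaustive or legally precise list.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_ai": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "education_admissions": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Uses the Act's default posture: applications it does not
    single out fall into the minimal-risk, largely unregulated tier."""
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)
```

The design choice worth noting is the default: proportionality here means the fallback tier is minimal regulation, with heavier obligations attaching only to enumerated categories.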
The specific requirements imposed on high-risk systems map closely onto what institutional economics would prescribe. Risk management systems require ongoing identification and mitigation of potential harms — an adaptive mechanism. Data governance requirements address the training data property rights issues that the broader institutional void leaves unresolved. Documentation and transparency requirements support the enforcement infrastructure that rules require to be effective. Human oversight requirements preserve the capacity for meaningful intervention in AI-assisted decisions. The structure is ambitious and, if enforced, would establish significant protections.
The framework's risks, from an adaptive efficiency perspective, are also significant. The risk classifications are defined by current understanding of AI capabilities — capabilities that are evolving rapidly. The requirements are calibrated to current technology and may become obsolete or counterproductive as the technology changes. The enforcement mechanisms are designed for current institutional capacities; the ability of national competent authorities to actually monitor compliance across the full scope of AI deployment is untested. To the extent that the framework lacks mechanisms for rapid adaptation, it risks becoming a path-dependent structure that governs the AI of 2024 in perpetuity while subsequent AI operates in the institutional void surrounding the framework's boundaries.
The Act's relationship to alternative approaches — particularly the American patchwork of executive orders, agency guidance, and industry self-regulation — illustrates the tradeoff between formal structure and adaptive capacity. The EU approach provides more comprehensive constraint but risks brittleness. The American approach provides more adaptation capacity but risks capture by the powerful in the absence of formal structure. Neither approach adequately resolves the institutional challenge. A framework combining structural constraint with adaptive capacity — neither element alone — is required.
The Act resulted from multi-year negotiations beginning with the European Commission's April 2021 proposal, through amendments by the European Parliament and the Council, to final adoption in 2024. The process involved extensive consultation with industry, civil society, and member states, producing a framework that reflects the institutional tradition of EU regulatory harmonization while addressing novel technological challenges.
The Act builds on earlier EU digital governance frameworks including the General Data Protection Regulation (adopted 2016, applicable from 2018), the Digital Services Act (2022), and the Digital Markets Act (2022). Together these represent the EU's approach to constructing a comprehensive digital institutional framework — an approach that has produced both substantial protections and significant compliance costs for affected actors.
Risk-based proportionality. The Act tailors regulation to potential harm, prohibiting worst uses while permitting less consequential ones — a sophisticated structural approach.
Comprehensive formal structure. The framework provides the constraint on powerful actors that North's analysis of institutional voids identifies as necessary for inclusive outcomes.
Adaptive efficiency risks. Classifications and requirements calibrated to current AI may become obsolete as the technology evolves.
Enforcement infrastructure questions. National competent authorities' capacity to monitor and enforce across the full scope of AI deployment is untested.
The EU-US contrast. Comparison with the American approach illustrates the tradeoff between formal structure and adaptive capacity — neither approach resolves the institutional challenge alone.
Major debates include whether the Act's compliance requirements will create barriers to European AI development that disadvantage EU firms relative to American and Chinese competitors; whether the risk classifications can adapt quickly enough to remain relevant as AI capabilities evolve; and whether the enforcement mechanisms can actually monitor compliance at scale. Industry critics argue that the Act is too restrictive; civil society critics argue that it contains too many exceptions and that its enforcement provisions are too weak.