The standard AI governance conversation treats the gap between capability and regulation as a legal problem — legislatures behind the curve, agencies without authority, laws overtaken by technology. Lessig's framework recasts the diagnosis. Law is the slowest of the four modalities and is doing what slow modalities do: arriving late. The more consequential governance failure is in the modalities doing most of the actual governing — architecture, markets, and norms — which are operating at full force without any of the deliberative safeguards that legal regulation provides. The gap is not that law has fallen behind; the gap is that the governance happening in the non-legal modalities is occurring without deliberation, accountability, or public voice.
There is a parallel reading that begins not with governance modalities but with the physical infrastructure AI requires to exist. The governance gap Lessig diagnoses assumes AI capabilities are already distributed and operating — but this elides the extraordinary concentration of computational power, energy resources, and specialized hardware that makes AI possible. Unlike the internet, which could run on distributed infrastructure, AI at scale requires data centers that cost billions, cooling systems that consume watersheds, and chips manufactured by a single Taiwanese company. The governance question is not how to regulate something that already pervades society, but who controls the infrastructure that determines whether AI exists at all.
This material dependency creates a different governance landscape than Lessig's framework suggests. When compute is the bottleneck, architectural decisions are not made by product teams but by whoever controls GPU allocation. When training runs require the power output of small cities, market dynamics are shaped less by consumer preference than by energy politics. When model capabilities depend on semiconductor supply chains vulnerable to geopolitical disruption, norms emerge not from professional communities but from the security imperatives of nation-states. The real governance gap may not be the speed differential between law and other modalities, but the disconnect between governance discussions that assume AI as given and the material realities that determine whether AI can function. The handful of actors who control these material substrates are not ungoverned — they are governed by the physics of heat dissipation, the geopolitics of rare earth minerals, and the path dependencies of semiconductor fabrication. These constraints shape AI's development more fundamentally than any norm, market signal, or legal framework, yet they remain largely absent from governance conversations.
Consider norms. The professional norm around AI use has undergone the fastest shift in the history of knowledge work. The Berkeley study described in The Orange Pill documented how AI adoption transformed work patterns within months, not through managerial mandate but through the emergent pressure of a new professional expectation. The norm was enforced by the fear of falling behind. No legislature debated this shift. No public comment period considered its implications. No judicial review tested its consistency with prior professional values. The norm simply arrived, enforced by structural pressure, and the people subject to it had no formal mechanism to contest it.
Consider markets. The Death Cross repricing of the software industry is market regulation performing its function with characteristic efficiency and characteristic indifference. A trillion dollars of value has been redistributed. The market does not ask whether the redistribution serves the public interest, whether displaced workers have alternative employment, whether communities dependent on repriced companies have alternative economic foundations. It clears the price. The distribution of consequences is, from the market's perspective, someone else's problem.
Consider architecture. When Anthropic decides Claude should respond with a particular kind of confidence, the decision regulates the cognitive behavior of millions. When a product team sets an interface default that makes accepting the first output easier than requesting alternatives, the decision regulates the user's tolerance for uncertainty. These are governance decisions made under time pressure, often without explicit deliberation about regulatory effects, and always without democratic accountability.
The lopsidedness is structural. Law is visible as governance and so attracts political debate. Architecture, markets, and norms operate below the threshold of public attention, so the governance they perform does not register as governance; it registers as product, economy, culture. The gap is not that law is too slow. The gap is that the rest of the system is ungoverned.
The diagnosis extends the analytical framework of Lessig's earlier work on internet governance to the AI transition. The specific claim that the lopsidedness of the governance conversation is itself a governance failure first appeared in Lessig's 2024 Boston Globe op-ed advocating for California's SB 1047 and was elaborated in the Lessig–On AI volume (2026). It also builds on Edo Segal's observation in The Orange Pill that most AI governance architecture operates on the supply side while the demand side remains largely unaddressed.
The gap is not legal backwardness. Law is slow because law is slow. The structural failure is that the faster modalities are governing without accountability.
Norms shift at the speed of professional practice. Emergent norms can reorganize an entire profession in months, with no formal deliberative process.
Markets redistribute without asking. Market regulation is efficient and amoral. It clears prices; it does not ask who bears the cost.
Architecture governs cognition. Product decisions shape cognitive behavior at population scale without registering as governance.
Multi-modal governance is the only response. A dam built of law alone will be undermined by pressure from norms, markets, and architecture. Effective governance requires coordinated intervention across all four.
Defenders of the market-norm-self-regulation approach argue that non-legal governance is legitimate governance — that professional communities, market discipline, and product competition constitute genuine forms of accountability, even without formal deliberative processes. Lessig's response, developed across his work on institutional corruption, is that non-legal governance is legitimate only when it operates under structural conditions that prevent capture by concentrated interests. In the AI transition, those conditions do not hold: professional norms are shaped by a handful of dominant employers, markets are concentrated among a few platform companies, and architecture is controlled by the same actors. Self-regulation under these conditions is not governance; it is the absence of governance dressed in governance language.
The diagnostic tension between Lessig's modality analysis and the material substrate view depends entirely on which layer of the AI stack we examine. At the application layer, where professionals adopt AI tools, markets reprice software companies, and product interfaces shape user behavior, Lessig's diagnosis is essentially correct (90%). The governance gap here is precisely the unchecked speed of norm shifts, market redistributions, and architectural decisions operating without deliberative safeguards. The Berkeley study described in The Orange Pill confirms this: the transformation of professional work happened through emergent pressure, not infrastructure control.
But at the foundation layer — model training, compute allocation, chip manufacturing — the material substrate view dominates (85%). Here, governance is not about regulating rapidly shifting modalities but about managing extreme concentration. The actors are few, the resources are scarce, and the constraints are physical. Taiwan's TSMC, not product teams, determines what's architecturally possible. Energy availability, not professional norms, shapes deployment patterns. At this layer, traditional industrial policy tools — export controls, infrastructure investment, resource allocation — matter more than Lessig's four modalities.
The synthesis requires recognizing that AI governance operates across radically different control surfaces simultaneously. The right framework is not choosing between modality-based and infrastructure-based governance, but mapping which approach applies where. Consumer-facing AI products need the kind of multi-modal governance Lessig prescribes — coordinated intervention across law, norms, markets, and architecture. But the AI supply chain needs something closer to industrial planning — managing material dependencies, securing critical infrastructure, preventing single points of failure. The governance gap is really two gaps: ungoverned speed at the application layer, ungoverned concentration at the foundation layer. Effective response requires different tools for different layers, deployed with awareness of how changes at one layer cascade through the others.