Governance of the surplus is this book's name for the institutional challenge the second cognitive surplus presents. Abundance requires governance: any system producing at scale requires mechanisms for quality assurance, conflict resolution, responsibility assignment, and protection of the people who depend on the system's outputs. The absence of governance does not produce freedom; it produces the conditions under which the powerful benefit at the expense of the vulnerable. The romantics of the early internet believed otherwise, arguing that the medium was inherently ungovernable. They were wrong, and the evidence of their wrongness was Wikipedia, whose governance — policies on neutrality and verifiability, dispute resolution mechanisms, elected administrators, cascading community authority — was not a restriction on contribution but the condition that made productive contribution at scale possible. The same structural logic applies to the second surplus, with substantially higher stakes, because the artifacts being produced are functional software whose failure modes can cause real harm.
The governance challenge has four dimensions requiring distinct but interconnected responses. Quality standards: what minimum quality AI-created software should meet before being shared broadly, with context-appropriate tiers ranging from personal utilities (no external standard) to tools in safety-critical contexts (approaching professional software standards). Liability: who bears responsibility when an AI-created tool malfunctions, given that creation distributes agency across the human creator, the AI model, the platform hosting the tool, and the company operating the model — a distribution existing liability frameworks are poorly equipped to handle. Intellectual property: how to assign credit and compensation, given that AI models are trained on vast corpora whose creators have generally not consented to the training and are not compensated for it. Platform governance: how to make visible and accountable the decisions platform operators make about what to highlight, what to restrict, and what to commoditize, decisions that exercise decisive but largely invisible influence over the surplus's deployment.
The ascending friction thesis applies to governance itself. The friction should ascend from restricting creation, which is counterproductive and impractical at the scale of billions of potential creators, to evaluating creation, which is essential. Governance that restricts who can create will suppress the surplus and forfeit its value. Governance that evaluates what has been created — providing quality assurance, liability frameworks, intellectual property protections, and platform accountability — channels the surplus toward collective value without suppressing the creative energy that produces it.
The instinct of regulators, trained in a world where production was concentrated in identifiable firms, is to regulate the producer. When production is distributed across millions of individual creators, regulating the producer is impractical and counterproductive. The alternative is to regulate the infrastructure — the platforms through which creations are shared, discovered, and used — and to design platform governance that maintains quality, assigns responsibility, and protects users without restricting creative freedom. This shift from producer regulation to infrastructure regulation is probably the most consequential governance reorientation the AI transition requires.
The Shirky Principle predicts that existing regulatory institutions, designed for professional production, will attempt to extend their frameworks to distributed creation. Partial extension is appropriate; wholesale extension will be counterproductive. The institutions that govern the second surplus effectively will be those that recognize the structural difference between concentrated professional production and distributed non-professional creation, and that design accordingly rather than forcing the new landscape into regulatory categories built for the old one.
The framework draws on Shirky's work on institutional adaptation since 2008, extended in this book to the specific challenges AI-enabled creation poses. The underlying observation — that governance is a prerequisite for productive abundance rather than an impediment to it — has been a consistent theme of Shirky's analysis since Here Comes Everybody.
Governance enables, not restricts. Productive abundance requires governance; Wikipedia's governance made contribution possible rather than constraining it.
The four dimensions. Quality standards, liability, intellectual property, and platform governance are distinct problems requiring coordinated responses.
Ascending governance. Governance friction should ascend from restricting creation to evaluating creation; the former is counterproductive at scale, the latter is essential.
Infrastructure over producer regulation. Regulating millions of individual creators is impractical; regulating the platforms they use is tractable.
The distributed agency problem. Responsibility in AI-enabled creation does not map cleanly onto existing legal frameworks designed for identifiable producers.