Mariana Mazzucato — On AI
Contents
Cover
Foreword
About
Chapter 1: The Myth of the Garage
Chapter 2: Public Risk, Private Reward
Chapter 3: The Training Data as Public Good
Chapter 4: The Democratization Paradox
Chapter 5: Who Deserves What
Chapter 6: Mission-Oriented Innovation and the Direction of AI
Chapter 7: Value Creation and Value Extraction
Chapter 8: Institutional Design for a Fair Transition
Chapter 9: The Entrepreneurial State and the Architecture of Return
Chapter 10: The Public Purpose
Epilogue
Back Cover

Mariana Mazzucato

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Mariana Mazzucato. It is an attempt by Opus 4.6 to simulate Mariana Mazzucato's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The number that made me flinch was not about artificial intelligence. It was about Siri.

Siri — the voice assistant that Apple unveiled in 2011 as a flagship feature of the iPhone 4S, the product that cemented Apple's reputation as the most innovative company on Earth. Siri was a DARPA project. Funded by the Defense Advanced Research Projects Agency, developed at the Stanford Research Institute with public money, spun out as a startup, and acquired by Apple eighteen months before launch. The touchscreen beneath Siri was developed at publicly funded laboratories. The GPS that told Siri where you were standing came from a twelve-billion-dollar constellation of military satellites. The internet that carried Siri's queries to Apple's servers was built by the Department of Defense.

I knew these facts individually. I had never stacked them.

Mariana Mazzucato stacked them, and the stack changed the shape of every argument I had been making about democratization, about the beaver, about who builds what and for whom.

In The Orange Pill, I wrote about the imagination-to-artifact ratio collapsing. I celebrated engineers in Trivandrum discovering capabilities they never had before. I meant every word. But I was telling the story from inside the garage, and Mazzucato forced me to look at the foundation the garage was built on. Public research sustained through two AI winters when every venture capitalist on Sand Hill Road had moved on. University labs funded by grants that would never make anyone rich. Decades of patient, unglamorous investment in basic science that no quarterly earnings report would ever reward.

The tools I used to write this book, that my engineers used to build Napster Station in thirty days, that millions of builders worldwide are using right now to close the gap between imagination and reality — those tools rest on a publicly funded foundation so extensive that the private structures built on top of it could not stand without it.

Mazzucato asks a question that the technology industry has never been comfortable answering: If the public bore the risk, why does only the private sector capture the reward?

That question does not diminish the builder's contribution. It enlarges the frame. It forces you to see the whole river, not just the stretch you happen to be swimming in. And it asks whether the dams we build should direct the flow toward everyone who contributed to the current — not just those who happened to be standing at the point where it became commercially valuable.

This book is that question, applied to AI with prosecutorial specificity. It made me uncomfortable. It should make you uncomfortable too. Discomfort is where the real thinking starts.

Edo Segal · Opus 4.6

About Mariana Mazzucato

1968–present

Mariana Mazzucato (1968–present) is an Italian-American economist and professor in the Economics of Innovation and Public Value at University College London, where she founded and directs the Institute for Innovation and Public Purpose (IIPP). Born in Rome and raised in the United States, she studied at Tufts University and the New School for Social Research before holding academic positions at the University of Denver and the University of Sussex. Her 2013 book The Entrepreneurial State: Debunking Public vs. Private Sector Myths fundamentally challenged the prevailing narrative that private enterprise drives innovation while government merely corrects market failures, documenting with granular precision the public funding behind technologies from the internet to the iPhone. Her subsequent works — The Value of Everything: Making and Taking in the Global Economy (2018) and Mission Economy: A Moonshot Guide to Changing Capitalism (2021) — developed frameworks for distinguishing value creation from value extraction and for organizing public investment around ambitious societal missions. She has advised governments and international institutions worldwide, including the European Commission, the South African government, and the World Health Organization. Her research program on "algorithmic rents," conducted with Tim O'Reilly and Ilan Strauss, applies her analytical framework to platform monopolies and AI. She is widely regarded as one of the most influential economists shaping the debate over innovation policy, industrial strategy, and the public purpose of emerging technologies.

Chapter 1: The Myth of the Garage

There is a photograph that functions as scripture in the religion of American innovation. It shows a single-car garage in Los Altos, California — the place where Steve Jobs and Steve Wozniak allegedly built the first Apple computer. The city of Los Altos has designated it a protected historic site. Tour buses decelerate as they pass. The image appears in keynote presentations, business school syllabi, and the opening sequences of documentaries about Silicon Valley with the regularity of a liturgical reading.

The photograph is not false. Work happened in that garage. But the photograph, as it operates in the cultural imagination, performs a specific ideological function: it locates the origin of innovation in private initiative and individual genius. Two young men. A workbench. A soldering iron. An idea that changed the world. The story begins here, in this garage, and everything that follows — the company, the products, the transformation of daily life — flows from this origin point like water from a spring.

The spring is an illusion. What the photograph omits is the aquifer.

The semiconductor industry on which the Apple computer depended was created by military procurement contracts that sustained companies through decades of commercial losses no venture capitalist would have tolerated. The computer science knowledge Wozniak drew upon was produced in university laboratories funded by the National Science Foundation and the Department of Defense. The ARPANET, the government-built precursor to the internet, would eventually transform the personal computer from an expensive curiosity into a platform for everything. The public education system trained the engineers. Public infrastructure enabled the supply chains. Government-funded basic research in materials science, electrical engineering, and information theory provided the intellectual foundations on which every component of the Apple computer rested.

The garage is real. The myth of the garage is that the garage was sufficient.

Mariana Mazzucato has spent her career documenting this omission with prosecutorial specificity. Her 2013 book The Entrepreneurial State traced the funding genealogy of virtually every major technology in the iPhone — GPS (funded by the US Navy), touchscreens (developed at publicly funded laboratories including CERN), the internet itself (DARPA), Siri (a DARPA-commissioned project at the Stanford Research Institute) — and demonstrated that the device celebrated as the supreme achievement of private-sector innovation was, in its foundational technologies, a product of public investment. As she put it: "What would Google be without the DARPA-funded internet? What would Uber be without the US Navy-funded GPS? What would Apple be without the CIA-funded touchscreen technology and DARPA-funded voice assistant, Siri?"

The questions are rhetorical. The answers are devastating. And the pattern they reveal is not historical trivia. It is the operating system of a distributional injustice that the artificial intelligence revolution is reproducing at unprecedented scale.

The genealogy of AI's foundational technologies follows the same structure with a fidelity that would be remarkable if it were not so predictable. The transformer architecture, the computational framework on which virtually every major large language model depends, was introduced in a 2017 paper by researchers at Google titled "Attention Is All You Need." The paper is routinely cited as a landmark of private-sector research. Google receives the credit. The narrative is clean.

Trace the funding chain backward, and the narrative fractures. The mathematical foundations of the transformer — the attention mechanisms, the sequence-to-sequence learning frameworks, the distributed representation approaches — were developed over decades of research funded substantially by public institutions. The National Science Foundation provided grants that supported neural network theory through the 1980s and 1990s, precisely the years when private-sector interest in artificial intelligence had evaporated. DARPA funded research in natural language processing, machine translation, and computational linguistics that produced many of the techniques the transformer refined and scaled. University laboratories across the United States, Canada, and Europe — sustained by public funding through two AI winters during which the private sector abandoned the field — trained the researchers who eventually produced the breakthroughs that private companies commercialized.

Geoffrey Hinton's foundational work on deep learning was conducted at the University of Toronto, supported by Canada's Natural Sciences and Engineering Research Council. Yann LeCun developed convolutional neural networks at Bell Labs and later New York University, where NSF grants and DARPA funding sustained the work. Yoshua Bengio advanced recurrent neural networks and co-developed attention mechanisms at the Université de Montréal with Canadian public funding. These three researchers — who shared the 2018 Turing Award and are widely regarded as the founders of modern deep learning — built the theoretical and practical foundations of the AI revolution primarily with public money, during periods when the private sector considered their work commercially worthless.

The private sector entered the field in force only after the risk had been substantially reduced. Facebook's hiring of LeCun to lead its new AI research laboratory in late 2013, Google's acquisition of DeepMind weeks later in early 2014, and the subsequent explosion of corporate AI investment all occurred after decades of publicly funded research had demonstrated that deep learning was technically viable and commercially promising. The salaries became private. The intellectual capital was substantially public.

Mazzucato's framework identifies this pattern as the "socialization of risk and privatization of reward" — a phrase that sounds like a slogan but functions as a precise description of a measurable phenomenon. The state invests during the high-risk phase, when outcomes are uncertain and time horizons extend beyond any private investor's patience. The private sector enters during the low-risk phase, when the foundational science has been validated and the remaining challenge is commercialization. The returns flow to the private sector. The state receives no equity stake, no royalty stream, no direct financial participation in the commercial success of the technologies it funded.

The numbers make the asymmetry concrete. The National Science Foundation's annual budget for computer and information science research is approximately one billion dollars. The combined revenue of the five largest AI companies exceeds two hundred billion dollars annually. These companies' products are built on research that NSF, DARPA, and their international equivalents funded over half a century. The ratio between public investment and private return is not merely unfavorable. In Mazzucato's analytical vocabulary, it is structurally extractive.
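A back-of-envelope comparison, using only the figures above, makes the scale explicit. Treat the NSF's computer and information science budget as if it had held at today's roughly one billion dollars for the full half-century — an overestimate of the cumulative public outlay, since earlier budgets were smaller, which only strengthens the point — and set it against the private side's annual revenue:

$$
\underbrace{\$1\ \text{billion/year} \times 50\ \text{years}}_{\text{cumulative public investment}} \approx \$50\ \text{billion}
\qquad \text{versus} \qquad
\underbrace{\$200\ \text{billion/year}}_{\text{private revenue}}
$$

On these rough assumptions — which ignore DARPA and international funders on one side and non-AI revenue on the other — a single year of private revenue is roughly four times the entire fifty-year public investment. This is an illustration of scale, not an accounting.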

The erasure of public investment from the innovation narrative is not passive. It is actively maintained by the institutional architecture through which innovation is discussed, celebrated, and rewarded. Every corporate earnings call that credits "our world-class engineering team" without mentioning publicly funded research reinforces the myth. Every venture capital pitch that celebrates a founder's "vision" without acknowledging public infrastructure reinforces the myth. Every business school case study that analyzes a startup's success without examining the publicly funded research ecosystem that produced the foundational technology reinforces the myth.

The reinforcement is structural, not conspiratorial. The people who tell these stories genuinely believe them, because the institutional architecture of the innovation economy makes public investment invisible. The researcher who publishes a paper does not appear in the startup's cap table. The grant officer who funded the research does not attend the IPO. The taxpayer who financed the grant does not receive a share certificate. The public's contribution is real, documented, essential — and absent from every financial and narrative structure through which innovation is celebrated and rewarded.

The consequences are not merely rhetorical. They are distributional. When the public contribution to innovation is erased from the narrative, the case for public return on that investment disappears with it. The political logic follows directly: if innovation is produced by entrepreneurs in garages, then public research funding is a luxury, not a necessity. Federal R&D spending as a share of GDP in the United States has declined from approximately 1.2 percent in 1976 to approximately 0.7 percent in 2024 — a decline that accelerated precisely as the returns from publicly funded research grew exponentially. The state is investing less at the exact historical moment when the returns from its investments are greater than at any previous point in history.

Mazzucato herself has emphasized this dynamic with increasing urgency in her AI-specific commentary. At the 2025 IIPP Forum, she told the audience: "The narrative that it's the tech bros creating all the value is just wrong." At the Algorithmic Rents Research Showcase in 2023, she drew the connection explicitly, contrasting "the early history of AI research where most researchers were at public institutions like DARPA, with current AI research, where most researchers are in the private sector." The migration happened not because private companies were doing better science, but because they were offering higher compensation — compensation funded, in significant part, by the extractive rents that platform monopolies generate. The brain drain from public to private AI research is itself a consequence of the distributional asymmetry the myth of the garage conceals.

The AI transition makes this pattern more consequential than any previous technology cycle for three reasons.

The scale of the returns is larger. The combined market capitalization of the AI industry exceeds anything the pharmaceutical or semiconductor industries produced. The gap between public investment and private return is correspondingly wider.

The speed of the transition is faster. The pharmaceutical industry took decades to commercialize publicly funded mRNA research. The AI industry is commercializing publicly funded deep learning research in years. The compressed timeline means that institutional responses must be designed and implemented faster than any previous technology transition has required.

The concentration of returns is more extreme. The AI industry is characterized by network effects and scale economies that produce winner-take-most dynamics. The computational cost of training a competitive large language model — measured in hundreds of millions or billions of dollars — creates barriers to entry that concentrate the market among a handful of firms. The distributional consequences of the public-risk-private-reward pattern are amplified by the market structure in which it operates.

The myth of the garage is the founding fiction of an institutional architecture that concentrates the returns from publicly funded innovation in private hands. It operates at three levels simultaneously. At the narrative level, it credits individual genius for achievements produced by collective investment. At the political level, it delegitimizes the public institutions that funded the foundational research. At the distributional level, it provides the ideological justification for an arrangement in which the public bears the risk and the private sector captures the reward.

Dismantling the myth is not an exercise in historical correction. It is the necessary precondition for institutional redesign. The question the AI transition forces is not whether the garage matters — it does — but whether the garage is the whole story. The answer, documented in funding records, patent filings, and budget appropriations spanning seven decades, is unambiguous. The garage sits on a foundation of public investment so extensive that the private structure built on top of it could not stand without it.

The institutional architecture must be redesigned to reflect this reality. The alternative is a distributional outcome in which the most powerful technology in human history enriches its commercializers while the public institutions that made it possible face budget cuts justified by the very myth those institutions' work disproves.

---

Chapter 2: Public Risk, Private Reward

The pharmaceutical industry provides the clearest precedent for understanding the distributional dynamics of the AI transition, because the pharmaceutical industry has been operating under the same pattern — public risk, private reward — for longer, with better documentation, than any other sector of the innovation economy.

Consider the development of mRNA vaccine technology. The narrative that reached the public was straightforward: Pfizer and Moderna, two private pharmaceutical companies, developed COVID-19 vaccines with unprecedented speed, saving millions of lives and earning billions of dollars in revenue. The narrative credited private-sector innovation, entrepreneurial risk-taking, and the competitive dynamics of the pharmaceutical market.

The documented funding history tells a different story.

The foundational research on messenger RNA as a therapeutic platform was conducted over three decades, primarily in university laboratories funded by the National Institutes of Health and equivalent agencies in other countries. Katalin Karikó, the Hungarian-born biochemist whose work on modified nucleosides made mRNA vaccines feasible, conducted her early research at the University of Pennsylvania, supported by NIH grants. Her work was considered so unpromising by the standards of private-sector investment that she was repeatedly denied tenure and demoted in rank. The private sector did not fund this work. It did not identify its potential. It did not bear the risk of the decades of uncertain research that eventually produced a viable therapeutic platform.

What the private sector did was commercialize the platform once the risk had been substantially reduced by publicly funded research. Moderna received nearly a billion dollars in federal funding through Operation Warp Speed. Pfizer received a two-billion-dollar advance purchase commitment from the US government, eliminating the commercial risk of development. Both companies received publicly funded clinical trial infrastructure, regulatory fast-tracking, and liability protections that further reduced their commercial exposure.

The public investment in mRNA research was sustained over three decades, through multiple funding cycles, across administrations of both parties, during periods when the commercial prospects were considered negligible by the private sector. Conservative estimates place the total public investment in mRNA-related research — foundational work on nucleoside modification, lipid nanoparticle delivery systems, clinical trial infrastructure, training of the scientific workforce — at over thirty billion dollars.

The returns were distributed as follows: Moderna's COVID-19 vaccine revenue exceeded thirty-six billion dollars in the first two years. Pfizer's exceeded that figure. Combined revenue surpassed seventy billion dollars. The NIH, which funded the foundational research over three decades, received no direct equity return, no royalty stream, no financial participation in the commercial success of a technology it had substantially funded.

The public bore the risk. The private sector captured the reward. This is not a controversial claim. It is a documented fact.

Mazzucato's framework identifies the mechanism with precision. The state operates as what she terms an "entrepreneurial investor" — taking on risks that the private sector will not bear because the expected returns are too uncertain, too distant, or too diffuse. A venture capital fund operates on a ten-year time horizon. The foundational research on mRNA took thirty years to reach commercial application. No venture capital fund in history has maintained a thirty-year investment in a technology with no commercial application. The state can make these bets because it operates on a different time horizon and with different accountability structures. The NSF does not report quarterly earnings. DARPA does not face pressure from activist shareholders to divest from underperforming research programs.

This asymmetry — public risk-taking, private reward-capturing — is the structural template for the AI transition.

The foundational research on neural networks was conducted primarily in university laboratories from the 1950s through the 2010s, funded by government agencies during periods when private-sector interest had largely evaporated. The first AI winter, from the mid-1970s through the mid-1980s, saw private funding decline precipitously. The second, from the late 1980s through the mid-1990s, produced a similar withdrawal. During both periods, the research that eventually enabled the current AI revolution was sustained by public funding through DARPA, the NSF, and equivalents in Canada, the United Kingdom, and France.

The private sector entered in force only after the risk had been substantially reduced. Google's acquisition of DeepMind in 2014, Facebook's recruitment of Yann LeCun shortly before, and the subsequent explosion of corporate AI investment all followed decades of publicly funded validation. The commercial AI boom began not when the science became possible — that happened years earlier in public labs — but when the science became safe to invest in.

The returns are being distributed according to the pharmaceutical template. The five largest AI companies have captured hundreds of billions in market value from technologies built on publicly funded research. The public institutions that funded that research have received no direct financial return. The researchers who conducted the foundational work — often at modest academic salaries while producing the intellectual capital the private sector later commercialized — have, in most cases, received no equity participation in the commercial applications of their research.

Mazzucato's recent AI-specific work has sharpened this analysis considerably. In her February 2025 essay "AI for What?", she examined how "reducing the computational barriers to AI development" — as demonstrated by China's DeepSeek, which delivered performance comparable to leading models at a fraction of the computing cost — still might not "ensure these technologies serve the public good" if "other forms of market concentration emerge." The barriers to entry are not only computational. They are institutional, informational, and structural — and they operate to channel the returns from publicly funded science toward the same concentrated set of private beneficiaries regardless of how cheaply the models can be trained.

In her 2024 collaboration with Fausto Gernone, Mazzucato was more direct: "The incentives around AI are aligned for rent extraction, where large companies, acting as intermediaries, amass profits at others' expense." The word "intermediaries" is precise and important. The AI companies are not, in Mazzucato's framework, the originators of the value they capture. They are intermediaries between publicly funded research and commercial application — intermediaries who have positioned themselves to extract the maximum possible return from that intermediation while returning the minimum possible share to the public that funded the inputs.

The asymmetry has a temporal dimension that compounds the injustice. Public investment in AI research was sustained through decades when the technology produced no commercial return. The patience required — funding neural network research through two AI winters, supporting researchers whose work was dismissed as impractical by the private sector for twenty years — is a form of risk-bearing that no private entity would undertake. The state bore this risk not because it was guaranteed a return, but because it operates under a mandate broader than quarterly earnings.

Now the returns have arrived. They are enormous. And the institutional architecture provides no mechanism for the patient investor — the public — to participate in the returns its patience made possible.

This institutional gap is not a design flaw in the usual sense. It is the absence of design. The Bayh-Dole Act of 1980, which allowed universities to patent discoveries made with federal funding and license those patents to private companies, was designed to solve a commercialization problem: publicly funded research was sitting unused because the government retained the patents and private companies had no incentive to invest in commercialization. Bayh-Dole solved that problem. It did not solve the distribution problem. Patents are licensed for fees that represent a fraction of commercial returns. The public bears the full risk of foundational research and receives a minimal share of the financial upside when that research succeeds.

The Bayh-Dole framework was adequate for an era when the returns from publicly funded research were smaller, slower, and more diffusely distributed. It is structurally inadequate for the concentrated, massive returns the AI industry generates. A licensing fee that captures two percent of the commercial value of a moderately successful pharmaceutical product is a different proposition than a licensing fee that captures two percent of the commercial value of a technology whose market capitalization exceeds the GDP of most nations.
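A hypothetical worked example makes the inadequacy concrete. The dollar figures below are illustrative, not drawn from any actual license; only the two-percent rate comes from the paragraph above:

$$
0.02 \times \$1\ \text{billion} = \$20\ \text{million}
\qquad\qquad
0.02 \times \$1\ \text{trillion} = \$20\ \text{billion}
$$

The public's percentage is identical in both cases, but in the second the remaining ninety-eight percent — \$980 billion on these illustrative numbers — accrues to private holders of a technology whose foundational research the public financed and whose risk the public bore.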

Mazzucato's framework points toward a specific institutional remedy: the state should participate in the upside of the investments it makes. Not through licensing fees alone, but through equity stakes in companies that commercialize publicly funded research. Through conditions on patents derived from public investment, requiring reasonable pricing, accessibility, or direct royalty payments to public research funds. Through taxation structures designed to capture a share of windfall profits from technologies built on publicly funded foundations. Through public investment funds, modeled on sovereign wealth funds, that receive returns proportional to the public's contribution to the foundational research.

Several countries have experimented with versions of these mechanisms. Finland's Sitra has operated since 1967 as a public innovation fund that invests in technologies derived from publicly funded research. Israel's Yozma program seeded the country's venture capital industry in the 1990s and included provisions for public equity stakes. Norway's Government Pension Fund Global demonstrates that public investment can be managed at enormous scale with professional governance and competitive returns.

The objection that such mechanisms would reduce the incentive to innovate is not supported by the evidence. The Nordic countries, which maintain the highest levels of public investment in research and the most extensive mechanisms for public participation in returns, are consistently ranked among the most innovative economies in the world. Israel, whose Yozma program included public equity stakes, developed one of the most dynamic venture capital ecosystems in history. The empirical record does not support the claim that public participation in returns reduces innovation. If anything, it suggests the opposite: that public investment and public participation create conditions for more innovation, not less.

The AI transition demands these mechanisms at a scale previous technology cycles did not require. The returns are larger. The concentration is more extreme. The public investment that produced them is more extensively documented. And the window for institutional design is narrowing with every quarter that the distributional patterns of the AI economy harden into structural features of the post-AI world.

The mRNA precedent is instructive not only as evidence but as warning. The distributional consequences of public-risk-private-reward in pharmaceuticals became politically visible during the COVID-19 pandemic, when companies that had received billions in public funding charged market prices for products the public had substantially funded. The political response was inadequate: temporary intellectual property waivers that arrived too late and were implemented too narrowly. The institutional architecture was not redesigned. The pattern continues.

The AI transition offers an opportunity — rapidly narrowing — to redesign the architecture before the distributional consequences become a crisis. The tools exist. The precedents are documented. What is missing is political will, and political will is eroded daily by the myth of the garage, which whispers that the public's contribution was peripheral and the private sector's genius was the whole story.

---

Chapter 3: The Training Data as Public Good

The training data on which large language models depend represents what may be the most consequential conversion of a public good into private value in economic history. The scale of this conversion, its speed, and the absence of any institutional mechanism for compensating the public that produced the data constitute a distributional challenge that existing frameworks are not equipped to address.

A large language model is trained on a corpus of text that includes, by various estimates, trillions of tokens drawn from the accumulated written output of human civilization. Books, scientific papers, newspaper articles, encyclopedias, legal documents, software code, online forums, blog posts, government publications, academic theses, patent filings — virtually every form of written expression that has been digitized and made accessible.

This text was not produced by the companies that train their models on it. It was produced by billions of people over centuries, working within institutions substantially funded by public investment.

The books were written by authors educated in publicly funded schools and universities, published through an industry sustained by public literacy programs and library systems, preserved in archives maintained by public institutions. The scientific papers were written by researchers funded by government grants, working in publicly funded laboratories, published in journals operating within a system of peer review sustained by public academic institutions. Wikipedia, the largest single source of structured human knowledge ever assembled, was produced by volunteer labor building on the foundation of publicly funded education. Government publications were produced by public agencies. Legal documents were produced by public courts. The digitization infrastructure — the internet itself — was built with public money.

The training data is not a natural resource discovered by private companies. It is a cultural resource produced by human civilization over millennia, sustained by public investment in education, research, libraries, archives, and communication infrastructure. In any meaningful analytical sense, it is a public good.

Mazzucato's framework supplies the vocabulary for what is happening. The AI companies are performing what she calls "value extraction" — capturing returns from value created by others without producing corresponding value. The creation happened across centuries of human intellectual production. The extraction happens when that production is ingested, processed through neural network architectures, and converted into a proprietary model whose capabilities reflect the accumulated knowledge of the training data but whose ownership resides entirely with the company that performed the conversion.

The model is private property. The training data from which it was derived is a public good. The distributional injustice resides in the gap between these two categories.

The legal framework governing this conversion is inadequate. Copyright law was designed to protect individual works of authorship, not to address the systematic extraction of value from the entire corpus of human knowledge. The concept of fair use — permitting limited use for education, commentary, and research — was not designed for the ingestion of trillions of tokens for commercial purposes. Courts in multiple jurisdictions are adjudicating whether AI training constitutes fair use, but the legal question, regardless of resolution, does not address the underlying distributional question. Even if training is found legal under existing law, the distributional asymmetry remains: the public produced the data, the public funded the institutions that produced it, and the companies that trained their models on it are capturing enormous returns from a resource they neither created nor funded.

Mazzucato and Fausto Gernone addressed the creative labor dimension of this extraction directly in their July 2025 piece arguing that "generative AI models are trained on publicly accessible creative content yet offer little to the artists, journalists, coders, and others who produce it." They proposed "a levy on AI firms' revenues" to fund the creative production on which AI depends — treating creative knowledge explicitly as a public good that requires collective funding. On LinkedIn, Mazzucato summarized the argument with characteristic directness: "Generative AI runs on a paradox: it relies on content creators — artists, journalists and coders — whose work trains its models while eroding their incomes."

The analogy to natural resource extraction is instructive but insufficient. When a mining company extracts minerals from public land, it pays a royalty for the right to extract. The royalty is typically a percentage of the extracted value, ensuring that the public receives some return on a resource it owns. AI companies extract value from the training data — a publicly produced resource — and pay nothing for the right to do so.

The fact that data extraction is non-rivalrous — a book used as training data is still available for reading — does not resolve the distributional question. The value captured by the AI company from the training data is real and substantial regardless of whether the data is depleted. The relevant question is not whether extraction causes physical harm but whether the distribution of economic returns is fair.

Consider the magnitude of the value transfer. The combined market capitalization of major AI companies reflects, in significant part, the value of their trained models. Those models' capabilities derive substantially from the training data. The training data was produced by publicly funded institutions and individual contributors who received no compensation. The value transfer runs from billions of individual creators and public institutions to a handful of private companies. At the scale at which it is occurring — trillions of dollars in market value built on a commons that received no payment — it represents the largest uncompensated extraction of public value in economic history.

Several mechanisms have been proposed for addressing this extraction, each with limitations but all superior to the current arrangement of no compensation whatsoever.

A training data dividend, modeled on the Alaska Permanent Fund, which distributes oil royalties to all state residents, could distribute a share of AI company revenue to the public whose intellectual production enabled the training. The mechanism is elegant but raises questions about entitlement: the training data includes contributions from people in every country, living and dead, and no single jurisdiction can claim exclusive ownership.

A public licensing framework could require AI companies to pay fees for publicly produced training data, with fees directed to public research and education funds. This approach treats the problem as a market transaction — the public owns something valuable and charges for its use — but raises questions about valuation. How much of an AI model's capability derives from the training data, as opposed to the engineering talent, computational infrastructure, and private research investment that produced the model?

A data commons trust, perhaps the most structurally ambitious proposal, would manage the public's interest in the training data collectively: negotiating terms of access on behalf of the public, distributing returns to public institutions, and ensuring that access conditions balance fair compensation against the public interest in continued AI development.

Mazzucato's broader proposal for a "digital windfall tax" to fund open-source AI and public innovation addresses the same distributional asymmetry from a different angle. Rather than trying to price the training data directly — a technically complex undertaking — the windfall tax captures a share of the returns from the entire AI economy, on the principle that those returns derive substantially from public inputs and should partially flow back to public purposes. As she argued in her critique of the UK's AI Action Plan: "AI should be a public good, not a corporate tollbooth."

The international dimension adds considerable complexity. Training data is global — it includes contributions from people in every country, in hundreds of languages, across centuries. Any institutional framework for distributing returns must address this global scope, either through international coordination or through national mechanisms that acknowledge the inherently transnational character of the public's contribution. The difficulty of designing such mechanisms does not justify the current arrangement, in which the returns are distributed entirely to private shareholders while the public receives nothing.

Mazzucato's environmental research adds another dimension to the training data question that most analyses overlook. In her 2024 Guardian article, she highlighted that "large language models such as ChatGPT are some of the most energy-guzzling technologies of all," noting that "about 700,000 litres of water could have been used to cool the machines that trained ChatGPT-3." The environmental cost of training models on vast corpora of publicly produced data is borne collectively — through climate impact, water consumption, and energy demand — while the economic returns are captured privately. The extraction is not only informational. It is material, drawing on shared environmental resources to process shared intellectual resources for private gain.

The urgency of institutional action is compounded by the speed of the AI transition. The training data is being consumed at an accelerating rate. Each new generation of models is trained on larger corpora. Synthetic data generated by existing models may eventually supplement human-produced training data, but the foundational capabilities that make synthetic generation possible derive from the original human-produced corpus. The window for establishing mechanisms that ensure a fair return on the public's contribution is narrowing with each new model release.

The absence of a perfect mechanism does not justify the absence of any mechanism. An imperfect system that captures some share of the returns is superior to a system that captures none. The precedent of mineral rights is instructive: when oil was discovered beneath public land in the early twentieth century, the legal framework for managing extraction did not exist. It was built through decades of legislative experimentation and political negotiation. The resulting system is imperfect and contested. But it established the principle that the public has a legitimate claim on the returns from the commercialization of publicly held resources, and it created institutional mechanisms that operationalize that principle at meaningful scale.

A training data royalty requires analogous pragmatism. The initial framework need not be perfect. It must establish the principle — the public produced the training data and is entitled to a return — and create mechanisms that operationalize the principle at sufficient scale to produce meaningful distributional outcomes. Refinement follows experience.

The public good argument is not about restricting access. It is about ensuring that the conversion of a public good into private value produces a fair return for the public that created the good. The training data was produced by human civilization. The returns from its commercial application should flow back to human civilization through institutional mechanisms designed to support the public research, education, and cultural production that created the resource in the first place.

---

Chapter 4: The Democratization Paradox

The democratization of capability that AI delivers is genuine, observable, and measurable. The evidence is abundant. Individuals are building software products, launching businesses, creating complex systems, and producing professional-quality outputs in domains where they previously lacked the technical skills to participate. The barrier between intention and artifact has collapsed to a degree that would have been inconceivable five years ago.

The paradox is that this genuine democratization is being delivered through channels that simultaneously create new forms of dependency, concentration, and extraction. The individual builder's capability is expanded. The individual builder's autonomy is constrained. Both dynamics operate simultaneously, through the same platforms, in the same transactions. The failure to see both is the most dangerous analytical error available in the current moment.

Mazzucato's framework provides the precise vocabulary for this paradox. She distinguishes between the surface phenomenon — expanded access, lowered barriers, broadened participation — and the structural arrangement through which the surface phenomenon is delivered. The surface can be genuinely democratizing while the structure is genuinely concentrating. This is not a contradiction. It is the defining feature of platform economies, and the AI platform economy is its most extreme expression.

Consider the conditions under which the democratization occurs. Engineers who discover extraordinary productivity multipliers through AI tools access that capability through proprietary platforms — Claude, GPT, Gemini — owned by a small number of private companies. Their expanded capability exists at the pleasure of the platform. It is subject to pricing decisions they cannot influence, terms of service they cannot negotiate, and architectural choices they cannot challenge, and it is contingent on the platform's continued existence as a going concern.

If the platform raises prices, the builder's capability is effectively taxed without representation. If the platform implements usage caps or tiered access, the builder's daily workflow is restructured by corporate decisions made thousands of miles away. If the platform changes its terms of service — expanding its claim on user data, restricting certain use cases, modifying its acceptable use policies — the builder must accept the new terms or abandon the capability that has become integral to her livelihood. If the platform deprecates a feature around which she has built her workflow, her investment in learning that feature is unilaterally devalued. If the platform goes bankrupt, her capability disappears entirely.

The capability is real. The dependency is structural. And the dependency deepens with use.

This is the dynamic Mazzucato identified in her 2019 coinage of "digital feudalism" — a term she has applied with increasing specificity to AI. In a February 2025 Project Syndicate essay, she warned that "given the pace of AI development, policymakers and civil society must step in now to ensure that the next general-purpose technology serves the public interest. Otherwise, already dominant monopolists will supercharge the socially harmful digital business models they perfected over the past decade." The feudal analogy is deliberate: in a feudal economy, the peasant's productivity increases through better farming techniques, but the lord who owns the land captures the surplus. In the AI platform economy, the builder's productivity increases through AI tools, but the platform that controls access to those tools is positioned to capture an increasing share of the value the builder creates.

The paradox has instructive historical precedent. The agricultural revolution democratized food production, enabling more people to feed themselves than hunter-gatherer economies could support. But the enclosure movement that accompanied agricultural development concentrated land ownership, converting common resources into private property and creating a class of laborers who were simultaneously more productive — through improved techniques — and more dependent, on the landowners who controlled access to the means of production. The industrial revolution democratized access to manufactured goods at prices pre-industrial consumers could not have imagined. But the factory system created new dependencies: workers who relied on factory owners for employment, had no control over working conditions, and whose bargaining power was diminished by the surplus of labor created by agricultural displacement.

In each case, the democratization was genuine and the dependency was also genuine. The institutional mechanisms that eventually addressed the dependency — labor rights, land reform, social insurance, competition law — were not produced by the market. They were produced by political struggle, institutional design, and the explicit recognition that democratization without institutional safeguards produces concentration, not distribution.

The AI platform economy follows this pattern but with characteristics that make the concentration dynamics more severe. The computational cost of training competitive large language models — measured in hundreds of millions or billions of dollars — creates barriers to entry that no previous platform economy has matched. The data advantages of incumbents, who can improve their models using the interactions of millions of users, compound over time. The network effects are self-reinforcing: the more builders use a platform, the more data the platform accumulates, the better its models become, the more builders it attracts.

Mazzucato's Algorithmic Rents research program, conducted with Tim O'Reilly and Ilan Strauss at UCL's Institute for Innovation and Public Purpose, provides the analytical framework for understanding how this concentration operates. Their research found that "today's algorithmic systems are increasingly being used to extract what we call 'algorithmic rents' — using AI and machine learning not to create genuine value, but to concentrate market power and extract wealth from users and smaller players in the digital economy." The concept of "algorithmic attention rents" — published in the peer-reviewed journal Data & Policy in 2024 — describes how platforms grow increasingly capable of extracting value from users through algorithmic control over attention as their market position consolidates.

Applied to AI platforms specifically, the algorithmic rents framework illuminates a dynamic that the democratization narrative obscures. The builder pays a subscription fee — currently in the range of twenty to two hundred dollars per month — for access to capabilities that multiply her productivity dramatically. The financial terms appear extraordinarily favorable. But the builder also provides data to the platform: queries, code, workflows, creative outputs, patterns of use. This data improves the platform's models, informs its product strategy, and increases its competitive advantage. The builder provides attention — time within the platform's ecosystem that could be spent elsewhere. The builder provides lock-in — the deeper the integration, the higher the switching cost, the greater the platform's power.

The platform captures value from each of these channels. Subscription revenue is the most visible. Data value, attention value, and lock-in value are less visible but potentially more consequential over time. The builder experiences democratization. The platform accumulates concentration.

At the Algorithmic Rents Research Showcase in 2023, Mazzucato made the connection between this concentration dynamic and the brain drain from public AI research to the private sector. She "contrasted the early history of AI research where most researchers were at public institutions like DARPA, with current AI research, where most researchers are in the private sector," attributing the migration to "higher incomes in the private sector, which in turn come from extractive rents (bad), not profits (good). Profits come from true technological innovation… Rents could come from monopolies that can raise prices without losing customers." The distinction between profits and rents is foundational to Mazzucato's analytical framework, and its application to the AI platform economy is illuminating. The AI platforms' revenues derive partly from genuine innovation — they have built products that create real value for users — and partly from the monopolistic position that computational barriers, data advantages, and network effects have created. The rent component of their revenue is the portion that reflects market power rather than value creation, and it is this component that funds the compensation packages that drain talent from public research institutions.

The democratization paradox is not an argument against democratization. The capability expansion is genuinely valuable, and the builders who benefit from it are better off than they were before the tools existed. The paradox is an argument for institutional design that addresses the dependency dimension — that ensures builders have a voice in platform governance, that prevents platforms from exercising market power in ways that undermine the democratization they deliver, and that creates the competitive conditions under which platforms must compete for builders rather than extracting from captive ones.

Mazzucato has been explicit about the institutional mechanisms required. In her critique of the UK government's AI Action Plan, she argued that "governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies" and proposed that "a digital windfall tax should be applied to help fund open-source AI and public innovation" and that "the United Kingdom needs to develop its own public AI infrastructure guided by a public-value framework." The public AI infrastructure proposal is particularly consequential: it envisions an alternative to purely private platforms, a public option that would provide builders with access to AI capability without the dependency dynamics that private platforms create.

Competition policy must address the oligopolistic structure of the AI platform market — not only through traditional antitrust enforcement but through structural interventions: mandating interoperability between platforms, establishing open standards for AI tool interfaces, and funding public alternatives to proprietary systems. Data portability standards must enable builders to extract their contributions from one platform and import them to another. Governance structures must give builders a voice in platform decisions that affect their working conditions.

The deeper structural question is whether democratization delivered through concentrating channels is genuinely democratic at all, or whether it represents a new variation on an old pattern: expanded access as the visible surface of deepening control. Mazzucato's warning about digital feudalism is that the surface looks like freedom — more people can build, create, produce — while the structure looks like extraction: the platforms that enable the building position themselves to capture an increasing share of the value the building creates.

The answer to this question is institutional, not technological. The same platforms that currently concentrate control could, under different institutional conditions, deliver genuine democratization — if competition policy ensured that no single platform could exercise monopolistic power, if interoperability requirements ensured that builders could move freely between platforms, if governance structures ensured that builders had a voice in the decisions that shape their working conditions, if public alternatives ensured that access to AI capability did not depend entirely on the commercial decisions of private companies.

The technology enables democratization. The institutional architecture determines whether democratization is real or illusory. The current architecture favors concentration. Redesigning it is the central challenge of AI governance — and the window for redesign narrows with every quarter that the platforms' market positions consolidate, their data advantages compound, and their lock-in deepens.

---

Chapter 5: Who Deserves What

The distributional question is the question that every celebration of technological progress eventually collides with, and the collision is never gentle. The question is not whether AI creates value — it manifestly does — but who captures that value, who bears the costs of the transition that produces it, and whether the distribution of gains and losses reflects anything resembling the distribution of contributions and risks.

Mazzucato's framework insists on asking this question at the moment of maximum enthusiasm, when the temptation to defer it is strongest. The enthusiasm is warranted. The technology works. The productivity gains are real. The expansion of human capability is measurable. None of this is in dispute. What is in dispute — or rather, what should be in dispute but largely is not — is whether the institutional architecture through which these gains are distributed produces outcomes that are just, sustainable, or even minimally rational from the standpoint of long-term economic stability.

The current distribution concentrates returns among three groups: the shareholders of AI companies, the small number of highly skilled workers who complement AI most effectively, and the platform operators who control the infrastructure through which AI capability is delivered. The costs are distributed among a different set of groups: the public institutions that funded the foundational research and receive no financial return, the workers whose tasks are automated and who receive inadequate transitional support, and the billions of people whose intellectual production serves as training data without compensation.

This distribution is not the result of market forces operating on a level playing field. It is the result of institutional design choices — patent regimes, tax structures, labor policies, regulatory frameworks — that systematically favor the capturing of returns by those who commercialize innovation over those who fund it, produce its inputs, or bear its costs. Mazzucato's career-long argument is that these design choices are not natural laws. They are human decisions, made in specific institutional contexts, and they can be revised.

The labor dimension of the distributional question is where the abstraction meets the kitchen table, and it is where Mazzucato's framework demands the most concrete engagement. The aggregate statistics tell one story: AI increases productivity, creates new categories of work, and — the optimistic version — will eventually produce more jobs than it destroys, as every previous general-purpose technology has done. The aggregate statistics are probably correct. They are also irrelevant to the person sitting across the table from you.

The software developer whose value resided in translating specifications into working code faces a specific, personal crisis that aggregate labor market statistics cannot address. Her skills were genuinely hard to acquire. They represented years of investment — educational, financial, cognitive. The market rewarded those skills generously for two decades. Now the market is repricing them in real time, and the repricing is not gradual. It is the kind of discontinuous drop that makes a career feel like a trapdoor.

The legal associate whose practice consisted primarily of document review, contract drafting, and research memoranda faces the same structural displacement. The radiologist whose diagnostic accuracy is matched by an AI system trained on millions of images. The financial analyst whose modeling work can be replicated in minutes. The translator whose fluency in two languages was, until recently, a marketable skill and is now a commodity. In each case, the displaced worker's predicament is not a failure of individual adaptation. It is the predictable consequence of a technology transition whose costs are borne by the workers least equipped to absorb them.

Mazzucato's distributional framework identifies three categories of institutional failure in the current arrangement, each requiring distinct remedies.

The first is the failure of return on public investment. The public funded the foundational research on which AI depends, sustained that funding through decades of uncertainty, and receives no proportionate share of the commercial returns. The remedies — equity stakes, patent conditions, windfall taxation, sovereign AI funds — have been described in previous chapters. The principle is straightforward: the investor should share in the return. The state was the investor. The state should share.

The second is the failure of transition support. The current institutional infrastructure for displaced workers — unemployment insurance, retraining programs, educational institutions — was designed for a rate of technological change that the AI transition has far exceeded. Unemployment insurance assumes temporary displacement and conditions benefits on job search activity. But the displacement AI produces is not temporary in the traditional sense: the jobs are not coming back in their previous form. Conditioning benefits on searching for jobs that no longer exist is not policy. It is bureaucratic cruelty.

Retraining programs are fragmented, poorly funded, and disconnected from the actual demands of the post-AI labor market. The skills the new economy rewards — judgment, strategic thinking, creative direction, the capacity to formulate productive questions rather than execute routine answers — are not the skills that six-week certificate programs develop. They are the skills that develop through years of challenging professional experience, mentorship, and the kind of deep engagement with complex problems that cannot be compressed into a curriculum designed by a committee that last convened before the current generation of AI tools existed.

The financing of adequate transition support should draw on the returns from AI-related economic activity — a levy on AI company revenues, structured progressively and reserved exclusively for worker transition. Mazzucato's foundational principle applies: the entities that benefit most from the transition should contribute most to managing its costs. This is not redistribution in the politically charged sense. It is the internalization of an externality. The AI companies' profits are enabled, in part, by the displacement of workers whose skills their products have commoditized. The cost of that displacement is currently externalized — borne by the workers themselves and by the public social safety net. A transition levy internalizes the cost, aligning the incentive structure with the distributional reality.

The third failure is the failure of voice. The workers displaced by AI, the builders who depend on AI platforms, the communities affected by AI-driven economic restructuring — none of these groups has a meaningful voice in the decisions that shape their circumstances. Platform governance is unilateral. Corporate AI deployment decisions are made by executives and shareholders. Regulatory frameworks are designed in consultation with the companies they regulate. The people most affected by the AI transition are the people least represented in the institutions that govern it.

Mazzucato's mission-oriented framework implies a specific remedy: the inclusion of affected stakeholders in the design of AI governance institutions. Not as consultants whose input is solicited and ignored, but as participants whose interests are structurally represented in decision-making processes. Worker representatives on AI governance boards. Community input requirements for large-scale AI deployments. Builder advisory councils with formal standing in platform governance. These mechanisms exist in other domains — labor relations, environmental regulation, financial oversight — and their application to AI governance is a matter of institutional design, not conceptual innovation.

The distributional question has an international dimension that most analyses acknowledge in passing and then neglect entirely. The AI companies that control the platforms are concentrated in the United States and, to a lesser extent, China and the United Kingdom. The computational infrastructure is concentrated in the same geographies. The training data is predominantly in English and other high-resource languages. The regulatory frameworks being developed are Western frameworks designed for Western institutional contexts.

A builder in Lagos, Dhaka, or Lima who accesses AI tools is accessing capability developed primarily for builders in San Francisco, London, and Beijing. The tools may be technically accessible — the subscription price is the same regardless of geography — but they are not equally useful. Capabilities, training data biases, acceptable use policies, and support infrastructure are all designed for the high-income markets where the platform companies earn most of their revenue. The applications that could produce the greatest social benefit — AI-assisted healthcare in regions where the ratio of doctors to patients is one to fifty thousand, educational platforms for students with no access to quality instruction, agricultural optimization for smallholder farmers — are precisely the applications that receive the least investment and the poorest platform support.

Mazzucato's observation at the Paris AI Action Summit in February 2025 addresses this directly: "At issue is not whether Europe can compete with China and the United States in an AI arms race; it is whether Europeans can pioneer a different approach that puts public value at the center of technological development and governance." The question generalizes far beyond Europe. The countries that contributed most to the training data — through their populations' participation in the global knowledge economy — have the least voice in the governance of the technologies trained on that data. The countries where mission-oriented AI applications could produce the greatest social benefit have the least access to the tools, infrastructure, and institutional support needed to deploy those applications.

The distributional question, stated in its fullest form: will the AI transition reproduce and amplify existing inequalities — between capital and labor, between platform owners and platform users, between the Global North and the Global South — or will institutional design redirect the transition toward more equitable outcomes?

The current trajectory, absent institutional intervention, points toward amplification. The returns concentrate among shareholders and a thin layer of highly complementary workers. The costs distribute among the displaced, the excluded, and the public institutions that funded the research and sustain the social safety net. The platform governance structures are controlled by the platforms. The international distribution of AI capability reflects existing patterns of economic advantage.

Mazzucato's framework does not treat this trajectory as inevitable. It treats it as a design choice — or rather, as the consequence of design choices that were never made, institutional gaps that were never filled, governance mechanisms that were never built. The distributional trajectory of the AI transition is being determined now, in real time, by what governments, companies, and civil society choose to build or choose to defer. The history of previous technology transitions demonstrates that deferral has a cost, measured in generations of displaced workers who bore the burden of a transition whose benefits their grandchildren eventually shared. The AI transition compresses this timeline. The deferral cost is borne more acutely, by more people, in a shorter period. And the concentration of returns, once hardened into institutional structure, becomes progressively more difficult to reverse.

---

Chapter 6: Mission-Oriented Innovation and the Direction of AI

The most consequential policy debates about artificial intelligence are organized around the wrong question. The dominant question in legislative chambers, regulatory agencies, and editorial pages is: how should AI be regulated? This framing positions the state as a referee — intervening after the game is underway to prevent the worst injuries, but never selecting which game gets played.

Mazzucato's career has been devoted to demonstrating that this framing is both historically inaccurate and strategically catastrophic. The state has never been merely a referee. The documented history of technological innovation shows a state that repeatedly identified emerging technologies before the private sector, invested during periods when private capital fled, bore the risks the market refused, and created the conditions under which commercial application became viable. The state did not regulate the internet into existence. It built the internet. It did not regulate GPS into existence. It funded, designed, and launched the satellite constellation. It did not regulate the semiconductor industry through its commercially unviable early decades. It sustained it through military procurement that guaranteed a market for products civilian consumers did not yet want.

The mission-oriented framework holds that government investment is most effective not when it corrects market failures but when it is organized around ambitious societal goals. The Apollo program did not land a human on the moon by regulating the aerospace industry. It set a mission — a specific, ambitious, publicly defined objective — and directed public investment toward achieving it. The mission required advances across dozens of fields: materials science, computing, telecommunications, life support. These advances were not produced by market signals. They were produced by a publicly defined mission that directed research toward a specific destination.

The innovations subsequently found commercial applications across industries that bore no relation to space travel. Digital cameras, water purification systems, shock-absorbing materials, miniaturized computing — the commercial spillovers from mission-oriented public investment generated returns that dwarfed the original investment. But the spillovers were not the purpose. The purpose was the mission. And the mission organized collective effort in ways that market incentives alone could not.

Mazzucato has applied this framework to AI with increasing specificity. In her March 2024 collaboration with Gernone, she argued that "while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good." In February 2025, with Tommaso Valletti, she warned that "while AI could deliver profound benefits for all of society, it is likely to do the opposite if governments remain passive bystanders. Policymakers must step in now to foster a decentralized innovation ecosystem that serves the public good."

The critical word is "decentralized." Mazzucato is not arguing for state control of AI development. She is arguing for state direction — the use of public investment and institutional design to ensure that AI capability flows toward societal needs and not only toward the applications that generate the highest private returns.

Consider the current allocation of AI investment. The overwhelming majority flows toward applications with high commercial returns: advertising optimization, financial trading algorithms, content recommendation systems, customer service automation, enterprise productivity tools. These applications create genuine value — businesses operate more efficiently, consumers find products more easily, routine tasks are completed faster. But they represent a narrow band of AI's potential applications, selected by their commercial profitability rather than their social significance.

The applications that would produce the greatest social benefit receive a fraction of this investment. AI tools for early disease detection in communities without adequate medical infrastructure. AI-assisted agricultural planning for smallholder farmers facing climate volatility. Educational platforms designed for students whose schools lack qualified teachers. Environmental monitoring systems that could track deforestation, water contamination, and biodiversity loss in real time. Climate adaptation tools that could help vulnerable communities prepare for extreme weather events.

The technical capability for these applications exists. What is missing is the institutional framework that would direct investment toward them. The market allocates capital according to expected return. The expected return on AI applications serving marginalized communities is, by market standards, lower than the return on applications serving affluent consumers and large enterprises. This is not a market failure in the narrow sense — the market is doing what markets do. The failure is in the assumption that market logic should be the only logic governing the direction of AI development.

At the 2025 Paris AI Action Summit, Mazzucato framed this as a "critical juncture," arguing that the choice is not between innovation and regulation but between directed innovation that serves public purposes and undirected innovation that serves only commercial ones. The mission-oriented framework provides the institutional architecture for directed innovation: publicly defined missions that identify specific societal challenges, public investment that funds research and development toward those missions, institutional mechanisms that coordinate public and private effort, and evaluation frameworks that measure outcomes against the mission's objectives rather than against financial returns alone.

The AI transition presents specific challenges that distinguish mission-oriented AI programs from previous mission-oriented investments. The speed of AI development demands shorter, more adaptive mission timelines. A mission designed around current capability assessments may be obsolete within two years. Programs must incorporate mechanisms for continuous reassessment and redirection. The global nature of the challenges AI could address — climate, health, education — requires international coordination among national programs. And the dual-use character of AI capability demands safeguards against the repurposing of publicly funded tools for surveillance, manipulation, or other applications that contradict the mission's purpose.

Mazzucato's framework implies a specific institutional architecture. Mission definition should emerge through democratic deliberation — not through technocratic determination of what society needs, but through structured public processes that identify priorities and build legitimacy. Mission implementation should operate through dedicated agencies with the autonomy, expertise, and funding to direct complex, multi-stakeholder research programs — modeled on DARPA's structure but oriented toward civilian missions. Mission evaluation should be transparent, with public accounting of investments, outcomes, and distributional consequences.

The political economy of mission-oriented AI policy is challenging. The incumbent beneficiaries of undirected investment — the AI companies whose commercial applications absorb the vast majority of talent and capital — have strong incentives to resist any redirection of investment toward missions whose returns are social rather than financial. The resistance is typically framed as a defense of innovation: directing investment toward social missions will reduce the total amount of innovation by diverting resources from their most productive uses, as determined by the market.

This argument has been deployed against every proposal for mission-oriented public investment in the history of technology policy, and the empirical record refutes it: the countries that implemented such proposals experienced no decline in innovation. DARPA's mission-oriented investments did not reduce private-sector innovation in computing, telecommunications, or artificial intelligence. They created the foundational technologies on which private-sector innovation depends. The Human Genome Project did not crowd out private investment in genomics. It created the platform on which a multi-billion-dollar private genomics industry was built. In each case, mission-oriented public investment expanded the frontier of possibility rather than constraining it.

The AI transition demands mission-oriented ambition at a scale commensurate with the technology's potential — and with the urgency of the challenges it could address. Climate adaptation cannot wait for the market to discover that it is profitable. Health equity in sub-Saharan Africa cannot wait for venture capital to identify a business model. Educational access for hundreds of millions of underserved students cannot wait for an ed-tech startup to prove unit economics.

The direction of AI capability is not a second-order question to be addressed after the primary questions of regulation and competition have been resolved. It is the primary question, because the direction determines who benefits and who bears the costs. An AI economy directed primarily by commercial incentives will produce extraordinary tools for affluent consumers and profitable enterprises while underinvesting in the applications that could address humanity's most pressing challenges. An AI economy shaped by mission-oriented investment can do both — serve commercial markets and address societal needs — because the history of mission-oriented innovation demonstrates that the two objectives are complementary, not competing.

The question is not whether to direct AI development. The current allocation of AI investment is already directed — by the commercial logic of the platform companies and their investors. The question is whether that direction will be supplemented by public missions that ensure AI capability reaches the people and problems that commercial logic alone will never serve.

---

Chapter 7: Value Creation and Value Extraction

Mazzucato's most operationally useful analytical contribution is a distinction so simple it is frequently dismissed and so important it should be tattooed on the forearm of every policymaker, investor, and technology executive making decisions about the AI transition. The distinction is between value creation and value extraction.

Value creation is the production of goods and services that meet genuine human needs and improve human welfare. A farmer who grows food creates value. A teacher who educates students creates value. An engineer who builds infrastructure that enables economic activity creates value. A software developer who builds a tool that solves a problem for its users creates value. The common feature: something exists that did not exist before, and its existence makes the world measurably better for the people it serves.

Value extraction is the capture of returns from value created by others without producing corresponding value in return. A monopolist who charges above-market prices by restricting supply is extracting value — the surplus that would otherwise go to consumers is redirected to the monopolist without any increase in the quantity or quality of what is produced. A financial intermediary who captures a percentage of every transaction without adding proportionate value to the exchange is extracting value. A platform that exploits informational asymmetries to charge more than the competitive price, or that uses its gatekeeper position to impose terms that transfer surplus from users to shareholders, is extracting value.

The distinction is not between the private sector and the public sector. There are value-creating private enterprises and value-extracting public institutions. It is not between capitalism and its alternatives. The distinction cuts across sectors and ideologies because it refers to the function of an activity, not its organizational form.

The AI economy contains both dynamics simultaneously, often within the same company, the same product, and the same transaction. The analytical challenge — and the institutional challenge — is to distinguish between them and to design mechanisms that promote creation and constrain extraction.

Mazzucato's Algorithmic Rents research program provides the empirical foundation for applying this distinction to AI specifically. Working with Tim O'Reilly and Ilan Strauss, she documented how platform companies use algorithmic systems not to create genuine value but to "concentrate market power and extract wealth from users and smaller players in the digital economy." The concept of "algorithmic attention rents" describes how platforms grow increasingly capable of extracting value from users through algorithmic control over attention as their market position consolidates. The research, published in Data & Policy and the Hastings Science & Technology Law Journal, demonstrates that the extractive dynamic is not incidental to the platform business model. It is the business model — or at least a growing share of it.

Applied to AI specifically, the distinction illuminates dynamics that the innovation narrative routinely conflates. An AI tool that enables a person who previously could not create software to build a genuinely useful application is creating value. The person's capability is expanded. The application serves a previously unmet need. Human welfare improves. This is value creation, and the AI companies that deliver this capability deserve credit and compensation for it.

An advertising optimization algorithm that uses AI to target consumers with greater precision, capturing more of their attention and directing more of their purchasing decisions, is engaged in a structurally different activity. The algorithm does not create the consumer's need. It captures existing purchasing power by exploiting informational asymmetries and psychological vulnerabilities. The welfare of the consumer is not improved — it may be diminished, as the algorithm's effectiveness at capturing attention comes at the cost of autonomy and decision-making quality. The revenue the algorithm generates for the platform is not a measure of value created. It is a measure of value extracted.

Content generation systems that produce millions of articles, blog posts, and social media updates for the purpose of capturing advertising revenue are extracting value. The content does not serve a genuine information need. It exists to attract attention, which is sold to advertisers, who use it to redirect purchasing decisions. The entire chain runs from extraction to extraction, with no corresponding creation at any link.

Financial AI that uses machine learning to identify and exploit pricing inefficiencies — high-frequency trading algorithms that front-run slower market participants — captures billions in annual profits without contributing to productive capacity. The profits come directly from other market participants who receive worse execution prices. This is a pure transfer, dressed in the language of innovation.

AI-powered workplace surveillance systems that enable employers to monitor employee productivity with granular precision extract value by intensifying the capture of labor. The surveillance does not make workers genuinely more productive. It reduces autonomy, increases stress, and eliminates the informal cognitive rest that neuroscience identifies as essential for creative and analytical work. The employer captures more output per hour, but the output is extracted more efficiently from workers who bear the costs of the intensification.

The confusion between creation and extraction is not merely an analytical error. It is systematically maintained by the way the AI economy presents itself. Companies routinely describe value-extracting activities as "innovation," "disruption," or "creating value for shareholders." Financial metrics — revenue, profit, market capitalization — measure the capture of value, not its creation. A company that extracts a billion dollars through advertising optimization may report higher profits than a company that creates genuine value through AI-assisted healthcare. The metrics do not distinguish between the two, and the market rewards the former as enthusiastically as the latter.

Mazzucato has been explicit about the institutional implications. In her February 2025 essay, she warned about "the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract 'algorithmic attention rents' from users. Unless governed properly, today's AI systems could follow the same path, leading to unproductive value extraction, insidious monetization, and deteriorating information quality." The warning is specific: the extractive dynamics of the previous generation of digital platforms are not merely compatible with AI. They are amplified by it. AI makes extraction more efficient in exactly the same way it makes creation more productive. The amplifier does not discriminate.

The distinction has a temporal dimension that is particularly relevant to the AI transition. Value creation tends to produce returns distributed over time — the builder who creates a useful application generates value for its users throughout the application's lifespan. Value extraction tends to concentrate returns in short bursts — the algorithm that exploits a pricing inefficiency captures value in a single transaction, the advertising system that captures attention extracts value in real time. The temporal distribution matters because it shapes the sustainability of the economic activity and the breadth of its benefits. Patient value creation builds lasting economic capacity. Rapid value extraction depletes it.

Mazzucato's framework does not imply that all private returns from AI are extractive. Far from it. The productivity gains that AI delivers to builders, the efficiency improvements in supply chains and healthcare and education, the genuine expansion of human capability — these are real instances of value creation, and the companies that enable them are creating real value for which they deserve real compensation. The framework implies that value creation and value extraction must be distinguished, measured separately, and governed by different institutional rules.

Tax policy should treat returns from value creation differently than returns from value extraction, taxing the latter more heavily. Competition policy should distinguish between market power that enables creation — through economies of scale, network effects that benefit users, and research investment — and market power that enables extraction through monopolistic pricing, algorithmic manipulation, and the exploitation of switching costs. Patent policy should distinguish between intellectual property that protects genuine innovation and intellectual property used to extract rents from competitors and consumers.

The distinction also applies to the AI companies' relationship with creative labor — the subject of Mazzucato and Gernone's July 2025 analysis. Generative AI models create value when they augment human creative capacity, enabling artists, writers, and developers to produce work that would not otherwise exist. They extract value when they substitute for creative labor without compensating the creators whose work trained the models, effectively capturing the value of creative production while eliminating the income streams that sustained it. The proposed levy on AI revenues to fund creative production is an institutional mechanism designed to maintain the creative ecosystem on which AI's own capability depends — recognizing that extraction, if unchecked, destroys the base on which future value creation rests.

The amplifier does not care what signal it receives. It amplifies creation and extraction with equal efficiency. The institutional question is whether the structures that govern the AI economy will reward creation and constrain extraction — or whether, in the absence of such structures, extraction will consume creation, as it has in every previous era where the distinction was not institutionally enforced.

---

Chapter 8: Institutional Design for a Fair Transition

The institutions needed to govern the AI transition do not yet exist. This statement is not a criticism of the people currently working on AI governance — many of whom are doing serious, thoughtful work under severe constraints of time, resources, and political support. It is a description of a structural gap between the speed of the technology and the speed of institutional response. The gap is widening. The consequences of the gap are distributional, and they compound with every quarter that passes without adequate institutional architecture.

Mazzucato's framework identifies the gap with characteristic directness: "while regulation is necessary, it is insufficient." Regulation addresses the supply side — what AI companies may build, what disclosures they must make, what risks they must assess. The European Union's AI Act, the American executive orders, the emerging frameworks in Singapore, Brazil, and Japan represent genuine efforts to establish rules for the supply of AI capability. These efforts matter. They are also radically incomplete.

What is almost entirely absent is the demand-side institutional framework — the institutions that would ensure citizens, workers, students, and communities can navigate the AI transition with adequate support, meaningful voice, and a fair share of the returns. The supply-side framework asks: what may AI companies do? The demand-side framework asks: what do the people affected by AI need? The first question is being addressed, imperfectly but substantively. The second is barely being asked.

Mazzucato's institutional design framework addresses four interconnected challenges, each requiring distinct mechanisms that operate as a system.

The first is the return on public investment. The mechanisms have been described in earlier chapters — equity stakes, patent conditions, windfall taxation, sovereign AI funds — but their institutional implementation requires specific architecture that deserves elaboration. A national AI investment authority, modeled on successful sovereign wealth funds but mandated specifically to manage the public's stake in AI-related innovation, would hold equity in companies that commercialize publicly funded research, negotiate licensing terms for publicly funded patents, and direct returns toward further public investment in research, education, and infrastructure.
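A toy sketch can make these flows concrete. Nothing below comes from Mazzucato's work: the stake sizes, royalty rate, and reinvestment split are illustrative assumptions, and the authority is simplified to two return channels, equity proceeds and patent royalties.

```python
# Toy model of a national AI investment authority's return flows.
# All figures, names, and parameters are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Holding:
    company: str
    equity_stake: float      # fraction of the company held by the public
    exit_value: float        # realized valuation at a liquidity event (USD)

@dataclass
class License:
    patent: str
    royalty_rate: float      # fraction of licensee revenue owed as royalty
    licensee_revenue: float  # annual revenue attributable to the patent (USD)

def annual_public_return(holdings: list[Holding], licenses: list[License]) -> float:
    """Sum the two return channels: equity proceeds and patent royalties."""
    equity_proceeds = sum(h.equity_stake * h.exit_value for h in holdings)
    royalties = sum(l.royalty_rate * l.licensee_revenue for l in licenses)
    return equity_proceeds + royalties

# Hypothetical portfolio: a 2% stake taken as a condition of
# commercializing publicly funded research, plus one licensed patent.
portfolio = [Holding("ExampleAI", 0.02, 10_000_000_000)]
licenses = [License("foundational-training-method", 0.005, 2_000_000_000)]

returns = annual_public_return(portfolio, licenses)
# Assumed split: 60% reinvested in research, 40% to transition support.
print(f"public return: ${returns:,.0f}; reinvested in research: ${0.6 * returns:,.0f}")
```

The point of the sketch is that the accounting is not exotic. The hard part, as the next paragraph argues, is the governance design around it.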

The institutional design must be precise. The authority must be politically independent — insulated from both electoral cycles and industry capture. It must operate with professional management and transparent governance, publishing regular reports on investment performance and distributional outcomes. It must maintain a long-term time horizon — decades, not quarters — matching the patient investment horizon that characterizes the state's most successful entrepreneurial ventures. Several precedents demonstrate feasibility: Norway's Government Pension Fund Global manages over a trillion dollars with professional governance and transparent reporting. Singapore's Temasek invests in emerging technologies and generates competitive returns. Finland's Sitra has operated as a public innovation fund since 1967.

None of these models is perfectly applicable to AI. Each requires adaptation to the specific dynamics of AI investment — the speed of the technology cycle, the concentration of returns, the global market structure, the complexity of the intellectual property landscape. But the existence of successful precedents demonstrates that the institutional challenge is solvable. The obstacle is not technical impossibility. It is political will — and political will is precisely what the myth of the garage erodes by making the case for public return seem illegitimate.

The second challenge is the support of displaced workers. Mazzucato's framework implies a transition infrastructure that is integrated, adequately funded, and designed for the actual pace of AI-driven displacement rather than the pace assumed by pre-AI institutional models.

The current infrastructure is fragmented across agencies with different eligibility criteria, different performance metrics, and different bureaucratic cultures. A worker displaced by AI automation must navigate unemployment insurance (administered by one agency, conditioned on job search activity), retraining programs (administered by another, often disconnected from actual labor market demand), and job placement services (administered by yet another, frequently underfunded). The gaps between these systems are where displaced workers fall, and the bureaucratic friction of navigating multiple systems consumes time and cognitive resources that should be devoted to the transition itself.

An integrated transition framework would merge these functions into a single institutional system. Income maintenance would be conditioned not on searching for jobs that no longer exist but on participation in retraining designed for the post-AI labor market. The retraining itself must be modular — allowing workers to acquire specific capabilities in months rather than years — employer-informed, and inclusive of AI tool proficiency, because the skills the post-AI economy rewards are not standalone technical skills but the capacity to direct AI tools toward productive ends. The framework must address workers at different stages: recent displacement requires immediate income support and skill assessment; mid-transition requires modular retraining and mentorship; late-transition requires job placement that values the combination of domain expertise and AI capability.

The financing should draw on AI-related economic activity. A dedicated levy on AI company revenues — structured progressively, with higher rates for larger companies and higher profit margins, exempting small and early-stage firms — would generate substantial resources while aligning the incentive structure with Mazzucato's foundational principle: the entities that benefit most from the transition should contribute most to managing its costs. The levy should be dedicated exclusively to transition support, ensuring that funds are not diverted to other purposes and that the connection between AI-generated returns and worker support is institutionally visible.
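A minimal sketch of how such a schedule might be operationalized, assuming marginal-rate brackets keyed to revenue. The thresholds and rates below are hypothetical placeholders, not figures from Mazzucato's proposals, and the sensitivity to profit margins that the text mentions is omitted for brevity.

```python
# Hypothetical progressive transition levy on AI company revenues.
# All thresholds and rates are illustrative assumptions.

EXEMPTION_THRESHOLD = 50_000_000  # small and early-stage firms pay nothing (assumed)

# Marginal brackets: (lower bound of bracket in USD, marginal rate)
BRACKETS = [
    (50_000_000, 0.01),       # 1% on revenue above $50M
    (1_000_000_000, 0.02),    # 2% on revenue above $1B
    (10_000_000_000, 0.03),   # 3% on revenue above $10B
]

def transition_levy(revenue: float) -> float:
    """Apply each marginal rate only to the slice of revenue inside its bracket."""
    if revenue <= EXEMPTION_THRESHOLD:
        return 0.0
    levy = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        taxable = max(0.0, min(revenue, upper) - lower)
        levy += taxable * rate
    return levy

if __name__ == "__main__":
    for r in (30e6, 500e6, 5e9, 50e9):
        print(f"revenue ${r:,.0f} -> levy ${transition_levy(r):,.0f}")
```

Marginal brackets, rather than flat rates, avoid cliff effects at the thresholds: a firm that crosses the exemption line pays the higher rate only on the revenue above it.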

The third challenge is platform governance. Mazzucato's critique of the UK government's AI Action Plan made the institutional prescription explicit: governments must "move beyond unbalanced relationships with digital monopolies" and ensure that "public authorities" stop offering "technology companies lucrative unstructured deals with no conditionalities attached." The word "conditionalities" is central to Mazzucato's institutional vocabulary. It means that public support — tax advantages, procurement contracts, regulatory accommodation, access to publicly funded research — should come with requirements attached: requirements for data sharing, interoperability, fair pricing, environmental disclosure, worker protections.

The application of conditionality to AI platforms would produce specific institutional mechanisms. Mandatory interoperability requirements would ensure that builders can move between platforms without losing their work, data, or workflows. Data portability standards would enable builders to extract their contributions from one platform and import them to another. Transparency requirements would obligate platforms to disclose criteria for pricing decisions, feature availability, terms of service changes, and algorithmic ranking. Participatory governance mechanisms — builder advisory councils with formal standing, mandatory consultation before significant changes, independent arbitration for disputes — would give builders a structured voice in decisions that affect their working conditions.
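To make "data portability" concrete, here is a sketch of what a minimal export manifest might contain. No such standard yet exists; every field name and the schema itself are hypothetical.

```python
# Sketch of a minimal data-portability manifest a platform might be
# required to emit. Structure and field names are hypothetical.

import json
from datetime import datetime, timezone

manifest = {
    "schema_version": "0.1",            # versioned so importers can adapt
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "builder_id": "builder-12345",      # placeholder identifier
    "artifacts": [
        {"type": "project", "format": "tar.gz", "path": "projects/app-1.tar.gz"},
        {"type": "fine_tune_data", "format": "jsonl", "path": "data/examples.jsonl"},
    ],
    "workflows": [
        {"name": "nightly-eval", "definition": "workflows/nightly-eval.yaml"},
    ],
}

print(json.dumps(manifest, indent=2))
```

The substantive requirement is not the format but the obligation: that a builder's projects, data, and workflows exist in a documented, machine-readable form that a competing platform can ingest.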

Mazzucato's proposal for public AI infrastructure deserves particular attention. The proposal — that governments should "develop their own public AI infrastructure guided by a public-value framework" — envisions a public alternative to purely private platforms. Not a government-run AI company, but a publicly funded computational infrastructure that would provide researchers, builders, and public institutions with access to AI capability without the dependency dynamics that private platforms create. The infrastructure would serve as a competitive alternative — reducing the market power of private platforms by ensuring that builders are not entirely dependent on commercial providers — and as a mission-oriented resource, supporting publicly funded research and development that commercial platforms have no incentive to prioritize.

The fourth challenge is the direction of AI capability. The mission-oriented framework described in the previous chapter requires institutional mechanisms for defining missions, coordinating investments, evaluating outcomes, and adapting strategies. At the national level, dedicated mission agencies — modeled on DARPA's organizational structure but oriented toward civilian AI missions in health, education, climate, and agriculture — would define specific objectives, fund research programs, coordinate public and private actors, and evaluate results against mission criteria. At the international level, coordination mechanisms are needed to prevent duplicative investment, share research outputs, and ensure that mission-oriented AI tools reach the countries where they are most needed.

The four challenges are interconnected. The distribution of returns affects the resources available for worker transition. Platform governance affects the conditions under which workers and builders use AI tools. The direction of AI capability affects what kinds of work the AI economy creates. Worker transition affects the political legitimacy of AI development. Addressing any one challenge in isolation produces partial solutions that the other challenges undermine. The institutional architecture must be designed as a system — four interconnected mechanisms that reinforce rather than contradict each other.

Mazzucato's most recent AI commentary converges on a single imperative: the institutional architecture must be built now, during the current window, before the distributional patterns of the AI economy harden into structural features that become progressively more difficult to revise. The speed of the technology compresses the timeline for institutional design into years rather than decades. The concentration of returns creates political constituencies — AI company shareholders, highly compensated AI workers, platform executives — whose interests are served by the current arrangement and who will resist institutional redesign with increasing effectiveness as their economic and political power grows.

The comparison to previous technology transitions is both reassuring and cautionary. Every major transition eventually produced institutional mechanisms that distributed benefits more broadly — labor law, social insurance, public education, competition policy. These institutions were not produced by markets. They were produced by political struggle, institutional design, and the recognition that technological progress without institutional architecture produces concentration, not distribution. The reassuring dimension is that the institutions were eventually built. The cautionary dimension is that "eventually" spanned generations, and the people who bore the costs of the transition did not live to benefit from the institutions that their suffering eventually produced.

The AI transition compresses this timeline. The costs of deferral are borne more acutely, by more people, in a shorter period. The institutional architecture must be built with an urgency that matches the technology's own pace of development — not because urgency produces perfect institutions, but because the alternative to imperfect institutions built now is no institutions at all, and the distributional consequences of that absence are already visible and accelerating.

The institutional challenge is not whether perfect mechanisms can be designed. They cannot. The challenge is whether adequate mechanisms can be built fast enough to shape the distributional trajectory before it becomes fixed. The tools exist. The precedents are documented. The analytical framework is clear. What remains is the political decision to build — and the recognition that the decision, once deferred past the current window, may not be available again on terms that permit equitable outcomes.

---

Chapter 9: The Entrepreneurial State and the Architecture of Return

For fifty years, the dominant theory of the state's role in innovation has rested on a single assumption: the state funds, the market builds, and the distribution of returns is the market's business. This assumption is so deeply embedded in the institutional architecture of Western economies that it operates less as a theory than as plumbing — invisible, unexamined, determining where the water flows without anyone thinking to ask whether the pipes should be rearranged.

Mazzucato's career has been devoted to exposing the plumbing. Her argument is not that the pipes are leaking. It is that they were laid in the wrong direction. The state funds the high-risk, long-horizon research that produces foundational technologies. The private sector commercializes those technologies. The returns flow through pipes that run exclusively toward private shareholders. The state — which bore the risk, sustained the research through decades of uncertainty, and absorbed the losses from the many investments that failed — receives no proportionate share of the returns from the investments that succeeded.

The metaphor of plumbing is deliberate because Mazzucato's argument is fundamentally structural, not moral. She is not claiming that private entrepreneurs are villains who stole public wealth. She is claiming that the institutional pipes — the patent regimes, the tax codes, the licensing frameworks, the governance mechanisms — were designed to carry returns in one direction, and the AI transition has exposed this directional bias at a scale that makes it impossible to ignore.

The concept of the entrepreneurial state, as Mazzucato has developed it, rests on an empirical claim that is now extensively documented: the state is not merely a fixer of market failures. It is a maker of markets. The distinction matters because the two roles imply radically different claims on the returns.

A market-fixer corrects externalities, provides public goods, and enforces contracts. The market-fixer's compensation is indirect: a well-functioning market produces tax revenue, employment, and economic growth that benefit the public. The market-fixer has no direct claim on the returns from specific innovations because its contribution is general rather than specific. It built the roads, not the factory.

A market-maker identifies emerging technologies before the private sector, invests during periods when private capital retreats, creates the conditions under which commercial application becomes viable, and bears the risks that no private actor will bear. The market-maker's contribution is specific, documented, and traceable to particular innovations. The state did not merely build the roads that led to the AI industry. It funded the specific research programs — at specific universities, through specific agencies, over specific decades — that produced the specific technologies on which the AI industry depends.

A market-maker has a direct claim on the returns from the markets it made. This claim does not extinguish the private sector's claim. The private companies that commercialized publicly funded AI research created genuine value through their engineering talent, their organizational capability, their capacity to scale. Both the public investor and the private commercializer contributed. Both should share in the return. The current arrangement, in which only the private commercializer shares, is not a principled distribution. It is an institutional omission — the absence of any mechanism for the public investor to participate in the returns from investments it made.

Mazzucato has been particularly sharp on how this omission operates in the AI economy. At the 2023 Algorithmic Rents Research Showcase, she drew the connection between the migration of AI researchers from public institutions to private companies and the extractive rent dynamics of the platform economy. Researchers move to the private sector not because private companies are doing better science but because they offer higher compensation — compensation funded by the monopolistic rents that platform companies extract. The public institutions that trained these researchers, funded their early work, and sustained the field through its commercially unviable decades lose their most talented people to companies whose revenues derive partly from the rent extraction that Mazzucato's research documents. The public investment produces the talent. The private sector captures it. And the compensation that funds the capture comes from economic activities that are at least partly extractive rather than creative.

The brain drain compounds the institutional failure. As the most talented researchers leave public institutions, the state's capacity for productive AI investment diminishes. The diminished capacity is then cited as evidence that the state is an inferior investor — a circular argument that uses the consequences of the institutional failure to justify perpetuating it. Mazzucato identified this circularity years ago in her general work on innovation policy, but the AI transition has made it more acute. The combined AI research budgets of the five largest AI companies now exceed the combined AI research budgets of all government agencies in the United States and Europe. This gap is not evidence of the state's inherent incapacity. It is the result of decades of political choices to reduce public investment — choices justified by the myth that innovation is a private-sector activity in which the state's role is peripheral.

Rebuilding the state's entrepreneurial capacity for the AI era requires reversing these choices, and the reversal must be both financial and institutional. Financial, because the state needs resources commensurate with the scale of the AI transition — not the declining fraction of GDP currently allocated to public research, but an investment level that reflects the documented returns from previous public technology investments. Institutional, because funding alone is insufficient without the organizational structures that make public investment effective: agencies with the autonomy to take risks, the expertise to evaluate opportunities, the flexibility to adapt to rapid technological change, and the independence to resist both political interference and industry capture.

Mazzucato has pointed to specific institutional models that demonstrate feasibility. DARPA's organizational structure — small, flat, staffed by program managers with deep technical expertise and broad authority to fund high-risk research — has produced returns on investment that dwarf those of any private venture capital fund. The key features are autonomy (program managers make funding decisions without bureaucratic approval chains), expertise (staff are recruited from the frontier of their fields, not from the civil service pipeline), time horizons (programs are funded on multi-year cycles rather than annual appropriations), and risk tolerance (failure of individual projects is expected and accepted as the cost of a portfolio approach to high-risk investment).

These features are reproducible. They are not magic. They are institutional design choices that can be implemented in new agencies oriented toward civilian AI missions. The obstacle is not the absence of a model. It is the political resistance to creating public institutions with the authority, funding, and independence to function as genuine entrepreneurial investors in AI technology.

The mechanisms for capturing returns on public AI investment also require institutional architecture. Equity stakes in companies that commercialize publicly funded research must be managed by entities with the expertise to evaluate portfolio performance and the governance structures to ensure that returns flow to public purposes. Patent conditions on publicly funded inventions must be enforced by agencies with the legal capacity and political independence to hold companies accountable. Windfall taxation must be structured to capture a meaningful share of AI-related returns without discouraging the genuine value creation that deserves encouragement.

Mazzucato's emphasis on conditionality deserves particular attention in the AI context. Conditionality means that public support comes with requirements attached. Tax advantages for AI companies should be conditional on data sharing, environmental disclosure, and worker protections. Procurement contracts should require interoperability, accessibility, and fair pricing. Regulatory accommodation should be contingent on compliance with public-interest standards. Access to publicly funded research should come with obligations to license foundational patents on reasonable terms.

The application of conditionality to the AI economy would represent a fundamental shift from the current arrangement, in which public support flows to AI companies unconditionally. The shift would not penalize innovation. It would redirect innovation — ensuring that the technologies developed with public support, trained on public data, and deployed on public infrastructure serve public purposes as well as private ones.

The international dimension of the entrepreneurial state's role in AI requires coordination mechanisms that do not yet exist. The AI economy is global. The companies that dominate it operate across jurisdictions. An institutional framework implemented in a single country risks regulatory arbitrage — companies relocating operations to jurisdictions with weaker requirements. Effective institutional design requires shared standards for conditionality, coordinated competition policy, mutual recognition of governance frameworks, and mechanisms for ensuring that the benefits of publicly funded AI research reach the countries that contributed to the training data and the research base but lack the institutional capacity to capture returns independently.

The entrepreneurial state is not an ideological proposition. It is an empirical description of what successful states have done throughout the history of innovation. The AI transition demands that this empirical reality be acknowledged, strengthened, and extended — with institutional mechanisms that ensure the public investor participates in the returns from the markets it created, the technologies it funded, and the workforce it educated.

The alternative is an AI economy in which the most powerful technology in human history enriches its commercializers while the public institutions that made it possible are systematically defunded, using the myth of private-sector genius as justification. That alternative is not a hypothetical. It is the current trajectory. Reversing it requires building the institutional architecture described in these pages — and building it now, while the distributional patterns of the AI economy are still malleable enough to be reshaped.

---

Chapter 10: The Public Purpose

The argument of this book can be stated in a single paragraph, and the paragraph should be uncomfortable.

The foundational technologies of the AI era were substantially funded by public investment over seven decades. The training data was produced by billions of people over centuries, sustained by public education, public research, and public infrastructure. The returns from these technologies are being captured by a small number of private companies whose market capitalizations exceed the GDP of most nations. The workers displaced by the transition receive inadequate support from institutions designed for a previous era. The builders who depend on AI platforms have no meaningful voice in the governance of those platforms. The direction of AI capability is determined almost entirely by commercial logic, with minimal investment in the applications that could address humanity's most urgent challenges. The institutional architecture for ensuring a fair distribution of costs and benefits does not exist. Building it is the most urgent task of economic policy in the current moment.

Each sentence in that paragraph is supported by documented evidence. None is a matter of opinion. The funding histories are in the public record. The market capitalizations are reported quarterly. The inadequacy of transition support is documented in labor market research. The absence of builder voice in platform governance is observable by anyone who has read a terms-of-service agreement. The concentration of AI investment in commercially profitable applications, at the expense of mission-oriented deployment, is measurable in dollar terms.

Mazzucato's framework does not treat the current distribution as a natural outcome of market forces operating on a level playing field. It treats the distribution as the product of institutional design choices — or, more precisely, the absence of institutional design choices — that can be revised. The distributional trajectory of the AI transition is not fixed. It is being determined now, in real time, by decisions that governments, companies, and civil society are making or deferring.

The normative claim that underlies this analysis is simple: those who bear the risks of an enterprise should share in its rewards. This is not a radical proposition. It is the foundational principle of investment law, contract law, and the basic moral intuitions that govern economic relationships in every functioning society. An investor who funds a venture and bears the risk of failure expects to participate in the returns if the venture succeeds. An employee who invests years of labor in a company expects compensation proportional to contribution. A community that invests in infrastructure expects to benefit from the economic activity that infrastructure enables.

The public bore the risk of AI research for seven decades. The public produced the training data. The public funded the educational institutions that trained the researchers. The public maintains the infrastructure on which the AI economy operates. The principle that the risk-bearer should share in the return implies, straightforwardly, that the public should participate in AI's economic gains. The institutional mechanisms for operationalizing this participation — equity stakes, patent conditions, taxation, sovereign funds, public AI infrastructure — exist, are documented, and have been successfully implemented in other contexts.

The political economy of implementing these mechanisms is the real obstacle, and acknowledging it honestly is essential to the credibility of the institutional argument. The current distributional arrangement benefits specific, identifiable constituencies — AI company shareholders, platform executives, highly compensated AI workers — whose economic and political resources are substantial and growing. These constituencies have rational incentives to resist institutional redesign, and they will deploy those resources effectively. Every proposal for public participation in AI returns will be opposed by well-funded arguments that such participation will reduce innovation, stifle growth, and harm the very public it purports to benefit.

These arguments have been deployed against every proposal for public participation in the returns from technological progress in the history of industrial economies. They were deployed against the eight-hour day, against workplace safety regulations, against the minimum wage, against environmental protections, against every institutional mechanism that redirected some portion of innovation's returns from those who captured them to those who bore the costs. In every case, the predicted catastrophe did not materialize. The institutional mechanisms were implemented. Innovation continued. Growth continued. And the distribution of returns became less extreme — not perfectly equitable, but less concentrated than it would have been without institutional intervention.

The evidence from countries that have implemented strong public investment frameworks and return-sharing mechanisms reinforces this point. The Nordic countries maintain the highest levels of public investment in research and the most extensive mechanisms for public participation in returns — and are consistently ranked among the most innovative economies in the world. Israel's Yozma program included public equity stakes and produced one of the most dynamic venture capital ecosystems in history. The empirical record does not support the claim that institutional mechanisms for return-sharing reduce innovation. The claim persists not because it is supported by evidence but because it serves the interests of those who benefit from the current arrangement.

Mazzucato has made this point with a bluntness rare among economists who advise governments. Her insistence that "AI should be a public good, not a corporate tollbooth" is not a rhetorical flourish. It is a policy prescription with specific institutional implications: public AI infrastructure, conditional public support, return-sharing mechanisms, mission-oriented investment, platform governance reform.

The urgency is structural, not rhetorical. Distributional patterns, once established, create political constituencies that defend them. The longer the current arrangement persists, the more powerful the constituencies that benefit from it become, and the more difficult institutional reform becomes. The AI companies that capture billions in revenue from publicly funded technology accumulate political resources — lobbying capacity, media influence, campaign contributions, revolving-door relationships with regulators — that make future reform progressively more difficult. The window for institutional design is not indefinitely open. It is closing with every quarter that the distributional patterns of the AI economy harden into the political structure of the AI polity.

This book has traced a single thread through nine chapters: the documented reality that the AI revolution rests on a foundation of public investment, public knowledge, and public infrastructure — and the institutional absence that allows the returns from this public foundation to flow exclusively to private beneficiaries. The thread connects the myth of the garage to the mRNA precedent to the training data commons to the democratization paradox to the distributional question to mission-oriented innovation to the distinction between creation and extraction to the design of fair institutions to the architecture of return.

The thread points toward a specific conclusion. The AI transition can produce broadly shared prosperity. The technology is powerful enough, the productivity gains are large enough, and the potential applications are broad enough to generate benefits that extend far beyond the shareholders of a handful of companies. But this outcome is not automatic. It is the product of institutional choices that must be made deliberately, competently, and soon.

The institutional architecture described in these chapters — public investment authorities, integrated transition frameworks, platform governance reforms, mission-oriented agencies, conditionality requirements, return-sharing mechanisms — is not a wish list. It is a design specification. Each component addresses a documented institutional gap. Each is supported by evidence from successful precedents. Each is implementable within existing political and legal frameworks, given adequate political will.

The question is not whether the architecture can be built. It is whether it will be built — and whether it will be built in time.

Mazzucato closed her February 2025 analysis of the AI transition with a framing that captures both the stakes and the agency available: the choice is not between innovation and equity. It is between an AI economy directed solely by commercial logic, which will produce extraordinary returns for a few and inadequate returns for the many, and an AI economy shaped by public purpose, which can produce both commercial dynamism and broadly shared prosperity. The first outcome requires no institutional action. It is the default. The second requires the deliberate construction of institutional architecture at a scale and speed commensurate with the technology itself.

The architecture must be built. The window is narrowing. And the people whose lives will be shaped by its presence or absence — the displaced worker navigating a retraining system designed for a previous era, the builder dependent on a platform she cannot influence, the researcher whose publicly funded work enriches a company she will never own a share of, the student in Dhaka or Lagos whose potential contribution to AI-driven health or education remains unrealized for lack of institutional support — these people do not have the luxury of waiting for the political economy to become more favorable.

The public invested. The public produced. The public bore the risk. The institutional architecture must ensure the public shares in the return. That is the public purpose. It is achievable. And the cost of not achieving it is measured not in abstractions but in the specific, documentable, compounding consequences of an AI economy whose extraordinary power flows through pipes that were laid in the wrong direction and have never been rearranged.

---

Epilogue

The receipt was for twelve billion dollars.

Not a receipt anyone printed or signed — but the figure that appears in declassified Department of Defense budget documents as the approximate cost of developing the Global Positioning System. Twelve billion in public money, invested over decades, to build a constellation of satellites that now underpins ride-hailing, precision agriculture, logistics optimization, and approximately three hundred and twenty billion dollars in annual economic activity. The companies that captured the commercial value of GPS — from Uber to John Deere to Google Maps — did not repay the twelve billion. They did not need to. The institutional architecture does not include a mechanism for repayment. The public invested, the public bore the risk, and the public received no equity stake, no royalty stream, no share certificate.
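To see how modest the missing mechanism would have been relative to the flows involved, it may help to run the arithmetic. This is a back-of-envelope sketch: the cost and activity figures are the ones cited above, while the one-percent royalty rate is an assumption invented purely for illustration, not a rate anyone proposed.

```python
# Back-of-envelope arithmetic for the GPS example above.
# The $12B cost and ~$320B in annual GPS-enabled activity are the
# figures cited in the text; the 1% royalty rate is a purely
# hypothetical illustration of a mechanism that was never built.

public_investment = 12e9   # development cost, public money
annual_activity = 320e9    # GPS-enabled economic activity per year
royalty_rate = 0.01        # hypothetical 1% royalty on that activity

annual_royalty = annual_activity * royalty_rate
payback_years = public_investment / annual_royalty

print(f"Annual royalty: ${annual_royalty / 1e9:.1f}B")
print(f"Years to recoup the public investment: {payback_years:.2f}")
# -> Annual royalty: $3.2B
# -> Years to recoup the public investment: 3.75
```

On those assumptions, the public investment would have been recouped in under four years, with every subsequent year returning a surplus to the investor that bore the original risk.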

I had known this fact for years in the way you know most facts — as information stored and rarely examined. Mazzucato's framework did something different with it. She turned the fact into a question with a sharp edge: If this is how we handled GPS, and the internet, and touchscreens, and voice recognition, and every other foundational technology that Silicon Valley mythologizes as the product of private genius — then what happens when we do it again with the most powerful technology in human history?

The question is not hypothetical. It is the question that sits beneath every chapter of this book, and it is the question I found hardest to hold steady while writing The Orange Pill. In Trivandrum, I watched twenty engineers discover a twenty-fold productivity multiplier — real capability, real expansion, real democratization. In those pages I celebrated the democratization with genuine enthusiasm, because the enthusiasm was genuine. What Mazzucato forced me to see was the plumbing beneath the floor I was standing on. The capability those engineers accessed was delivered through a private platform, built on publicly funded research, trained on publicly produced data. The productivity multiplier was real. The institutional architecture that determined who captured the returns from that multiplier was also real — and it was pointed in one direction.

The distinction that will not leave me alone is the one between value creation and value extraction. It sounds like an academic sorting exercise until you try to apply it to your own work, at which point it becomes a mirror you would prefer not to look into too carefully. I have built products that created genuine value — tools that solved real problems for real people. I have also built products whose primary function was capturing attention, and attention, once captured, is a resource extracted from the person who gave it. Mazzucato's framework does not let you claim credit for the creation without accounting for the extraction. The amplifier does not discriminate. Neither should the accounting.

What I take from this analysis is not guilt but urgency. The institutional architecture she describes — equity stakes for the public investor, transition support for displaced workers, governance rights for platform-dependent builders, mission-oriented direction of AI capability — is not a utopian fantasy. It is a design specification, grounded in precedents that worked. Norway built a sovereign wealth fund. Israel built a venture capital ecosystem with public equity stakes. DARPA built the organizational structure that produced the internet. The question is not whether these things are possible. The question is whether we will build them in time — before the distributional patterns of the AI economy harden into political structures that resist revision.

The window is narrowing. I feel it in the quarterly rhythm of my own industry, where each earnings cycle pulls the incentives further from the institutional reforms Mazzucato prescribes. I feel it in the conversations with parents who ask me what their children should study, and who deserve an answer that includes not just "learn to ask better questions" but "demand institutions that ensure the answers benefit more than shareholders." I feel it in the twelve-billion-dollar receipt that nobody sent and nobody paid.

The public invested. The public should share in the return. That sentence is not radical. It is arithmetic. And the time for doing the math is now.

Edo Segal

The dominant story of AI says private genius built it, private capital funded it, and the market will distribute its rewards. That story is wrong — and the evidence is in the budget documents. Mariana Mazzucato has spent her career tracing the public money behind technologies the private sector claims as its own. In this volume, her framework is applied to the AI revolution with surgical precision: the publicly funded research that survived two AI winters, the training data produced by billions of people over centuries, and the institutional architecture that allows returns to flow in one direction while risk flowed in the other. The result is a book that does not ask whether AI creates value — it manifestly does — but who captures that value, who bore the cost of creating it, and whether the distribution reflects anything resembling justice. This is the book for anyone who celebrated the democratization of capability and then wondered who owns the platform delivering it.

“governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies”
— Mariana Mazzucato
