By Edo Segal
The shape that broke my thinking was not a line. It was a ring.
Every metric I have ever used to measure progress points in one direction: up. Revenue up. Users up. Productivity up. The twenty-fold multiplier I describe in *The Orange Pill* — up. The adoption curves that stunned the industry — up. The imagination-to-artifact ratio collapsing toward zero — a line heading down, which in this context also means up. Every instrument in the builder's fishbowl draws arrows, and every arrow points toward more.
Kate Raworth drew a doughnut and asked a question none of my instruments could formulate: More of what? And for whom? And at what cost to the living systems that make "more" possible in the first place?
I did not encounter Raworth's framework looking for an economic argument. I encountered it because twelve kept showing up as a number that would not leave me alone. Twelve dimensions of human well-being — food, water, health, education, income, energy, housing, networks, political voice, social equity, gender equality, jobs — that together define the floor below which no person should fall. And nine planetary boundaries that define the ceiling above which the biosphere begins to destabilize. The space between those rings is where humanity can thrive. Not grow. Thrive.
That distinction rearranged something in me. I had been celebrating the amplifier — the extraordinary power of AI to carry human intention further than any tool in history. Raworth forced me to ask what the amplifier was connected to. An amplifier does not generate a signal. It takes whatever signal it receives and makes it louder. Feed it the signal of growth-addicted economics — more throughput, more extraction, more concentration — and AI becomes the most powerful engine of ecological overshoot ever built. Feed it a signal calibrated to the doughnut — meet needs, respect boundaries, distribute gains — and the same technology becomes something worth celebrating.
The chapters that follow use Raworth's compass to examine the AI revolution from outside the builder's fishbowl. They ask what democratization looks like when you measure it across twelve dimensions instead of one. They confront the material costs that the river-of-intelligence metaphor conveniently floats above. They ask what "enough" means in an age when the capacity to produce has outstripped the planet's capacity to absorb the consequences.
This is not an argument against building. It is an argument for checking the direction before you amplify the signal.
The doughnut is the shape of the question the builder's instruments cannot ask. It is the lens I did not have. Now I do.
— Edo Segal × Opus 4.6
1970–present
Kate Raworth (born 1970) is a British economist, author, and senior associate at Oxford University's Environmental Change Institute. Trained at the University of Oxford, with a master's from the University of East Anglia, she spent over a decade working for the United Nations Development Programme and Oxfam before turning to the reconceptualization of economics for the twenty-first century. Her 2017 book *Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist* proposed replacing GDP growth as the central goal of economic policy with a visual framework — the doughnut — depicting the safe and just space for humanity between a social foundation of basic human needs and an ecological ceiling of planetary boundaries. The framework drew on the Stockholm Resilience Centre's planetary boundaries research and the United Nations Sustainable Development Goals, and it has since been adopted as a practical policy tool by the city of Amsterdam, the Doughnut Economics Action Lab, and municipalities and organizations worldwide. Raworth is a professor of practice at Amsterdam University of Applied Sciences and a widely sought-after lecturer whose TED talks and public engagements have brought ecological economics to mainstream audiences. Her work stands as one of the most influential challenges to growth-addicted economic orthodoxy in the early twenty-first century.
In 2017, Kate Raworth drew a picture that changed the terms of an argument economists had been having for seventy years. The picture was a doughnut — two concentric rings with a space between them — and its power lay not in its complexity but in its displacement of the question that had organized economic thinking since the end of the Second World War. That question was: Is the economy growing? Raworth's doughnut replaced it with a different one: Is the economy helping humanity thrive?
The distinction sounds subtle. It is not. It is the difference between a compass that points toward "more" and a compass that points toward "enough." And the arrival of the most powerful amplifier in human history — artificial intelligence as described in Edo Segal's *The Orange Pill* — makes the distinction between those two compasses a matter of civilizational survival.
The doughnut has two boundaries. The inner ring is the social foundation: twelve dimensions of human well-being — health, education, income, political voice, gender equality, social equity, housing, networks, energy, water, food, jobs — below which no person should fall. When someone lacks clean water, adequate nutrition, basic education, or the political voice to participate in the decisions that shape their life, they have fallen through the floor of the doughnut into a space of deprivation that no civilized economy should tolerate. The outer ring is the ecological ceiling: nine planetary boundaries — climate change, ocean acidification, chemical pollution, nitrogen and phosphorus loading, freshwater withdrawal, land conversion, biodiversity loss, air pollution, ozone layer depletion — beyond which the Earth's life-support systems begin to destabilize. When an economy pushes past these boundaries, it does not merely cause environmental damage in the conventional sense. It undermines the biophysical conditions upon which all economic activity, all human civilization, and all life depend.
The space between the rings — the doughnut itself — is where humanity can thrive. Meeting the needs of all people within the means of the living planet. That is the goal. Not growth. Not GDP. Not productivity. Thriving.
Raworth organized her challenge to orthodox economics around seven fundamental moves: change the goal from growth to the doughnut; see the economy as embedded within society and the living world rather than as a self-contained circular flow; nurture a realistic picture of human nature rather than the fiction of rational economic man; get savvy with systems rather than clinging to mechanical equilibrium; design to distribute rather than hoping growth will even things out; create to regenerate rather than assuming growth will clean things up; and be agnostic about growth rather than addicted to it. Each of these moves has implications for AI that are immediate, specific, and largely unexamined by the technology industry that is building the future at extraordinary speed.
Consider what Segal describes in *The Orange Pill*: a twenty-fold productivity multiplier achieved in a week of training in Trivandrum. Engineers who had spent years in narrow specializations suddenly building across domains. A single person shipping products that previously required teams of twenty. The imagination-to-artifact ratio collapsing to the width of a conversation. These are genuine, measurable expansions of human capability, and Segal is right to be awed by them.
But the doughnut asks a question that the builder's awe does not naturally generate: In which direction does this expansion push? Toward the social foundation, raising people who lacked capability into the space of meaningful economic participation? Or toward the ecological ceiling, intensifying the throughput of an economic system that is already transgressing planetary boundaries? Or — and this is the answer that makes the doughnut framework indispensable — both, simultaneously, in a paradox that cannot be resolved by celebrating one direction and ignoring the other?
The amplifier metaphor that runs through *The Orange Pill* captures this paradox with accidental precision. An amplifier does not generate a signal. It takes whatever signal it receives and makes it louder. Feed it the signal of growth-addicted economics — more production, more consumption, more throughput, more extraction, the logic that has governed the global economy since Bretton Woods — and AI becomes the most powerful engine of ecological overshoot ever invented. Feed it the signal of doughnut economics — meet needs, respect boundaries, distribute gains, regenerate systems — and AI becomes the most powerful tool for human thriving in the history of the species. The technology does not determine the outcome. The economic logic that governs its deployment does.
This is not a theoretical distinction. It is playing out, right now, in decisions being made in boardrooms and server farms and venture capital meetings and government offices around the world. And in almost every case, the economic logic governing those decisions is the logic of growth. The metrics by which AI companies are evaluated — revenue growth, user adoption, market capitalization, tokens processed, parameters trained — are growth metrics. They measure expansion. They do not measure direction. A company that doubles its revenue while pushing three planetary boundaries further into overshoot and concentrating its gains among a narrow class of shareholders registers as a success by every metric the market currently applies.
Raworth's framework reveals this as a measurement failure of historic proportions. The instruments are calibrated to the wrong scale. It is as though a hospital evaluated its performance solely by the number of patients processed per hour, without asking whether any of them got better.
The growth-addicted measurement system has a specific, observable consequence for AI deployment: it systematically rewards throughput over direction. The company that processes more queries, trains larger models, deploys more agents, generates more code — that company attracts more capital, hires more talent, captures more market share, regardless of whether its activities raise or lower the social foundation, regardless of whether they respect or transgress the ecological ceiling. The doughnut is invisible to the instruments that govern investment.
Segal's *The Orange Pill* operates, honestly and transparently, from within this growth-addicted framework. The book's metrics are the metrics of capability expansion: how much faster engineers can build, how many more products a single person can ship, how rapidly adoption curves ascend. Segal is a builder, and builders measure what they build. The fishbowl he describes — the set of assumptions so familiar you have stopped noticing them — is, from Raworth's perspective, the fishbowl of growth economics. The water he breathes is the assumption that capability expansion is inherently good, that more production is better than less, that the direction problem will be solved by the quality of the questions people ask rather than by the structure of the economic system that determines which questions get rewarded.
This is not a criticism of Segal's honesty or his intelligence. It is a diagnosis of the fishbowl itself. And the diagnosis matters because the fishbowl is not idiosyncratic. It is the dominant economic worldview of the technology industry, the venture capital ecosystem, the policy frameworks that govern AI deployment, and the educational institutions that train the people who build these systems. The fishbowl is the water the entire AI economy swims in.
The doughnut cracks this fishbowl open. It does so not by rejecting capability expansion — Raworth has never argued against human capability — but by insisting that expansion be evaluated against both boundaries simultaneously. The social foundation asks: Is this expansion reaching the people who need it most? Is it lifting the developer in Lagos above the threshold of meaningful economic participation, or is it primarily enriching the shareholders of companies headquartered in San Francisco? The ecological ceiling asks: What is the material cost of this expansion? How much energy does the training run consume? How much water does the data center require? What rare earth minerals were extracted, from whose land, under what conditions, to build the devices on which this capability is accessed?
These questions are not hostile to technology. They are hostile to the omission of consequences from the evaluation of technology. And the omission is systematic. In Segal's account of the Trivandrum training, the engineers gained extraordinary new capabilities. The products they could build expanded dramatically. The imagination-to-artifact ratio collapsed. All of this is true, and all of it matters. What is not mentioned — because the builder's fishbowl does not contain it — is the energy consumed by the Claude Code instances those engineers were running, the carbon footprint of the data centers that processed their prompts, the water used to cool the servers, the minerals in the laptops, the total material throughput of the productivity expansion. Not because Segal is hiding these costs, but because the economic framework he inhabits does not make them visible.
The doughnut makes them visible. That is its function. It is a lens that reveals both rings simultaneously, refusing to let the celebration of the social-foundation advance obscure the ecological-ceiling transgression, and refusing to let ecological concern become a reason to deny capability to people who desperately need it. The doughnut holds both boundaries in view at the same time. That is what makes it uncomfortable, and what makes it necessary.
In her 2018 TED talk, Raworth directly addressed the relationship between emerging technologies and distributive design: "If we can harness today's technologies, from AI and blockchain to the Internet of Things and material science, if we can harness these in service of distributive design, we can ensure that health care, education, finance, energy, political voice reaches and empowers those people who need it most." The framing is revealing. Technologies — AI included — are instruments. They can serve distributive design or they can serve concentrative extraction. The technology does not decide. The design decides.
The design that currently governs AI deployment is not distributive. It is concentrative. The gains of AI productivity flow disproportionately to the companies that build the models, the investors who fund them, and the knowledge workers in wealthy countries who have the infrastructure, education, and connectivity to use them. The developer in Lagos whom Segal describes is a real and important counter-example — a person for whom AI genuinely lowers the floor of economic participation. But she is the exception that proves the structural rule: the default flow of AI-generated value is toward concentration, not distribution, because the economic system within which AI operates is designed for concentration.
Raworth's most uncomfortable insight, applied to AI, is this: the amplifier is only as good as the economic system it amplifies. An amplifier connected to a distributive, regenerative economy produces distributive, regenerative outcomes at scale. An amplifier connected to a concentrative, degenerative economy produces concentration and degeneration at scale. The technology is a multiplier of the existing logic. It does not transform the logic. It accelerates it.
This means that the project of directing AI toward human thriving is not primarily a technology project. It is an economics project. It requires changing the economic logic that governs deployment — the incentive structures, the ownership models, the measurement systems, the governance frameworks, the cultural assumptions about what counts as success. It requires, in Raworth's terms, redesigning the economy so that AI amplifies thriving rather than overshoot.
The chapters that follow trace this redesign across the doughnut's two boundaries and seven economic moves. They examine the social foundation and what genuine AI democratization would require. They confront the ecological ceiling and the material costs the builder's fishbowl conceals. They ask what growth-agnostic AI deployment would look like, what distributive design means for the ownership of AI-generated value, what regenerative design means for the direction of freed human energy, and what an economics of enough means in an age when the capacity to produce has, for the first time in history, outstripped the planet's capacity to absorb the consequences.
The doughnut is not an argument against AI. It is an argument for directing AI toward the only goal that makes long-term sense: an economy in which every person's needs are met without transgressing the planetary boundaries that sustain all life. The amplifier is extraordinary. The question is whether the signal it carries will be worthy of its power.
The social foundation of the doughnut consists of twelve dimensions of human well-being, drawn from the internationally agreed minimum standards in the Sustainable Development Goals: sufficient food, clean water, adequate health care, quality education, decent housing, minimum income, access to energy, access to networks and information, political voice, social equity, gender equality, and meaningful work. Below this foundation lies deprivation — the space where human beings lack what they need to participate in life with dignity. The project of economics, in Raworth's framing, is to ensure that no one falls below this floor while no one pushes through the ecological ceiling above.
Artificial intelligence, as Segal describes it in *The Orange Pill*, offers the most dramatic expansion of productive capability since the industrial revolution. A developer in Lagos with Claude Code has access to the same generative leverage as an engineer at a major Silicon Valley firm. An engineer in Trivandrum who spent years confined to backend systems suddenly builds complete user-facing features. A non-technical founder prototypes a revenue-generating product over a weekend. In each case, a barrier between human intention and economic participation has been lowered or removed. These are social-foundation interventions in the doughnut sense — genuine advances toward ensuring that more people can participate meaningfully in economic life.
The question the doughnut poses is not whether these advances are real. They are. The question is whether they are sufficient — whether lowering the skill barrier alone constitutes a durable advance toward the social foundation, or whether skill access is one dimension of a multi-dimensional problem that requires simultaneous attention to all twelve dimensions to produce genuine thriving.
The evidence suggests the latter, decisively.
Consider the developer in Lagos more carefully. Segal acknowledges, with characteristic honesty, that access to AI tools requires connectivity, hardware, English-language fluency, and capital that billions of people do not have. Each of these requirements maps onto a dimension of the social foundation that AI does not address.
Connectivity requires infrastructure — fiber optic cables, cell towers, reliable electrical grids. In much of sub-Saharan Africa, internet penetration remains below forty percent, and the connections that exist are often slow, expensive relative to local incomes, and unreliable. The developer in Lagos may have connectivity; her counterpart in rural Niger almost certainly does not. AI tools that require stable, high-bandwidth internet connections are, by their architecture, inaccessible to the people most below the social foundation. The democratization of capability has a coverage map, and the coverage map follows the contours of existing infrastructure investment, which follows the contours of existing wealth.
Hardware requires capital. A laptop capable of running modern development environments costs several hundred dollars — a figure that is trivial relative to a software engineer's salary in San Francisco and represents months of income for a worker in Dhaka or Nairobi. The devices on which AI is accessed are physical objects manufactured through global supply chains, sold at prices determined by global markets, and affordable only to people who have already cleared certain income thresholds. The social foundation includes minimum income as a dimension for precisely this reason: economic participation requires economic resources, and the people most in need of the capability AI provides are often the people least able to afford the devices through which it is delivered.
English fluency is perhaps the most structurally significant barrier, because it is the least visible from inside the builder's fishbowl. The large language models that power AI coding assistants were trained predominantly on English-language data. Their capabilities in other languages are improving but remain substantially weaker. The prompts that produce the best results are English prompts. The documentation is in English. The community forums where users share techniques and troubleshoot problems are overwhelmingly English-language. For a developer whose first language is Yoruba, Bangla, or Quechua, the barrier is not merely linguistic — it is epistemic. The entire knowledge infrastructure that makes AI tools usable is built in a language that roughly ninety-five percent of the world's population does not speak as a first language.
Raworth's framework insists on comprehensiveness. An advance on one dimension of the social foundation that leaves other dimensions untouched does not constitute thriving — it constitutes a partial, fragile improvement that can be reversed by any shock along the unaddressed dimensions. A developer in Lagos who gains AI coding capability but lacks reliable electricity, affordable healthcare, political voice in the governance of the platforms she depends on, and economic security sufficient to absorb a failed project has not been lifted into the doughnut. She has been handed a powerful tool while standing on unstable ground.
This is not an argument against giving her the tool. It is an argument against mistaking the tool for the ground.
The history of technology-driven development is littered with single-dimension interventions that were celebrated as transformative and proved to be fragile. The One Laptop Per Child initiative of the mid-2000s distributed millions of low-cost laptops to children in developing countries on the theory that access to computing would transform educational outcomes. The laptops arrived. The educational transformation, in most cases, did not. Not because the technology was wrong, but because the technology addressed one dimension — device access — of a multi-dimensional problem that included teacher training, curriculum design, electricity reliability, internet connectivity, parental engagement, and the economic conditions that determine whether a child attends school at all.
The doughnut would have predicted this failure. A single-dimension intervention into a multi-dimensional deprivation produces a single-dimension improvement that is insufficient for thriving and vulnerable to reversal. The lesson is not that technology is useless for development. The lesson is that technology deployed without attention to the full set of social-foundation dimensions produces gains that are narrow, unequal, and impermanent.
Applied to AI, this lesson generates specific, actionable implications. A doughnut-compatible AI democratization strategy would address not merely the skill barrier but the full set of conditions required for the skill to translate into durable economic participation. It would invest in the infrastructure that makes connectivity reliable and affordable. It would develop AI capabilities in languages beyond English — not as an afterthought or a charitable initiative, but as a core design requirement, on the understanding that a tool accessible only to English speakers is not a democratized tool but a tool with a particularly insidious form of gatekeeping. It would address the economic conditions — income security, healthcare, housing stability — that determine whether a person can afford to take the risk of building something new with a powerful tool. And it would ensure that the governance of AI platforms includes the voices of the people most affected by their deployment, not merely the voices of the shareholders and engineers who build them.
This last point — governance — connects to what Raworth has called the most neglected dimension of the social foundation: political voice. The platforms on which AI capability is delivered are governed by corporate decisions made in a small number of wealthy countries by a small number of wealthy individuals. The developer in Lagos has no vote on Anthropic's board. She has no input into the pricing decisions that determine whether she can afford the tools. She has no voice in the safety policies that determine what the tools can and cannot do. She is a user — a consumer of capability — in a system whose architecture she had no role in designing and whose direction she has no power to influence.
Raworth's distributive design principle insists that this is not merely an inconvenience but a structural failure. An economy in which the most powerful productive tools are governed exclusively by the people who profit from them is an economy designed for concentration, regardless of how widely the tools are nominally available. Access without governance is consumption, not participation. And consumption without participation is not the kind of economic agency the social foundation requires.
The contrast with the Trivandrum story is instructive. Segal's engineers in India gained genuine capability through AI tools. They could build more, build faster, build across domains they had never touched. But they gained this capability within the context of an employment relationship — a context that included income security, team support, mentorship, organizational infrastructure, and the guidance of a leader who chose to invest in their development rather than replace them with a smaller, cheaper team. The social-foundation conditions were already in place. The AI tool amplified capability that was already supported by a multi-dimensional foundation of economic security, institutional belonging, and human relationship.
Strip away those conditions and the same tool produces a very different outcome. A freelance developer in a country with no labor protections, no healthcare system, no unemployment insurance, using AI tools on a pay-per-use basis with no organizational support, faces a fundamentally different relationship with the technology. The capability is the same. The context is entirely different. And context, in the doughnut framework, is everything, because thriving is not a property of individuals in isolation — it is a property of individuals embedded in systems that support their well-being across all twelve dimensions simultaneously.
The most optimistic reading of AI democratization — the one that dominates the builder's discourse — holds that capability expansion will, over time, generate the economic activity that addresses the other dimensions. The developer in Lagos, empowered by AI, builds a product, earns income, invests in her community, and the rising tide lifts all boats. Raworth has spent a career dismantling this logic. The rising-tide theory has been the dominant defense of growth-first economics for seventy years, and seventy years of evidence demonstrate that tides do not rise evenly. They rise in channels carved by existing power structures, concentrating gains among those who already have the infrastructure to capture them and leaving those without that infrastructure exactly where they were — or further behind, as the gap between the empowered and the unempowered widens.
This does not mean AI democratization is a false promise. It means it is an incomplete one. Completing it requires institutional design that goes far beyond the technology itself — design that addresses infrastructure, education, healthcare, income security, political voice, and the governance of the platforms that deliver capability. These are not technology problems. They are economics problems, governance problems, political problems. And they are precisely the problems that the doughnut framework was built to make visible.
The social foundation is not a checklist to be completed one dimension at a time. It is an interconnected system in which each dimension supports and depends on the others. Health depends on income. Education depends on nutrition. Political voice depends on social equity. Meaningful work depends on networks, energy, and information access. AI can contribute to multiple dimensions simultaneously — but only if it is deployed within an institutional framework that deliberately connects capability expansion to the full set of conditions required for thriving. Without that framework, the amplifier amplifies the existing pattern: capability for those who already have the foundation to use it, and a wider gap for those who do not.
The doughnut does not ask whether AI can help. It asks whether AI, as currently deployed, is helping enough people in enough dimensions to constitute genuine progress toward a world in which no one falls below the social foundation. The honest answer, in 2026, is: not yet. Not because the technology is inadequate, but because the economic system within which the technology operates is not designed for comprehensiveness. It is designed for growth. And growth, as Raworth has argued with precision and persistence, is not the same thing as thriving.
There is an absence at the center of *The Orange Pill* that is shaped exactly like the thing it does not mention. Segal's book describes the river of intelligence flowing for 13.8 billion years, from hydrogen atoms through biological evolution through symbolic thought through computation. The metaphor is beautiful and, in its own terms, correct. But the river, in Segal's telling, is an abstraction — a flow of pattern and capability, weightless and immaterial, requiring nothing from the physical world except the minds through which it moves.
The actual river has a body. It runs on silicon and copper and cobalt and lithium. It is cooled by billions of gallons of water. It is powered by electricity generated from natural gas, coal, nuclear fission, solar radiation, and wind. It is housed in concrete structures the size of aircraft hangars, built on land that was, until recently, something else — farmland, forest, wetland, desert. The river of intelligence, in its latest and most powerful channel, is a material phenomenon with material consequences, and those consequences press directly against the ecological ceiling of the doughnut.
This is not a peripheral concern. It is the concern that the builder's fishbowl systematically excludes, and its exclusion is the single most dangerous feature of the current AI discourse.
Raworth built the ecological ceiling of the doughnut on the planetary boundaries framework developed by Johan Rockström and colleagues at the Stockholm Resilience Centre in 2009, subsequently updated and refined. The framework identifies nine Earth-system processes that regulate the stability of the biosphere — the conditions within which human civilization developed over the past ten thousand years and upon which all economic activity depends. For each process, the framework identifies a boundary beyond which the risk of destabilizing the system increases sharply and, in some cases, irreversibly. As of the most recent assessment, humanity has already transgressed six of the nine boundaries: climate change, biosphere integrity, land-system change, biogeochemical flows, novel entities (chemical pollution), and freshwater change.
AI operations interact with at least four of these boundaries directly.
Climate change is the most discussed and the most measurable. Training a large language model requires computational operations on a scale that was, until recently, unimaginable. The energy consumed by a single training run of a frontier model is measured in gigawatt-hours — equivalent to the annual electricity consumption of thousands of households. Inference — the ongoing energy cost of running the trained model, answering queries, generating code, processing the millions of prompts that constitute the daily operation of AI systems — consumes even more energy in aggregate than training, because it runs continuously at scale. The International Energy Agency projected in its 2024 report that global data center electricity consumption could more than double by 2026, driven primarily by AI workloads. That projection is already being revised upward.
The carbon intensity of this energy consumption depends on the energy mix of the grids that power the data centers. In regions where the grid is powered predominantly by renewable sources, the carbon footprint per computation is relatively low. In regions powered by natural gas or coal, it is substantial. The AI industry's response has been to invest in renewable energy procurement — power purchase agreements, on-site solar and wind installations, commitments to carbon neutrality. These investments are real and meaningful. They are also insufficient, for a structural reason that Raworth's framework makes visible: the AI industry is not the only claimant on the renewable energy supply. Every megawatt-hour of renewable electricity consumed by a data center is a megawatt-hour unavailable for decarbonizing transportation, heating, manufacturing, or agriculture. The total renewable energy capacity is growing, but it is growing within a finite planet, and the demands upon it are growing faster than the supply. AI is not merely consuming energy; it is competing for the clean energy that every other sector of the economy also needs in order to decarbonize within the timeframe the climate boundary demands.
Freshwater use is less discussed and, in some regions, more immediately consequential. Data centers generate enormous quantities of heat, and that heat must be dissipated. The most common cooling method uses water — evaporative cooling systems that consume millions of gallons annually per facility. A 2023 study by researchers at the University of California, Riverside, estimated that a conversation of roughly twenty to fifty exchanges with a large language model could consume about half a liter of water in cooling requirements. Multiplied across billions of daily interactions, the aggregate freshwater consumption is substantial. In regions already experiencing water stress — the American Southwest, parts of India, the Middle East, sub-Saharan Africa — the construction of data centers creates direct competition between AI operations and human water needs. The ecological ceiling constrains freshwater withdrawal. The social foundation requires access to clean water. AI data centers press against both simultaneously.
The extraction of rare earth minerals and critical materials for semiconductor manufacturing interacts with several planetary boundaries at once. Cobalt, lithium, tantalum, neodymium, and dozens of other elements are required in the chips, batteries, and devices that constitute the physical infrastructure of AI. Their extraction involves mining operations that displace ecosystems, contaminate water sources, generate toxic waste, and — in several well-documented cases — depend on labor conditions that fall below any reasonable interpretation of the social foundation. The cobalt mines of the Democratic Republic of Congo, the lithium extraction operations in Chile's Atacama Desert, the rare earth processing facilities in Inner Mongolia — these are the upstream realities of the devices on which AI capability is accessed. They are invisible from inside the builder's fishbowl, not because they are hidden, but because the economic framework that governs AI deployment does not include them in its accounting.
Kate Crawford, in her 2021 book *Atlas of AI*, traced these material supply chains with forensic precision, documenting the gap between the immaterial rhetoric of AI — intelligence, learning, understanding — and the brutally material reality of its infrastructure. Crawford's analysis is a direct complement to Raworth's framework: the ecological ceiling is not an abstraction. It is cobalt miners in Kolwezi, water tables dropping in Arizona, carbon emissions from natural gas plants in Virginia, and toxic waste from semiconductor fabs in Taiwan. The doughnut demands that these material realities be included in any honest evaluation of AI's contribution to human thriving.
*The Orange Pill* does not address these costs. This is not an accusation of dishonesty — Segal explicitly identifies his perspective as the builder's fishbowl and acknowledges that every fishbowl reveals part of the world while hiding the rest. But the part that the builder's fishbowl hides is precisely the part that determines whether AI can operate within the doughnut or whether it is structurally incompatible with the ecological ceiling that bounds the safe and just space for humanity.
The standard response from the technology industry is efficiency: AI systems are becoming more computationally efficient per unit of capability. Each generation of hardware does more computation per watt. Each generation of model architecture achieves more performance per parameter. These improvements are real and important. They are also subject to a dynamic that economists have understood since the nineteenth century and that Raworth highlights as one of the most persistent traps in growth-addicted economics: the Jevons paradox.
William Stanley Jevons observed in 1865 that improvements in the efficiency of coal-fired steam engines did not reduce total coal consumption. They increased it, because cheaper energy made new applications economically viable, and the total demand generated by the new applications exceeded the savings from the efficiency improvement. The same dynamic operates in AI. More efficient models do not reduce total energy consumption. They reduce the cost per query, which makes more queries economically viable, which drives adoption, which increases total energy consumption. The efficiency gains are captured by the growth logic of the system and converted into more throughput rather than less resource consumption.
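The rebound arithmetic can be made concrete with toy numbers. The sketch below uses purely illustrative figures (the per-query energy cost, the baseline query volume, and the demand response are assumptions, not measurements of any real AI system); the point it demonstrates is structural: when induced demand grows faster than efficiency improves, total consumption rises.

```python
# Toy sketch of the Jevons rebound. All numbers are illustrative
# assumptions, not measured values for any real AI system.

def total_energy(queries_per_day: float, kwh_per_query: float) -> float:
    """Aggregate daily energy consumption in kWh."""
    return queries_per_day * kwh_per_query

# Baseline: one billion queries per day at an assumed 0.003 kWh each.
baseline = total_energy(1e9, 0.003)          # ~3,000,000 kWh/day

# A 2x efficiency gain halves the energy (and roughly the cost) per query.
efficient_kwh = 0.003 / 2

# If cheaper queries induce demand that outpaces the efficiency gain
# (an assumed tripling of volume), total consumption still rises.
rebound = total_energy(3e9, efficient_kwh)   # ~4,500,000 kWh/day

# The efficiency gain has been converted into throughput, not savings.
assert rebound > baseline
print(f"baseline: {baseline:,.0f} kWh/day, after rebound: {rebound:,.0f} kWh/day")
```

The inequality holds whenever the demand multiplier exceeds the efficiency multiplier, which is exactly the condition Jevons observed for coal and the condition the text argues holds for AI queries today.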
This is not a failure of engineering. It is a feature of the economic system within which the engineering operates. A growth-addicted economy systematically converts efficiency gains into throughput expansion, because the incentive structure rewards volume — more users, more queries, more revenue — rather than sufficiency. A doughnut economy would do something different: it would capture efficiency gains as ecological space, using the reduced energy per query not to process more queries but to reduce total resource consumption, creating room within the ecological ceiling for other essential activities.
The difference is structural, not technological. The same AI hardware, running the same models with the same efficiency, produces radically different ecological outcomes depending on whether the economic logic governing its deployment is oriented toward growth or toward the doughnut. In the growth scenario, efficiency improvements accelerate ecological overshoot by making it cheaper. In the doughnut scenario, efficiency improvements create ecological space by reducing the material footprint of meeting human needs.
Raworth's embedded economy model — the economy nested within society, nested within the living world — provides the framework for seeing this distinction clearly. Orthodox economics depicts the economy as a self-contained system: households supply labor to firms, firms supply goods to households, money circulates between them, and the system is analytically complete. The living world appears, if at all, as an externality — a cost imposed on parties not directly involved in the transaction. Raworth replaces this with a picture in which the economy is a subset of society, which is a subset of the biosphere. Nothing enters the economy that does not come from the living world. Nothing leaves the economy that does not return to the living world. The economy is an open subsystem of a finite, materially closed planetary system.
Applied to AI, this embedded model reveals what the self-contained model conceals. Every token processed by a large language model is a transformation of energy extracted from the Earth, processed through an infrastructure of mines, refineries, power plants, cooling systems, and networks, and returned to the Earth as waste heat, carbon emissions, depleted aquifers, and contaminated landscapes. The token is not immaterial. It is a material event with material consequences. And the sum of billions of daily material events is a planetary-scale material flow that presses against boundaries the Earth's systems cannot absorb indefinitely.
The doughnut does not demand that AI stop operating. It demands that AI operate within the ecological ceiling — that the total material throughput of AI systems, including training, inference, hardware manufacturing, and device production, remain within the boundaries that sustain the biosphere. This is a design constraint, not a prohibition. It asks the AI industry to do what every other industry must also do: account for its full ecological footprint and design its operations to fit within the planetary boundaries that sustain all life.
The honest assessment, in 2026, is that the AI industry is moving in the opposite direction. Total energy consumption is rising. Total water consumption is rising. Total material extraction for hardware is rising. Efficiency improvements per unit of computation are real, but they are being overwhelmed by the expansion of total computation. The Jevons paradox is operating exactly as Jevons predicted, and the growth-addicted economic logic that governs AI deployment ensures that it will continue to do so until the logic itself changes.
The ecological ceiling is not negotiable. It is not a policy preference or a political position. It is a set of biophysical thresholds beyond which the Earth's systems behave differently — less predictably, less hospitably, less compatibly with the conditions under which human civilization developed. The doughnut makes these thresholds visible as hard constraints on economic design. The AI industry has not yet reckoned with them. That reckoning is coming, and it will reshape the industry as fundamentally as the capability expansion that *The Orange Pill* celebrates. The only question is whether the reckoning will be chosen — designed, deliberate, guided by the doughnut's compass — or imposed, by a planet whose boundaries do not care about quarterly earnings reports.
The most dangerous idea in economics is not any particular theory about markets or money or trade. It is the assumption that more is better — that the fundamental purpose of an economy is to grow, that growth is the measure of success, and that the policy question is never whether to grow but only how fast. This assumption is so deeply embedded in the institutions, metrics, and cultural expectations of modern economic life that it has become invisible — the water in the fishbowl, to borrow Segal's metaphor, that every fish breathes without noticing.
Kate Raworth calls this growth addiction, and she distinguishes it carefully from growth itself. Growth is a phenomenon — a measurable increase in economic output. Growth addiction is a dependency — the structural inability of an economic system to function without continuous expansion. The distinction matters because an economy can be healthy while growing, healthy while stable, or healthy while contracting, depending on whether the activity within it meets human needs within planetary boundaries. What an economy cannot be is healthy while addicted — while its institutions, incentive structures, and measurement systems are calibrated to a single variable (GDP growth) that tells you nothing about whether people are thriving or the planet is being sustained.
The AI economy, as it exists in 2026, is growth-addicted in every dimension Raworth's framework identifies.
The metrics are growth metrics. The AI industry measures itself by adoption speed — ChatGPT reaching a hundred million users in two months, Claude Code's run-rate revenue crossing $2.5 billion. It measures by productivity multipliers — twenty-fold, fifty-fold, the number climbing with each new benchmark. It measures by market capitalization, by tokens processed, by parameters trained, by the percentage of code that is AI-generated. Every number that the industry tracks is a measure of expansion. None is a measure of direction.
Ask a technology executive whether AI is succeeding, and the answer will invariably reference one or more of these growth metrics. Revenue is up. Adoption is accelerating. Productivity is multiplying. The numbers go up and to the right, and in the grammar of growth-addicted economics, up and to the right is the only direction that constitutes success.
The doughnut asks a set of questions that this grammar cannot formulate. Is the expanding AI economy meeting more human needs? Specifically: are the people below the social foundation — those lacking adequate food, water, health care, education, housing, income, energy, networks, political voice, social equity, gender equality, meaningful work — being lifted above it by AI deployment? Or are the gains accruing primarily to people who were already above the foundation, widening the gap between those who have enough and those who do not?
The available evidence is mixed but leans heavily toward concentration. The primary beneficiaries of AI productivity gains, as of 2026, are knowledge workers in wealthy countries — software engineers, designers, product managers, executives, and the investors who fund the companies that employ them. These are people who, by any reasonable measure, were already above the social foundation before AI arrived. The gains are real — Segal's engineers in Trivandrum genuinely expanded their capabilities — but the engineers in Trivandrum were already employed, already connected, already educated, already above the social foundation in most dimensions. The tool amplified existing capability. It did not, in most observed cases, create capability where none existed before.
The developer in Lagos is the counter-example that the industry reaches for when this concentration is pointed out. And the counter-example is genuine — there are real people, in real places, for whom AI access represents a meaningful advance in economic participation. But the counter-examples are countable. The structural pattern — gains flowing to existing capability rather than creating new foundations — is systemic.
Raworth's growth-agnostic alternative does not reject these gains. It reframes them. A growth-agnostic evaluation of AI would ask not "How much has productivity increased?" but "Has the increase brought more people into the doughnut's safe and just space?" The answer requires measuring something that the technology industry does not currently measure: the distribution of AI-generated value across the full range of the social foundation's dimensions, evaluated against the ecological ceiling's constraints.
Consider the Software Death Cross that Segal describes in *The Orange Pill* — the moment when the AI market overtakes the SaaS market in aggregate value, with a trillion dollars of SaaS market capitalization evaporating in weeks. Segal reads this as a repricing: the market discovering that code has become a commodity and adjusting valuations accordingly. The companies whose value was always above the code layer — in ecosystem, data, institutional trust — will survive. The companies that were always just code will not.
Through the doughnut lens, the Death Cross is a more complex event. It is the growth-addicted software economy encountering a contradiction within its own logic. The SaaS model was built on the premise that software is scarce — hard to write, expensive to produce, valuable because it is difficult. When AI made software abundant, the scarcity that justified the business model evaporated, and the valuations collapsed. This is growth-addicted economics encountering a limit, not an ecological limit but an internal one — the limit of a business model premised on artificial scarcity in a world where the artificial scarcity has been eliminated.
A growth-agnostic reading of the Death Cross sees opportunity where the growth-addicted reading sees catastrophe. If the goal is not to maximize the value of software companies but to meet human needs within planetary boundaries, then the commodification of software is potentially a great advance. Software that was previously expensive becomes cheap or free. Capabilities that were previously gated by the ability to pay become widely accessible. The barrier between human intention and technological implementation drops toward zero, democratizing the capacity to build solutions for human needs.
But — and this is the qualification that the growth-agnostic lens insists upon — this advance is real only if the freed capability is directed toward needs rather than captured for further concentration. If the commodification of software simply transfers value from SaaS companies to AI infrastructure companies, the distribution has not changed — only the address to which the concentrated gains are delivered. The Death Cross, in this reading, is not a doughnut event unless institutional design ensures that the freed capability flows toward the social foundation rather than toward a new form of corporate concentration.
Raworth's seventh principle — be agnostic about growth — provides the framework for this evaluation. A growth-agnostic economy does not pursue growth as an objective. It does not resist growth as a matter of principle. It is genuinely indifferent to whether the economy grows, stabilizes, or contracts, because it measures success by a different variable entirely: whether people are thriving within planetary boundaries. GDP can rise and the doughnut can shrink — if the growth is concentrated among the wealthy and ecologically destructive. GDP can fall and the doughnut can expand — if the contraction reflects a shift from ecologically damaging production to regenerative, distributive activity. The variable that matters is not the size of the economy but its shape.
Applied to AI, growth agnosticism generates a set of questions that the current discourse does not ask. When Segal describes the twenty-fold productivity multiplier, the growth-agnostic response is not "Wonderful!" or "Terrible!" but "What was produced?" If the twenty-fold increase produces twenty times as many products that meet genuine human needs — health tools, educational resources, infrastructure solutions, agricultural technologies — then the growth is doughnut-compatible, contingent on its ecological footprint. If the twenty-fold increase produces twenty times as many products competing for the same affluent consumers in the same wealthy markets, the growth has expanded the economy without expanding the doughnut. More stuff, for the same people, consuming more energy, extracting more materials, generating more waste, without lifting a single person above the social foundation.
The productivity metrics that dominate the AI discourse are agnostic about content. A twenty-fold multiplier is a twenty-fold multiplier whether it produces a diagnostic tool for rural clinics in Malawi or a marginally improved marketing dashboard for enterprise software in San Jose. The doughnut insists that these two outputs are not equivalent — that the former advances the social foundation while the latter, however commercially successful, does not — and that an economic system that treats them as equivalent has failed to measure what matters.
This is not a call for central planning or government direction of AI development. Raworth has been clear that the doughnut is a compass, not a map — it tells you whether you are moving in the right direction without prescribing the specific path. The doughnut-compatible AI economy would not dictate what gets built. It would change the incentive structures that determine what gets rewarded. Tax systems that account for ecological footprint. Procurement policies that prioritize social-foundation impact. Investment criteria that measure doughnut advancement alongside financial return. Governance frameworks that include the voices of people below the social foundation in the design decisions that shape AI deployment.
These are institutional interventions, not technological ones. They operate on the economic logic that governs AI deployment rather than on the AI systems themselves. And they require something that the growth-addicted framework cannot provide: a definition of enough. How much AI productivity is sufficient to meet human needs within planetary boundaries? At what point does further expansion of AI capability cease to advance the doughnut and begin to transgress its boundaries? These questions are unanswerable within a growth-addicted framework, because the growth-addicted framework has no concept of sufficiency. There is no point at which the growth-addicted economy says, "We have produced enough." The variable is always more.
The doughnut provides the missing concept. Enough is the level of economic activity that lifts everyone above the social foundation without pushing anyone beyond the ecological ceiling. It is not a fixed quantity — it varies with population, technology, and ecological conditions. But it is a bounded quantity, which is the point that growth economics has spent seventy years denying. The economy operates within a finite planet. The planet has boundaries. The economy must fit within them. And AI, for all its extraordinary power, is subject to the same constraint.
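The claim that "enough" is bounded rather than fixed can be stated as a pair of constraints: every social indicator must clear its floor, and every ecological indicator must stay under its ceiling. The sketch below is a minimal formalization of that idea; the indicator names and normalized thresholds are placeholder assumptions, not Raworth's actual metrics.

```python
# Minimal sketch of the doughnut as two sets of constraints.
# Indicator names and thresholds are illustrative placeholders,
# not the framework's real measurements.

SOCIAL_FLOOR = {"food": 1.0, "water": 1.0, "income": 1.0}   # normalized minimums
ECOLOGICAL_CEILING = {"carbon": 1.0, "freshwater": 1.0}     # normalized maximums

def in_doughnut(social: dict, ecological: dict) -> bool:
    """True only when every social indicator clears its floor AND
    every ecological indicator stays at or below its ceiling."""
    above_floor = all(social[k] >= v for k, v in SOCIAL_FLOOR.items())
    below_ceiling = all(ecological[k] <= v for k, v in ECOLOGICAL_CEILING.items())
    return above_floor and below_ceiling

# Growth can leave the doughnut: income doubles, but carbon overshoots.
print(in_doughnut({"food": 1.2, "water": 1.1, "income": 2.0},
                  {"carbon": 1.4, "freshwater": 0.8}))      # False
```

Note what the two-constraint form captures that a single growth variable cannot: raising one social indicator (income, in the example) does nothing for the verdict if an ecological indicator is simultaneously pushed past its ceiling.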
The most provocative implication of growth agnosticism for the AI economy concerns the concept of value itself. In growth-addicted economics, value is measured by market price — the willingness of buyers to pay. In doughnut economics, value is measured by contribution to thriving — the degree to which an activity advances the social foundation and respects the ecological ceiling. These two measures can diverge radically. A product with a high market price and no social-foundation impact has high growth-economic value and zero doughnut value. A product with zero market price — an open-source educational tool, a community health application, a public-domain agricultural resource — can have zero growth-economic value and enormous doughnut value.
AI, with its capacity to reduce the cost of production toward zero, makes this divergence acute. When the cost of building software approaches nothing, the market price of software approaches nothing — as the Death Cross demonstrates. But the doughnut value of software built for social-foundation purposes does not approach nothing. It may, in fact, increase, because the tools are now available to address needs that were previously too expensive to serve. The growth-addicted economy sees the falling price and panics. The doughnut economy sees the falling price and asks: Now that this capability is nearly free, how do we direct it toward the needs of the three billion people who remain below the social foundation?
That question — how to direct abundant capability toward genuine need — is the question the AI economy has not yet learned to ask, because the economic framework within which it operates does not contain it. The framework contains growth. The framework contains efficiency. The framework contains market capitalization and revenue and adoption curves and productivity multipliers. It does not contain enough. It does not contain direction. It does not contain the doughnut.
Until it does, the most powerful amplifier in history will continue to amplify the logic it is connected to — the logic of more, faster, bigger, without boundary or direction — and the planet will continue to absorb the consequences of an economy that has forgotten, if it ever knew, what it was for.
In the winter of 2026, Edo Segal faced a decision that most technology leaders will recognize and few will admit to finding difficult. His engineers in Trivandrum had demonstrated a twenty-fold productivity multiplier using Claude Code. The arithmetic was clean: if five people could now produce what a hundred had produced before, the rational economic response was to keep five and release ninety-five. The quarterly savings would be immediate, substantial, and legible to every investor, board member, and financial analyst who evaluated the company's performance.
Segal chose differently. He kept the team. He expanded it. He directed the productivity gains toward building more ambitious products rather than toward reducing headcount. He chose, in his language, to grow the top line rather than cut the cost base.
This choice is presented in *The Orange Pill* as a moral and strategic decision — the Beaver's ethic, the commitment to building an ecosystem rather than extracting from it. And it is both of those things. But Raworth's framework reveals something additional about this choice that Segal does not fully articulate, perhaps because the economic vocabulary for articulating it does not exist within the builder's fishbowl: the choice is a distributive design decision. And the fact that it was a choice — individual, voluntary, dependent on one leader's values, reversible at the next board meeting — is precisely the problem that doughnut economics exists to solve.
Raworth draws a sharp distinction between redistribution and distributive design. Redistribution is the corrective approach: allow the economy to generate whatever distribution of value it generates, then use taxation, transfers, and social programs to redistribute some of the gains from those who captured them to those who did not. This is the approach that has governed economic policy in most wealthy democracies since the mid-twentieth century, and its track record is, by Raworth's assessment, structurally inadequate. Not because redistribution is wrong, but because it operates after the fact — correcting outcomes that the economic structure has already produced, fighting the current rather than redirecting it. Redistribution is the Upstream Swimmer of economic policy: noble in its intentions, exhausting in its execution, and ultimately overwhelmed by the force it opposes.
Distributive design is fundamentally different. It builds equitable distribution into the structure of the economy itself — into the ownership models, governance frameworks, and institutional architectures that determine how value is generated and allocated in the first place. Instead of allowing the economy to concentrate gains and then attempting to redistribute them, distributive design ensures that the gains flow broadly from the moment of their creation. The dam is built before the water arrives, not after the downstream communities have already been flooded.
Raworth identifies five key domains of distributive design: the distribution of wealth (who owns what), the distribution of enterprise (who controls what), the distribution of technology creation (who designs what), the distribution of knowledge (who knows what), and the distribution of power to create money (who finances what). Each of these domains has direct, specific, and largely unexamined implications for the AI economy.
Consider the distribution of wealth. The AI productivity gains that Segal describes — twenty-fold multipliers, solo founders shipping products, engineers building across disciplines — generate enormous new value. The question is who captures that value. In the current economic structure, the answer is overwhelmingly clear: the owners of the AI platforms capture the largest share, through subscription fees, usage charges, and the data that flows through their systems. The investors who funded those platforms capture the second-largest share, through equity appreciation. The knowledge workers who use the tools capture a meaningful but smaller share, through increased productivity that translates, unevenly and with significant lag, into higher compensation or expanded capability. And the people below the social foundation — the billions who lack access to the tools entirely — capture nothing.
This distribution is not an accident. It is a design feature of the economic system within which AI operates. The ownership structure of AI companies — private equity, venture capital, concentrated shareholding — ensures that the value generated by AI flows to capital rather than to labor or community. The governance structure — corporate boards answerable to shareholders, not to workers, users, or affected communities — ensures that decisions about how AI is deployed, priced, and developed are made in the interests of those who already own the most.
Segal's choice to keep his team is an individual act of distributive intent within a system designed for concentration. It is admirable, and it is fragile. The quarterly pressure he describes — the board conversation, the arithmetic of headcount reduction, the market's reliable reward for cost-cutting over capability-building — is the gravitational pull of a concentrative economic structure. Segal resists that gravity through personal conviction. But personal conviction is not a scalable economic policy. The next leader may not share Segal's values. The next quarter may bring pressures that even Segal cannot resist. The Beaver's ethic is real, but it is one beaver's ethic, and the river does not pause while individual beavers debate their principles.
Distributive design would make Segal's choice the default rather than the exception. What would this look like in the AI economy? Raworth's framework suggests several specific structural interventions.
First, the ownership of AI-generated value could be distributed through models that go beyond conventional corporate shareholding. Platform cooperatives — enterprises owned and governed by the people who use them — represent one such model. An AI coding assistant organized as a platform cooperative would distribute its gains to its developer-users rather than concentrating them among private shareholders. Data trusts — institutional structures that hold and govern data on behalf of the communities that generate it — represent another. The data on which large language models are trained was produced by billions of people, writing and coding and creating across the open internet, and the value generated from that data flows entirely to the companies that harvested it. A data trust would ensure that some portion of the value returns to the communities whose collective labor made it possible.
In her 2017 World Economic Forum article, Raworth posed the question directly: "Technology: who will own the robots, and why should it be that way? Given that much basic research underlying automation and digitization has been publicly funded, should a share of the rewards not return to the public purse?" The question is more urgent in 2026 than it was when Raworth asked it. The AI models that generate the productivity gains Segal celebrates were built on publicly funded research — decades of government investment in computing, mathematics, linguistics, and cognitive science, conducted at public universities and government laboratories, funded by taxpayers who will never see a return on their investment in the form of equity appreciation. The private capture of publicly funded research is a specific, measurable form of concentrative design, and distributive design would address it directly: through public equity stakes in AI companies built on publicly funded research, through licensing frameworks that require returns to the public institutions that produced the foundational knowledge, or through governance structures that give public representatives a voice in the deployment decisions of companies whose capabilities rest on public investment.
Second, the governance of AI deployment could be distributed beyond the boardroom. Stakeholder governance — the inclusion of workers, users, affected communities, and environmental representatives in the decision-making structures of AI companies — is a distributive design principle that Raworth has advocated across industries. Applied to AI, it would mean that the engineer in Trivandrum has a voice not merely in what she builds with the tool but in how the tool itself is developed, priced, and deployed. It would mean that the developer in Lagos has a seat at the table where decisions about language support, pricing tiers, and access policies are made. It would mean that the communities downstream of data centers — the people whose water is consumed for cooling, whose air absorbs the emissions from the power plants, whose land was converted for the facilities — have standing in the governance of the companies whose operations affect their lives.
Third, the distribution of knowledge itself could be structurally redesigned. The current AI economy concentrates knowledge in two ways simultaneously: it concentrates technical knowledge among the engineers who build the models (a shrinking elite with enormous leverage), and it concentrates the practical knowledge of how to use AI tools effectively among the knowledge workers who have the infrastructure, education, and connectivity to adopt them. Distributive knowledge design would ensure that both forms of knowledge — the technical and the practical — flow broadly rather than narrowly. Open-source AI models are one mechanism. Public education in AI literacy is another. Community-based training programs, modeled on the agricultural extension services that distributed farming knowledge in the twentieth century, could distribute AI capability in the twenty-first.
None of these interventions is utopian. Platform cooperatives exist and operate profitably in multiple industries. Data trusts are being piloted in several countries. Stakeholder governance is practiced, in various forms, throughout Northern Europe. Public equity stakes in companies built on public research have historical precedent in multiple sectors. The mechanisms are available. What is lacking is the political will to deploy them in the fastest-growing sector of the economy — a sector whose lobbying power, cultural prestige, and ideological confidence make it exceptionally resistant to structural redesign.
Raworth's insight is that this resistance is not merely political. It is conceptual. The growth-addicted economic framework that governs AI deployment cannot conceive of distribution as a design principle because it conceives of distribution as a consequence — something that happens after the economy has generated its output, to be managed through policy rather than built into structure. The doughnut reverses this: distribution is not a corrective applied to an economy that has already concentrated its gains. It is a feature of an economy designed from the start to ensure that gains flow broadly.
The Beaver's dam, in Segal's telling, creates a pool behind it — a habitat in which hundreds of species flourish. But in the current economic structure, the pool is privately owned. The Beaver built it, and the Beaver decides who drinks from it, on what terms, at what price. Distributive design would make the pool a commons — a shared resource, governed collectively, maintained by the community that depends on it, with the Beaver's building skills honored and compensated but not converted into ownership of the water itself.
This is not an abstract philosophical preference. It is an institutional design challenge with specific, implementable solutions. The doughnut does not demand the abolition of private enterprise. It demands that private enterprise operate within structures that distribute its gains broadly enough to raise the social foundation and constrain its throughput sufficiently to respect the ecological ceiling. Applied to AI, this means ownership models, governance frameworks, and knowledge-distribution systems that ensure the most powerful amplifier in history amplifies thriving rather than concentration.
The choice Segal made — keep the team, grow the top line, invest in capability — is the right choice. The question Raworth poses is why it must be a choice at all. An economy designed for distribution would make it the structural outcome rather than the heroic exception. An economy designed for concentration makes it a quarterly gamble, dependent on the values of whoever happens to be sitting in the leader's chair.
The dam must be built into the architecture, not balanced on the conscience of individual builders.
One of the most suggestive concepts in *The Orange Pill* has implications its author does not fully trace. Segal calls it ascending friction — the principle that every significant technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor. Assembly language forced programmers to manage memory; compilers abstracted that away, freeing programmers to think about architecture. Frameworks abstracted architecture; cloud infrastructure abstracted servers. At each stage, the practitioners who lost the lower friction gained access to problems they could not previously reach. The difficulty did not vanish. It climbed.
Raworth's framework reveals a dimension of this principle that is invisible from inside the builder's fishbowl: the ascending friction is not merely cognitive. It is material. The work at the higher level is physically different from the work at the lower level — different in its energy requirements, its resource demands, its ecological footprint. And this material difference has profound implications for the doughnut.
Consider the concrete case Segal describes. Before Claude Code, his engineers spent roughly half their time on what one of them called "plumbing" — dependency management, configuration files, debugging syntax errors, the mechanical connective tissue between the components they actually cared about. This plumbing work was cognitively demanding in a specific way: it required sustained attention to detail, tolerance for repetitive frustration, and the kind of procedural thinking that computers are exceptionally good at. It was also, in material terms, relatively low-impact: a human being staring at a screen, typing, thinking, consuming roughly the same hundred watts of metabolic energy that any office worker consumes.
When Claude Code took over the plumbing, the engineers' time was freed for higher-level work: product strategy, architectural judgment, user experience design, the question of what should be built and for whom. This higher-level work is cognitively different — it requires integration across domains, taste, ethical judgment, the capacity to imagine what does not yet exist. It is the work that Segal identifies as the new premium in the AI economy.
In material terms, however, its demands are essentially unchanged. Judgment, care, ethical reasoning, and creative direction are activities that happen in human brains. They do not require additional material throughput beyond what the human organism already consumes. A product strategist thinking about whether a tool should exist does not consume more energy, water, or rare earth minerals than a programmer debugging a configuration file. The cognitive work has ascended. The material footprint has not ascended with it.
This asymmetry — higher cognitive demands, stable or reduced material demands — is the regenerative potential of ascending friction, and it connects directly to Raworth's sixth principle: create to regenerate.
Raworth distinguishes between degenerative and regenerative economic design. A degenerative economy takes materials from the Earth, makes them into products, uses the products briefly, and discards them as waste — the linear "take, make, use, lose" model that has dominated industrial economics for two centuries. A regenerative economy, by contrast, designs its material flows in cycles rather than lines — restoring, renewing, and replenishing the resources and ecosystems it draws upon.
The AI economy, as currently structured, is degenerative at the infrastructure level. The data centers, the devices, the semiconductor supply chains — these operate on a take-make-use-lose model that is ecologically destructive at every stage, as the previous chapter documented. But the human activity that AI enables — the ascending friction, the freed cognitive capacity, the shift from mechanical execution to judgment and care — has inherently regenerative characteristics. Care work builds social capital without consuming material resources. Judgment improves decision quality without increasing throughput. Creative direction channels productive capacity toward meeting needs rather than generating waste.
The regenerative potential of ascending friction is conditional. It depends entirely on where the freed human energy goes.
The Berkeley researchers documented where it actually goes in the current economy: into more work. More tasks, longer hours, blurred boundaries between work and rest. The freed capacity was captured by the growth-addicted system and converted into additional throughput — more production, more output, more stuff. The ascending friction produced higher-level cognitive work, but the economic system demanded more of it, more urgently, without pause, until the workers were more exhausted than before the friction was removed.
This is the degenerative capture of regenerative potential. The freed energy, which could have been directed toward care, community, ecological stewardship, and the cultivation of human capability, was instead absorbed by the growth logic and converted into more production. The ascending friction generated a surplus of human cognitive capacity, and the economy consumed the surplus before it could be invested in the regenerative activities the doughnut demands.
Raworth's regenerative design principle asks a specific question of this dynamic: how do we design economic institutions that direct the freed capacity toward regeneration rather than allowing the growth-addicted system to capture it for more throughput? The answer involves institutional structures that the technology industry has not built and the economic discourse has not demanded.
Consider what a regenerative direction of ascending friction would look like in practice. An organization that captures AI productivity gains and uses them to reduce working hours rather than increase output. A four-day working week made possible by AI-augmented productivity, with the freed day available for care, community, rest, or ecological stewardship. The same total output, produced in less time, with the surplus time returned to the humans rather than consumed by the system.
This is not a fantasy. It is a design choice. The same twenty-fold multiplier that Segal describes in Trivandrum could produce the same output in one-twentieth the time, freeing nineteen-twentieths of the working week for activities outside market production. Or it could produce twenty times the output in the same time, consuming all the freed capacity for additional production. The technology supports either outcome equally. The economic logic determines which one prevails.
In the current logic, the second outcome prevails almost universally. Organizations that achieve AI productivity gains use them to produce more, faster, with fewer people — or with the same people working at higher intensity. The freed capacity is not returned to the humans. It is captured by the system. And the system demands, with the inexorable logic of quarterly reporting and competitive pressure, that the captured capacity be converted into growth.
Raworth would recognize this dynamic as a specific instance of a general pattern she has identified across industries: the systematic undervaluation of care, community, and ecological stewardship by a growth-addicted economy. These activities — raising children, tending relationships, maintaining ecosystems, cultivating the social infrastructure that makes economic activity possible — are precisely the activities that the doughnut's social foundation requires. They are also precisely the activities that the growth economy treats as unproductive, because they do not generate market transactions and therefore do not register in GDP.
The ascending friction thesis suggests that AI is creating the conditions for a massive reallocation of human energy from productive labor to care labor — from making stuff to tending relationships, communities, and ecosystems. But the reallocation will not happen automatically. It will happen only if the economic institutions that govern the allocation of human time are redesigned to value care, community, and stewardship alongside — or above — production.
The historical parallel is instructive. When electricity arrived in factories in the early twentieth century, the immediate response was to use the freed capacity for more production — longer hours, faster lines, continuous operation. The regenerative potential of electrification — less drudgery, more leisure, better working conditions — was captured by the growth logic and converted into throughput. It took decades of labor organizing, legislation, and cultural change to redirect some of the freed capacity toward human well-being: the eight-hour day, the weekend, child labor laws. The dams were built, but they were built after enormous human suffering, by workers who fought for the structures that the economic system would not provide voluntarily.
The AI transition is following the same trajectory. The freed capacity is being captured for throughput. The regenerative potential is being consumed by growth. The workers are more productive and more exhausted. The doughnut's social foundation is not being strengthened by the productivity gains, because the gains are flowing to output rather than to the care, community, and ecological activities that the foundation requires.
The dams, once again, must be built. Not by individual beavers choosing to resist the quarterly pressure — though those choices matter — but by institutional design that redirects the freed capacity toward the regenerative activities the planet and its people need. Working-time legislation that connects productivity gains to reduced hours rather than increased output. Care infrastructure that makes the freed time usable for care rather than for additional gig work. Ecological stewardship programs that employ human judgment — the very capacity ascending friction reveals as most valuable — in the restoration and maintenance of the living systems the economy depends upon.
The ascending friction is real. The cognitive elevation is real. The regenerative potential is real. But potential is not outcome. The outcome depends on design. And the design, in 2026, remains oriented toward extraction rather than regeneration — toward capturing the freed capacity for growth rather than returning it to the humans whose energy produced it, and to the planet whose boundaries constrain what can be produced at all.
On a single day in late February 2026, IBM lost more market capitalization than it had lost on any single day in over twenty-five years. The proximate cause was a blog post — Anthropic's announcement that Claude could modernize COBOL, the programming language that had been IBM's institutional moat for half a century. Tens of billions of dollars in enterprise contracts depended on the fact that COBOL was difficult, that the programmers who understood it were aging and retiring, and that the cost of maintaining legacy systems written in it was high enough to justify enormous annual payments to IBM and its ecosystem of consultants and integrators. Claude threatened to make all of that expertise cheap.
The IBM event was a single data point in what Segal calls the Software Death Cross — the moment when the AI market overtakes the SaaS market in aggregate value, with a trillion dollars of software company valuation evaporating in the first weeks of 2026. Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. The market was repricing an entire industry according to a new theory of value: code had become a commodity, and companies whose value proposition was "we wrote the code" were suddenly vulnerable to any competitor who could describe the same functionality to an AI and receive a working implementation in hours.
Segal reads the Death Cross as a migration of value from code to ecosystem — from the software itself to the data layers, integrations, institutional trust, and workflow patterns that surround it. The companies that survive will be those whose value was always above the code layer. The companies that die will be those that were always just code.
Raworth's framework reads the Death Cross as something more fundamental: a signal that the growth model of one of the economy's most celebrated sectors is encountering its own internal contradictions, and that the contradictions have implications far beyond the software industry.
The SaaS growth model is a specific expression of the growth-addicted economics Raworth describes. It works as follows: build a software product, acquire users, charge them a recurring subscription, expand the product to increase the value of the subscription, acquire more users, raise prices, and compound the process indefinitely. The model depends on two premises: that the software is difficult to replicate (creating a moat around the existing product), and that the market for the software continues to expand (creating the growth the model requires). When AI makes software easy to replicate and the market for any particular implementation saturates, both premises collapse, and the model fails.
The failure is specific to the SaaS model, but the dynamic it reveals is general. Growth-addicted business models depend on artificial scarcity — the difficulty of producing the thing the business sells — combined with expanding demand. When a technology eliminates the scarcity, the model breaks. This is what happened to the music industry when digital reproduction eliminated the scarcity of recorded music. It is what happened to the newspaper industry when the internet eliminated the scarcity of advertising distribution. And it is what is happening to the software industry as AI eliminates the scarcity of code.
In each case, the growth-addicted response was panic, followed by attempts to restore scarcity through legal or technical means (digital rights management, paywalls, proprietary formats), followed by the emergence of new business models that captured value in different ways. The doughnut reading of this pattern is that the panic is the sound of a growth-addicted system encountering a limit, and that the limit is an opportunity — not for more growth in a different form, but for a fundamentally different relationship between economic activity and human thriving.
When code becomes abundant, the premium shifts from the capacity to produce software to the capacity to decide what software should exist. Segal identifies this shift clearly: judgment, taste, creative direction, and the ability to ask the right question become the scarce and therefore valuable resources. The Death Cross is, in this reading, the market discovering that execution was never the real value — that the real value was always in the human capacities that execution obscured.
Raworth would push this insight further. If the real value is in judgment, care, and the capacity to direct production toward genuine needs, then the Death Cross is not merely a repricing event within the growth-addicted framework. It is a potential departure from the framework itself. A company whose value is in its ecosystem — its community of users, its accumulated institutional knowledge, its relationships of trust — is a company whose value is relational rather than extractive. It does not need to grow infinitely. It needs to serve its ecosystem well. The logic of "more users, more features, more revenue" gives way to the logic of "better service, deeper relationships, genuine needs met."
This is, in embryo, what Raworth means by growth-agnostic economics applied to a specific sector. A company oriented toward ecosystem service rather than infinite expansion can be profitable, sustainable, and valuable without growing. It can stabilize at a size that serves its community well, generate sufficient revenue to maintain and improve its service, compensate its workers fairly, and operate within ecological boundaries — without the compulsion to expand that the growth-addicted model demands.
The Death Cross does not automatically produce this outcome. Growth addiction is resilient, and the most likely response of the technology industry to the commodification of code is not growth agnosticism but a migration of growth addiction to a new substrate. The AI infrastructure companies — the model builders, the cloud providers, the chip manufacturers — are already exhibiting the same growth-addicted dynamics that the SaaS companies exhibited before them: exponential revenue targets, winner-take-all market dynamics, massive capital expenditure justified by the promise of future growth. The concentration is shifting, not dissolving. Value is migrating from software companies to AI companies, and the AI companies are governed by the same growth-addicted logic that governed the SaaS companies before them.
Raworth's framework predicts this migration with uncomfortable precision. Growth addiction is a systemic property, not a sectoral one. When one sector encounters the limits of its growth model, the addiction does not dissipate — it finds a new host. The structural logic of the economy — the ownership models, the governance frameworks, the measurement systems, the cultural expectations — remains oriented toward growth, and the newly powerful sector inherits that orientation.
Breaking this cycle requires interventions at the systemic level, not the sectoral one. The Death Cross creates an opening — a moment of vulnerability in which the old model has failed and the new model has not yet calcified — and that opening can be used to introduce structural changes that redirect the emerging AI economy toward the doughnut. Ownership reforms that distribute the gains of AI infrastructure more broadly. Governance reforms that include the voices of workers, users, and affected communities in the decisions that shape AI deployment. Measurement reforms that evaluate AI companies against doughnut metrics — social-foundation advancement and ecological-ceiling respect — rather than growth metrics alone.
The window is narrow. Once the AI infrastructure sector consolidates — once the ownership patterns, governance structures, and growth expectations have hardened into institutional form — the opportunity for structural redesign diminishes sharply. The SaaS industry had a similar window in its early years, when the ownership and governance of software platforms were still fluid. That window closed without significant structural reform, and the result was the concentrated, growth-addicted industry that the Death Cross is now repricing.
The Death Cross is a crack in the fishbowl. Through it, a different economic logic is briefly visible — one in which value is relational rather than extractive, sufficiency replaces growth as the measure of success, and the productive capacity freed by technological abundance is directed toward human thriving rather than corporate expansion. Whether that logic takes hold depends on whether the institutional structures are built to support it during the brief period of fluidity that the Death Cross creates.
The doughnut is the shape of the structure that needs building. The Beaver's dam channeling the river not toward infinite expansion but toward the safe and just space where human needs are met within the means of the living planet. The Death Cross is the moment when the river shifts course. Where it flows next depends on whether the dam is in place when the water arrives.
The developer in Lagos appears in *The Orange Pill* as an emblem of possibility. A woman with intelligence, ambition, and ideas, previously excluded from the building process by lack of institutional access, training, and capital, who now gains, through AI tools, the productive leverage to turn her imagination into economic reality. Segal presents her as evidence that the floor is rising — that the most morally significant feature of the AI moment is the expansion of who gets to build.
The doughnut looks at the same woman and sees twelve dimensions.
Raworth's social foundation is not a single threshold to be crossed but a composite of twelve interconnected conditions, each necessary, none sufficient alone. The developer in Lagos may gain access to a powerful tool. But the tool operates within a life. And the life exists within a system. And the system determines whether the tool translates into durable participation or remains a fragile, individual achievement that can be reversed by any of the eleven conditions the tool does not address.
Map the twelve dimensions against her reality.
Food. Lagos is a city of over twenty million people with food supply chains that are stretched, expensive, and vulnerable to disruption. The developer's ability to use Claude Code does not change the price of rice, the reliability of market supply, or the nutritional quality of what is available. A week of illness from contaminated water or inadequate nutrition removes her from the productive process as effectively as a lack of technical skill ever did, and far more unpredictably.
Water. Lagos Water Corporation serves a fraction of the city's needs. Most residents depend on boreholes, water vendors, or sachet water of variable quality. Access to clean water is a daily negotiation, not a given. The developer's AI-augmented capability does not produce clean water. It does not reduce the time she spends securing it or the health consequences of failing to do so.
Health. Nigeria's healthcare system is under-resourced, unevenly distributed, and largely out-of-pocket. A medical emergency can consume months of income. The developer faces health risks — malaria, waterborne illness, the chronic conditions exacerbated by air pollution and inadequate nutrition — that her counterpart in San Francisco does not. AI productivity gains are irrelevant to a person hospitalized by a preventable disease.
Education. The developer may be self-taught, well-read, resourceful — the autodidact's path that technology democratization celebrates. But her children need formal education, and the quality of public education in Lagos varies enormously by neighborhood, income, and luck. The social foundation includes education not merely as a personal capacity but as an intergenerational investment. A developer whose children cannot access quality schooling is building on a foundation that narrows with each generation, regardless of how wide her personal capabilities have become.
Income. AI tools can expand what the developer produces. Whether they expand what she earns depends on market structures she does not control — the global pricing of software development, the competitive dynamics of a market in which AI-augmented developers in Bangalore, Bucharest, and Buenos Aires are building similar products simultaneously, the platform fees charged by the infrastructure providers on whose systems her work depends. The democratization of capability is also the democratization of competition, and in a market with near-zero production costs, the pricing pressure is relentless.
Energy. Lagos experiences frequent power outages. Generators are expensive to operate and maintain. The developer's ability to work depends on electricity, and the electricity supply is unreliable. A power cut during a critical deployment is not a minor inconvenience — it is a competitive disadvantage that her counterparts with reliable grid access do not face. The AI tools that amplify her capability also amplify her dependence on infrastructure that does not dependably exist.
Housing. The cost of housing in Lagos has risen dramatically. Adequate housing — with space to work, reliable electricity, and internet connectivity — is priced beyond the reach of many of the people the democratization narrative is supposed to serve. The developer who works from a shared room with intermittent power and bandwidth constraints faces material barriers to productive engagement that no software tool can overcome.
Networks. Access to professional networks — mentors, collaborators, investors, customers — remains geographically and institutionally stratified. The developer in Lagos can reach the global internet, but reaching the global internet is not the same as reaching the networks of trust, recommendation, and opportunity through which economic participation actually flows. Venture capital, enterprise contracts, partnership opportunities — these flow through networks that are overwhelmingly concentrated in a small number of cities, institutions, and social circles. AI does not redistribute social capital.
Political voice. The developer has no input into the policies that govern the AI platforms she depends on, the trade agreements that determine her access to global markets, the regulatory frameworks that shape the competitive landscape, or the infrastructure investments that determine whether her neighborhood has reliable power and internet. Her productive capacity has expanded. Her political capacity has not. She is a more capable economic actor operating within a political system that does not represent her interests.
Social equity. The developer operates within social structures shaped by class, ethnicity, religion, and geography. AI tools do not dissolve these structures. A developer from a marginalized community faces barriers — discrimination in markets, exclusion from networks, reduced access to capital — that AI capability does not address and may, by making other dimensions more visible, render more acute.
Gender equality. If the developer is a woman — and Segal's archetype is explicitly female — she faces additional barriers that the technology does not touch. The gender gap in technology is a global phenomenon, but it is particularly pronounced in contexts where women's economic participation is constrained by cultural norms, safety concerns, caregiving responsibilities, and explicit discrimination. AI tools that are equally accessible to men and women in principle may be unequally accessible in practice, because access depends on time, space, safety, and social permission that are unequally distributed by gender.
Jobs. The doughnut's inclusion of "jobs" in the social foundation refers not merely to employment but to meaningful, dignified, fairly compensated work. A developer using AI tools on a gig basis, with no employment protections, no benefits, no sick leave, no retirement savings, and no bargaining power, has a job in the narrowest sense but not in the sense the social foundation requires. The gig economy that AI enables — fragmented, competitive, with workers bearing all risk and platforms capturing all margin — is not the meaningful work the doughnut demands.
Taken together, these twelve dimensions compose a picture that is more complex, more sobering, and more honest than the democratization narrative typically acknowledges. The developer in Lagos has gained one thing: productive capability. She remains below the social foundation in multiple other dimensions, and the gains in capability do not automatically translate into advances on those dimensions. They may, under favorable circumstances, contribute to advances — higher income can improve access to food, healthcare, and housing. But the translation is not automatic, not reliable, and not assured. It depends on institutions, markets, infrastructure, and social structures that the AI tool does not influence.
The doughnut does not diminish the value of what the developer has gained. Access to powerful productive tools is genuinely valuable, and its extension to people previously excluded is genuinely significant. Raworth has never argued against expanding capability. Her argument is against mistaking capability expansion for thriving — against treating one dimension as a proxy for twelve, and celebrating the proxy while neglecting the conditions that determine whether it translates into a life of genuine dignity and security.
The most revealing test of the democratization narrative is not whether the developer in Lagos can build a product. It is whether she can sustain a livelihood. Sustainability requires not a single successful project but the durable conditions — health, security, infrastructure, governance, community — within which a series of projects can be undertaken over a working life. A single product built with AI tools is an event. A sustainable livelihood built on AI-augmented capability is a trajectory, and the trajectory depends on the social foundation being in place across all its dimensions.
The history of development is filled with events mistaken for trajectories. The One Laptop Per Child initiative was an event — a distribution of devices — that was celebrated as a trajectory — a transformation of education. The microfinance movement was a series of events — small loans to individual entrepreneurs — that was celebrated as a trajectory — the end of poverty through entrepreneurship. In each case, the event was real and the capability it provided was genuine. The trajectory failed because the surrounding conditions — the eleven other dimensions of the social foundation — were not addressed.
AI democratization risks the same error. The tool is real. The capability is genuine. The developer in Lagos can build things she could not build before. But the twelve-year-old in Segal's story who asks "What am I for?" deserves an answer that encompasses not merely what she can produce but whether the world she inhabits will support her in producing it — whether the food is adequate, the water is clean, the healthcare is accessible, the education is available, the power stays on, the networks are open, the political system represents her, and the society treats her with equity regardless of her gender, ethnicity, or origin.
The doughnut is not a ceiling on aspiration. It is a floor beneath it. And the floor must be built across all twelve dimensions simultaneously, or the aspiration, however powerful the tools that serve it, stands on ground that can give way at any point, along any dimension, without warning.
The iPhone in your pocket weighs approximately six ounces. It presents virtually no seams, no exposed fasteners, no tactile resistance of any kind. Its surface is so featureless that it could have been grown rather than manufactured. It is, in the language of industrial design, seamless — a word the technology industry uses as an unqualified compliment.
Byung-Chul Han, the philosopher whose critique of smoothness runs through several chapters of The Orange Pill, argues that this seamlessness is the signature aesthetic of our era — an era that has decided friction is always a cost and never a benefit. Segal takes Han's diagnosis seriously, spending three chapters examining the loss that accompanies the removal of productive struggle: the geological layers of understanding that accumulate through debugging, the embodied knowledge that comes from hands-on engagement with resistant material, the depth that can only be built through difficulty.
Raworth's framework reveals a dimension of smoothness that neither Han nor Segal fully articulates, because neither is an economist in the sense that the ecological ceiling demands: smoothness is not merely a cognitive or aesthetic phenomenon. It is a material one. And its material consequences press directly against the planetary boundaries that define the outer ring of the doughnut.
The logic runs as follows. Smoothness, in economic terms, is the reduction of friction in production and consumption. When friction is reduced, the cost of producing and consuming drops. When the cost drops, throughput increases — more goods produced, more services delivered, more transactions completed, more stuff moving through the economic system per unit of time. This is what growth-addicted economics celebrates: efficiency gains that increase throughput, which increases GDP, which is counted as progress.
But throughput is precisely what the ecological ceiling constrains. Every unit of economic throughput — every product manufactured, every service delivered, every transaction completed — draws on material and energy flows from the living world. Raw materials extracted. Energy consumed. Waste generated. Carbon emitted. Water used. Land converted. The ecological ceiling is, in physical terms, a constraint on the total throughput the planet can absorb without destabilizing the biosphere.
Friction, in this reading, is not merely a cognitive nuisance or an aesthetic deficiency. Friction is the natural speed limit of an economy embedded in a finite planet. It slows production to rates the ecosystem can absorb. It creates pauses in which direction can be evaluated and waste can be processed. It introduces resistance that prevents the system from accelerating past the boundaries the living world imposes.
When AI removes friction from the production of software, the immediate effect is the one Segal celebrates: a twenty-fold productivity multiplier, an explosion of creative capability, the collapse of the imagination-to-artifact ratio. The secondary effect, invisible from inside the builder's fishbowl but glaringly visible from the doughnut's perspective, is a twenty-fold increase in the throughput of the economic system. Twenty times as many products competing for attention, requiring infrastructure, consuming energy in their deployment, generating data that requires storage and processing, feeding the cycle of production and consumption that the ecological ceiling constrains.
The smooth economy is, from the doughnut's perspective, an economy with its speed governor removed. It can accelerate without resistance, and it does — because the economic incentive structure rewards speed, and the removal of friction is universally coded as progress, and no one in the system has the authority or the inclination to ask whether the acceleration is compatible with the continued functioning of the biosphere.
Han sees the loss of depth. Raworth sees the acceleration of ecological overshoot. Both diagnoses are correct, and together they compose a picture more alarming than either presents alone. The smooth economy is simultaneously shallower in its human dimension — producing practitioners who can execute without understanding — and faster in its material dimension — pushing throughput past the boundaries the planet can sustain. The cognitive loss and the ecological loss are connected: the same removal of friction that prevents deep understanding also prevents the slow, reflective, ecologically bounded pace of production that a finite planet requires.
The Berkeley data that Segal examines in The Orange Pill — the finding that AI intensifies work, colonizes pauses, and drives workers to fill every available moment with productive activity — is, through this lens, a measurement of throughput acceleration at the human level. The workers are not merely working harder. They are producing more — more tasks, more output, more transactions — and each unit of additional output has a material footprint, however small, that aggregates across millions of workers into a measurable increase in economic throughput.
The structured AI Practice that the Berkeley researchers propose — sequenced workflows, protected pauses, deliberate disengagement — is a friction intervention in the doughnut sense. It reintroduces resistance into a system from which resistance has been removed. But it addresses only the cognitive dimension: the human capacity for sustained attention, the need for rest, the risk of burnout. It does not address the material dimension: the throughput increase that AI-augmented productivity generates, the ecological footprint of the additional output, the planetary consequences of an economy that has learned to produce more, faster, without pause.
A doughnut-compatible response to AI-driven smoothness would address both dimensions simultaneously. It would reintroduce cognitive friction where depth requires it — the structured pauses, the deliberate engagement with difficulty, the protection of the slow thinking that builds understanding. And it would reintroduce material friction where the ecological ceiling requires it — throughput constraints that limit total production to levels the planet can absorb, carbon budgets that make the energy cost of computation visible and accountable, resource accounting that connects each unit of AI-enabled production to its full ecological footprint.
This is not a demand to slow AI down for the sake of slowness. It is a demand to match the speed of the economic system to the speed of the ecological systems it depends on. The planet processes carbon at a certain rate. Aquifers recharge at a certain rate. Ecosystems regenerate at a certain rate. These rates are not negotiable. They are features of a four-and-a-half-billion-year-old biophysical system that does not accelerate because the economy has learned to. An economy that outpaces these rates — that produces faster than the planet can absorb — is not efficient. It is accumulating a debt that the living world will eventually collect, with interest rates set by physics rather than by central banks.
The aesthetics of the smooth is, in this framing, the aesthetics of ecological denial — the celebration of frictionless production in a world where friction is the only thing standing between the economy and the boundaries it is already transgressing. The doughnut does not demand ugliness, inefficiency, or the valorization of struggle for its own sake. It demands that the smooth be evaluated not merely by what it produces but by what it consumes — not merely by the capability it enables but by the material flows it sets in motion.
AI is the most powerful smoothing technology in history. It removes friction from production more thoroughly, more rapidly, and across more domains than any previous technology. The capability this releases is extraordinary. The throughput it enables is unsustainable. Holding both of these truths simultaneously — and building the institutional structures that preserve the capability while constraining the throughput — is the design challenge of the century. It is the doughnut applied to the most powerful amplifier ever built. And the design must begin now, before the smoothness becomes so embedded in the economic infrastructure that reintroducing friction becomes politically, technically, and culturally impossible.
The beaver builds the dam not to stop the river but to create the conditions for life. The smooth economy is a river without a dam — fast, powerful, accelerating, and heading toward a cascade that no amount of downstream engineering can reverse. The dam must be built upstream, at the point where the friction still exists, before the current carries it away entirely.
For the entire history of economics as a discipline, one question has dominated all others: How do we produce more? The mercantilists asked how to accumulate more gold. The physiocrats asked how to extract more from the land. Adam Smith asked how to make the division of labor more efficient. The marginalists asked how to allocate scarce resources to maximize output. The Keynesians asked how to stimulate demand to absorb more production. The growth economists asked how to increase GDP year after year, without interruption, forever. The question changed its form across centuries, but its essence remained constant: More. The assumption was that more was better, that the problem of economics was a problem of insufficiency, and that the solution was expansion.
Kate Raworth's doughnut replaces this question with one that is, in historical terms, revolutionary: How much is enough?
The word "enough" has an almost subversive quality in economic discourse. It implies a limit — not an external limit imposed by scarcity, but an internal limit chosen by judgment. Enough food means the amount that nourishes without excess. Enough housing means shelter that is adequate, safe, and dignified. Enough income means the amount that enables participation in economic and social life without deprivation. In each case, "enough" is a bounded quantity — a quantity that has a floor (below which deprivation exists) and a ceiling (above which further accumulation adds nothing to well-being and begins to impose costs on others or on the planet).
The doughnut is the visual expression of enough. The social foundation defines the floor: no one should have less than enough to participate in life with dignity. The ecological ceiling defines the upper bound: humanity's total material throughput must not exceed what the planet can sustain. Between these two boundaries lies the safe and just space — the space of enough. Not too little. Not too much. The right amount.
AI makes the question of enough more urgent and more difficult than at any previous moment in economic history, for a reason that is structural rather than accidental: AI has, for the first time, made the capacity to produce effectively unlimited.
This is not hyperbole. When the cost of producing software approaches zero, when a single person can build in a weekend what a team of twenty built in a year, when the imagination-to-artifact ratio collapses to the width of a conversation, the binding constraint on production is no longer capability. It is no longer skill. It is no longer capital, at least not for many categories of production. The binding constraint is something the growth-addicted economic framework has no vocabulary for: the decision about what is worth producing at all.
In an economy of scarcity — the economy that has existed for all of human history until approximately now — the question of what to produce was partly answered by the difficulty of production itself. You could not build everything you imagined, so you were forced to choose. The friction of implementation was a filter. It selected for projects that someone cared about enough to invest the enormous effort required to realize them. The friction was wasteful, exclusionary, and often unjust in what it filtered out. But it performed a function: it imposed a discipline of selection on an economy that would otherwise have produced indiscriminately.
AI removes that filter. When production is nearly frictionless, the discipline of selection must come from somewhere else — from human judgment about what deserves to exist, from institutional structures that reward need-meeting over throughput-maximizing, from cultural values that distinguish between production that serves life and production that merely fills the market.
The doughnut provides the framework for that discipline. Enough is the amount that lifts everyone above the social foundation. Enough is the amount that does not push beyond the ecological ceiling. Production beyond that point is not merely unnecessary — it is actively harmful, because it consumes ecological space without producing additional well-being, widening the gap between what the economy extracts from the planet and what the planet can sustain.
This principle, applied to the AI economy, generates implications that are immediate and specific.
First, the productivity gains that AI enables should be evaluated not by their magnitude but by their direction. A twenty-fold multiplier directed at building diagnostic tools for underserved communities advances the social foundation with minimal ecological cost. The same multiplier directed at building the fifteenth competing project management application for affluent knowledge workers produces market activity without advancing the doughnut. Growth-addicted economics cannot distinguish between these two applications — both register as production, both contribute to GDP, both are counted as economic success. The doughnut distinguishes between them precisely, because the doughnut measures direction, not magnitude.
Second, the human capacity freed by frictionless production should be directed toward the activities the social foundation requires and the growth economy undervalues: care, education, community building, ecological stewardship. These are activities with high social value and low ecological footprint — the opposite of the high-throughput production that the growth economy rewards. An economics of enough would recognize care work as economically essential and compensate it accordingly. It would treat ecological stewardship not as a cost to be minimized but as a productive activity that maintains the biophysical infrastructure upon which all other economic activity depends.
Third, the governance of AI deployment should be organized around the principle of sufficiency rather than maximization. This does not mean limiting AI capability — the technology will continue to advance regardless of governance frameworks. It means directing the application of that capability toward the doughnut's safe and just space. Investment criteria that weight social-foundation impact. Procurement policies that prioritize need-meeting over cost-cutting. Regulatory frameworks that account for the full ecological footprint of AI operations, including the energy, water, and material costs that the current accounting systematically excludes.
Fourth, the ownership of AI-generated value should be distributed broadly enough to raise the social foundation, through the mechanisms the previous chapters have outlined: platform cooperatives, data trusts, stakeholder governance, public equity in publicly funded research. An economics of enough is not an economics of poverty. It is an economics of shared prosperity within boundaries — an economy in which the extraordinary productive capacity AI provides is used to ensure that everyone has enough, rather than to ensure that some have more than they could ever use while others remain below the foundation.
The deepest challenge of an economics of enough is cultural rather than institutional. Growth addiction is not merely an economic phenomenon. It is a cultural one — embedded in the stories a society tells about success, progress, and the good life. The story that has dominated Western culture for three centuries is a story of expansion: more production, more consumption, more wealth, more stuff, more growth, without limit or satiation. This story is so deeply embedded that questioning it feels not like an intellectual exercise but like a betrayal of aspiration itself.
The doughnut tells a different story. It is a story of arrival rather than pursuit — a story in which the goal is not to accumulate more but to ensure that everyone has enough. In this story, the measure of economic success is not the height of the tallest tower but the solidity of the lowest floor. The measure of technological achievement is not the magnitude of the productivity multiplier but the number of people it lifts above the social foundation without breaching the ecological ceiling. The measure of a life well-lived is not how much was produced but whether what was produced served the well-being of people and planet.
AI makes this story possible in a way it has never been possible before. The productive capacity now exists to meet the needs of every person on the planet. The diagnostic tools, the educational resources, the agricultural technologies, the healthcare applications, the infrastructure solutions — all of this can be built, at near-zero cost, by people equipped with AI tools and directed by judgment about what genuinely needs building. The capacity is there. What is missing is the economic logic that would direct it toward enough rather than toward more.
Raworth has called the doughnut a compass, not a map. It tells you whether you are heading in the right direction without prescribing every step of the journey. Applied to AI, the compass reading is clear: the technology is extraordinary, the productive capacity is unprecedented, and the direction is wrong. The amplifier is amplifying the signal of growth-addicted economics — more production, more throughput, more ecological overshoot, more concentration of gains among those who already have the most. The compass points toward a different signal: enough production to meet everyone's needs, within the boundaries the planet sets, distributed broadly enough that no one falls below the floor.
The twelve-year-old in Segal's The Orange Pill who asks "What am I for?" deserves an economy that answers her. Not with the growth story — you are for producing, consuming, competing, accumulating — but with the doughnut story: you are for the work of building a world in which everyone has enough. Your tools are more powerful than any in human history. Your judgment about how to use them is the most valuable thing you possess. And the measure of your contribution will not be how much you produced but whether what you produced helped humanity thrive within the boundaries of the living planet that sustains us all.
The amplifier awaits its signal. The doughnut is the shape of the signal it should carry. And the choice — always the choice — belongs to the humans who hold the microphone.
The number I cannot get out of my head is twelve.
Not twenty — the productivity multiplier I describe in The Orange Pill. Not thirteen point eight billion — the age of the river. Not a trillion — the dollars that evaporated in the Death Cross. Twelve. The number of dimensions in Kate Raworth's social foundation. The number of things a human being needs — simultaneously, not sequentially — to participate in life with dignity.
I keep counting them against the developer in Lagos, the woman I offered as proof that the floor is rising. Food. Water. Health. Education. Income. Energy. Housing. Networks. Political voice. Social equity. Gender equality. Jobs. I gave her one thing — productive capability — and called it democratization. Raworth's framework made me see the eleven things I did not give her, and the honesty of that accounting is the kind that stays with you because it does not accuse. It simply measures.
I wrote The Orange Pill from inside the builder's fishbowl, and I said so. I identified the glass. I described the water. I pressed my face against the surface and tried to see beyond the refraction. But the refraction I could not correct for — the one so pervasive that I breathed it without noticing — was economic. I measured the AI revolution in productivity multipliers and adoption curves and revenue milestones because those are the instruments my fishbowl contains. Raworth handed me different instruments — a compass with two rings instead of one arrow — and the reading changed everything.
The compass does not say stop building. It says check your direction.
I think about my team in Trivandrum, and the choice I made to keep them. I wrote about that choice as though it were a moral stand — the Beaver's ethic, building for the ecosystem. Raworth showed me the structural fragility of that position: one leader's conscience, one quarter's decision, reversible at the next board meeting if the numbers do not cooperate. The choice was real. The architecture that would make it durable does not yet exist. I built one dam with my own sticks and mud. The river needs dams built into its banks.
The ecological cost is the hardest piece. I spend my nights building with Claude, thrilled by what emerges, and I had not paused to calculate the material footprint of those conversations — the energy, the water, the minerals in the servers, the carbon in the grid. Not because I was hiding from the numbers but because the builder's fishbowl does not contain them. They are outside the glass. Raworth's doughnut put them inside, and now I cannot unsee them, any more than I could unsee the orange pill the first time the machine met me in my own language.
There is a word I have been circling for this entire book and could never quite land on. Raworth gave it to me.
Enough. Not the word of resignation. Not the word of someone settling for less. The word of someone who has finally understood what the goal actually is: a world where everyone has what they need, within boundaries that sustain the living systems we all depend on. Enough is the most ambitious word in the language, because meeting it — genuinely, for everyone, within boundaries — would require more creativity, more judgment, more courage, and more of the very capability that AI provides than any growth target ever demanded.
The amplifier is extraordinary. I believe that more than ever. The question is not whether it works — it works with terrifying power. The question is what signal I feed it, and whether that signal points toward the doughnut or away from it. Whether my building serves the twelve dimensions or only the one I happen to be good at.
I am still in the river. Still building. But I am building with a compass I did not have before. Two rings. A safe and just space between them. And the recognition — uncomfortable, necessary, permanent — that the floor matters more than the ceiling, and that twelve dimensions is a harder count than twenty-fold.
In The Orange Pill, Edo Segal described AI as an amplifier — a tool that carries whatever signal it receives. Kate Raworth's doughnut economics reveals what that signal currently is: growth without direction, throughput without boundaries, productivity gains that flow to the already-capable while three billion people remain below the floor of basic human dignity. This book uses Raworth's two-ringed compass to examine AI's celebrated disruptions — the twenty-fold multiplier, the Death Cross, the democratization of capability — and asks whether they advance human thriving or merely accelerate an economy already transgressing the planetary boundaries that sustain all life.
The chapters trace the AI revolution across the doughnut's twelve dimensions of social foundation and nine planetary boundaries, confronting the material costs the builder's fishbowl conceals and the distributive failures the adoption curves cannot measure. The result is not an argument against AI but a redesign brief — a framework for directing the most powerful amplifier in history toward the only goal that makes long-term sense: enough for everyone, within the means of a living planet.

A reading-companion catalog of the 21 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Kate Raworth — On AI uses as stepping stones for thinking through the AI revolution.