By Edo Segal
The number that broke my argument open was not about productivity.
It was about capture. Who keeps what. The difference between building something and owning what you built. I had spent months writing The Orange Pill celebrating the collapse of the imagination-to-artifact ratio — the fact that anyone, anywhere, with an idea and a subscription could now build. A developer in Lagos. An engineer in Trivandrum. Me, on a ten-hour flight, writing a book I could not have written alone.
The tools were the same. The capability was converging. That was the story I told, and I believed it.
Then I sat with Branko Milanovic's work, and the story cracked.
Not because the capability story was wrong. It was right. The floor of who gets to build has genuinely risen. But Milanovic measures something I was not measuring — something the entire technology discourse is not measuring. He measures where the value goes after the building is done. Who captures the surplus. What share flows to the builder and what share flows to the infrastructure she builds on top of — the cloud providers, the model companies, the platforms, the payment processors, the shareholders in jurisdictions thousands of miles from her desk.
The gradient is steep. The same tools, the same output, dramatically different returns — determined not by talent but by the institutional architecture surrounding the builder. That is what Milanovic has spent forty years documenting across every major economic transition of the modern era. His elephant curve — the single most cited chart in contemporary political economy — showed the world that globalization's aggregate gains concealed a distributional reality that eventually reshaped the politics of every democracy it touched.
AI is producing its own elephant. Its own valley. Its own trunk where the gains concentrate. And the curve is forming faster than any previous distribution in economic history.
I am a builder. I celebrate building. Nothing in this book changes that. What Milanovic's lens changes is the question I ask after the building is done. Not just what did we create but who captured what we created — and whether the institutional dams exist to ensure the answer is not just the people who were already at the top.
This is the lens the AI discourse is missing. Not the capability lens. The capture lens. The one that asks whether democratized tools produce democratized outcomes, or whether they reproduce the same gradients through new infrastructure.
Milanovic drew the curve for globalization after the damage was done. For AI, there is still time to draw it first.
— Edo Segal × Opus 4.6
Branko Milanovic (1953–present) is a Serbian-American economist widely regarded as the world's leading scholar of global income inequality. Born in Belgrade, Yugoslavia, he studied economics at the University of Belgrade before earning his doctorate there and spending nearly two decades as lead economist in the World Bank's research department, where he pioneered the use of household survey data to measure inequality across nations. He is best known for the "elephant curve" (developed with Christoph Lakner), which plots income growth by global percentile during the era of globalization and became one of the most influential charts in modern political economy. His major works include Worlds Apart: Measuring International and Global Inequality (2005), The Haves and the Have-Nots: A Brief and Idiosyncratic History of Global Inequality (2011), Global Inequality: A New Approach for the Age of Globalization (2016), and Visions of Inequality: From the French Revolution to the End of the Cold War (2023). Key concepts include Kuznets waves (recurring cycles of rising and falling inequality driven by technological and institutional change), homoploutia (the condition in which the same individuals are simultaneously rich in both capital and labor income), and the citizenship premium (the finding that national location explains more income variation than any individual characteristic). Currently a Presidential Professor at the City University of New York's Graduate Center and a senior scholar at the Luxembourg Income Study, Milanovic continues to shape global debate on inequality, redistribution, and the distributional consequences of technological change.
Every technological revolution produces a distribution curve. Not the kind that describes what the technology can do — its speed, its capability, its dazzling expansion of the possible — but the kind that describes who captures what the technology produces, and by how much, and at whose expense. The capability curve is what the builders celebrate. The distribution curve is what determines whether a civilization holds together or fractures under the weight of its own progress. Branko Milanovic has spent four decades measuring distribution curves, and the pattern they reveal is among the most consistent in economic history: aggregate gains from technological change tell you almost nothing about who actually benefits. The aggregate is where the celebration lives. The distribution is where the consequences live. And the consequences, not the celebration, are what shape the political future.
The industrial revolution's distribution curve was catastrophic for its first sixty years. British GDP per capita rose substantially between 1780 and 1840 — the aggregate numbers were impressive by any standard. But real wages for industrial workers did not meaningfully rise until the 1840s or later, depending on the measure and the region. The handloom weavers of Lancashire watched their incomes collapse by more than half as power looms undercut their market position. Children were pressed into mill labor. Urban populations crowded into conditions that produced epidemics and early death. The factory owners, the financiers, and the landlords who owned the property on which the new mills were built captured the overwhelming majority of the productivity gains. The gains were real. The distribution was ruinous. And the distribution persisted not because the technology demanded it but because the institutional architecture of early industrial Britain — weak labor protections, no progressive taxation, no public education system, no social insurance — was designed to protect the interests of capital owners, not to share the surplus with the workers whose labor produced it.
This was not a failure of the spinning jenny. It was a failure of institutions. The technology eventually produced broad-based prosperity — but only after generations of institutional construction that the technology's early beneficiaries neither anticipated nor welcomed. Labor movements fought for the right to organize. Legislation prohibited child labor. Public education expanded human capital. Progressive taxation redistributed some of the gains. The institutions changed. The technology did not. And the institutions, not the technology, determined whether the distribution curve bent toward shared prosperity or concentrated wealth.
Milanovic documented this pattern across multiple technological transitions and multiple centuries. The consistency is uncomfortable for those who prefer to believe that technological progress automatically improves conditions for everyone. It does not. It improves aggregate conditions — total output, average productivity, GDP per capita — with reliable predictability. But aggregate improvements are compatible with distributions that range from broadly shared prosperity to concentrated wealth with mass immiseration at the base. The aggregate tells you the size of the pie. The distribution tells you who eats. And the distribution is always, without exception, determined by institutions.
The AI transition is producing aggregate gains that are genuinely impressive. Edo Segal, in The Orange Pill, describes a twenty-fold productivity multiplier observed during a training session with his engineering team in Trivandrum, India. Products built in thirty days that would previously have required six months. Solo builders generating revenue without teams or institutional backing. An engineer who had never written frontend code building a complete user-facing feature in two days. These are real gains, and Milanovic's framework does not dispute them. His framework asks a different question — the question that the aggregate numbers conceal and that the technology discourse consistently evades.
Gains for whom?
The question is not rhetorical. It is empirical, answerable with data, and the data from every previous technological transition provides an answer that is deeply uncomfortable for the populations currently celebrating AI's productivity gains. The answer is that productivity gains from technological revolutions do not distribute themselves. They are distributed by institutions — tax systems, educational structures, labor laws, regulatory frameworks, social insurance programs — and when those institutions are weak, captured, or absent, the gains concentrate at the top of the distribution with the reliability of gravity pulling water downhill.
Milanovic's most famous contribution to economics is the elephant curve — a graph that plots income growth across the global distribution between 1988 and 2008, the most intensive period of economic globalization. The curve earned its name because its shape unmistakably resembles an elephant. The trunk rises at the far right: the global top one percent, whose incomes grew enormously. The back rises through the middle of the distribution: the rising middle classes of China, India, and other rapidly industrializing Asian economies, whose incomes grew substantially. Between the back and the trunk lies a deep valley: the lower-middle and middle classes of the developed world — particularly the United States and Western Europe — whose incomes stagnated or declined in real terms even as the global aggregate improved.
Three populations. Three distributional experiences. One set of enabling technologies and institutional arrangements. The elephant curve was not a description of globalization as a process. It was a description of the institutional architecture through which globalization's gains were distributed. The same underlying forces — reduced trade barriers, revolutions in communication and transportation, integration of previously isolated economies — produced dramatically different distributional outcomes depending on the institutional context. In the Nordic countries, where strong labor institutions, progressive taxation, and robust social insurance moderated the impact, the valley was shallower. In the United States and the United Kingdom, where institutional responses were weaker, the valley was deeper. The technology was held constant. The institutions varied. And the institutions, not the technology, determined who gained and who was left behind.
The AI transition will produce its own elephant curve. The shape of that curve — its peaks, its valleys, the distance between the trunk and the trough — is not yet determined, because the institutional architecture that will distribute AI's gains is still being constructed, or more precisely, is still mostly absent. But the structural characteristics of the technology and the institutional context in which it is being deployed allow the curve's likely contours to be anticipated with considerable confidence.
At the trunk of the curve: the owners of AI capital. The shareholders of Anthropic, Google, Microsoft, Meta, OpenAI, and the firms that deploy AI most aggressively to capture market share. Their gains are already of historic proportions — trillions of dollars in market capitalization increases, venture capital returns that exceed any previous technology cycle, wealth accumulation compressed into years that would normally require decades.
At the upper hump: the AI-complementary knowledge workers that Segal describes so vividly — the builders, the creative directors, the judgment workers who can direct AI tools toward productive ends. Their productivity has multiplied. Their market value has increased. Their capabilities have expanded across domains that were previously inaccessible. For this population, the AI transition is a genuine windfall.
And in the valley between the hump and the trunk: the professional middle class whose implementation skills are being commoditized. The paralegals, the junior analysts, the copywriters, the customer service workers, the mid-level developers whose tasks can be performed by AI at lower cost. Not displaced entirely — not yet — but compressed. Their wage premiums are eroding. Their bargaining power is declining. Their career trajectories are flattening. The aggregate productivity statistics mask their experience entirely, because the aggregate shows improvement while their specific position in the distribution shows stagnation or decline.
This is the shape of the AI curve, and it is forming faster than any previous distribution curve in economic history. The globalization elephant took two decades to form. The AI elephant is forming in years. The compression of the timeline is not incidental — it is the single most important feature of this particular technological transition from a distributional perspective, because the speed of the transition determines how much time institutions have to respond. The industrial revolution's distributional catastrophe persisted for sixty years before institutional responses began to moderate it. The globalization transition produced distributional consequences that were visible for decades before adequate institutional responses materialized — and even then, the responses were widely regarded as insufficient. The AI transition is producing distributional consequences in months and quarters, on a timeline that makes the globalization transition look leisurely.
The institutions that might govern the distribution of AI's gains — progressive taxation of AI-derived capital income, portable benefit systems for displaced workers, educational infrastructure that develops AI-complementary judgment, international coordination to prevent regulatory arbitrage — are, in most jurisdictions, rudimentary or nonexistent. The gap between the speed of the technological change and the speed of the institutional response is wider than it has been in any previous transition. And the gap is where the default distribution operates — the distribution that prevails when institutions do not intervene. The default distribution, as Milanovic's research has documented across centuries of data, is radically unequal.
Segal acknowledges the distributional tension in The Orange Pill. He describes standing in a room in Trivandrum, watching his engineers achieve extraordinary productivity gains, and feeling both exhilaration and terror. The exhilaration was the recognition of genuine capability expansion. The terror was the recognition of what that expansion implied — for teams, for timelines, for the assumptions on which careers had been built. He describes the quarterly board conversation in which the arithmetic is on the table: if five people can do the work of a hundred, why not have five? He describes choosing to keep the team, to invest in top-line growth rather than headcount reduction.
Milanovic's framework respects the choice but identifies its structural limitation. It is an individual choice, made by a single leader in a single organization, operating against structural incentives that push in the opposite direction. The market rewards margin expansion. Investors understand headcount reduction. Competitors who capture the productivity gain as margin report higher earnings, attract more investment, and gain competitive advantage. The structural pressure to convert AI productivity gains into capital returns is not a matter of individual greed — it is a feature of competitive markets in which the firms that maximize shareholder returns attract the capital that enables further growth. The individual leader who chooses otherwise is admirable. The structural incentives that penalize that choice are the distributional reality.
The shape of the curve is not yet fixed. That is the essential point — the point that separates Milanovic's distributional analysis from both the techno-optimists who assume the gains will distribute themselves and the techno-pessimists who assume catastrophe is inevitable. The shape is a political question, not a technological one. It will be answered by institutional choices about taxation, education, labor law, corporate governance, and international coordination. The technology has arrived. The productivity gains are real. The distributional outcome is undetermined. And the window for institutional action is narrowing with every quarter that the transition accelerates without adequate institutional response.
The historical pattern is consistent and uncomfortable. Every major technological transition has increased aggregate prosperity while producing distributional crises that persisted for decades. The sharing has never been automatic. It has always been institutional — the product of trade unions, social insurance, public education, and redistributive taxation built by specific people through specific political struggles. Those who predict that AI's gains will be shared automatically are ignoring this history. And those who believe that individual ethical choices by technology leaders can substitute for institutional architecture are confusing admirable intention with structural adequacy.
The shape of the curve is the question. The institutions are the answer. And the answer is being written — or not written — in the choices that societies are making right now.
---
Milanovic and his collaborator Christoph Lakner assembled household income data from more than a hundred countries, covering the incomes of virtually every person on earth, and plotted the cumulative growth in real income between 1988 and 2008 for each percentile of the global distribution. The result — the elephant curve — became the most cited chart in modern political economy not because it told specialists something they did not already suspect, but because it made a distributional reality visible to a broad audience for the first time. Before the chart, the distributional consequences of globalization were buried in technical papers. After the chart, they were undeniable. The valley had a shape. The stagnation had a picture. And the picture generated the political pressure that eventually — belatedly, inadequately — produced institutional responses.
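The mechanics behind a chart of this kind are simple enough to sketch. The fragment below is a minimal illustration, in Python, of how a growth incidence curve is computed from two survey waves. The income vectors are invented stand-ins, not Milanovic and Lakner's data, and the resulting curve is purely illustrative of the calculation.

```python
import numpy as np
import matplotlib.pyplot as plt

def growth_incidence_curve(incomes_start, incomes_end, n_points=100):
    """Cumulative real income growth, by percentile, between two survey waves.

    Both inputs are hypothetical arrays of per-capita incomes; the real
    exercise requires PPP-adjusted, population-weighted household surveys.
    """
    pct = np.linspace(0.5, 99.5, n_points)
    start = np.percentile(incomes_start, pct)
    end = np.percentile(incomes_end, pct)
    return pct, 100.0 * (end - start) / start

# Invented stand-ins for the two survey waves.
rng = np.random.default_rng(0)
wave_1988 = rng.lognormal(mean=7.5, sigma=1.2, size=200_000)
wave_2008 = wave_1988 * rng.normal(loc=1.4, scale=0.3, size=200_000).clip(0.5, 3.0)

pct, growth = growth_incidence_curve(wave_1988, wave_2008)
plt.plot(pct, growth)
plt.xlabel("Global income percentile")
plt.ylabel("Cumulative real income growth (%), illustrative")
plt.show()
```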
The AI transition needs its own elephant curve. The retrospective data does not yet exist in the form that distributional analysis requires — household income surveys are conducted annually or biannually, with results published months or years after collection, and the AI transition is measured in months. But the structural characteristics of the technology and the institutional context in which it is being deployed allow the curve's likely shape to be sketched with considerable analytical confidence. The sketch is not a prediction. It is a distributional hypothesis, grounded in the same structural logic that the globalization elephant was grounded in, and testable against data as the data becomes available.
The AI elephant has four segments, each corresponding to a distinct population in the global distribution.
The first segment occupies the far left of the distribution: the global poor, the populations without reliable connectivity, affordable hardware, or the educational infrastructure to engage with AI tools productively. For these populations — approximately two to three billion people, concentrated in sub-Saharan Africa, parts of South Asia, and the least developed regions of every continent — the AI transition has produced neither gains nor losses in the short term. Their incomes are determined by factors that AI has not yet touched: agricultural productivity, local labor markets, subsistence economies, the basic infrastructure of daily survival. The curve for this segment is flat — not declining, but not rising either. The flatness is not safety. It is exclusion. As the populations above them on the distribution pull away — capturing AI-augmented productivity gains that compound over time — the excluded populations fall further behind in relative terms even if their absolute conditions do not deteriorate.
This matters because relative position, not just absolute condition, determines life prospects in a connected world. A farmer in rural Mali whose income is unchanged while a developer in Nairobi captures AI-augmented productivity gains has not lost anything in absolute terms. But the gap between them has widened, and the widening gap determines access to education, healthcare, capital, and the institutional infrastructure that might eventually enable the Mali farmer to participate in the AI economy. Exclusion compounds. The populations left flat at the bottom of the AI distribution are not merely missing out on the current round of gains. They are falling further from the institutional infrastructure that would enable them to capture future gains. The flatness of the curve at the left is a slow-motion divergence that will become increasingly costly to reverse.
The second segment — the valley — is where the distributional damage is most concentrated and most politically consequential. It encompasses the professional middle classes of the developed world: the knowledge workers, the white-collar professionals, the educated workers whose skills are concentrated in tasks that AI can now perform at lower cost with comparable or superior quality. Document review. Data compilation and analysis. Standard code production. Formulaic content creation. Administrative coordination. Customer service for routine queries. These are the tasks that sustained middle-class incomes in the knowledge economy, and they are the tasks most immediately vulnerable to AI automation.
The valley population does not correspond neatly to any single occupational category. It cuts across professions — including portions of law, accounting, journalism, marketing, software development, financial analysis, and middle management. What unites them is not their industry but their position in the skill distribution: skilled enough to have been valuable in the pre-AI economy, but concentrated in implementation tasks that AI can now replicate. The valley is not primarily a story of unemployment. Most of these workers remain employed. The story is competitive compression — the erosion of the wage premium for skills that AI has made abundant. When a junior developer using AI tools can produce output that previously required a senior developer, the scarcity premium that the senior developer's skills commanded is compressed. The senior developer is not replaced. She is devalued — not in the absolute sense that her skills are worthless, but in the market sense that fewer people are willing to pay the premium that her scarcity once justified.
Milanovic's research on the globalization elephant showed that the populations in the valley experienced their condition not as stability but as decline, because human beings evaluate their economic position in relative terms, comparing themselves to their recent past and to the people around them. The AI valley produces the same subjective experience. The professional whose absolute income is unchanged but whose relative position is eroding — whose colleagues who adopted AI early are pulling ahead, whose industry is restructuring around capabilities she has not yet developed, whose children are asking questions about the future that she cannot answer — experiences the transition as loss. The aggregate productivity statistics show improvement. Her specific distributional experience shows stagnation. And the gap between the aggregate and her experience is where political consequences form.
The third segment — the upper hump — encompasses the AI-complementary knowledge workers: the builders, the architects, the creative directors, the people whose distinctive contribution is judgment rather than implementation. These are the populations that Segal describes most vividly in The Orange Pill — the engineers who discovered they could build across domains, the designers who implemented features end to end, the leaders whose capacity to articulate and direct was amplified by tools that collapsed the gap between vision and artifact. For this population, the AI transition is a genuine and substantial income gain. Their productivity has multiplied. Their market value has increased. Their economic position has strengthened.
The upper hump is real, and its existence is important because it demonstrates that the AI transition is not uniformly negative for labor. Some workers gain substantially. But the existence of the upper hump does not validate the aggregate claim that "AI benefits workers." The relevant question is how many workers occupy the upper hump relative to how many occupy the valley, and by what institutional mechanisms the gains of the hump population might be shared with the valley population. If the upper hump is narrow — composed of a relatively small number of workers with the specific configuration of judgment, domain expertise, and adaptive capacity that AI-complementary work demands — and the valley is broad, encompassing the majority of the professional middle class, the distributional outcome is a widening gap within the working population itself, masked by aggregate statistics that average the hump and the valley into a single number.
The fourth segment — the trunk — rises at the far right of the distribution: the owners of AI capital. The shareholders of the companies that build AI models, the investors who fund AI development, the executives whose compensation is tied to the market capitalization of AI-deploying firms. For this population, the AI transition is a windfall of historic proportions. The numbers are public and staggering. Trillions of dollars in market capitalization created in a few years. Venture capital returns that exceed any previous technology cycle. Wealth accumulation at a rate and scale that compresses what would normally be decades of capital appreciation into quarters.
The AI elephant is more extreme than the globalization elephant in three respects that Milanovic's analytical framework identifies as critical. First, the concentration of gains at the trunk is greater. Globalization's gains were distributed across a broad stratum of capital owners and highly skilled workers across multiple countries. AI's gains are concentrated among a much smaller group of firms and investors, overwhelmingly located in a single country. The Gini coefficient of AI wealth — if it could be precisely measured — would be substantially higher than the Gini coefficient of globalization wealth.
Second, the valley is deeper. Globalization stagnated the incomes of the developed-world middle class in relative terms — their incomes failed to grow while incomes above and below them rose. AI threatens to compress the wage premiums that sustain middle-class incomes in absolute terms, because the automation of knowledge work is more sudden and more complete than the offshoring of manufacturing. A factory job that moved to China was lost gradually, over years, as supply chains restructured. A knowledge task that AI can perform is commoditized in months, as soon as the tool reaches sufficient capability. The speed of the compression is qualitatively different from the speed of offshoring, and the speed determines the depth of the valley.
Third, and most consequential for institutional response: the timeline is compressed. The globalization elephant formed over two decades — enough time for institutional responses to develop, however imperfectly. Trade adjustment assistance programs were created. Educational institutions began, slowly, to reorient. Social insurance systems absorbed some of the shock. The responses were inadequate, as the political backlash of the 2010s demonstrated. But they existed. They moderated the valley's depth. They bought time.
The AI elephant is forming in years. The institutional responses that might moderate its valley — progressive taxation of AI-derived capital gains, portable benefits for workers in transition, educational systems redesigned for judgment rather than implementation, international coordination on AI governance — are not merely inadequate. In most jurisdictions, they do not exist. The gap between the speed of the distributional shift and the speed of the institutional response is the defining feature of the AI transition, and it is the gap through which inequality flows.
Milanovic's concept of Kuznets waves provides a longer historical frame for understanding this pattern. Simon Kuznets hypothesized in the 1950s that inequality follows an inverted-U shape during industrialization — rising as economies develop, then falling as institutions mature. Milanovic extended this into a theory of recurring cycles: inequality rises with each major technological transition, is moderated by institutional responses, then rises again with the next transition. Each wave has its own shape, its own timing, its own institutional requirements. The AI transition is the latest wave — and its unprecedented speed means the institutional response must be correspondingly faster, or the distributional consequences will be correspondingly more severe.
The elephant that Milanovic drew for globalization was retrospective — assembled from twenty years of data, published after the distributional damage was largely done. The AI elephant must be prospective — sketched in advance, from structural analysis rather than retrospective measurement, with enough clarity to motivate institutional action before the distribution hardens. Drawing the curve before it fully forms is an act of analytical ambition that Milanovic's framework both enables and demands. The structural logic is clear. The historical precedents are documented. The institutional gap is measurable. What remains is the political will to look at the distribution rather than the aggregate — to ask not "Is AI productive?" but "Productive for whom?" — and to build the institutions that the answer demands.
---
Inequality is not a single phenomenon operating at a single scale. It is fractal — reproducing itself at every level of social organization, from the global distribution of income between nations down to the distribution within firms, within teams, within households. Milanovic's research has consistently demonstrated this multi-level character, showing that global inequality decomposes into two distinct components: inequality between countries and inequality within countries. The relative weight of these components has shifted dramatically over two centuries. In the early nineteenth century, most global inequality was within-country — the gap between rich and poor within England was larger than the gap between England's average income and China's. By the mid-twentieth century, the relationship had reversed: most global inequality was between countries, reflecting the enormous divergence in national incomes produced by industrialization's uneven global spread.
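The split between the two components is easiest to illustrate with an index that decomposes exactly. The Gini leaves a residual overlap term, so the sketch below uses the Theil index instead, with invented national income vectors chosen only to show the mechanics of the within and between calculation.

```python
import numpy as np

def theil(x):
    """Theil T index of a positive income vector (0 = perfect equality)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return float(np.mean((x / mu) * np.log(x / mu)))

def decompose(countries):
    """Split total Theil inequality into within-country and between-country parts.

    `countries` maps a name to a hypothetical income vector. The Theil index
    decomposes exactly: T_total = sum_g s_g * T_g + sum_g s_g * ln(mean_g / mean),
    where s_g is country g's share of total income.
    """
    pooled = np.concatenate(list(countries.values()))
    total_income, global_mean = pooled.sum(), pooled.mean()
    within = sum((c.sum() / total_income) * theil(c) for c in countries.values())
    between = sum((c.sum() / total_income) * np.log(c.mean() / global_mean)
                  for c in countries.values())
    return theil(pooled), within, between

# Invented national income vectors, for illustration only.
rng = np.random.default_rng(3)
data = {
    "rich": rng.lognormal(10.5, 0.6, 5_000),
    "middle": rng.lognormal(9.0, 0.8, 20_000),
    "poor": rng.lognormal(7.5, 0.7, 30_000),
}
total, within, between = decompose(data)
print(f"total {total:.3f} = within {within:.3f} + between {between:.3f}")
```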
The AI transition is introducing a new dimension to this multi-level pattern — one that operates at a granularity more extreme than any previous technological revolution. Previous transitions created inequality between broad categories: capital versus labor, skilled versus unskilled, industrialized nations versus agricultural ones. The divisions were relatively clean, the categories large enough to identify, to organize around, to build institutional responses for. AI is creating inequality within each of these categories, producing what might be called fractal distributional differentiation — winners within winners, losers within losers, in patterns so granular that traditional distributional categories struggle to capture them.
Consider the technology sector, where the AI transition is most advanced. The between-firm differentiation is already stark. Companies that deploy AI aggressively are capturing market share from those that do not — not gradually, as competitive advantages normally accumulate, but rapidly, because the productivity differential between AI-augmented and non-augmented operations is large enough to produce competitive outcomes in quarters. The trillion-dollar market capitalization losses among traditional software companies that Segal describes as the "Software Death Cross" are the between-firm expression of AI distributional differentiation: concentration of market share, revenue, and profit among the fastest adopters.
But within the firms that have adopted AI, a second level of differentiation emerges that is more analytically interesting and more consequentially dangerous. Workers who complement AI effectively — those with the judgment, the domain knowledge, the adaptive flexibility to direct AI tools toward productive ends — are ascending. Workers who lack these complementary capabilities are being marginalized, not always through formal demotion or termination, but through competitive displacement within the organization. They are assigned less critical work. Their contributions are valued less. Their career trajectories flatten while their AI-complementary colleagues accelerate. The differentiation is real, measurable, and invisible in aggregate productivity statistics that show the firm's total output rising.
Within teams, the differentiation is finer still. Segal's account of the Trivandrum training is distributional evidence of the first order, though he reads it primarily as narrative. Twenty engineers participated in a single week of AI training. Some experienced liberation — the discovery that they could build across domains, that the implementation friction consuming their working hours could be eliminated, that their judgment was more valuable than they had realized. Others experienced existential crisis — the confrontation with the question of what their skills were actually worth once the implementation work that defined eighty percent of their careers could be performed by a tool. The senior engineer whose judgment proved genuinely distinctive found his position strengthened. The senior engineer whose seniority rested primarily on implementation speed — on knowing the syntax, the frameworks, the procedural knowledge that years of practice had deposited — found the basis of his seniority suddenly undermined.
Within a single team, in a single room, over a single week: winners and losers. The differentiation was not between engineers and non-engineers, or between senior and junior, or between any traditional occupational category. It was within the category of "senior engineer," separating those whose expertise was grounded in judgment from those whose expertise was grounded in execution. The traditional category contained both. The AI transition sorted them.
This within-category sorting is the distributional signature of the AI transition, and it is the feature that makes traditional institutional responses most difficult to design. The labor movement organized factory workers because factory workers shared a common position in the distribution — they were all on the same side of the capital-labor divide, facing the same structural pressures, with aligned interests in higher wages and better conditions. The categories were clean enough for collective action. What organizes the AI-displaced knowledge worker who sits in the same office, holds the same job title, and possesses similar credentials as the AI-augmented knowledge worker who is pulling away from her? The distributional conflict is not between clearly defined groups with opposed interests. It is within groups, between individuals whose divergent trajectories are determined by characteristics — cognitive flexibility, domain expertise depth, adaptive capacity, prompt fluency — that do not map onto any traditional category of social organization.
Milanovic's concept of homoploutia — the condition in which the same individuals are simultaneously wealthy in both capital and labor income — illuminates the upper end of this fractal differentiation with particular clarity. In previous eras, the wealthy were either capitalists (owners of productive assets who drew income from returns on capital) or highly paid workers (professionals and executives who drew income from labor). The two categories were distinct, and the distinction structured distributional politics: capital and labor had different interests, and the tension between them was the engine of redistributive institutional construction.
Homoploutia dissolves this distinction. The homoploutic elite — the population that simultaneously earns the highest labor incomes and owns the most capital — is composed of people who are at once the best-paid workers and the wealthiest asset-owners. They are the AI engineers earning seven-figure salaries and holding equity worth tens of millions in the companies that employ them. They are the founders who draw executive compensation and hold founder shares whose appreciation dwarfs their salary. They are the venture capitalists whose management fees constitute high labor income and whose carried interest constitutes high capital income. In each case, the same individual occupies the top of both distributions simultaneously, and the traditional tension between capital and labor — the tension that historically drove redistributive politics — is absent because both sides of the tension are embodied in the same person.
Milanovic has noted, with characteristic empirical bluntness, that the remedy for homoploutia may be "nothing" — that it represents a structural entrenchment of elite position that is resistant to the traditional tools of redistributive policy. Progressive income taxation captures labor income but not unrealized capital gains. Capital gains taxation, where it exists, is applied at preferential rates. The homoploutic elite feels that it merits its position — they have attended the best schools, built the most impressive companies, worked extraordinarily hard — and this meritocratic self-perception provides ideological insulation against redistributive claims. They are not rentiers clipping coupons. They are builders, and the builder's identity makes redistributive politics harder to mobilize against them, even when the distributional consequences of their position are severe.
The AI transition intensifies homoploutia dramatically. The technology sector — where homoploutia is most pronounced — is the sector most directly enriched by AI. The engineers, executives, and investors who were already at the intersection of high labor and high capital income are the populations whose positions are most amplified by the AI transition. Their labor income rises because their AI-complementary skills are scarce. Their capital income rises because the firms they own equity in are capturing AI-augmented productivity gains. The amplification is compounding — high labor income enables capital accumulation, capital appreciation provides economic security that enables risk-taking, risk-taking in the AI sector produces further capital appreciation — in a self-reinforcing cycle that concentrates gains at the extreme upper end of both distributions simultaneously.
Meanwhile, the populations in the valley of the AI elephant — the professional middle class whose implementation skills are being commoditized — experience the opposite of homoploutia. Their labor income is compressed by AI-enabled competitive pressure. Their capital income, if they have any, is modest and does not compensate for the labor income compression. They are doubly exposed: squeezed as workers by the devaluation of their skills, and excluded as capital owners from the wealth appreciation that AI is generating. The traditional categories would classify them as "middle class" and group them with the homoploutic elite in the same broad stratum. The fractal reality is that their distributional trajectory is diverging sharply from the trajectory of the people who share their educational credentials and their job titles but not their position in the AI distribution.
The fractal character of AI inequality has a temporal dimension that makes it particularly resistant to measurement. Traditional distributional data captures income at a point in time — what people earned last year, what their wealth was on a particular date. But the AI transition is producing distributional differentiation in trajectories — in the rate of change of income and wealth over time — rather than in levels at any given moment. Two workers with identical current incomes can have dramatically different distributional trajectories if one is on an AI-augmented growth path and the other is on an AI-compressed stagnation path. Point-in-time measurement captures their current equality. It misses their diverging futures. And the diverging futures are where the distributional consequences accumulate.
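The point about trajectories can be made with trivial arithmetic. The growth rates below are assumptions chosen for illustration; the only claim is structural, that two workers who look identical in a snapshot can be on sharply diverging paths.

```python
# Two workers with identical incomes today; the growth rates are assumptions.
income_augmented = income_compressed = 90_000.0
growth_augmented, growth_compressed = 0.08, -0.01

for year in range(1, 11):
    income_augmented *= 1 + growth_augmented
    income_compressed *= 1 + growth_compressed
    if year in (1, 5, 10):
        print(f"year {year:2d}: augmented ${income_augmented:,.0f}, "
              f"compressed ${income_compressed:,.0f}, "
              f"ratio {income_augmented / income_compressed:.2f}x")
```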
This is why the measurement infrastructure that Milanovic's framework demands for the AI transition must be designed differently from the measurement infrastructure that served for globalization. The globalization elephant was drawn from two decades of point-in-time household income surveys. The AI elephant requires higher-frequency measurement — quarterly or monthly indicators, drawn from administrative data, tax records, platform transaction data — and longitudinal tracking that captures trajectories rather than snapshots. The technology to build this measurement infrastructure exists. The institutional will to deploy it is the missing element, and the will is missing because the populations most capable of demanding it — the professional middle class in the valley of the distribution — have not yet fully recognized their distributional position. They feel the vertigo that Segal describes. They have not yet seen the curve that explains it.
The fractal inequality that AI produces is the most analytically challenging distributional phenomenon of the current transition. It defies the categorical boundaries on which traditional institutional responses depend. It operates within the groups that traditional politics organizes. It compounds across levels — within-team, within-firm, within-sector, within-nation, between-nation — each level reinforcing the others. And it is accelerating at a pace that makes the careful, iterative institutional construction of previous eras look impossibly slow.
The winners within winners are pulling away. The institutional architecture to moderate that divergence does not yet exist at the level of granularity the divergence demands. And the clock, as always, is ticking.
---
Segal argues in The Orange Pill that AI democratizes capability by lowering the floor of who gets to build. The argument is compelling in one dimension and misleading in another, and the dimension in which it is misleading is the dimension that matters most for the global distribution of AI's gains. A developer in Lagos can now build a product that would previously have required a team of engineers in San Francisco. An engineer in Trivandrum who had never written frontend code can construct a complete user-facing feature in two days. The floor of individual capability has genuinely risen. The tools are, in principle, globally accessible. In this dimension — the capability dimension — the democratization claim is supported by evidence.
But capability is not value capture. The distinction is everything, and it is the distinction that the democratization narrative consistently elides. The developer in Lagos can build. But the economic value she creates when she builds flows through infrastructure — cloud services, AI model providers, app stores, payment systems, advertising platforms — that extracts rent to shareholders headquartered in San Francisco, Seattle, Mountain View, and Cupertino. She has access. She does not have distributional power. The gap between access and distributional power is the gap through which the geography of global inequality reproduces itself in the AI era.
Milanovic's research on global inequality identified the citizenship premium as the single largest source of variation in individual income worldwide — larger than education, larger than occupation, larger than gender, larger than any individual characteristic that labor economists typically study. The citizenship premium is the additional income a person earns simply by being located in a wealthy country, holding constant every other factor. An equally skilled, equally educated, equally hardworking person earns dramatically different incomes depending on the country in which she happens to live. The premium is not a reward for merit. It is a rent derived from institutional quality — legal systems that protect property, financial systems that provide capital access, educational systems that develop human capital, infrastructure that ensures reliability, proximity to markets that reward production.
The popular understanding of AI democratization assumes that global tools will erode the citizenship premium — that when a developer in Lagos has access to the same AI model as a developer in San Francisco, the institutional differences between their locations become less consequential. Milanovic's framework suggests the opposite. AI tools do not operate in an institutional vacuum. They operate within specific institutional contexts, and their productivity is amplified by the quality of the surrounding infrastructure. The same tool produces different returns in different institutional environments, and the difference is multiplicative rather than additive.
Consider the specific economics. The Lagos developer pays for AI tools in dollars — a currency whose acquisition cost, relative to her local income, is substantially higher than for a developer in a dollar-denominated economy. She hosts her application on cloud infrastructure priced for global markets, at rates that weigh heavily against her local revenue potential. She distributes through platforms charging commission rates calibrated to developed-world expectations. She processes payments through systems imposing currency conversion costs and compliance requirements that add friction to every transaction. At each step of the value chain, rent flows to geographic locations far from her workspace. The developer captures the residual — what remains after the infrastructure rents have been paid. The residual, while potentially meaningful, is a fraction of the value she creates.
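The arithmetic of that residual can be made explicit. Every rate and figure below is an assumption chosen to illustrate the shape of the value chain, not a measurement of any particular platform's fees.

```python
# Hypothetical monthly economics for an independent builder selling through
# global infrastructure. All rates and amounts are illustrative assumptions.
gross_revenue = 2_000.00                             # monthly sales, USD

platform_commission = 0.30 * gross_revenue           # storefront cut
payment_processing = 0.029 * gross_revenue + 30.00   # card fees plus fixed charges
cloud_hosting = 120.00                               # compute, storage, bandwidth
ai_subscription = 100.00                             # model access
currency_conversion = 0.03 * gross_revenue           # FX spread and transfer fees

infrastructure_rent = (platform_commission + payment_processing
                       + cloud_hosting + ai_subscription + currency_conversion)
residual = gross_revenue - infrastructure_rent

print(f"infrastructure rent: ${infrastructure_rent:,.2f} "
      f"({infrastructure_rent / gross_revenue:.0%} of revenue)")
print(f"builder's residual:  ${residual:,.2f}")
```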
The San Francisco developer faces none of these structural disadvantages. She operates within the institutional ecosystem that the value chain was designed to serve. She pays for tools in the currency she earns. She accesses venture capital through networks that are geographically proximate. She distributes to a market that is culturally and linguistically familiar. Her intellectual property protections are robust. Her institutional infrastructure amplifies the productivity of AI tools rather than taxing it.
The result is a gradient of value capture along the line that Segal draws from Lagos to Trivandrum to San Francisco. The same tools, the same nominal capability, but dramatically different economic returns — determined not by individual talent or effort but by the institutional infrastructure in which the tools are deployed. The capability has been democratized. The value capture has not. And the gap between democratized capability and concentrated value capture is the mechanism through which the citizenship premium reproduces itself in the AI era.
Milanovic's framework suggests that AI may actually amplify the citizenship premium rather than erode it. Before AI, the productivity difference between a developer in Lagos and a developer in San Francisco was primarily a function of direct infrastructure differences — faster connectivity, more reliable power, better hardware. AI introduces a multiplicative factor. The productivity gain from AI tools is itself conditioned by institutional context — faster connectivity means faster AI iteration, more reliable power means more productive hours, better complementary infrastructure means higher-quality AI-augmented output. The multiplication widens the absolute gap between equally skilled workers in different institutional environments, even as the relative gap in raw capability narrows.
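Stated as arithmetic, with assumed numbers: if the multiplier itself is larger where the surrounding infrastructure is stronger, the absolute gap in output widens even though both workers hold the same tools and the same skills.

```python
# Equally skilled workers; every number below is an illustrative assumption.
output_lagos, output_sf = 1.0, 1.3            # pre-AI output, infrastructure-conditioned
multiplier_lagos, multiplier_sf = 8.0, 14.0   # AI multipliers, infrastructure-conditioned

print(f"absolute gap before AI: {output_sf - output_lagos:.1f} units")
print(f"absolute gap after AI:  "
      f"{output_sf * multiplier_sf - output_lagos * multiplier_lagos:.1f} units")
```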
The development economics of this dynamic warrant careful examination. Throughout the twentieth century, the primary pathway to closing the gap between rich and poor nations was industrialization — building domestic manufacturing capacity that captured value locally rather than exporting raw materials for processing abroad. The countries that successfully industrialized — South Korea, Taiwan, and then China — moved from the bottom of the global distribution toward the middle and top. The mechanism was institutional: industrial policy directed domestic investment toward manufacturing, protected infant industries until they achieved competitive scale, developed the educational infrastructure that human capital required, and built the financial systems that funded expansion.
AI creates a new version of this development challenge. The equivalent of industrialization in the AI era is not frontier model development — the capital requirements for that are concentrated among a handful of firms and are beyond most national budgets. It is the construction of the institutional infrastructure that determines local value capture from AI-augmented work: educational systems that prepare workers for AI-complementary roles, financial systems that fund AI-augmented enterprises, digital infrastructure that maximizes AI tool productivity, regulatory frameworks that require some portion of value to be captured domestically, and platform alternatives that reduce dependence on the concentrated infrastructure of wealthy nations.
Without this institutional investment, AI risks producing a digital periphery — economies that participate in the AI value chain but capture only a fraction of the value they help create, with the majority flowing to the core economies that control the infrastructure. The pattern has clear historical precedents. The garment worker in Bangladesh earns more by producing for global brands than she would in the domestic economy. But the value her labor creates flows predominantly to the brand owners, retailers, and shareholders in wealthy nations. She participates. She does not capture proportionally. AI is extending this pattern into the knowledge economy: the developer in Lagos participates in the AI-augmented global value chain, captures genuine individual gains relative to her pre-AI condition, and simultaneously contributes to a geographic distribution of value that concentrates wealth in the institutional core.
The Trivandrum engineers occupy an instructive middle position. They are embedded in a global firm that provides institutional support — training, hardware, connectivity, economic security — that the independent Lagos developer lacks. Their productivity gains are genuine and substantial. But they remain positioned within a global labor market that compensates Indian engineers at a fraction of American rates for comparable output. And the AI transition introduces a double squeeze: from above, San Francisco captures disproportionate value through proximity to capital and institutional infrastructure; from below, AI enables less-experienced workers in lower-cost locations to produce comparable output, intensifying competitive pressure on Trivandrum's position.
Peter Knight, a former World Bank colleague of Milanovic's, criticized him for being insufficiently attentive to exponential technologies — for applying historical analogical reasoning to a technological transition that may be qualitatively different from anything that preceded it. The criticism has some force. If AI's productivity gains are truly exponential rather than linear — if they compound in ways that previous technologies did not — then historical precedents may underestimate both the gains and the distributional disruption. But Knight's critique cuts both ways. If the gains are exponential, the distributional consequences of institutional failure are also exponential. An exponentially productive technology distributed through an institutional architecture designed for a linear economy produces exponentially concentrated wealth. The bigger the gains, the more the distribution matters. And the distribution, as always, is determined not by the technology but by the institutions.
The geography of value capture is not immutable. It can be altered by the kind of sustained institutional investment that enabled South Korea, Taiwan, and China to escape peripheral status in the industrial era. But the alteration requires recognition that capability democratization and distributional democratization are different things — that giving everyone the same tools is not the same as giving everyone an equal share of the value the tools produce. The rhetoric of democratization, sincerely held and genuinely motivating for many of the builders who deploy AI tools, functions as a distributional anesthetic — numbing the awareness of geographic inequality by pointing to the capability gains while ignoring the value-capture gradient. The developer in Lagos is better off with AI tools than without them. She is also more integrated into a value chain that directs the majority of the value she creates to geographic locations thousands of miles from her home. Both facts are true simultaneously. The first is what the democratization narrative celebrates. The second is what distributional analysis insists on measuring. And the measurement, not the celebration, is what institutional construction requires.
The globalization elephant revealed a specific distributional pathology: the stagnation of incomes for the lower-middle and middle classes of the developed world during a period when incomes above and below them on the global distribution were rising. The populations in the valley did not experience absolute impoverishment — not initially. They experienced something that the aggregate statistics classified as stability and that the people living it experienced as decline. The distinction between statistical stability and lived decline is not a matter of perception versus reality. It is a matter of which reality you measure. If you measure absolute income, the valley population was stable. If you measure relative position — income relative to reference groups, to recent trajectory, to the populations visibly pulling away above them — the valley population was falling. And human beings, as decades of research in behavioral economics and the psychology of well-being have demonstrated, evaluate their condition in relative terms. The reference group matters more than the absolute number. The trajectory matters more than the level.
The AI transition is producing a squeeze that targets a different population through a different mechanism but operates on the same distributional logic. The population being squeezed is the professional middle class of the developed world — the knowledge workers, the white-collar professionals, the credentialed class that invested decades in human capital under the assumption that education would reliably convert to economic security. The mechanism is not offshoring. It is competitive compression — the erosion of wage premiums for skills that AI has made abundant.
The economics of competitive compression are straightforward and uncomfortable. Before AI, a skilled financial analyst commanded a wage premium because her capabilities — data gathering, pattern identification, model construction, insight generation — were scarce relative to market demand. Scarcity sustained the premium. AI made these capabilities abundant, not by replacing the analyst entirely but by enabling less-skilled workers to produce comparable output with AI assistance. The supply of analyst-quality work increased. Increased supply, in a competitive labor market, depresses price. The analyst's absolute skills did not change. Her relative scarcity did. And the premium was always a function of scarcity, not of absolute skill level.
This is the mechanism Segal describes when he writes about the junior developer who shipped in a weekend what her senior colleague had quoted six months for. The junior developer did not become as skilled as the senior. She became capable of producing comparable output at lower cost, and the production of comparable output at lower cost compressed the premium that the senior developer's skills had commanded. The senior developer was not fired. She was repriced. The market's valuation of her specific skill configuration — deep procedural knowledge, syntactic fluency, implementation speed — declined because AI had increased the supply of workers capable of producing equivalent results.
The compression operates across the professional middle class, not just in technology. It affects any profession in which core tasks can be partially or fully performed by AI tools, leaving only the highest-value activities — judgment, strategy, interpersonal navigation, creative synthesis — as the basis for human wage premiums. The remaining activities are genuinely valuable. But they are also fewer in number, and the number of workers who can perform them at a level that justifies a premium is smaller than the number currently employed in each profession. The result is not mass unemployment — a prediction that Milanovic has consistently cautioned against, noting in his "Three Fallacies" essay that fears of permanent technological unemployment have been issued at every major transition and have been consistently wrong. The result is something less dramatic and more corrosive: a narrowing of the income band for the professional middle class, a flattening of career trajectories, a gradual erosion of the economic distance between the credentialed professional and the AI-augmented generalist.
The squeeze has a compound character that makes it more severe than a simple wage reduction. It operates simultaneously through the labor market and through the capital market, and the two channels reinforce each other. On the labor side, competitive compression reduces the wage premium for professional skills. On the capital side, the productivity gains that AI produces flow disproportionately to the firms that deploy AI and to the shareholders who own those firms — populations that are overwhelmingly concentrated above the professional middle class in the income distribution. The squeeze is not merely a compression of wages from below. It is a simultaneous pulling-away from above, as the populations at the top of the distribution capture AI-augmented capital gains that accelerate their trajectory while the middle class stagnates.
Milanovic's data on the capital-labor split — the division of national income between returns to capital and compensation for labor — provides the structural frame for understanding this dynamic. The labor share of national income in the United States has been declining since the early 1980s, from approximately 64 percent to approximately 57 percent by the early 2020s, a decline of seven percentage points that represents a massive shift of income from workers to capital owners. The causes are debated — globalization, declining unionization, changes in market structure, the rise of superstar firms — but the direction is not. Capital's share has been rising for four decades.
AI accelerates this shift through a mechanism that is specific to the economics of AI-augmented production. The cost of AI augmentation is trivial relative to the productivity gain. Segal describes the cost: one hundred dollars per person, per month, for Claude Code with the Max plan. The productivity gain: a twenty-fold multiplier for a significant class of work. The surplus — the difference between the cost of the tool and the value of the additional output — is enormous. And the institutional mechanisms that might redirect some of that surplus from capital to labor — strong unions that could bargain for higher wages reflecting the productivity improvement, profit-sharing requirements that would mandate distribution of AI-augmented gains, progressive taxation that would capture some of the capital appreciation for public investment — are at their weakest point in the post-war era. Union membership in the United States private sector is below seven percent. Profit-sharing is voluntary and concentrated among firms that were already generous. Capital gains taxation is applied at preferential rates that are lower than the rates applied to ordinary labor income.
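The arithmetic can be laid out explicitly. In the sketch below, the tool cost and the twenty-fold multiplier are the figures Segal reports; the baseline value of a worker's monthly output and the share of that output touched by the multiplier are hypothetical assumptions, included only to show how large the surplus is relative to the cost of the tool and where it lands by default.

```python
# Back-of-the-envelope arithmetic for the AI-augmented surplus described above.
# The tool cost ($100 per person per month) and the twenty-fold multiplier come
# from Segal's account; the baseline value of a worker's monthly output and the
# share of work affected are hypothetical assumptions chosen only to show the
# shape of the calculation.

tool_cost = 100            # dollars per worker per month (from the text)
multiplier = 20            # productivity multiplier on the affected work (from the text)
baseline_output = 10_000   # assumed dollar value of one worker's monthly output
affected_share = 0.25      # assumed fraction of that output subject to the multiplier

augmented_output = (baseline_output * (1 - affected_share)
                    + baseline_output * affected_share * multiplier)
surplus = augmented_output - baseline_output - tool_cost

print(f"output before augmentation: ${baseline_output:,.0f}")
print(f"output after augmentation:  ${augmented_output:,.0f}")
print(f"monthly surplus per worker: ${surplus:,.0f}")
# Under current arrangements the worker's wage is fixed by contract, so unless
# bargaining or profit-sharing intervenes, essentially all of this surplus
# accrues to the firm and its shareholders.
```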
The arithmetic that Segal describes in his quarterly board conversations — if five people can do the work of a hundred, why not have five? — is the arithmetic of the capital-labor split expressed at the level of the individual firm. The decision to keep the team or to cut it is not merely an operational decision. It is a distributional decision that determines how the AI-augmented productivity surplus is allocated between capital and labor within that specific organization. Segal chose to keep the team. The market structure in which he operates rewards the opposite choice. A firm that cuts the team reports higher margins, attracts investment on more favorable terms, and gains competitive advantage over firms that retain surplus labor. The structural incentive is unambiguous, and individual ethical choices, however admirable, do not override structural incentives at the level of the economy.
The political dimension of the squeeze deserves examination because it determines whether the distributional outcome triggers institutional response or institutional paralysis. The populations being squeezed by AI are not the populations who were squeezed by globalization. The globalization valley was populated primarily by manufacturing workers and lower-middle-class service workers — populations with declining political influence, limited access to media platforms, and weakening organizational infrastructure. Their squeeze produced political consequences — populist movements, Brexit, the realignment of working-class voters — but the consequences arrived decades after the distributional damage began, and the institutional responses that followed were widely regarded as too late and too modest.
The AI valley is populated by the professional middle class — a population that is more educated, more politically engaged, more institutionally connected, and more culturally influential than the manufacturing workers who bore the brunt of globalization. This population includes journalists, teachers, academics, creative professionals, and public-sector workers — the populations that shape the narratives through which societies understand themselves. Their experience of the AI transition will be disproportionately represented in public discourse, not because they are more numerous than other affected populations but because they occupy the positions from which cultural narratives are produced. The squeeze of the professional middle class is, in a specific and measurable sense, a squeeze of the narrating class — the population that tells the story of what is happening to society.
This proximity to narrative production has a dual character. It makes the professional middle class's distributional experience more visible than the experience of less culturally influential populations, potentially generating political pressure for institutional response more rapidly than occurred during the globalization transition. But it also risks centering the discourse on the experience of the professional middle class while neglecting the experiences of populations lower on the distribution — the clerical workers, the service workers, the manual laborers whose AI-related displacement may be less visible but more severe in absolute terms. The risk is a distributional discourse that is articulate about the valley but blind to the left tail, where the populations excluded from the AI economy entirely face a different and potentially more severe form of distributional damage.
The squeeze is not yet visible in standard economic data, for the same reason that the globalization valley was not visible in standard data during its first decade: the measurement infrastructure is designed to capture levels rather than trajectories, and the trajectories are where the early signals appear. Aggregate employment data shows labor markets that remain historically tight. Average wage data shows modest growth. The aggregate conceals the distribution, and the distribution is where the squeeze is forming — in the compression of premiums for specific skills, in the flattening of career trajectories for specific populations, in the widening gap between the AI-augmented and the AI-compressed that no aggregate statistic captures.
The historical precedent offers both instruction and warning. The globalization squeeze produced institutional responses that were, by broad consensus, inadequate: trade adjustment assistance programs that were chronically underfunded, educational reforms implemented over decades while displacement occurred in years, social insurance systems strained by the populations they were meant to protect. The inadequacy of the response produced the political backlash that reshaped democratic politics across the developed world for a generation. The AI squeeze is forming faster, in a population that is more politically capable of generating a response, within an institutional environment that is less equipped to provide one. Whether the response takes the form of productive institutional construction — the dams that distributional analysis demands — or destructive backlash against the technology and the institutions that failed to govern its distributional consequences is the political question of the coming decade. And the answer depends, as it has always depended, on whether the distribution is made visible before or after the political consequences become unmanageable.
---
There is a class dimension to technological optimism that is structural rather than conspiratorial and that warrants examination precisely because it is so consistently overlooked. The observation is straightforward, empirically supported, and routinely ignored by the people it describes: the most enthusiastic voices in any technological transition are disproportionately the voices of those who stand to gain most from it. This is not a matter of dishonesty. It is a matter of position. Where you stand in the distribution shapes what you see, and what you see shapes what you believe, and what you believe shapes what you advocate. The structural correlation between distributional position and technological optimism is one of the most consistent features of every technological transition in the historical record, and the AI transition exhibits it with unusual clarity.
The populations that dominate the AI discourse — the builders, the founders, the investors, the technology executives, the venture capitalists, the commentators with financial exposure to AI companies — are disproportionately located at the trunk of the AI elephant. Their experience of the technology is genuine. Their productivity gains are real. Their enthusiasm is grounded in demonstrable outcomes. When Segal describes the exhilaration of building a product in thirty days that would have taken six months, or the creative liberation of directing AI tools toward problems he could not have attempted alone, the account is honest and the experience is representative — of his position in the distribution. The question that distributional analysis poses is not whether the experience is genuine but whether it is generalizable. Can the productivity gains that a well-resourced builder in the technology industry experiences be extrapolated to the global population? Or are they the gains of a specific stratum — the AI-complementary knowledge workers of the developed world, embedded in institutional infrastructure that amplifies AI's returns — that are being presented, sincerely but inaccurately, as a universal experience?
The evidence from every previous technological transition suggests the latter. The early beneficiaries of the industrial revolution — the factory owners, the financiers, the engineers who designed the machines — were the most enthusiastic advocates of mechanization. They predicted broad-based prosperity with the confidence of people whose own prosperity was expanding visibly. Their predictions eventually proved correct, but only after sixty years of distributional catastrophe that their enthusiasm had obscured and whose institutional remedies their advocacy had delayed. The financiers of the globalization era predicted that rising tides would lift all boats with the confidence of people whose boats were rising fastest. The predictions were correct about the aggregate. They were catastrophically wrong about the distribution.
The mechanism through which position shapes perception is not mysterious. It is the same mechanism that operates in every domain where experience is partial and generalization is tempting. A person at the top of the distribution evaluates a new technology through the lens of an experience that is genuinely positive, and the positivity of the experience colors the evaluation. The technology looks like progress — because from that position, it is progress. It looks like capability expansion — because the evaluator's capabilities have genuinely expanded. It looks like democratization — because the tools are nominally available to everyone, and the evaluator's position does not require her to examine whether nominal availability translates to equivalent benefit.
The bias operates through a specific rhetorical mechanism that deserves attention because of its prevalence in the AI discourse: the anecdotal generalization. The developer in Lagos who built a successful product with AI tools. The engineer in Trivandrum who expanded her capabilities across domains. The non-technical founder who prototyped an application over a weekend. Each story is true. Each describes a genuine expansion of individual capability. And the inference from individual cases to distributional claims — the move from "these individuals gained" to "the technology is broadly beneficial" — is precisely the kind of reasoning that distributional analysis exists to challenge.
The anecdotal generalization is persuasive because it is vivid. A story about a developer in Lagos building a product is more emotionally compelling, more memorable, and more shareable than a Gini coefficient or a distribution curve. But the story is also systematically selected from the positive tail of the distribution. The developer who succeeded with AI tools is visible, vocal, and available for citation. The developer who tried and failed is invisible. The worker displaced by AI automation is not posting triumphant threads on social media. The resulting selection bias produces a discourse that is systematically skewed toward the positive end of the distribution, creating the impression that the technology is broadly beneficial when the distributional evidence, were it available, might tell a more complicated story.
Milanovic encountered this same mechanism throughout the globalization debate. The success stories — the Chinese factory worker whose income tripled, the Indian engineer whose career blossomed, the consumer who benefited from lower prices — were cited endlessly as evidence that trade liberalization was broadly beneficial. They were all true. They were also selected from the positive tail of the distribution, and they obscured the populations in the valley whose experience was stagnation. The elephant curve was the corrective — the empirical demonstration that the aggregate of individual success stories did not describe the distribution. The AI transition awaits the same corrective, and until it arrives, the anecdotal generalization will continue to shape the discourse in ways that favor the perspectives of those who gained.
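The corrective has a concrete form: the growth incidence curve, which reports income growth at each percentile of the distribution rather than for the average. The Python sketch below shows the mechanics on synthetic data; the incomes it generates are invented for illustration, and the real exercise depends on harmonized household survey data of the kind behind the elephant curve.

```python
# A minimal sketch of the kind of measurement that produced the elephant curve:
# a growth incidence curve, i.e. income growth computed percentile bin by
# percentile bin rather than in aggregate. The synthetic incomes below are
# invented purely to show the mechanics; real analysis uses harmonized
# household survey data.

import random

random.seed(0)

def percentile_means(incomes, n_bins=20):
    """Mean income within each equal-population bin of the sorted distribution."""
    ordered = sorted(incomes)
    size = len(ordered) // n_bins
    return [sum(ordered[i * size:(i + 1) * size]) / size for i in range(n_bins)]

# Two synthetic cross-sections "a decade apart" (hypothetical data).
period_1 = [random.lognormvariate(9.0, 1.0) for _ in range(10_000)]
period_2 = [random.lognormvariate(9.1, 1.1) for _ in range(10_000)]

growth = [(m2 / m1 - 1) * 100
          for m1, m2 in zip(percentile_means(period_1), percentile_means(period_2))]

for i, g in enumerate(growth):
    print(f"percentile bin {i + 1:2d}: {g:+6.1f}% growth")
# The aggregate may grow while particular bins stagnate; the curve makes the
# distribution visible in a way that a single average cannot.
```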
The bias has direct policy consequences. When the most influential voices in the discourse are disproportionately the voices of the gainers, the policy prescriptions that emerge are disproportionately tailored to the interests of the gaining population. The dominant AI policy prescriptions in the current discourse — accelerate AI capability development, invest in AI-complementary education, reduce regulatory barriers to AI adoption, build attentional ecology for AI-augmented workers — are all valuable. They are also prescriptions that primarily benefit the populations already positioned to capture AI's gains. They address the concerns of the upper hump of the AI elephant. They do not address the structural dynamics producing the valley: the capital-labor split that directs productivity gains to shareholders, the competitive compression that erodes middle-class wage premiums, the geographic concentration that extracts value from the periphery to the core.
The prescriptions that Milanovic's distributional analysis identifies as most critical — progressive taxation of AI-derived capital gains, strengthened worker bargaining power, international coordination on digital taxation, mandatory profit-sharing in AI-augmented firms — are conspicuously absent from the mainstream AI discourse. They are absent not because they are analytically unsound but because they are politically uncongenial to the populations that dominate the conversation. The founder does not advocate for higher capital gains taxes on her own equity. The investor does not advocate for profit-sharing requirements that reduce returns. The technology executive does not advocate for stronger unions in her industry. The absence is structural, not conspiratorial. The discourse reflects the distributional position of its participants, and its participants are disproportionately located at the point in the distribution where the gains are largest and the appetite for redistribution is smallest.
Segal's own account in The Orange Pill illustrates the tension with unusual honesty. He acknowledges the distributional question. He describes the quarterly board conversations in which the arithmetic of headcount reduction is on the table. He writes about the parent at the kitchen table and the teacher watching students disappear into tools. The concern is genuine. But the prescriptions that follow — attentional ecology, the cultivation of questioning, the builder's ethic — are prescriptions that operate at the level of the individual rather than the level of the system. They help the individual navigate the AI transition. They do not change the structural dynamics that determine who captures the gains. The gap between individual prescription and structural analysis is the gap through which the plutocratic bias operates, even in accounts that are more self-aware than most.
Milanovic took a personal stand on the relationship between AI and intellectual work that illuminates a different dimension of the bias. In 2025, he publicly endorsed what he called the Bylsma Pledge — a commitment to refusing AI tools in his own writing, at any stage, for any purpose. The pledge was not a Luddite gesture; it was a statement about the relationship between intellectual labor and the tools that mediate it. The distinction matters because it reveals that even sophisticated analysts who understand AI's productive potential feel the need to draw boundaries around domains of human intellectual activity that they regard as too important, too constitutive of identity and meaning, to submit to algorithmic mediation. The pledge is a data point about the cultural stakes of the AI transition — stakes that aggregate productivity statistics cannot capture and that the optimism of those positioned to gain cannot adequately represent.
The antidote to the plutocratic bias is not pessimism. Pessimism is simply the bias inverted — the perspective of those positioned to lose presented as universal truth. The antidote is distributional analysis: the rigorous, empirical measurement of who gains and who loses, conducted with sufficient independence from the populations whose experience dominates the discourse. The elephant curve performed this function for globalization by making visible what the anecdotal generalizations of the winners had obscured. The AI transition needs the same kind of independent distributional measurement, conducted by analysts whose cognitive frameworks are not shaped by position at the trunk of the distribution, and presented with sufficient clarity and force to break through the selection bias that currently governs the discourse.
The question is not whether AI produces genuine gains. It does. The question is whether the discourse about those gains is representative of the distributional reality or representative of the distributional position of those who dominate the discourse. The evidence from every previous transition suggests the latter. And until independent distributional measurement corrects the bias, the policies that emerge from the discourse will continue to serve the populations that shaped it — the populations at the trunk of the elephant, whose genuine experience of gain is presented, sincerely and inaccurately, as the experience of all.
---
If the institutional architecture is not built — if progressive taxation of AI-derived income remains unreformed, if the educational infrastructure remains inadequate, if international coordination on AI governance remains absent, if the capital-labor split continues its four-decade drift toward capital — the AI elephant evolves into something worse: a serpent. The elephant shows stagnation in the valley: incomes that fail to grow while the populations above them pull away. The serpent shows absolute decline: incomes that fall, purchasing power that erodes, economic conditions that deteriorate in terms that are not merely relative but material. The distinction between the elephant and the serpent is the distinction between a population that is falling behind and a population that is falling. And the distinction has political consequences that are different not merely in degree but in kind.
The industrial revolution produced a serpent phase. For the handloom weavers of England, the transition was not relative stagnation. It was immiseration. Wages fell by more than half between 1800 and 1830. Communities disintegrated as the economic base that sustained them collapsed. Living conditions deteriorated in measurable, documented, undeniable ways. The serpent persisted for decades and produced the social upheaval — machine-breaking, labor unrest, Chartism, the early socialist movements — that eventually generated the institutional responses redirecting the industrial revolution's gains toward broader prosperity. The responses arrived, but they arrived after decades of distributional damage that need not have occurred if the institutions had been constructed earlier.
The question for the AI transition is not whether a serpent is possible. The structural conditions for it are present. The question is whether it is probable — and over what timeline, and for which populations, and with what political consequences. Milanovic's framework provides a structured analysis of the conditions under which the elephant becomes the serpent, and the conditions are disturbingly close to the current institutional reality.
The serpent arrives through three phases, each deepening the valley and widening the affected population. The first phase is displacement — the absorption of the most automatable tasks by AI systems and the resulting income shock for workers concentrated in those tasks. This phase is already underway. It is visible in the contraction of entry-level positions at law firms, consulting companies, and accounting practices that have adopted AI tools. It is visible in the declining demand for freelance work in categories — standard copywriting, basic design, routine data analysis, boilerplate code production — that AI performs at lower cost. It is visible in the layoffs at technology companies that cite AI-driven efficiency gains as justification for headcount reductions. The displacement phase does not affect the majority of the professional middle class. It affects the workers at the margin — those whose tasks are most routine, most codifiable, most amenable to AI replication. But the margin is where the serpent enters, and the margin is wider than the aggregate employment statistics suggest.
The second phase is compression — the mechanism described in the previous chapter, in which wage premiums for a broader range of professional skills erode as AI-enabled competition increases the supply of workers capable of producing any given quality of output. The compression phase extends the distributional impact from the directly displaced workers to the broader professional middle class. It does not produce visible unemployment. It produces invisible stagnation — incomes that fail to rise, career trajectories that flatten, wage premiums that narrow — while the populations above the compressed middle class capture AI-augmented gains that accelerate their own trajectories. The compression phase is more insidious than displacement precisely because it is invisible to standard measurement. Workers remain employed. They continue to produce. The aggregate statistics show a labor market that is functioning. The distributional reality — the erosion of the economic position of the professional middle class relative to the populations pulling away above them — is captured only by the granular, trajectory-sensitive measurement that Milanovic's framework demands.
The third phase is exclusion — the widening of the gap between populations that participate in the AI economy and populations that do not. This phase affects primarily the global poor — the two to three billion people without the connectivity, hardware, or educational infrastructure to use AI tools productively. Their absolute conditions may not decline, but the gap between their conditions and the conditions of the AI-augmented populations above them widens at an accelerating rate. The exclusion is not the result of active deprivation. It is the result of differential acceleration: the AI-augmented populations accelerate while the excluded populations remain at their prior trajectory. The gap compounds over time, and the compounding makes future catch-up progressively more difficult, because the institutional infrastructure needed to participate in the AI economy becomes more complex and more expensive as the AI economy advances.
Each phase has a tipping point beyond which the distributional dynamics become self-reinforcing. The displacement phase tips when the number of displaced workers exceeds the capacity of retraining programs and social insurance systems to absorb them. The compression phase tips when the erosion of middle-class wage premiums reduces the tax base that funds the public investments — in education, infrastructure, social insurance — that moderate the compression. The exclusion phase tips when the gap between the AI economy and the non-AI economy becomes large enough that the institutional investments needed to bridge it exceed the political will of the populations asked to fund them.
Milanovic's historical analysis reveals a pattern in the political dynamics of distributional crises that is relevant to the serpent trajectory. Populations that experience relative stagnation — the elephant valley — respond with political dissatisfaction that can be channeled through existing institutions: elections, policy debates, incremental reform. The populist movements of the 2010s, however disruptive, operated within the democratic framework. They produced electoral outcomes, policy shifts, and institutional adjustments that, while insufficient, remained within the bounds of institutional politics. Populations that experience absolute decline — the serpent valley — respond differently. The historical record shows that absolute economic losses produce political upheaval that threatens the institutional order itself. The machine-breaking of the Luddites, the revolutionary movements of 1848, the labor violence of the early twentieth century — these were responses not to relative stagnation but to absolute immiseration, and they operated outside institutional boundaries because the institutions had failed to prevent the immiseration.
The AI serpent phase, if it materializes, would unfold among a population that is more educated, more connected, more politically capable, and more institutionally embedded than any population that has previously experienced a serpent-phase distributional crisis. The professional middle class of the developed world is not the handloom weavers of 1812. They have organizational capacity, communication tools, and political influence. Their response to absolute economic losses would be faster, more coordinated, and more consequential than any previous distributional backlash.
The speed of the AI transition compounds the danger. Historical serpent phases unfolded over decades, providing time — however inadequate — for institutional adjustment. Industrial automation displaced manual workers gradually enough that institutional responses could at least partially moderate the damage. AI is displacing knowledge tasks in months and years. A task that was secure in January may be automatable by June. A wage premium that was robust at the beginning of the year may be compressed by its end. The speed compresses the serpent phase into a period too short for the institutional responses that moderated previous serpent phases — the labor laws, the educational reforms, the social insurance expansions — to be designed, legislated, funded, and implemented.
The serpent is not a prediction. It is a conditional trajectory — the distributional path that the current institutional architecture enables by default if no corrective action is taken. The conditionality is the critical point, because it means the serpent is avoidable. Each phase can be moderated by institutional intervention. The displacement phase can be cushioned by robust social insurance — adequate unemployment benefits, severance requirements, transition support. The compression phase can be moderated by strengthening labor's bargaining position — through collective bargaining rights, profit-sharing requirements, minimum standards for AI-augmented productivity sharing. The exclusion phase can be narrowed by investing in the institutional infrastructure of developing nations — digital connectivity, educational systems, financial access — on timelines that match the speed of the AI transition rather than the speed of traditional development assistance.
But the interventions must arrive in time. Every month that the AI transition proceeds without adequate institutional response is a month in which the distributional trajectory deepens. The serpent does not announce itself. It forms gradually, through the accumulation of individual distributional shifts — a displaced worker here, a compressed premium there, a widening gap between the connected and the disconnected — that are individually modest and collectively transformative. The aggregate statistics remain positive throughout. GDP rises. Productivity improves. The technology discourse celebrates the gains. The distribution, visible only to those who measure it, tells a different story. And by the time the aggregate statistics begin to reflect the distributional damage — by the time the political consequences become impossible to ignore — the serpent has already coiled, and the institutional response required to uncoil it is correspondingly more costly, more contested, and more difficult to achieve.
The serpent is the cost of institutional failure. The elephant is the cost of institutional delay. The only distribution that reflects genuine shared prosperity is one that is built deliberately, through institutional architecture designed for the specific distributional characteristics of the AI transition, constructed at a speed that matches the speed of the technological change, and maintained with the continuous attention that distributional justice has always required.
---
Segal builds his central metaphor around the beaver — the animal that neither refuses the river nor pretends it is benign, but studies the current carefully enough to know where structures can redirect the flow. The metaphor is more useful than most in the technology discourse because it captures something essential: that the response to a powerful force is neither surrender nor resistance but architecture. Architecture that requires continuous maintenance, because the current never stops pressing. Architecture that serves not just the builder but the ecosystem downstream.
But the dams Segal describes are primarily attentional and educational — structures that help individuals navigate the cognitive demands of AI-augmented work. Attentional ecology. The cultivation of questioning. The builder's ethic. These are real contributions, and they are insufficient to the distributional challenge, because they operate at the level of the individual while the distributional dynamics operate at the level of the system. Teaching a developer in Lagos to ask better questions does not change the value chain that channels the majority of the value she creates to shareholders in San Francisco. Cultivating judgment in an engineer in Trivandrum does not alter the global labor market that compensates Indian engineers at a fraction of American rates for equivalent output. Individual dams in a systemic river redirect individual streams. They do not change the river's course.
The dams that the distributional analysis demands are institutional — structures that operate at the level of the economy, the labor market, the tax system, the international governance framework, and the corporate governance structures that determine who captures the surplus from AI-augmented productivity. These dams are harder to build than individual dams. They require political will, legislative action, international coordination, and sustained maintenance against the constant pressure of interests that benefit from the current distribution. But they are the only structures with the scale and the durability to redirect a distributional flow that operates at the level of the global economy.
At the national level, the most consequential dam is the reform of how AI-derived income is taxed. The current tax architecture in most developed nations was designed for an economy of wage labor and physical capital. It taxes wages at progressive rates, which captures labor income effectively. It taxes capital gains — the primary channel through which AI's productivity gains flow to the top of the distribution — at preferential rates that are lower than the rates applied to ordinary income. In the United States, the maximum federal rate on long-term capital gains is twenty percent, compared to thirty-seven percent on ordinary income. The differential is a subsidy for capital accumulation, and in the AI era, it is a subsidy for the concentration of AI-augmented wealth at the trunk of the elephant.
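The size of that differential is easy to make concrete. The sketch below applies the two statutory top federal rates cited above to a single hypothetical gain; it ignores surtaxes, state taxes, and bracket thresholds, so it illustrates the rate gap rather than computing any actual liability.

```python
# The size of the preferential-rate subsidy named above, computed for a single
# hypothetical gain. Only the two statutory top federal rates from the text are
# used; surtaxes, state taxes, brackets, and deductions are all ignored, so this
# is an illustration of the differential, not a tax calculation.

gain = 10_000_000          # hypothetical AI-derived equity gain, in dollars
capital_gains_rate = 0.20  # top federal long-term capital gains rate (from the text)
ordinary_rate = 0.37       # top federal rate on ordinary income (from the text)

tax_as_capital_gain = gain * capital_gains_rate
tax_as_ordinary_income = gain * ordinary_rate
subsidy = tax_as_ordinary_income - tax_as_capital_gain

print(f"tax if treated as capital gain:     ${tax_as_capital_gain:,.0f}")
print(f"tax if treated as ordinary income:  ${tax_as_ordinary_income:,.0f}")
print(f"implicit subsidy from the rate gap: ${subsidy:,.0f}")
```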
The reform is not technically complex. Eliminating the preferential rate for capital gains — taxing all income at the same progressive rates regardless of source — would capture a substantially larger share of AI-derived wealth for public investment. Additional measures — taxing unrealized capital gains above a threshold, closing the carried-interest provision that allows venture fund managers to characterize their compensation as capital gains, implementing a minimum effective tax rate for corporations with substantial AI-derived revenue — would further broaden the base. The revenue generated by these reforms would fund the investments in education, social insurance, and digital infrastructure that the AI transition demands.
The obstacles are political, not analytical. The populations that would bear the cost of reform — the shareholders, the founders, the venture capitalists at the trunk of the AI elephant — are politically organized, well-funded, and effective at blocking redistributive legislation. Milanovic has consistently argued that the redistribution of income, while necessary, is insufficient because the mobility of capital allows the wealthy to shelter income in lower-tax jurisdictions. His preferred approach — the redistribution of endowments rather than income — focuses on equalizing access to the assets that generate income: education, wealth, and capital ownership. In the AI context, this translates to policies that broaden ownership of AI capital — employee stock ownership in AI-deploying firms, public equity stakes in AI companies that receive government research subsidies, sovereign wealth funds that capture some of the returns from publicly funded AI research — rather than relying exclusively on after-the-fact redistribution of income that has already been earned and is already mobile.
At the firm level, the critical dam is the structure of surplus-sharing. The twenty-fold productivity multiplier generates an enormous surplus — the difference between the cost of AI tools and the value of the augmented output. Under current corporate governance structures, the allocation of this surplus is determined by management and shareholders, with workers receiving their contracted compensation regardless of the productivity gain their augmented labor produces. The structural incentive is to capture the surplus as margin — higher profits, higher stock prices, higher returns to investors. The individual leader who shares the surplus with workers is admirable. The structural incentive that penalizes sharing is the distributional reality.
Profit-sharing requirements — mandatory distribution of a percentage of AI-augmented productivity gains to the workers who produce them — would alter this structural incentive. The mechanism has precedents: several European countries require or incentivize profit-sharing arrangements, and the empirical evidence suggests that profit-sharing firms are not less competitive than their non-sharing counterparts. Worker representation on corporate boards — the codetermination model practiced in Germany and several Nordic countries — would give workers a voice in surplus-allocation decisions. Employee stock ownership programs would give workers a direct stake in the capital appreciation that AI generates. Each mechanism redirects a portion of the surplus from capital to labor, moderating the capital-labor split that is the primary channel through which AI's gains concentrate at the top.
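The mechanics of such a requirement are simple enough to sketch. In the illustration below, the surplus figure, the headcount, and the mandated share are all hypothetical; the point is only how a mandated percentage converts a surplus that would otherwise be retained as margin into per-worker distributions.

```python
# A sketch of how a mandatory profit-sharing rule would reallocate the surplus
# described above. The surplus figure, headcount, and mandated share are all
# hypothetical; the point is only the mechanics of the reallocation.

ai_surplus = 5_000_000   # assumed annual AI-augmented surplus at one firm
headcount = 100          # workers whose augmented labor produced it
mandated_share = 0.30    # hypothetical required share distributed to workers

to_workers = ai_surplus * mandated_share
to_capital = ai_surplus - to_workers

print(f"distributed to workers: ${to_workers:,.0f} "
      f"(about ${to_workers / headcount:,.0f} per worker)")
print(f"retained as margin:     ${to_capital:,.0f}")
```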
At the international level, the dam that is most urgently needed and most politically difficult to construct is a governance framework that addresses the geographic concentration of AI value capture. The current international order lacks the institutional capacity to govern a technology that is developed in a handful of nations and deployed globally, that generates revenue in every country and books profits in whichever country offers the most favorable tax treatment, and that extracts value from the global periphery to the institutional core through platform rents, infrastructure charges, and concentrated capital ownership.
Digital taxation — the principle that revenue generated in a country should be taxed in that country, regardless of where the generating firm is headquartered — is the foundational reform. The OECD's Base Erosion and Profit Shifting framework established the principle; implementation remains fragmentary and contested. Technology transfer provisions — ensuring that AI capabilities are shared with developing nations on terms that support institutional development rather than deepening dependence — would begin to address the structural reproduction of the digital periphery. Minimum international tax agreements would prevent the race to the bottom in AI taxation that regulatory fragmentation enables. Each reform is politically contentious, technically demanding, and institutionally complex. Each is also necessary, because without international coordination, national reforms are undermined by the mobility of digital capital and the ease with which AI operations can be relocated across jurisdictions.
Milanovic has argued throughout his career that distributional outcomes are the product of institutional choices, not technological inevitabilities. The argument is supported by the entire historical record of technological transitions. The same technology — industrial machinery, electrical power, digital computing — produced dramatically different distributional outcomes in different institutional contexts. The AI transition is no different. Its distributional outcome is being determined, right now, by choices that are being made or not made about taxation, corporate governance, labor law, educational investment, and international coordination.
The historical record also shows that the distributional dams required to moderate technological transitions have never been built proactively. They have always been built reactively — in response to distributional crises that produced political upheaval severe enough to force institutional action. The labor laws of the late nineteenth century were responses to decades of industrial immiseration. The social insurance systems of the early twentieth century were responses to mass unemployment and social instability. The post-war welfare states were responses to the catastrophic inequality that had contributed to two world wars. In each case, the institutions were built after the distributional damage was done, at a cost in human suffering that proactive institutional construction could have substantially reduced.
The AI transition offers an opportunity — narrow, urgent, and historically unprecedented — to break this pattern. The distributional dynamics are visible early, because the analytical frameworks for identifying them exist and because the speed of the transition makes early signals legible to those who know how to read them. The institutional tools for moderating the dynamics are known, because they are the same tools — adapted, updated, redesigned for the specific characteristics of the AI transition — that have moderated every previous distributional crisis. The political conditions for institutional action are, at this moment, more favorable than they will be once the distributional damage deepens and political positions harden.
The window is narrow because the speed of the AI transition compresses the time available for institutional response. It is urgent because every quarter that passes without adequate institutional architecture is a quarter in which the distributional trajectory deepens toward the serpent. And it is historically unprecedented because never before has the analytical framework for understanding distributional dynamics been available early enough to motivate proactive institutional construction rather than reactive institutional repair.
Whether the opportunity is seized depends on political choices that distributional analysis can inform but cannot make. The analysis can show the shape of the curve. It can identify the populations in the valley. It can document the mechanisms through which gains concentrate at the top. It can describe the institutional architectures that have moderated distributional crises in the past. It can estimate the costs of institutional failure and the benefits of institutional action. What it cannot do is generate the political will to act.
The political will must come from the populations whose distributional position is at stake — the professional middle class in the valley, the workers whose premiums are being compressed, the citizens whose tax systems are failing to capture the gains that AI generates. Their capacity to organize, to demand, to hold political institutions accountable for distributional outcomes is the variable that the analytical framework cannot supply. The history of distributional justice has never been a history of wise analysts persuading benevolent governments to build dams. It has been a history of affected populations organizing, demanding, and building the political coalitions that forced governments to act.
The analysis provides the map. The construction requires the builders. And the construction must begin now — not after the distributional damage hardens, not after the political coalitions calcify, not after the serpent coils — but now, while the curve is still being drawn and the institutions are still malleable and the distributional outcome of the most powerful amplifier in human history is still, for a brief and narrowing window, undetermined.
---
The history of distributional justice is not a history of analysts publishing correct diagnoses and governments implementing rational responses. It is a history of affected populations organizing, fighting, and building the political coalitions that forced institutional change. The distinction matters because the AI transition has produced, so far, an abundance of diagnosis and an almost complete absence of political mobilization around distributional outcomes. The analytical framework exists. The distributional dynamics are visible to anyone trained to read them. The institutional prescriptions are known. What is missing is the political agency that translates analysis into architecture — the builders of the distributional dams.
Every previous distributional crisis produced its builders through a specific mechanism: the formation of collective identity among the populations bearing the costs. The industrial revolution produced the labor movement because factory workers shared a common experience — the same factory floor, the same hours, the same wages, the same foreman — that made collective identity natural and collective action possible. The globalization transition produced, belatedly, the populist movements that reorganized the politics of the developed world, because the populations in the valley of the elephant curve eventually recognized their shared condition and expressed it through the available political channels. In each case, the collective identity preceded the institutional construction. People had to see themselves as belonging to a group with shared interests before they could organize to advance those interests. The seeing came first. The building followed.
The AI transition has not yet produced this collective recognition, and the structural features of AI-era inequality help explain why. The fractal character of the distributional differentiation — the fact that it operates within categories rather than between them, separating the AI-augmented from the AI-compressed within the same offices, the same professions, the same educational cohorts — makes collective identity formation extraordinarily difficult. The factory workers of the industrial revolution knew they were factory workers. The knowledge workers being compressed by AI do not yet have a name for what they are. They sit next to colleagues who are thriving with the same tools that are eroding their own position. They hold the same job titles, possess similar credentials, attend the same meetings. The distributional divergence between them is real and growing, but it lacks the categorical visibility that has historically been the precondition for political mobilization.
Milanovic's research on within-country inequality provides a framework for understanding this mobilization deficit. His data shows that between-group inequality — inequality between clearly defined categories such as racial groups, occupational classes, or educational levels — generates political conflict more readily than within-group inequality, even when the within-group inequality is larger in magnitude. The reason is cognitive: between-group inequality maps onto identities that people already hold, making the distributional experience legible as a group experience rather than an individual one. Within-group inequality is experienced as individual success or failure — as personal adaptability or personal obsolescence — rather than as a structural condition affecting a definable population. The AI-compressed knowledge worker who watches her AI-augmented colleague pull ahead does not think "my group is being squeezed." She thinks "I am falling behind." The individualization of the experience is the primary obstacle to the collective mobilization that distributional justice requires.
This individualization is reinforced by the dominant cultural narrative of the AI transition — the narrative that Segal captures in The Orange Pill with the phrase "are you worth amplifying?" The question is directed at the individual. It frames the distributional outcome as a function of individual quality — of judgment, adaptability, willingness to learn, capacity for the kind of creative direction that AI cannot replicate. The frame is not wrong; individual characteristics genuinely influence individual outcomes in the AI economy. But the frame is incomplete in a way that has political consequences. If the distributional outcome is understood as a function of individual quality, the political response to distributional inequality is individual self-improvement — better training, better prompts, better judgment. If the distributional outcome is understood as a function of institutional architecture — tax policy, corporate governance, labor law, international coordination — the political response is collective institutional construction. The cultural dominance of the individual frame suppresses the collective frame, and the suppression delays the mobilization that distributional justice requires.
The suppression is amplified by the specific psychology of the professional middle class — the population most affected by the AI squeeze. Milanovic has noted that the homoploutic elite's meritocratic self-perception provides ideological insulation against redistributive claims. The same meritocratic ideology operates in the professional middle class, but with the opposite distributional consequence. The professional who experiences AI-driven compression interprets her stagnation through a meritocratic lens: if the system rewards merit, and I am stagnating, then I must not be meritorious enough. The structural explanation — that her skills are being commoditized by a technological shift whose distributional consequences are determined by institutional architecture rather than individual merit — is available intellectually but resisted psychologically, because accepting it requires abandoning the meritocratic framework that has organized her professional identity since graduate school.
The result is a population that is being squeezed, that feels the squeeze, that experiences the vertigo Segal describes, but that lacks the collective identity, the structural analysis, and the political framework necessary to translate individual distress into collective action. The professional middle class of the developed world is, in distributional terms, the population most in need of institutional dams. It is also, in political terms, the population least equipped to build them, because the cultural narratives it inhabits — meritocracy, individual agency, the builder's ethic — frame distributional outcomes as individual rather than structural and thereby inhibit the collective mobilization that institutional construction requires.
This is not an argument against individual agency. Individual choices matter — the choice to develop AI-complementary judgment, to cultivate the questioning capacity that Segal describes, to maintain the attentional disciplines that protect cognitive depth. These are genuine contributions to individual resilience in a distributional landscape that is shifting beneath everyone's feet. But individual resilience is not distributional justice. The developer in Lagos who cultivates extraordinary judgment and builds a brilliant product with AI tools is individually resilient. She remains embedded in a value chain that channels the majority of the value she creates to shareholders in another hemisphere. Her individual resilience does not change the extractive architecture. Only collective institutional construction changes the architecture.
The missing builders of the distributional dams are the populations in the valley of the AI elephant who have not yet recognized their shared condition, who have not yet developed the collective identity that political mobilization requires, who have not yet built the organizational infrastructure that translates individual distress into institutional demand. The analytical framework for understanding their condition exists — Milanovic's distributional analysis provides it. The institutional tools for moderating their condition are known — progressive taxation, portable benefits, profit-sharing, international coordination. What is missing is the political bridge between the analysis and the action — the moment of collective recognition in which the populations bearing the distributional costs of the AI transition see themselves as a group with shared interests and begin to organize accordingly.
The history of distributional justice suggests that this recognition will come. It has come in every previous distributional crisis, though often later and at higher cost than proactive institutional construction would have required. The question is whether it comes early enough — before the serpent coils, before the distributional damage hardens into structural inequality that is resistant to institutional correction, before the political polarization that distributional crises produce makes collective action more difficult rather than less.
The analytical contribution that distributional economics can make at this moment is to accelerate the recognition — to make the distributional dynamics visible, legible, and undeniable before the affected populations have fully felt their consequences. The elephant curve did this for globalization, retrospectively and too late. The AI transition needs its equivalent, prospectively and in time. The curve must be drawn. The valley must be named. The populations in the valley must see themselves in the data. And the data must be presented not as an academic finding but as a call to construction — a demonstration that the distributional outcome is not determined by the technology, that it is determined by institutional choices, and that the choices are being made right now, by populations that are either organizing to shape them or allowing them to be shaped by default.
The builders are in the valley. They do not yet know they are builders. Making them see what they can construct — not individual resilience, but collective institutional architecture — is the most important contribution that distributional analysis can make to the political economy of the AI transition.
---
The distributional analysis of the AI transition arrives at a set of requirements that are analytically clear, institutionally demanding, and politically contested. They are requirements in the precise sense that the historical record supports: without them, the distributional trajectory follows the default path toward concentration, and the default path, in every previous technological transition, has produced distributional crises that were eventually moderated by institutional construction — but at a cost in human suffering that earlier construction could have substantially reduced. The requirements are not aspirational. They are the minimum institutional architecture necessary to prevent the AI elephant from becoming the AI serpent, derived from the structural characteristics of the technology, the institutional context in which it is being deployed, and the distributional evidence from every analogous transition in the historical record.
The first requirement is distributional visibility. What is not measured is not managed, and the distributional consequences of the AI transition are not being measured with anything approaching the granularity, frequency, or independence that the moment demands. The measurement infrastructure for globalization — the household income surveys, the purchasing power parity adjustments, the decomposition methods — took decades to develop and produced the elephant curve only retrospectively, after the distributional damage was largely done. The AI transition requires measurement infrastructure designed for its specific characteristics: higher frequency than annual surveys, finer granularity than standard occupational categories, trajectory-sensitive rather than point-in-time, and independent of the populations whose distributional position gives them an interest in the results.
The specific metrics are identifiable. The capital-labor split at the firm level — what share of AI-augmented productivity gains flows to workers versus shareholders — can be tracked through compensation data, profit disclosures, and equity-grant filings. The skill-premium dynamics — how the wage premium for specific capabilities changes as AI alters demand — can be tracked through longitudinal wage data disaggregated by AI exposure. The geographic distribution of value capture — how AI-generated revenue flows across jurisdictions — can be tracked through platform transaction data, tax filings, and cross-border payment flows. None of these metrics are currently collected with the frequency and granularity the moment requires. All of them could be, with modest investment in statistical infrastructure and a political commitment to distributional transparency.
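The first of those metrics reduces to a simple calculation. The sketch below computes a firm-level labor share from total compensation and operating profit; both figures are hypothetical, and a working version would draw them from the compensation data and profit disclosures named above.

```python
# One of the metrics named above, in its simplest form: the firm-level labor
# share, i.e. the fraction of value added paid out as compensation rather than
# retained as profit. The figures below are invented for illustration; in
# practice the inputs would come from compensation data and profit disclosures.

def labor_share(total_compensation: float, operating_profit: float) -> float:
    """Compensation as a share of the value added split between labor and capital."""
    return total_compensation / (total_compensation + operating_profit)

# Hypothetical firm, before and after adopting AI tooling.
before = labor_share(total_compensation=8_000_000, operating_profit=2_000_000)
after = labor_share(total_compensation=8_000_000, operating_profit=6_000_000)

print(f"labor share before AI adoption: {before:.0%}")  # 80%
print(f"labor share after AI adoption:  {after:.0%}")   # 57%
# If compensation is flat while profit rises, the share of the AI-augmented
# gains captured by labor falls, which is exactly the trajectory this metric
# is designed to surface, firm by firm and quarter by quarter.
```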
The second requirement is the reform of how AI-derived income is taxed. This is the most consequential fiscal intervention for the AI transition, and its logic is direct: the productivity gains from AI flow disproportionately through capital channels — equity appreciation, capital gains, corporate profit — that are taxed at preferential rates or, so long as the gains remain unrealized, not taxed at all. The preferential treatment of capital income is a subsidy for concentration. Eliminating it — taxing all income at the same progressive rates regardless of source — would capture a substantially larger share of AI-generated wealth for public investment without requiring any novel fiscal mechanism. The additional measures that Milanovic's endowment-redistribution framework suggests — broadening ownership of AI capital through employee equity programs, public equity stakes in AI firms that benefit from publicly funded research, sovereign wealth funds that capture returns from the AI transition — address the structural cause of concentration rather than merely redistributing its symptoms.
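To see the scale of the subsidy in miniature, consider a toy calculation. The rates and the income mix below are hypothetical, chosen only to illustrate the arithmetic of preferential capital taxation; they are not a model of any actual tax code.

```python
# A toy illustration of the revenue gap created by preferential capital taxation.
# Rates and income mix are hypothetical, not a model of any real tax code.

def tax_due(labor_income: float, capital_income: float,
            labor_rate: float, capital_rate: float) -> float:
    """Stylized flat rates on each income source; real codes are bracketed."""
    return labor_income * labor_rate + capital_income * capital_rate

# Hypothetical founder whose AI-driven gains arrive mostly as realized equity appreciation.
labor_income, capital_income = 500_000.0, 9_500_000.0

preferential = tax_due(labor_income, capital_income, labor_rate=0.37, capital_rate=0.20)
uniform      = tax_due(labor_income, capital_income, labor_rate=0.37, capital_rate=0.37)

print(f"tax under preferential capital rate: {preferential:,.0f}")            # 2,085,000
print(f"tax under uniform progressive rate:  {uniform:,.0f}")                 # 3,700,000
print(f"revenue forgone by the preference:   {uniform - preferential:,.0f}")  # 1,615,000
```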
The third requirement is the construction of portable benefit systems that follow workers across jobs, occupations, and employment modes. The traditional model of employer-provided benefits — health insurance, retirement savings, disability protection — was designed for an economy of long-term employment with a single employer. The AI transition is accelerating the dissolution of this model by increasing the pace of job transitions, expanding contingent and gig-based work, and creating a labor market in which career paths are less predictable and more frequently interrupted. Portable benefits — attached to the worker rather than the employer, maintained across transitions, funded through contributions from employers, workers, and public sources — would provide the floor of economic security that enables workers to invest in the retraining and adaptation the AI transition demands. Without this floor, the cost of transition falls entirely on the individual worker, and the workers least equipped to bear that cost — those with the least savings, the fewest alternative options, the most precarious existing positions — bear it most heavily.
The fourth requirement is the strengthening of workers' collective capacity to bargain over the distribution of AI-augmented productivity gains. The capital-labor split that determines who captures the surplus from AI-augmented work is not determined by economic law. It is determined by relative bargaining power, and bargaining power is an institutional variable that can be altered by policy. Sectoral bargaining frameworks — in which wage and benefit standards are negotiated for entire industries rather than individual firms — would prevent the race to the bottom in which each firm's competitive pressure to capture the surplus as margin undermines the workers at every other firm. Works councils or worker representation on corporate boards would give employees a voice in the specific decisions — hiring, compensation, surplus allocation — that determine distributional outcomes at the firm level. These mechanisms are not speculative. They operate in multiple European economies and have demonstrated that strong labor institutions are compatible with economic dynamism and technological adoption.
The fifth requirement is international coordination to prevent the regulatory arbitrage that undermines national distributional reforms. AI operations are mobile in ways that manufacturing operations are not. A firm that faces higher taxes or stronger labor requirements in one jurisdiction can relocate its AI operations — or the legal entities that book its AI-derived profits — to a jurisdiction with more favorable terms. The mobility of digital capital makes unilateral national reform vulnerable to competitive undercutting, and the vulnerability provides a convenient justification for inaction: why build a dam if the water will simply flow around it? The answer is coordination — minimum international standards for AI taxation, digital revenue allocation, and labor protections that prevent the floor from being undercut by jurisdictional competition. The OECD's framework on base erosion and profit shifting provides a starting architecture. Extending it to the specific characteristics of AI-derived income is technically feasible and politically necessary.
The sixth requirement is educational reconstruction on a timeline that matches the speed of the technological transition rather than the speed of traditional educational reform. The educational systems of most nations are preparing students for a labor market that is being restructured beneath them. The emphasis on implementation skills — coding, data analysis, domain-specific technical knowledge — addresses the labor market of five years ago. The labor market of five years from now will reward judgment, integrative thinking, evaluative capacity, and the ability to direct AI tools toward problems worth solving. The educational reconstruction required is not a matter of adding AI modules to existing curricula. It is a reorientation of educational philosophy from the development of implementation capacity to the development of judgment — the capacity to ask good questions, to evaluate competing answers, to synthesize across domains, to exercise the ethical reasoning that determines whether amplified capability serves human needs or merely accelerates extraction.
These six requirements are not a policy platform. They are the institutional minimum that the distributional evidence demands. They are derived from the same analytical framework that identified the distributional consequences of the industrial revolution, the globalization transition, and every analogous technological shift in the historical record. They are adapted to the specific characteristics of the AI transition — its speed, its geographic concentration, its capital-intensive value chain, its fractal distributional dynamics. And they are, at the present moment, almost entirely unimplemented.
The gap between what the distributional evidence demands and what the institutional architecture provides is the central political fact of the AI transition. The gap is not a matter of analytical uncertainty — the requirements are clear. It is not a matter of technical infeasibility — the mechanisms are known. It is a matter of political will — the willingness of democratic societies to construct the institutions that the moment requires, against the resistance of populations whose distributional position gives them an interest in the status quo.
Milanovic's career has been spent demonstrating that distributional outcomes are not natural phenomena. They are institutional products — the results of specific choices made by specific people through specific political processes. The AI transition does not change this fundamental truth. It intensifies it. The gains are larger. The concentration is more extreme. The speed is faster. The institutional gap is wider. And the consequences of institutional failure are more severe, because the technology's power as an amplifier means that whatever distribution emerges — concentrated or shared, extractive or generative — will be amplified to a degree that previous technologies could not achieve.
The curve is being drawn. Its shape is not yet determined. The analytical framework for understanding what determines the shape is available. The institutional tools for bending the shape toward equity are known. The political will to deploy them is the variable that the analysis cannot supply. But the analysis can — and must — make the stakes visible: the distance between the elephant and the serpent, the populations in the valley, the mechanisms through which concentration compounds, the historical cost of institutional delay, and the narrowing window in which institutional construction remains possible before the distributional trajectory hardens into the structural inequality that no subsequent intervention can easily undo.
Distribution is the question. Institutions are the answer. And the answer is being written, or not written, in the political choices that are being made right now — by the voters who choose their representatives, by the representatives who write the laws, by the firms that design their governance structures, by the international bodies that coordinate or fail to coordinate, and by the populations in the valley who are beginning, slowly and with increasing urgency, to recognize that their distributional condition is not a personal failure but a structural feature of a transition whose outcome remains, for a brief and narrowing window, in their hands.
---
A number changed everything for me.
Not the twenty-fold multiplier — though that number reshaped my company and rewired my assumptions about what a small team could build. The number that changed everything was subtler, buried in Milanovic's data, and it took me weeks to understand why it wouldn't leave me alone. The citizenship premium: the empirical finding that where you are born explains more of the variation in your lifetime income than your education, your talent, your work ethic, or any other individual characteristic that the meritocratic narrative says matters most.
I had built my career inside that meritocratic narrative. I believed it — not naively, not without qualification, but fundamentally. Work hard. Build well. The tools reward quality. The market sorts for value. When I wrote in The Orange Pill about the developer in Lagos and the engineers in Trivandrum and drew a line connecting them to my own experience, I was drawing a line of shared capability. Same tools. Same potential. The floor had risen for everyone.
Milanovic drew a different line through the same three points, and his line measured something I had not been measuring: not what each person could build, but what each person captured from building it. The gradient was steep. The same tools, the same nominal capability, dramatically different economic returns — determined not by talent or effort but by the institutional infrastructure surrounding the builder. The developer in Lagos paid for tools in a currency that cost her more to acquire, hosted on infrastructure priced for wealthier markets, distributed through platforms that extracted rent for shareholders an ocean away. She built. The value chain extracted. My line showed convergence. His showed a gradient of capture that reproduced the very inequality I thought the tools were dissolving.
That was uncomfortable. What followed was worse.
The capital-labor split — the question of who captures the surplus when AI makes a worker twenty times more productive — is a question I face in my own boardroom. I described in The Orange Pill the quarterly conversation where the arithmetic sits on the table: if five people can do the work of a hundred, why not have five? I described choosing to keep the team. I still believe that was the right choice. But Milanovic's framework forced me to see that my choice, however sincere, is an individual decision operating against structural incentives that push every other firm in the opposite direction. The market rewards margin expansion. The competitor who cuts captures the surplus as profit, reports higher earnings, attracts cheaper capital. My individual ethics do not change the structural incentive. Only institutions change structural incentives. And the institutions — the tax systems, the labor frameworks, the governance structures that would redirect the surplus toward the workers who produce it — are weaker right now than at any point in the post-war era.
The concept I cannot stop thinking about is the serpent. Not the elephant — the distributional curve where the middle stagnates while the top pulls away. The serpent — the curve where the middle falls. Where the professional middle class doesn't just fail to keep pace but actually loses ground. Where absolute conditions deteriorate. Where the vertigo I described in my book turns into something harder, something with political consequences that the aggregate productivity numbers cannot anticipate and that no individual act of ethical leadership can prevent.
I am a builder. I believe in building. I believe that the AI tools we have created represent a genuine and historic expansion of human capability. Nothing in Milanovic's analysis changes that belief. What his analysis changes is my understanding of what building means and what building requires.
Building a product is individual. Building the institutional architecture that determines whether a product's value is shared broadly or captured narrowly — that is collective. That requires the kind of political engagement that builders in the technology industry have historically avoided, preferring to let the tools speak for themselves and trusting that capability expansion will translate automatically into shared prosperity.
It does not translate automatically. It has never translated automatically. Every previous expansion of capability — the industrial revolution, electrification, globalization, digital computing — required decades of institutional construction before the gains were broadly shared. The construction was never initiated by the people at the top of the distribution. It was forced by the people in the valley.
The people in the AI valley — the professional middle class whose premiums are compressing, the knowledge workers whose implementation skills are being commoditized, the global populations participating in value chains that extract more than they return — have not yet fully recognized their shared condition. When they do, the demand for institutional architecture will be urgent, and the quality of what gets built will depend on whether the builders in the technology industry have, by then, contributed to the design.
I keep the team. I invest in the top line. I build dams in my own river. But I now understand that these are individual choices operating in a structural vacuum, and that filling the vacuum — building the distributional dams that match the scale of the distributional challenge — requires a kind of building I have not yet learned how to do.
Milanovic measured what I was not measuring. The curve he would draw for the AI transition has not yet been drawn, but its shape is being determined right now, by choices that are being made or not made, by institutions that are being built or not built. The shape is not inevitable. But the default — the shape the curve takes when no one builds the dams — is one I do not want my children to inherit.
That is why this book exists. Not because the analysis is comfortable, but because the analysis is necessary. The aggregate says we are building something extraordinary. The distribution says the extraordinary thing is being built on a foundation that will not hold unless we build beneath it.
The foundation is the work that remains.
The technology discourse celebrates capability. Branko Milanovic — the economist whose elephant curve revealed what globalization's cheerleaders refused to see — measures something harder: distribution. Who captures the surplus when a tool makes one person twenty times more productive? Where does the value flow when a developer in Lagos builds on infrastructure owned in San Francisco? What happens to the professional middle class when the skills that sustained their incomes become abundant overnight? Drawing on four decades of inequality research and the sharpest distributional lens in contemporary economics, this book applies Milanovic's framework to the AI revolution with unsettling precision. The result is a map of who gains, who stagnates, and who falls — not because the technology failed, but because the institutions that determine distribution were never built for this moment. This is the book the aggregate numbers do not want you to read. The curve is forming. The valley is real. The question is whether we draw it before or after the damage is done.

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Branko Milanovic — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →