By Edo Segal
The bargain I made with myself turned out to be with someone else.
I mean this literally. For thirty years, I operated inside an implicit deal: invest in technical skill, build expertise, stay at the frontier, and the economy will reward you. I kept my end. I learned to code in assembler. I built companies. I shipped products. I stayed up later than anyone around me and learned faster than the curriculum demanded. The economy kept its end too — until it didn't.
What unsettled me about Robert Reich was not that he predicted the disruption. Plenty of people predicted it. What unsettled me was that he had mapped the *terms* of the bargain decades before the bargain broke. In 1991, he sorted the workforce into three categories — routine production, in-person service, symbolic analysis — and told the symbolic analysts they would win. He was right. We won. We won so thoroughly that we stopped asking who had written the rules of the game we were winning.
Then the machines learned to manipulate symbols.
I described in *The Orange Pill* what that felt like from the builder's side — the twenty-fold productivity multiplier, the trillion dollars of evaporating market value, the engineers who could suddenly build what entire teams could not. What I had not reckoned with was the distributional question underneath all of it. Not *can* AI do the work, but *who captures the value when it does*? Not *will* the transition happen, but *who bears the cost while it unfolds*?
These are not technology questions. They are political questions. And political questions have political answers — answers that depend on who is in the room when the rules get written.
Reich spent his career insisting on something that builders like me find deeply uncomfortable: that markets do not distribute gains fairly on their own. That the rules governing who wins and who loses are written by people, not by physics. That those rules can be rewritten — but only by people who show up to rewrite them.
I brought Reich into this series because the technology lens is not sufficient. You can understand every capability of every model and still miss the question that determines whether the AI revolution lifts people or hollows them out. That question is about power, about institutions, about who writes the contract between the economy and the people who live inside it.
The river of intelligence does not care about your preferences. But the channels it flows through were dug by human hands. Reich taught me to look at the hands.
— Edo Segal × Opus 4.6
Robert Reich (1946–present) is an American political economist, professor, author, and public servant who served as the twenty-second United States Secretary of Labor under President Bill Clinton from 1993 to 1997. Born in Scranton, Pennsylvania, Reich earned degrees from Dartmouth College, Oxford University as a Rhodes Scholar, and Yale Law School, where he was a classmate of Bill and Hillary Clinton. He has been a professor at Harvard's John F. Kennedy School of Government and has held the Chancellor's Professorship of Public Policy at the University of California, Berkeley, since 2006. His landmark 1991 book *The Work of Nations: Preparing Ourselves for 21st-Century Capitalism* introduced the influential taxonomy of routine production workers, in-person service workers, and symbolic analysts, reshaping how policymakers and educators understood the global knowledge economy. His subsequent books, including *Saving Capitalism: For the Many, Not the Few* (2015) and *The System: Who Rigged It, How We Fix It* (2020), extended his analysis of how concentrated economic power shapes the rules governing markets, wages, and opportunity. A prolific public intellectual, Reich has reached millions through documentary films, social media commentary, and his Substack newsletter, consistently arguing that economic outcomes are determined not by inevitable market forces but by political choices about who writes the rules.
In 1991, Robert Reich sorted the American workforce into three boxes. The sorting was clean, persuasive, and — for three decades — largely correct.
The first box held routine production workers. These were the people on assembly lines, in data-processing centers, at the repetitive end of manufacturing. Their work followed rules. Do this, then this, then this. The rules could be written down, and anything that could be written down could eventually be automated. Reich predicted these workers would lose. They did.
The second box held in-person service workers. Janitors, waiters, home health aides, retail clerks. Their work required physical presence and human contact. A robot could weld a car door, but it could not calm a frightened patient or cook a short-order breakfast while making small talk with a regular. These workers would not thrive, Reich predicted, but they would survive. They more or less did.
The third box held the symbolic analysts. Programmers, lawyers, consultants, designers, financial engineers, writers, architects — anyone whose work consisted of manipulating symbols: words, numbers, images, code. These workers solved problems by rearranging abstractions. They identified patterns, built models, drafted arguments, wrote software. Reich predicted these workers would win — and win big — because their skills were the skills the global economy rewarded most generously. They were right to invest in education, right to pursue credentials, right to develop the capacity for complex symbolic manipulation that the knowledge economy valued above all else.
For thirty years, the prediction held. The symbolic analysts captured an ever-larger share of national income. The wage premium on a college degree, and especially on a graduate degree, widened steadily. The knowledge economy became the economy, and the people who worked with symbols became its aristocracy.
Then the machines learned to manipulate symbols.
This is the fact that breaks Reich's taxonomy wide open. Not gradually, not at the margins, but at the center. The entire analytical framework of *The Work of Nations* rested on a foundational assumption: that symbolic manipulation was the domain most resistant to automation. Routine production work could be mechanized because it followed rules. In-person service work resisted mechanization because it required a body. Symbolic analysis resisted mechanization because it required a mind — and machines, in 1991, did not have minds.
By the winter of 2025, they had something close enough to change the equation. Large language models could write code, draft legal briefs, analyze financial data, generate architectural renderings, compose music, and produce marketing copy — not perfectly, but competently. Not with the depth of a twenty-year veteran, but with the breadth of a capable junior associate. The specific skills that defined the symbolic analyst class — the manipulation of language, number, image, and code — were precisely the skills that artificial intelligence replicated most effectively.
Reich himself recognized the inversion. In a September 2025 appearance on PBS NewsHour, he updated his taxonomy. The three categories were no longer routine production, in-person service, and symbolic analysis. They were making, thinking, and caring. "The making jobs really have been automated over the last thirty to forty years," he said. "The thinking jobs are the ones that are most at risk from artificial intelligence." The caring jobs — childcare, eldercare, nursing, psychotherapy, any work whose essence is human empathy and human touch — would resist AI longest.
The revision is striking because of what it concedes. The category Reich once positioned at the apex of the economic hierarchy — the symbolic analysts, the thinkers, the knowledge workers — is now the category he identifies as most exposed. The winners of the knowledge economy have become the primary targets of the AI economy. The very skills they were told to develop, the skills they invested decades and hundreds of thousands of dollars to acquire, are the skills the machines learned to perform.
Segal documents this inversion from the inside. In *The Orange Pill*, he describes the moment a Google principal engineer sat down with Claude Code, described a problem in three paragraphs of plain English, and received a working prototype of a system her team had spent a year building. "I am not joking," she posted, "and this isn't funny." The observation is diagnostic. The person most qualified to understand what the tool was doing — a principal engineer at one of the world's most sophisticated technology companies — was the person most shaken by what it meant. Her expertise was not being augmented. It was being reproduced, at a fraction of the cost and a fraction of the time, by a tool available to anyone with a subscription.
This is not the disruption that Reich's original framework predicted for symbolic analysts. In *The Work of Nations*, the threat to routine production workers was clear: their tasks followed rules, and rules could be encoded. The threat to symbolic analysts was supposed to come from global competition — from equally skilled analysts in Bangalore or Beijing willing to do the same work for less — not from machines that could do the work without any human analyst at all. Globalization threatened to redistribute symbolic analysis geographically. AI threatens to automate it entirely.
The distinction matters because it determines the available responses. When symbolic analysis was redistributed geographically, the American symbolic analyst could respond by moving up the value chain — by developing more specialized expertise, more sophisticated analytical capabilities, more complex problem-solving skills that justified a premium over the cheaper competition. When symbolic analysis is automated by AI, the value-chain response is less available, because AI does not merely compete at the bottom of the symbolic-analysis hierarchy. It competes across the hierarchy, performing routine symbolic tasks with ease and approaching non-routine symbolic tasks with increasing competence.
Reich's updated taxonomy — making, thinking, caring — captures this new reality, but it also raises a question his original framework was not designed to answer. If the thinking jobs are the ones most at risk, what happens to the tens of millions of people whose entire economic strategy was built on becoming thinkers? What happens to the educational institutions that trained them, the credential systems that certified them, the professional communities that sustained them, the cultural narratives that told them they were making the right investment?
The answer is playing out in real time, and the early evidence is not encouraging. Segal describes in *The Orange Pill* the dichotomy he observed among senior engineers: some leaning into the technology with an intensity bordering on compulsion, others moving to rural areas to lower their cost of living in anticipation of professional obsolescence. Fight or flight, and both responses are rational. The fighter sees the tool as an amplifier of her existing capabilities — a way to do more, build more, reach further. The one who flees sees the tool as a replacement for her existing capabilities — a signal that the investment has been devalued and the position is no longer secure.
Both are reading the same evidence. Neither is wrong.
What Reich's framework contributes to this moment is the insistence that the disruption is not a natural phenomenon. Markets do not automate symbolic analysis the way weather produces storms. Markets automate symbolic analysis because specific companies build specific tools, deploy them under specific terms, capture specific revenues, and operate under specific rules — rules about intellectual property, about labor, about competition, about taxation — that are themselves the product of political choices. The question is not whether AI will displace symbolic analysts. The technology makes that likely. The question is who captures the gains from the displacement, and who bears the costs.
This is the question that the technology discourse, including the discourse in *The Orange Pill*, tends to treat as secondary to the question of capability. Segal's book is fundamentally about what AI can do and what it means for human potential. Reich's framework insists on a prior question: who owns the AI? Who wrote the rules governing its deployment? Who profits from its output? And who is left without a chair when the music stops?
"Right now, the answers are clear," Reich wrote in a 2024 essay. "A handful of technology companies own the most powerful AI systems. They write the rules — the terms of service, the licensing agreements, the pricing structures — that determine who can use these systems and how. And they capture the vast majority of the economic value that the systems generate."
The three categories need revision not merely because the technology has changed but because the distribution of power has changed. In 1991, symbolic analysts possessed something that routine production workers did not: skills that could not be easily replicated. This scarcity gave them bargaining power. They could demand premium wages because the supply of people who could do what they did was limited. AI eliminates this scarcity for a significant and growing portion of symbolic work. When the scarcity disappears, the bargaining power disappears with it. The symbolic analyst who could once command a premium because her skills were rare now faces a market in which those skills are available to anyone with a hundred-dollar-a-month subscription.
The political implications are immediate. A workforce that has lost its bargaining power is a workforce that cannot negotiate the terms of its own transition. The routine production workers who were displaced by automation in the 1980s and 1990s lacked the political power to shape the rules governing their displacement — and the rules that were written reflected the interests of the people who wrote them, not the workers who were displaced. The question now is whether the symbolic analysts will suffer the same fate, or whether their remaining cultural and institutional power will be sufficient to shape the rules governing the AI transition in ways that protect their interests and serve the broader public good.
Reich's taxonomy was never just a description of the workforce. It was a diagnosis of the distribution of economic power. The three categories were not merely occupational classifications. They were power classifications. Routine production workers had little power. In-person service workers had slightly more. Symbolic analysts had the most, because their skills were scarce and their institutional position was strong. AI is reclassifying the power distribution by eliminating the scarcity that sustained the symbolic analysts' position, and the reclassification is happening faster than the institutional and political responses that might cushion it.
The three categories need to be revisited not because they were wrong but because they were right — right about the mechanism by which economic power is distributed, and therefore right about the mechanism by which economic power can be lost. The symbolic analysts are losing their position through exactly the process that Reich described thirty-five years ago: the automation of the tasks that defined their economic value. The only difference is that the tasks being automated are not routine physical production. They are routine cognitive production — the writing, coding, analyzing, and designing that the symbolic analysts were told would be safe.
The irony is structural and it is bitter. The workers who did everything right — who invested in education, who developed the skills the economy rewarded, who followed the path that every institution in their society told them was the right path — are now the workers most directly in the line of fire. The rules of the game changed, and the people who played by the old rules are paying the price.
Markets did not change these rules. People changed these rules — people who built AI systems, people who invested in AI companies, people who wrote the regulations (or chose not to write them) that govern AI's deployment. The rules can be changed again. But only if the people affected by the rules have enough power to demand it. And that power, for the symbolic analysts, is draining away with every month that AI capabilities expand.
---
The economic value of the symbolic analyst was always a function of scarcity. This is the point that gets lost in the nostalgia. The programmer commanded a premium not because programming was inherently noble or socially valuable — much of what programmers built was trivial, some of it was harmful — but because relatively few people could do it. The lawyer commanded a premium not because legal reasoning was a public good — much legal work served narrow private interests — but because the barriers to entry were high enough to keep the supply of lawyers well below the demand for legal services. The designer, the financial analyst, the management consultant, the architect — in every case, the premium was a scarcity premium. The skills were hard to acquire, the training was expensive and time-consuming, and the credential systems that certified the training restricted the supply of qualified practitioners.
AI dissolves this scarcity. Not completely. Not overnight. But fast enough to change the economics.
Consider what happened in software development. Before the winter of 2025, building a professional-quality software application required either a team of trained engineers or years of individual study in programming languages, frameworks, deployment systems, and the accumulated craft knowledge that distinguished working software from broken software. The imagination-to-artifact ratio — Segal's term from *The Orange Pill* for the distance between a human idea and its realization — was large enough that only the trained could cross it. The training took years. The years justified the premium.
Segal describes what happened in a room in Trivandrum, India, in February 2026. Twenty engineers, experienced technical people who had been building software for decades, sat across from him while he told them that by the end of the week, each one of them would be able to do more than all of them together. By Friday, the claim had been validated. A twenty-fold productivity multiplier, at a hundred dollars per person per month.
The number is important, but the implication is more important. If one engineer augmented by AI can produce what twenty engineers produced without it, the demand for engineers does not increase twenty-fold. Unless the demand for software itself grows as fast as the productivity gain, the demand for engineers falls. The scarcity that justified the premium disappears. The symbolic analyst's bargaining position erodes.
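To make the arithmetic concrete — a minimal sketch, assuming demand for software output stays fixed rather than expanding to absorb the new capacity — let $D$ be total demand for software output and $q$ one engineer's unaugmented output, so the market supports $E = D/q$ engineers. A twenty-fold productivity gain shrinks that number accordingly:

$$E = \frac{D}{q} \quad\longrightarrow\quad E' = \frac{D}{20q} = \frac{E}{20}$$

The fixed-demand assumption is the hinge of the argument: if demand for software expanded twenty-fold to match, employment would hold. The market repricing described below suggests investors are not betting on that.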
This is not speculation. It is already visible in the market. By February 2026, a trillion dollars of market value had vanished from software companies. Workday had fallen thirty-five percent. Adobe had lost a quarter of its value. Salesforce had dropped twenty-five percent. The market was repricing the entire software industry based on a simple calculation: if the cost of producing software approaches zero, then the companies that sell software at premium prices are overvalued. The repricing was not about the software itself. It was about the human labor that the software encoded. When that labor can be replicated by AI, the value proposition changes.
Reich would frame this repricing as a power shift, not a technology story. The market value that evaporated from software companies did not disappear. It migrated — to the companies that build and control the AI systems, to the investors who hold stakes in those companies, to the executives who capture the productivity gains as profit rather than distributing them as wages. The symbolic analysts whose labor was encoded in the software saw the value of that labor repriced downward. The owners of the AI systems that performed the repricing saw the value of their assets repriced upward. This is not creative destruction in the Schumpeterian sense — a general gain with temporary dislocations. This is a transfer of economic value from one class of actors to another, and the direction of the transfer is from labor to capital.
"It comes down to who has the power," Reich told a CNBC panel in 2023. The observation applies with particular force to the symbolic analysts, because the symbolic analysts had power that other workers did not. They had educational credentials that restricted competition. They had professional associations that set standards and regulated access. They had institutional positions — in universities, law firms, consulting practices, technology companies — that gave them influence over the rules governing their industries. They had, in short, the apparatus of professional privilege: the accumulated institutional weight of a class that had spent decades building barriers to entry.
AI does not demolish these barriers directly. But it makes them less effective, because it enables uncredentialed individuals to produce work that approaches professional quality. The junior developer who uses Claude Code to build a complete feature in a weekend has not earned the senior developer's credentials. She has not invested the senior developer's years of training. She has not deposited the layers of understanding that Segal describes in *The Orange Pill* as the sedimentary process of expertise. But she has produced a working artifact that the market cannot distinguish from the senior developer's output — and the market, as Reich has spent his career arguing, does not care about the process. It cares about the result.
This is the specific threat that the symbolic analyst faces: not the elimination of expertise but the decoupling of expertise from output. The expertise still exists. The senior developer still possesses knowledge that the junior developer augmented by AI does not. But the market's willingness to pay a premium for that knowledge depends on the knowledge producing visibly superior output — and when AI narrows the output gap between the expert and the novice, the premium narrows with it.
The effect is wage compression within the symbolic analyst class. The top of the distribution — the most senior, most experienced, most judgment-rich practitioners — retains its premium, because the judgment that AI cannot replicate becomes more visible and more valuable when the implementation work that AI can replicate is stripped away. But the middle and lower tiers of the distribution — the practitioners whose value was primarily in execution rather than judgment — face a shrinking premium that may not recover.
Reich predicted something like this in a February 2023 essay, though even he may not have anticipated the speed. "Now comes AI — which will replace almost all professional work," he wrote. "At every stage, productivity has increased dramatically, so fewer workers have been needed to accomplish what came before." He went further: the professional class itself would be the constituency that pushed for universal basic income, because the professionals would be the ones who needed it. "My prediction: it will be the high-level professional class, including top business executives and the wizards of finance, who push for the most obvious solution: a guaranteed universal basic minimum income for everyone, financed by a tax on AI."
The prediction is counterintuitive but structurally sound. The working class has been asking for economic security for decades and has been largely ignored. The professional class has not needed to ask, because its bargaining position was strong enough to command it. When the professional class's bargaining position weakens — when the scarcity that sustained it dissolves — the professional class will discover that economic security is a political product, not a market outcome, and that the political systems that provide economic security require the same kind of organized advocacy that the working class has been practicing, with limited success, for generations.
The symbolic analyst under siege is not merely losing income. The symbolic analyst is losing the structural position that made income security possible without political mobilization. The symbolic analyst is becoming, in economic terms, what the routine production worker has been for decades: a person whose livelihood depends on political choices about the distribution of economic gains rather than on the scarcity of the skills she possesses.
This transformation is politically significant because it changes the political calculus of the AI transition. When the displaced workers are routine production workers — the factory workers, the data processors, the assembly-line operators — the political system can afford to ignore them, because they lack the institutional power to demand attention. When the displaced workers are symbolic analysts — lawyers, programmers, designers, financial analysts, physicians, architects — the political system confronts a constituency that has institutional power, cultural authority, and the organizational capacity to make demands.
Whether the symbolic analysts will use that capacity is an open question. The professional class has historically preferred individual solutions to collective problems. The lawyer who loses her position looks for another position. The programmer who faces displacement learns a new skill. The culture of individual adaptation — what Segal calls "fight or flight" in *The Orange Pill* — runs deep in the symbolic analyst class, and the preference for individual over collective response is itself a product of the meritocratic ideology that told symbolic analysts their success was earned through individual merit rather than sustained by institutional arrangements.
But the AI transition may break this preference, because the scale of the disruption exceeds what individual adaptation can address. The symbolic analyst who reskills discovers that the new skill is itself subject to AI displacement. The symbolic analyst who moves to a new position discovers that the new position is being restructured around AI capabilities. The individual solutions fail not because the individuals are insufficiently adaptive but because the disruption is structural, and structural disruptions require structural responses.
The question, in Reich's framework, is who writes the structural response. If the symbolic analysts disengage — if they flee to the woods, as Segal describes, or retreat into defensive credential-hoarding — the response will be written by the technology companies whose interests are served by unregulated deployment. If the symbolic analysts engage — if they organize, advocate, and participate in the construction of the institutional frameworks that will govern the AI economy — the response may serve broader interests. But engagement requires the symbolic analysts to do something they have historically been reluctant to do: to recognize that their individual positions are contingent on collective conditions, and that those collective conditions are the product of political choices that can be shaped by political action.
Reich has been making this argument for decades, to an audience that was largely not listening, because the audience did not need to listen. The symbolic analysts were doing well. The system was working for them. The rules of the game rewarded their skills. Why would they worry about who wrote the rules?
Now they know.
---
Every generation of economic winners discovers the same uncomfortable truth. The skills that produced success in one environment become the exposure that produces vulnerability in the next. The adaptation was not wrong. The investment was not misguided. The expertise was real, the rewards were earned, and the path that led to economic security was, at the time, the rational path to take. The problem is that rationality is always local. The strategy that was optimal for the economy that existed is not necessarily optimal for the economy that is arriving, and the transition between the two is where the pain concentrates.
This is not a new pattern. It is one of the oldest patterns in economic history, and Reich has traced it across multiple cycles of technological disruption. The farmers who mechanized successfully in the early twentieth century were perfectly adapted to the agricultural economy. Their investment in equipment, their knowledge of crop rotation and soil management, their relationships with local suppliers and buyers — all of this was rational, productive, and valuable. Then the economy shifted. Mechanized farming produced surplus labor. The surplus labor migrated to cities. The cities industrialized. And the farmers who had invested most heavily in the agricultural economy were the ones most exposed to the transition, because their investment was specific to an environment that was disappearing.
The manufacturers who adapted to globalization in the 1980s and 1990s repeated the pattern. The factories that survived the first wave of automation did so by specializing — by investing in the specific skills, processes, and supply chains that globalized markets rewarded. The specialization was rational. It produced competitive advantage. And it created the exposure that the next disruption — digitization, and now AI — would exploit. The factory that specialized in precision machining for the aerospace industry had invested in equipment, training, and institutional knowledge that was optimized for a specific market. When that market restructured, the investment became a liability rather than an asset.
The symbolic analysts are living this pattern now. The programmer who invested twenty years in mastering a programming language, a set of frameworks, a particular approach to systems architecture, made a rational investment. The investment paid dividends for decades. The expertise was genuine, the market rewarded it, and the path from investment to reward was clear and well-trodden. Now AI produces competent code in the same languages, using the same frameworks, with the same architectural patterns — and the twenty-year investment does not protect against the disruption. It may even intensify the exposure, because the programmer's identity, her professional community, her sense of what constitutes valuable work, are all organized around the specific skills that AI replicates most effectively.
Segal captures this dynamic with an image from *The Orange Pill* that resonates beyond its immediate context. A senior software architect described himself as a master calligrapher watching the printing press arrive. The comparison is precise in ways the architect may not have intended. The calligrapher's skill was real. The beauty of hand-lettered text was genuine. The years of practice that produced the calligrapher's control over line and form were not wasted in any absolute sense. But the market for hand-lettered text was about to collapse, because the printing press could produce legible text at a scale and speed that hand-lettering could never match. The calligrapher's investment in his craft had been rational for the world that existed. It was catastrophically misaligned with the world that was arriving.
The irony cuts deeper than simple displacement. The symbolic analysts did not merely adapt to the knowledge economy. They built it. The programmers wrote the software that automated routine production. The consultants designed the organizational structures that distributed symbolic analysis globally. The financial engineers created the instruments that channeled capital toward knowledge-intensive industries. The educators trained the next generation of symbolic analysts. The entire institutional apparatus of the knowledge economy — the universities, the credential systems, the professional associations, the compensation structures — was built by symbolic analysts for symbolic analysts. The economy that AI is now disrupting is an economy that the symbolic analyst class designed.
There is a structural irony here that Reich's framework makes visible. The symbolic analysts built an economy optimized for their own skills. The optimization was so successful that it attracted the investment that produced the technology that now threatens those skills. The venture capital that funded AI research came from the profits of the knowledge economy. The data that trained the language models was the output of the knowledge economy — the code, the legal documents, the financial analyses, the design files, the written text that symbolic analysts had spent decades producing. The symbolic analysts generated the training data for the systems that are now competing with them. They built the economy that funded the research that produced their own potential obsolescence.
Reich would locate this irony within his broader analysis of how economic power operates. The symbolic analysts did not choose to fund their own displacement. They chose to build a knowledge economy, and the knowledge economy produced the conditions — the accumulation of symbolic output, the concentration of capital, the institutional infrastructure for technological research — that made AI possible and, in a certain sense, inevitable. The displacement is not a betrayal by the technology. It is a consequence of the technology that the symbolic analysts' own success made possible.
This understanding does not reduce the pain of the transition. It does, however, change the moral framing. The narrative that the technology discourse often employs — a narrative in which AI arrives as an external force that disrupts an innocent population — obscures the structural relationship between the symbolic analyst class and the technology that threatens it. The relationship is not external. It is intimate. The symbolic analysts are not being disrupted by a foreign invader. They are being disrupted by the logical extension of the economy they built.
The pattern of adaptive success becoming adaptive vulnerability also illuminates why the standard responses to disruption — reskilling, retraining, pivoting — are less available to the symbolic analysts than the disruption discourse assumes. The reskilling narrative assumes that the displaced worker can identify a new set of skills that the market will reward, invest in developing those skills, and resume her position in the economy with minimal disruption. This narrative worked, to a limited extent, for routine production workers who transitioned from manufacturing to service work. It is less applicable to symbolic analysts, for a specific reason: the skills that the reskilling narrative would direct them toward — higher-level judgment, creative direction, strategic thinking — are not skills that can be acquired through a training program. They are skills that develop over years of practice, and the practice that develops them is itself being restructured by AI.
The senior engineer in Segal's *Orange Pill* who discovered that his true value lay in the twenty percent of his work that was not automatable — the judgment about what to build, the architectural instinct, the taste — did not acquire that judgment through a retraining program. He acquired it through twenty years of doing the eighty percent that was automatable. The debugging, the refactoring, the patient accumulation of understanding through friction — these were the activities that deposited the judgment that AI cannot replicate. The judgment was a byproduct of the execution. Remove the execution, and you remove the process that produces the judgment.
This is the deepest vulnerability that adaptive success creates: the pipeline problem. The skills that the AI economy values most — the non-routine cognitive capacities of judgment, discernment, and creative direction — are produced through the routine cognitive activities that AI is automating. The surgeon's clinical judgment is deposited through thousands of hours of routine surgical practice. The lawyer's strategic instinct is developed through years of routine legal research and brief-writing. The programmer's architectural intuition is built through decades of routine coding and debugging. In each case, the non-routine capacity that AI cannot replicate is produced by the routine practice that AI can replicate. Automate the routine practice, and you disrupt the pipeline that produces the non-routine capacity.
The implication is that the AI transition threatens not only the current generation of symbolic analysts but the next generation's ability to develop the capacities that the current generation possesses. The senior practitioner who developed her judgment through years of friction can continue to exercise that judgment in the AI-augmented environment. But the junior practitioner who enters a profession in which the friction has been removed by AI may never develop the judgment that the profession requires. The profession retains its most experienced members while failing to produce their replacements.
Reich recognized this dynamic in his September 2025 PBS appearance when he distinguished between thinking jobs that AI could automate and caring jobs that required human empathy. But the distinction may be less stable than it appears, because the thinking jobs and the caring jobs are not entirely separate categories. The physician exercises both thinking and caring in every clinical encounter. The teacher exercises both thinking and caring in every classroom interaction. The designer exercises both thinking and caring in every project that serves a human need. If AI automates the thinking component while preserving the caring component, what remains may be a profession that cares without understanding — a profession that has the human touch but lacks the cognitive depth that makes the touch meaningful.
The vulnerability of the symbolic analyst is not that her skills are worthless. Her skills remain valuable, especially the non-routine capacities that AI cannot replicate. The vulnerability is that the institution that produced those skills — the long apprenticeship of routine practice through which non-routine capacity is developed — is being hollowed out by the same technology that makes the non-routine capacity more valuable. The symbolic analyst class faces a paradox: the capacities it needs most are the capacities whose development pipeline is most threatened.
Nations, not markets, will determine whether this paradox is resolved. Markets do not invest in the long-term development of human capacity. Markets invest in whatever produces returns in the current quarter. The development of judgment, discernment, and creative direction — the non-routine capacities that the AI economy values — requires institutional investment on a timescale that markets do not reward. It requires educational institutions that maintain the friction necessary for deep learning even as AI makes that friction optional. It requires professional structures that protect the apprenticeship pipeline even as AI makes apprenticeship less immediately productive. It requires a social contract that distributes the costs of the transition broadly enough that the next generation of symbolic analysts can develop the capacities the economy needs.
These are political choices, not market outcomes. And they will be made, well or badly, by institutions that the symbolic analyst class still has the power to influence — if it chooses to engage rather than flee.
---
The work of nations used to be production. Countries competed to manufacture goods more efficiently than their neighbors, and the wealth of a nation was measured by what it could make. Then the work of nations became information. Countries competed to process, analyze, and act on information more effectively than their rivals, and the wealth of a nation was measured by what it could think. In both eras, the competitive advantage went to the nation that organized its workforce around the activity the economy valued most: first making, then thinking.
The work of nations in the AI age is neither making nor thinking. AI makes. AI thinks. The work of nations is deciding.
This is not a subtle shift. It is a transformation of what national competitiveness means. The nation that wins the AI economy is not the nation that produces the most AI or deploys it most aggressively. It is the nation that develops the institutional capacity to direct AI toward the activities that serve its citizens — that cultivates in its population the capacities for judgment, ethical reasoning, and creative direction that determine whether AI capability is used wisely or squandered, deployed for the common good or captured by the few.
Reich has spent his career arguing that the competitiveness of nations depends on what he calls the "human capital" of their populations — the skills, knowledge, and capabilities of the people who live and work within their borders. In *The Work of Nations*, this meant investing in the education and training of symbolic analysts, because symbolic analysis was the activity the global economy rewarded most generously. The policy prescription was clear: educate your population for the knowledge economy, and the knowledge economy will reward your nation with growth, prosperity, and competitive advantage.
The policy prescription needs updating. Educating a population for the knowledge economy meant training symbolic analysts — people who could manipulate words, numbers, images, and code. Educating a population for the AI economy means training something different: people who can determine what symbolic manipulation should be performed, for what purpose, to what standard, and in whose interest. The distinction between performing symbolic analysis and directing it is the distinction between doing and deciding, and it maps onto a different set of human capacities than the ones the knowledge economy rewarded.
The capacities the AI economy rewards are not symbolic-analytical in the traditional sense. They include judgment: the ability to evaluate options, weigh competing considerations, and choose a course of action under conditions of uncertainty. They include taste: the capacity to distinguish between adequate and excellent, between functional and beautiful, between something that works and something that matters. They include ethical reasoning: the ability to determine not merely what can be done but what should be done, and to recognize that the two questions have different answers. And they include what might be called creative direction: the ability to envision a result before the process begins, to hold an image of the thing that should exist and to guide others — human and machine — toward its realization.
Segal describes this capacity in *The Orange Pill* through his own experience building Napster Station in thirty days — a product that would have taken months under conventional conditions. His role was not to write the code or design the hardware or engineer the audio. His role was to hold the vision of what the product should be and to direct the process that realized it. The vision was not abstract. It was specific: a particular user experience, a particular aesthetic, a particular interaction pattern. The specificity was the value. AI handled the implementation. The human held the direction.
The word "direction" matters. A director is not a producer. A producer makes things. A director determines what should be made, how it should be made, and whether the result meets the standard. The director's authority is not in her technical skill — the cinematographer has better command of the camera, the editor has better command of the cut, the actor has better command of the performance — but in her vision of the whole, her ability to see how the parts fit together, her judgment about what serves the work and what distracts from it.
The AI economy needs directors. Not in the Hollywood sense, though the analogy is apt. It needs people whose value lies not in performing any particular symbolic operation but in orchestrating the operations that AI performs — in determining the sequence, evaluating the output, adjusting the direction, and maintaining the standard. This is the meta-symbolic work that constitutes the new competitive advantage. The nation that cultivates directors will lead the AI economy the way the nation that cultivated symbolic analysts led the knowledge economy.
But here is the problem that Reich's framework illuminates with uncomfortable clarity: the development of directorial capacity does not follow the same institutional pathways that the development of symbolic-analytical capacity followed. The knowledge economy needed symbolic analysts, and the educational system responded by training them. Universities expanded their computer science departments, law schools, business schools, and design programs. The credential systems that certified symbolic-analytical competence — the degrees, the professional licenses, the standardized tests — were designed to evaluate the specific skills the knowledge economy rewarded. The pipeline from education to employment was well-defined, well-funded, and, for the most part, well-functioning.
The AI economy needs directors, but the educational system has no established pathway for producing them. Directorial capacity — judgment, taste, ethical reasoning, the ability to hold a vision and guide others toward it — is not the kind of capability that a curriculum can straightforwardly develop. It is cultivated through experience, through mentorship, through the slow accumulation of pattern recognition that comes from engaging with complex problems over extended periods. It is developed, in many cases, through the same routine practice that AI is now automating — which returns the analysis to the pipeline problem identified in the previous chapter.
Segal argues in *The Orange Pill* that education must shift from teaching students to produce toward teaching them to judge, to question, to determine what is worth building. The argument is correct in its direction but underspecified in its mechanism. How do you teach judgment? How do you cultivate taste? How do you develop the ethical reasoning that determines whether a capability should be deployed? These are not rhetorical questions. They are the central educational challenge of the AI age, and the answers will determine whether the work of nations is distributed broadly or concentrated in the hands of a few who happen to possess the directorial capacities that the economy rewards.
Reich's insistence that nations, not markets, decide is crucial here. Markets will not invest in the development of directorial capacity, because the returns on that investment are too diffuse, too long-term, and too difficult to capture. The market rewards the symbolic analyst who produces code today, not the director who will make good decisions about code in ten years. The market funds training programs that produce measurable skills, not educational experiences that cultivate unmeasurable judgment. Left to the market, the development of directorial capacity will be haphazard, unequal, and insufficient — and the nations that rely on markets to produce their directors will find themselves led by people whose directorial capacity was acquired by accident rather than by design.
The nations that invest in the development of directorial capacity — through educational reform, through institutional innovation, through the deliberate cultivation of the capacities that AI makes most valuable — will have a structural advantage in the AI economy. This advantage will not be visible in the short term, because directorial capacity does not produce measurable outputs in the way that symbolic-analytical skill does. The nation that trains more programmers can count the programs they write. The nation that cultivates more directors cannot count the decisions they make, because the value of a decision is visible only in retrospect, and often only years after the decision was made.
But the advantage will be decisive in the long term, because the AI economy will be shaped not by the capability of its machines — all nations will have access to roughly comparable AI technology within a few years of its development — but by the quality of the human decisions that direct those machines. The nation whose population can make better decisions about what AI should do will outperform the nation whose population deploys AI capability without the judgment to direct it wisely, regardless of the raw AI capability each nation possesses.
This is the new work of nations: not the production of symbolic analysis, which AI can perform, but the cultivation of the human capacities that determine whether AI capability is directed toward human flourishing or merely toward economic efficiency. The distinction between flourishing and efficiency is a political distinction, not a technological one. Markets optimize for efficiency. Nations — when they are functioning well — optimize for something broader: for justice, for opportunity, for the kind of widely shared prosperity that gives citizens a stake in the system and a reason to participate in its governance.
Reich's framework insists that this optimization does not happen automatically. It happens through political choice, through the construction of institutions that embody the values the nation chooses to prioritize, through the constant maintenance of those institutions against the pressure of concentrated economic power that would prefer to capture the gains for itself. The work of nations in the AI age is the work of building and maintaining the institutional infrastructure that directs AI capability toward the common good — the educational systems, the regulatory frameworks, the social contracts, the democratic processes through which citizens determine the rules governing the most powerful technology in human history.
"Markets determine what is efficient," Reich has argued throughout his career. "Nations determine what is just." The AI transition will be efficient regardless of how nations respond. Whether it will be just depends entirely on the institutional choices that nations make now — about who is educated, how they are educated, what capacities are cultivated, what safety nets are constructed, and how the gains from AI productivity are distributed.
These choices are being made in real time. The educational institutions are still training symbolic analysts. The credential systems are still certifying symbolic-analytical skill. The labor markets are still structured around the assumption that human value lies in the execution of symbolic tasks that AI can increasingly perform. The institutional infrastructure of the knowledge economy is still standing, but the economy it was built for is changing beneath it, and the gap between what the institutions provide and what the AI economy requires is widening with every month that passes.
The nations that close this gap will lead. The nations that allow it to widen will follow. And the widening, unlike the capabilities of AI itself, is not a technology problem. It is a political problem. It has a political solution. The solution requires investment, institutional innovation, and the political will to prioritize the cultivation of human directorial capacity over the short-term efficiency gains that unregulated AI deployment provides.
The work of nations has always been the work of choosing. The AI age has not changed this. It has raised the stakes.

---
There is a pattern in economic history that no one wants to see while they are living inside it. The pattern is this: the skills that produce success in one era become the specific vulnerabilities exploited by the next. The adaptation is not wrong. The investment is not foolish. The expertise is real, and the rewards are earned. But the very success of the adaptation creates the conditions for its undoing, because success attracts the capital and the attention that produce the next disruption — and the next disruption targets precisely the capabilities that the previous success was built on.
The pattern is old enough to have a name. Economists call it the "success trap." Biologists call it "overspecialization." Reich, characteristically, calls it a consequence of who writes the rules. But whatever the name, the mechanism is the same: optimize for the current environment, and you create the exposure the next environment will exploit.
Consider the American farmer at the turn of the twentieth century. Agriculture was the backbone of the economy. The successful farmer invested in mechanization — in tractors, combines, threshers — because mechanization increased output per acre and output per worker. The investment was rational. It paid off. Mechanized farms outcompeted unmechanized farms, and the farmers who mechanized earliest captured the largest gains. But mechanization also produced a surplus of agricultural labor. The surplus labor migrated to cities. The cities industrialized. And within a generation, the economic center of gravity had shifted from agriculture to manufacturing, and the farmers who had invested most heavily in agricultural optimization were the most tightly bound to an economic sector that was shrinking in relative importance.
The manufacturers repeated the pattern. The factories that survived the mid-twentieth century did so by investing in precision, in quality control, in the specific capabilities that globalized markets rewarded. The investment was rational. Japanese manufacturers who invested in lean production captured enormous global market share. American manufacturers who matched them survived. But the optimization also produced the conditions for the next disruption: the accumulation of process knowledge that could be codified, the development of supply chains that could be managed digitally, the creation of production systems that were so well-defined that they could be replicated anywhere — including in countries where labor costs were a fraction of what they were in the developed world. The manufacturers who had optimized most successfully were the ones most exposed to offshoring, because the precision of their optimization was precisely what made their processes replicable.
The symbolic analysts are the third iteration. Reich identified them as the winners of the knowledge economy, and he was right. The programmers, lawyers, consultants, designers, and financial engineers who invested in symbolic-analytical skills captured an increasing share of national income for thirty years. The investment was spectacularly rational. A computer science degree from a good university was, for decades, one of the most reliable paths to economic security available to a young person. The returns justified the cost. The expertise was genuine. The rewards were earned.
And the expertise was precisely what AI learned to replicate.
This is not coincidence. It is causation. The symbolic analysts' success attracted the capital that funded AI research. The output of their success — the code, the legal documents, the financial models, the written text, the design files — constituted the training data on which AI models were built. The institutional infrastructure of the knowledge economy — the universities, the research labs, the technology companies — provided the organizational context in which AI was developed. The symbolic analysts did not merely fail to anticipate their own disruption. They financed it, trained it, and built the institutions that brought it into being.
Reich made this point with characteristic directness in a 2024 essay: "We assume that AI will be working for us, but it seems more likely to be the other way around." The observation captures the structural irony with precision. The symbolic analysts built AI to serve their purposes — to automate the tedious parts of their work, to augment their capabilities, to increase their productivity. The AI did all of these things. It also learned to perform the work itself, which transformed the tool into a competitor. The servant learned to do the master's job.
The irony is compounded by the speed of the transition. Previous iterations of the success-trap pattern unfolded over decades. The agricultural transition took a generation. The manufacturing transition took thirty years. The symbolic-analyst transition is measured in months. Segal documents the velocity in *The Orange Pill*: ChatGPT reached one hundred million users in two months. Claude Code crossed $2.5 billion in run-rate revenue within weeks of its capability breakthrough. The adoption curves that previous technologies traversed over years, AI traversed in weeks — because, as Segal argues, the adoption speed measured not the quality of the technology but the depth of the pent-up need. The symbolic analysts had been waiting, without knowing they were waiting, for a tool that would close the gap between what they could imagine and what they could build. The tool arrived, closed the gap, and in closing it, demonstrated that the gap itself had been the source of the symbolic analysts' economic value.
The gap between imagination and artifact was the scarcity that justified the premium. Close the gap, and the premium disappears. The symbolic analysts purchased the very tool that eliminated their competitive advantage.
This is the irony of success in its purest form: the rational pursuit of capability produces the conditions for the obsolescence of that capability. The farmer who mechanized eliminated the demand for agricultural labor. The manufacturer who optimized eliminated the barriers to offshoring. The symbolic analyst who funded AI eliminated the scarcity of symbolic manipulation. In each case, the actor was rational. In each case, the investment paid off in the short term. And in each case, the long-term consequence was the erosion of the position that the investment was designed to secure.
Reich's contribution to understanding this pattern is the insistence that the pattern is not natural. It is not a law of economics or a feature of technological progress that the gains from disruption must be captured by capital while the costs are borne by labor. The pattern recurs because the rules governing the distribution of gains and costs are written by the actors with the most power, and the actors with the most power are typically the ones who benefit from the disruption rather than the ones who are displaced by it. The farmers who were displaced by mechanization did not write the agricultural policies of the early twentieth century. The manufacturers who were displaced by offshoring did not write the trade agreements of the late twentieth century. And the symbolic analysts who are being displaced by AI are not, so far, writing the rules governing AI's deployment.
"Increasingly," Reich wrote in Saving Capitalism, "antitrust law, labor law, property law, contract, liability, bankruptcy, taxes, and health and safety regulations have been shaped by big corporations and wealthy individuals. The result has been a vicious cycle in which ever more concentrated wealth translates into ever-greater political power to change the rules, in order to generate even more wealth at the top." The AI transition is the latest iteration of this cycle. The companies that build AI systems use the wealth generated by those systems to influence the rules governing their deployment — through lobbying, through campaign contributions, through the revolving door between industry and government, through the funding of research that supports their preferred policy positions.
The irony of the symbolic analysts' situation is that they possess, at least for the moment, the institutional power to break this cycle. The symbolic analysts staff the universities that produce the research on which policy is based. They populate the professional associations that set standards for their industries. They hold positions in the media organizations that shape public discourse. They have, in short, the cultural and institutional apparatus that could be mobilized to write rules governing the AI transition that serve their interests and the broader public good. But mobilization requires collective action, and collective action requires the recognition that individual adaptation is insufficient — and the symbolic analysts' defining ideology, the meritocratic belief that individual talent and effort determine individual outcomes, makes this recognition exceptionally difficult.
The success trap is not merely an economic phenomenon. It is a psychological one. The symbolic analyst who succeeded through individual effort in the knowledge economy believes, with the force of personal experience, that individual effort is sufficient to navigate any transition. The farmer who mechanized successfully believed the same thing. So did the manufacturer who optimized. The belief is not entirely wrong — individual effort matters, and the most adaptive individuals will fare better than the least adaptive in any transition. But the belief is dangerously incomplete, because it obscures the structural dimension of the disruption. No amount of individual adaptation can change the rules governing who captures the gains from AI and who bears the costs. Changing the rules requires collective action, and collective action requires solidarity, and solidarity requires the recognition that the meritocratic ideology — the belief that success is individual and earned — is an inadequate framework for responding to a structural disruption that affects an entire class simultaneously.
Reich has been making this argument for decades. The audience was not listening, because the audience did not need to listen. The symbolic analysts were winning. The rules were working in their favor. The system was delivering the rewards that the meritocratic bargain promised. Why would the winners question the rules of the game they were winning?
Now the rules have changed. The game has changed. And the winners of the last game are discovering, with a speed that leaves no time for the grief to be processed before the next capability announcement arrives, that winning the last game does not guarantee even participation in the next.
The irony is complete. The symbolic analysts built the knowledge economy. The knowledge economy funded the AI that threatens them. The AI is deployed under rules that the symbolic analysts did not write and are only beginning to contest. The success created the vulnerability. The adaptation produced the exposure. The investment financed its own disruption.
There is no villain in this story. There is a pattern, and the pattern is driven not by malice but by the logic of economic power: the actors who benefit from the current arrangements use their position to shape the next arrangements, and the actors who are displaced by the transition bear the costs unless they organize to distribute those costs more broadly. The symbolic analysts have the power to organize. They have the institutional position, the cultural authority, and the organizational capacity. What they lack is the recognition that organization is necessary — that the individual strategies that served them in the knowledge economy will not serve them in the AI economy, because the AI economy is restructuring the conditions under which individual strategy operates.
The question is whether the symbolic analysts will recognize this before their institutional power erodes to the point where recognition comes too late. The farmers waited too long. The manufacturers waited too long. The symbolic analysts have more power than either group had at the equivalent moment in their transition, but power is not permanent, and the window for using it is not infinite.
The irony of success is that it teaches the wrong lesson. Success teaches that the strategy that worked will continue to work. History teaches that the strategy that worked will be disrupted by the next environment, and the disruption will be funded by the proceeds of the success. The symbolic analysts are learning this lesson now. Whether they learn it in time to act on it is the political question of the decade.
---
Reich's original taxonomy had a gap. The gap was invisible for thirty years because it did not matter. Now it matters more than almost anything else in the economic analysis of AI.
The gap was this: Reich treated symbolic analysis as a single category. Programmers, lawyers, financial analysts, architects, writers, consultants — all were symbolic analysts, because all manipulated symbols for a living. The taxonomy distinguished symbolic analysis from routine production and from in-person service, but it did not distinguish within symbolic analysis between different kinds of cognitive work. It did not ask whether some symbolic analysis was routine and some was not.
The question did not need asking in 1991 because the answer did not affect the prediction. Both routine and non-routine cognitive work were performed by humans, and both were rewarded by the knowledge economy. The programmer who wrote boilerplate code and the programmer who designed novel system architectures were both symbolic analysts. The lawyer who drafted standard contracts and the lawyer who developed innovative litigation strategy were both symbolic analysts. The distinction between routine cognitive work (applying established patterns to predictable problems) and non-routine cognitive work (generating new patterns, exercising judgment in novel situations, making decisions under genuine uncertainty) was real but economically irrelevant, because the market rewarded both and machines could do neither.
AI changed the economics by automating one while leaving the other largely untouched.
The distinction is worth defining precisely because its imprecision is the source of most of the confusion in the current debate about AI and professional work. Routine cognitive work is symbolic manipulation that follows established patterns. It is the application of known methods to known types of problems. The programmer who writes a standard API integration is doing routine cognitive work. She is not making novel decisions. She is applying a well-established pattern to a well-defined problem. The work requires skill and training — it is not simple — but the skill and training consist primarily of knowing which pattern to apply, not of generating new patterns. The lawyer who drafts a standard commercial lease is doing routine cognitive work. The financial analyst who builds a quarterly earnings model from a standardized template is doing routine cognitive work. The designer who creates a website layout following established design conventions is doing routine cognitive work.
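To make the category concrete from the builder's side, here is a minimal sketch of the kind of standard API integration described above. Everything in it is hypothetical: the endpoint, the credential, the field names. The point is that the skill it demands is knowing the pattern, not inventing one, and that is exactly why this is the work AI replicates most fluently.

```python
# Routine cognitive work in miniature: a standard REST API integration.
# All names here (endpoint, token, fields) are hypothetical placeholders.
import json
import urllib.error
import urllib.request

API_URL = "https://api.example.com/v1/invoices"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                         # hypothetical credential

def fetch_invoices(customer_id: str) -> list[dict]:
    """Fetch a customer's invoices: build the request, attach auth,
    parse the JSON, handle the obvious failure. Pattern application
    from start to finish; no step requires a novel decision."""
    request = urllib.request.Request(
        f"{API_URL}?customer_id={customer_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return json.loads(response.read())["invoices"]
    except urllib.error.URLError:
        return []  # the conventional fallback, itself a convention
```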
Non-routine cognitive work is symbolic manipulation that generates new patterns or exercises judgment in situations where established patterns do not apply. The programmer who designs a novel system architecture to solve a problem that existing architectures cannot handle is doing non-routine cognitive work. The lawyer who develops a litigation strategy that exploits an ambiguity in the law that no previous case has addressed is doing non-routine cognitive work. The financial analyst who identifies a market pattern that existing models cannot explain is doing non-routine cognitive work. The designer who creates a user experience that redefines how people interact with a category of product is doing non-routine cognitive work.
The boundary between routine and non-routine is not sharp. Most professional work involves a mixture of both. The architect who designs a building spends some of her time applying building codes and structural standards (routine) and some of her time making aesthetic and functional decisions that no code or standard dictates (non-routine). The physician who treats a patient spends some of her time following diagnostic protocols (routine) and some of her time exercising clinical judgment in situations where the protocols do not provide clear guidance (non-routine). The ratio of routine to non-routine varies across professions, across individuals within professions, and across the career arc of any individual professional.
The ratio also varies in a way that has direct implications for who is most exposed to AI displacement: junior professionals do more routine cognitive work than senior professionals. This is not accidental. It is how expertise develops. The junior programmer writes boilerplate because writing boilerplate is how she learns the patterns that will eventually enable her to design novel architectures. The junior lawyer drafts standard contracts because drafting standard contracts is how she develops the familiarity with legal language and logic that will eventually enable her to develop innovative litigation strategies. The routine work is the apprenticeship through which non-routine capacity is cultivated.
AI automates the routine cognitive work with remarkable efficiency. This is not surprising. Routine cognitive work follows patterns, and pattern recognition is what large language models do. The code that follows established conventions, the legal documents that apply standard templates, the financial models that use standardized structures, the designs that follow established guidelines — all of this is work that AI can perform competently, because the patterns are well-represented in the training data and the task requires the application of existing patterns rather than the generation of new ones.
Non-routine cognitive work is more resistant to AI automation — not because AI cannot perform it at all, but because the performance is less reliable, less consistent, and more dependent on the quality of human direction. AI can generate novel code, but the novelty must be evaluated by a human who understands the problem well enough to judge whether the novel solution is actually better than the conventional one. AI can draft creative litigation strategies, but the strategies must be evaluated by a lawyer who understands the specific case well enough to judge whether the creative approach is brilliant or reckless. The non-routine work requires the judgment that Segal identifies in *The Orange Pill* as the surviving human contribution — the capacity to evaluate, to discern, to choose wisely among possibilities.
The economic implication is a divergence within the symbolic analyst class. The symbolic analysts who perform primarily routine cognitive work — the junior professionals, the mid-level practitioners who have reached competence but not mastery, the specialists whose expertise consists of applying established methods with reliability — face the sharpest displacement. Their work is the work AI replicates most effectively, and the premium the market paid for their skills was a premium on scarcity that AI is eliminating.
The symbolic analysts who perform primarily non-routine cognitive work — the senior practitioners whose judgment has accumulated through decades of experience, the creative professionals whose value lies in generating genuinely novel solutions, the strategic thinkers whose contribution is determining what should be done rather than doing it — face a different situation. Their skills become more visible and more valuable when the routine work that surrounded and obscured them is stripped away. The senior engineer in Segal's *Orange Pill* who discovered that the twenty percent of his work that was not automatable was the twenty percent that actually mattered is the paradigmatic example. The eighty percent was valuable in the old economy because it was necessary for the product to exist. In the AI economy, the eighty percent is handled by the machine. The twenty percent is what the human is for.
But here is the problem that makes this distinction less reassuring than it appears: the twenty percent is a product of the eighty percent. The non-routine capacity is developed through the routine practice. Strip away the routine practice, and you disrupt the developmental pipeline that produces the non-routine capacity.
This is the pipeline problem in its starkest form. The senior professional whose judgment is irreplaceable developed that judgment through decades of routine practice that AI now performs. The junior professional who enters the field in 2026 will not have the same opportunity for routine practice, because the routine work will be done by AI from the start of her career. She will be augmented from day one — more productive, more broadly capable, able to reach across disciplinary boundaries that would have confined her predecessors. But she may never develop the depth of understanding that her predecessors acquired through the slow, friction-rich accumulation of pattern recognition that routine practice provided.
Reich's framework adds a distributional dimension to this pipeline problem. The junior professionals who are most affected by the automation of routine cognitive work are disproportionately young, disproportionately early in their careers, and disproportionately the people on whom the future of the profession depends. The senior professionals who retain their value are disproportionately established, disproportionately well-compensated, and disproportionately the people who can afford the disruption. The AI transition, within the symbolic analyst class, redistributes value from the young to the old — from the people who need the apprenticeship to the people who have already completed it.
This redistribution is the opposite of what the profession needs. The profession needs to invest in the development of the next generation. Instead, the AI transition makes the next generation's developmental pathway less available and less rewarding, while making the current generation's accumulated judgment more valuable. The seniors benefit. The juniors bear the cost. And the profession, over time, hollows out — retaining its most experienced members while failing to produce their replacements.
"The real question," Reich has argued consistently, "is whether AI's productivity gains are widely shared." Within the symbolic analyst class, the early evidence suggests they are not. The gains are captured disproportionately by the senior practitioners whose judgment AI makes more visible and more valuable, and by the companies that deploy AI to reduce their dependence on junior labor. The costs are borne disproportionately by the junior practitioners whose routine cognitive work is most directly automated, and by the educational institutions that trained them for a labor market that no longer exists in its previous form.
The distinction between routine and non-routine cognitive work is not a clean division between "safe" and "exposed." The boundary shifts. What counts as non-routine today may be routine tomorrow, as AI capabilities expand and the frontier of what machines can do advances into territory that currently requires human judgment. The senior professional whose non-routine capacity is currently irreplaceable cannot assume that it will remain irreplaceable indefinitely. The history of AI capability is a history of confident predictions about what machines cannot do being overturned by machines that do exactly that.
But the distinction matters now, because it determines who bears the cost of the current transition and who captures the gains. It determines which educational investments are still rational and which have been overtaken by events. It determines what institutions need to be built to protect the developmental pipeline that produces the non-routine capacity the economy needs. And it determines the political constituency for reform: the junior symbolic analysts whose routine cognitive work is being automated, the educational institutions that trained them, and the communities that depend on the economic contributions of a professional class that is being restructured from within.
---
For forty years, the single most reliable path to economic security in the United States was the acquisition of symbolic-analytical skills. Get a college degree. Get a graduate degree. Learn to write code, draft legal arguments, build financial models, design user interfaces. The investment was expensive — decades of education, hundreds of thousands of dollars in tuition, years of foregone income — but the returns justified the cost. The wage premium on a bachelor's degree over a high school diploma averaged roughly eighty-four percent over the past four decades. The premium on a graduate degree was higher still. The symbolic analysts earned more because their skills were scarce, and scarcity, in a market economy, commands a price.
The premium was not merely an economic fact. It was a social organizing principle. The premium told young people what skills to develop. It told parents what education to invest in. It told universities what programs to expand. It told governments what training to fund. The premium was a signal — the market's way of communicating which human capabilities were most valued — and generations of Americans organized their lives around it.
AI is compressing that premium. Not eliminating it. Compressing it. The distinction matters, because compression does not mean that symbolic-analytical skills become worthless. It means that the gap between what the skilled earn and what the unskilled earn narrows, and the narrowing changes the economic calculus that has governed educational and career investment for decades.
The compression operates through a specific mechanism: AI raises the floor of competent symbolic performance without, in most cases, raising the ceiling. The junior programmer augmented by AI produces code that approaches the quality of the senior programmer's unaided work. The gap between the two narrows — not because the senior programmer has become less capable, but because the junior programmer has been amplified. The amplification is real and measurable: Segal describes a twenty-fold productivity multiplier in *The Orange Pill*, achieved by engineers using AI tools at a cost of a hundred dollars per month per person. The multiplier does not apply equally across the skill distribution. It amplifies the lower end more than the upper end, because the lower end has more room for amplification. The junior practitioner who lacked the accumulated knowledge to perform certain tasks independently can now perform them with AI assistance. The senior practitioner who already possessed the knowledge gains less from the assistance, because the assistance provides what she already had.
The result is compression: a narrowing of the performance gap between the more and less experienced, the more and less credentialed, the more and less invested. The narrowing does not reflect a convergence of actual human capability. The senior practitioner's judgment, taste, and depth of understanding remain superior to the junior practitioner's. But the market does not pay for capability in the abstract. It pays for output, and when AI narrows the output gap, the market narrows the compensation gap with it.
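A toy calculation makes the mechanism concrete. The numbers below are purely illustrative, chosen for arithmetic clarity rather than drawn from any measurement: normalize output so that the unaided senior practitioner produces 1.0 and the unaided junior 0.4, and suppose AI assistance lifts the junior to 0.9 while lifting the senior only to 1.2.

```latex
% Illustrative numbers only, not measured data.
% Output ratio before AI augmentation:
\[
\frac{o_{\text{senior}}}{o_{\text{junior}}} = \frac{1.0}{0.4} = 2.5
\]
% Output ratio with AI augmentation:
\[
\frac{o'_{\text{senior}}}{o'_{\text{junior}}} = \frac{1.2}{0.9} \approx 1.33
\]
```

The junior's output multiplied by 2.25 while the senior's multiplied by 1.2. If compensation tracks output, the pay ratio compresses from 2.5:1 toward 1.33:1: the floor rose faster than the ceiling, which is the compression in miniature.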
Reich predicted this dynamic in broad strokes. In his 2026 essay "AI and the Coming Jobless Economy," he posed a thought experiment: "Imagine a small box — call it an iEverything — capable of producing for you everything you could possibly desire. It is a modern-day Aladdin's lamp. You simply tell it what you want and — presto! — the item or service suddenly appears." The thought experiment was designed to illustrate the endpoint of AI-driven production efficiency. But it also illustrates the premium-compression dynamic at an earlier stage: as AI makes production cheaper and more accessible, the scarcity premium on the human labor that production previously required shrinks toward zero.
The shrinkage is not uniform. It varies across professions, across experience levels, and across the routine-to-non-routine spectrum described in the previous chapter. The premium on routine symbolic manipulation — the code that follows conventions, the legal documents that apply templates, the financial analyses that use standard models — is compressing fastest. This is the work AI performs most competently, and the more competently AI performs it, the less the market needs to pay a human to perform it instead.
The premium on non-routine symbolic manipulation — the judgment-intensive, context-dependent, genuinely creative work that AI performs less reliably — is compressing more slowly. In some cases, it may actually be expanding, because the automation of routine work makes the non-routine work more visible and more critical. When the eighty percent that was routine is handled by AI, the twenty percent that requires genuine human judgment becomes the totality of the human contribution — and the totality, by definition, commands the full value.
But the aggregate effect, across the symbolic analyst class as a whole, is compression. The top of the distribution retains or increases its premium. The middle and lower tiers of the distribution see their premiums shrink. The total income going to symbolic analysts may increase, as AI-augmented productivity generates more total output, but the distribution of that income within the class becomes more unequal: the best-compensated retain their position while the median and below-median experience real losses.
This within-class inequality mirrors the between-class inequality that Reich has documented for decades. The knowledge economy produced a widening gap between symbolic analysts and other workers. The AI economy may produce a widening gap within the symbolic analyst class itself — between the senior practitioners whose judgment commands an expanding premium and the junior practitioners whose routine capabilities command a shrinking one. The result is a professional class that is simultaneously more productive in aggregate and more unequal in its internal distribution.
The policy implications are significant. The educational system that trained symbolic analysts was designed around the assumption that the premium on symbolic manipulation was broad-based — that any young person who acquired symbolic-analytical skills would benefit from the premium, regardless of whether she ended up at the top, middle, or bottom of the professional distribution. This assumption justified the enormous investment in higher education that the knowledge economy demanded: if the premium was reliable and broad-based, the investment was rational for virtually everyone.
If the premium is compressing at the middle and bottom while expanding at the top, the investment calculus changes. Higher education remains rational for the students who will end up at the top of the distribution — the ones who develop the non-routine cognitive capacity that commands an expanding premium. But it becomes less rational for the students who will end up in the middle or below, where the premium is shrinking and the debt accumulated to capture it becomes harder to justify.
This is not an argument against education. It is an argument for a different kind of education — one that focuses less on the development of routine symbolic-analytical skills and more on the cultivation of the judgment, taste, and creative direction that command the expanding premium. The educational system that continues to train students primarily in coding, drafting, modeling, and analyzing — the routine cognitive skills that AI performs competently — is training them for the shrinking portion of the premium distribution. The educational system that trains students in judgment, discernment, ethical reasoning, and the capacity to determine what is worth building is training them for the expanding portion.
Reich would add, characteristically, that the compression has political consequences that extend beyond the professional class. The symbolic analysts' broad-based premium was a stabilizing force in the political economy. It gave the professional class a stake in the system. It justified the investment in education that the meritocratic ideology demanded. It provided the material foundation for the professional class's cultural authority — the authority of the class that had invested in itself and was visibly rewarded for the investment.
As the premium compresses, the material foundation weakens. The professional class that earns less has less stake in the system that produced its earnings. The meritocratic ideology that justified the investment in education becomes harder to sustain when the investment produces diminishing returns for a growing fraction of the class. The political stability that the broad-based premium provided begins to erode, because the people whose stake in the system was contingent on the premium find that the premium is no longer sufficient to justify the investment they made to capture it.
"People have got to have enough money in their pockets to buy all of the things that the new economy is capable of producing," Reich told CNBC in 2023. "If you've got AI and all of these platforms and you've got people who are basically no longer needed, huge numbers of people, then how do you get money in their pockets to buy everything the economy is capable of producing?" The question applies with particular force to the symbolic analysts, because the symbolic analysts are the consumers whose purchasing power drives the knowledge economy. Compress their premium, and you compress the demand that the economy depends on. The AI economy can produce more than any previous economy. Whether it can sell what it produces depends on whether the people who used to earn the premium that funded their purchases still earn enough to keep buying.
The compression of the symbolic-analyst premium is not a future possibility. It is a present reality, visible in the stock-market repricing that Segal documents in *The Orange Pill* — the trillion dollars of market value that vanished from software companies in early 2026 — and in the hiring freezes, the restructurings, and the shift from human labor to AI-augmented production that are reshaping the technology industry in real time. The premium that organized American economic life for forty years is shrinking, and the shrinkage is changing the economic calculus for individuals, institutions, and nations.
Markets will not reverse the compression. Markets produced the compression, because markets respond to scarcity, and the scarcity of symbolic manipulation is diminishing. The policy question is whether nations will intervene to cushion the compression, to redistribute the gains from AI-driven productivity broadly enough that the economy retains the purchasing power it needs to function, and to invest in the educational infrastructure that develops the human capacities commanding the expanding portion of the premium rather than the shrinking portion.
The answer, as always in Reich's framework, depends on who has the power to shape the rules. Right now, the power lies with the companies that build and deploy AI — companies that benefit from the compression because it reduces their labor costs. Whether the power can be redistributed to the workers and citizens who bear the costs of the compression is a political question that the premium's shrinkage makes increasingly urgent.
---
The symbolic analyst of the knowledge economy was a producer. She produced code, briefs, analyses, designs, models, reports. The value of her work was measured by its quantity and quality — by how much she produced, how well she produced it, and how quickly she produced it. The entire infrastructure of professional life — the performance reviews, the billable hours, the lines of code, the case dispositions — was organized around the measurement of productive output.
The symbolic analyst of the AI economy is becoming something different. Not a producer but a director — a person whose value lies not in the symbolic manipulation itself but in the determination of what symbolic manipulation should be performed, to what standard, for what purpose, and in whose interest.
The transition from producer to director is not a promotion. It is a transformation — a change in the fundamental nature of the work, the skills it requires, the institutions that support it, and the metrics by which it is evaluated.
The producer's skill is execution. She knows how to write the code, draft the brief, build the model. Her training prepared her for execution, her credentials certify her capacity for execution, and her compensation rewards her execution. The quality of her execution is measurable: the code either works or it does not, the brief either persuades or it does not, the model either predicts accurately or it does not. The measurement provides feedback, and the feedback enables improvement. The producer knows where she stands because the output tells her.
The director's skill is judgment. She knows what code should be written, what brief should be drafted, what model should be built. Her value lies not in performing the work but in determining whether the work should be performed at all — and if so, how, and by whom, and to what standard. The quality of her judgment is not immediately measurable. A decision to build one product rather than another does not produce a measurable result for months or years. A decision to pursue one legal strategy rather than another may not be vindicated until the case is resolved. A decision to invest in one technology rather than another may not prove wise or foolish for a decade. The director operates under uncertainty that the producer never faces, because the producer's feedback is immediate and the director's feedback is delayed.
Segal describes this transition in *The Orange Pill* through his own practice as what he calls a "creative director" — a person whose primary contribution is the vision of what should exist and the capacity to guide others toward its realization. His account of building Napster Station in thirty days illustrates the director's role: he did not write the code, design the hardware, or engineer the audio. He held the vision and directed the process. The AI handled the implementation. The human provided the direction.
The transition is real, and it is already reshaping the professional landscape. But it is also more difficult and more fraught than the technology discourse typically acknowledges, because the transition does not merely require new skills. It requires a new professional identity — a new understanding of what it means to be good at your work, a new set of metrics for evaluating your contribution, a new relationship with the output that used to define your professional self.
The producer knows she is good at her work because the work itself tells her. The code compiles. The brief wins. The model predicts. The feedback is concrete, immediate, and unambiguous. The director knows she is good at her work — how, exactly? The decision was right, but she will not know it was right for months. The vision was correct, but the correctness is visible only in retrospect. The judgment was sound, but soundness is a matter of assessment, not measurement, and assessment is always contestable.
This shift from measurable output to assessable judgment creates a specific form of professional anxiety that the technology discourse has not adequately addressed. The producer who transitions to a directorial role does not merely acquire new skills. She loses the primary source of professional validation that her career provided. The code no longer compiles under her fingers. The brief no longer bears her handwriting. The model no longer reflects her personal analytical choices. The work is done by AI, and her contribution — the direction, the judgment, the taste — is invisible in the final product. She directed the process, but the process produced an artifact that does not carry her mark. The artifact carries the mark of the machine that produced it, smoothed and polished in the way that AI output is always smoothed and polished, indistinguishable from the output that any other director using the same AI might have produced.
This is the identity crisis of the transition. The producer's identity was embedded in her output. The director's identity must be embedded in something else — in her judgment, her vision, her capacity for discernment — and these are qualities that resist the kind of external validation that output provides. The producer could point to her code and say: I built that. The director can point to a product and say: I directed that. But the claim feels thinner, less tangible, more dependent on the willingness of others to recognize the contribution that the visible artifact does not reveal.
Reich's framework locates this identity crisis within a broader structural transformation. The knowledge economy's institutions — the educational systems, the credential structures, the performance evaluation systems, the compensation models — were all designed for producers. They measured output. They rewarded execution. They certified the capacity for symbolic manipulation. The AI economy needs institutions designed for directors, and those institutions do not yet exist.
What would they look like?
An educational system designed for directors would not primarily train students to produce symbolic output. It would train them to evaluate it. It would develop the capacity for judgment — the ability to distinguish between code that works and code that is elegant, between a legal strategy that is defensible and one that is wise, between a product that is functional and one that matters. This capacity cannot be developed through lectures and examinations. It requires the kind of sustained engagement with complex, ambiguous problems that the traditional apprenticeship model provided — the mentorship, the guided practice, the slow accumulation of pattern recognition through repeated exposure to situations where the right answer is not obvious and the consequences of being wrong are real.
A credentialing system designed for directors would not certify knowledge. It would certify judgment. This is harder to do, because judgment is context-dependent, evolves over time, and resists the standardized evaluation that knowledge-based credentials employ. But the difficulty does not reduce the necessity. The current credential systems certify capabilities that AI can perform. The economy needs credential systems that certify capabilities that AI cannot perform — and the development of such systems is one of the most important institutional challenges the professional class faces.
A compensation model designed for directors would not reward output volume. It would reward output quality as determined by outcomes — by the long-term consequences of the decisions the director made, the value created by the projects she chose to pursue, the harm avoided by the projects she chose not to pursue. This requires compensation structures that are oriented toward the long term rather than the short term, toward outcomes rather than activities, toward the quality of judgment rather than the quantity of production.
Reich would note that markets alone will not build these institutions. Markets are optimized for the current environment, and the current environment still rewards production more reliably than it rewards direction. The educational systems that train producers are profitable because the demand for production credentials is high. The credentialing systems that certify knowledge are established because the infrastructure for knowledge evaluation exists. The compensation models that reward output are entrenched because the measurement of output is straightforward. The institutions that the AI economy needs — the ones that cultivate, credential, and compensate directorial capacity — require investment that the market will not provide, because the returns on that investment are too long-term, too diffuse, and too difficult to capture.
This is where the work of nations enters. The nation that invests in the institutional infrastructure for the directorial economy — the educational programs, the credentialing innovations, the compensation experiments, the professional communities that support the transition from production to direction — will have a structural advantage that no amount of AI capability can replicate. AI capability is a commodity. Every nation will have access to roughly comparable AI within a few years of its development. But the institutional capacity to direct AI wisely — to cultivate in the population the judgment, taste, and ethical reasoning that determine whether AI capability is used well or poorly — is not a commodity. It is the product of deliberate, sustained, politically supported institutional investment, and the nations that make that investment will lead while the nations that rely on markets to produce directors will follow.
"Markets determine what is efficient," Reich argues. "Nations determine what is just." The transition from producer to director will be efficient regardless of what nations do — the market will ensure that the most capable directors capture the highest rewards. Whether the transition will be just — whether the institutions that develop directorial capacity will be accessible broadly, whether the gains from AI-directed production will be distributed equitably, whether the symbolic analysts who cannot make the transition will be supported rather than abandoned — depends entirely on the political choices that nations make.
The symbolic analysts who become directors will not be the same professionals they were. The work is different. The skills are different. The metrics are different. The identity is different. The transition is not a step up a ladder. It is a move to a different structure entirely — and the structure is being built, imperfectly and incompletely, in real time, by the people who are making the transition while the ground shifts beneath them.
What the symbolic analysts become next depends on what they choose to become. And what they can choose to become depends on the institutions their nation builds to support the choice. The choice and the institution are inseparable. Neither works without the other. And both are products of political will, not market forces.
---
The disruption is not falling evenly, and anyone who says otherwise is selling something.
This is the core insight that Reich has hammered for forty years, across every technological transition he has analyzed: the question is never whether a new technology creates value. It almost always does. The question is who captures the value and who absorbs the cost. The distribution is never automatic. It is never natural. It is always the product of rules — rules about property, about labor, about competition, about taxation — and the rules are written by the people with the power to write them.
The AI transition is distributing its effects across the symbolic analyst class with a specificity that the general discourse about "AI and jobs" obscures. The disruption is not a uniform wave washing over the professional landscape. It is a targeted flood, hitting some positions and geographies and experience levels with devastating force while leaving others temporarily dry. Understanding the distribution is a prerequisite for building the institutional responses that the transition demands, because institutional responses that treat the disruption as uniform will fail the people who are most affected while wasting resources on the people who are least affected.
Start with experience level. The previous chapter established the distinction between routine and non-routine cognitive work and noted that junior professionals perform more routine cognitive work than senior professionals. The distributional consequence is stark: the AI transition hits the entry level hardest. The junior associate at the law firm, the first-year analyst at the investment bank, the newly hired software engineer — these are the positions where routine symbolic manipulation constitutes the largest share of daily work, and these are the positions where AI substitution is most direct.
The data is emerging in real time. Hiring in junior technology positions slowed dramatically in 2025 and 2026, even as the companies doing the hiring reported record productivity from their AI-augmented senior teams. The pattern is consistent with what the economic framework predicts: if AI can perform junior-level work competently, the demand for junior-level humans declines. The senior professionals whose judgment AI augments become more productive. The junior professionals whose execution AI replaces become less necessary.
The distributional consequence extends beyond the immediate labor market. Junior positions are not merely jobs. They are the entry point of a pipeline — the mechanism through which the next generation of professionals develops the judgment and expertise that the profession requires. When the entry-level positions contract, the pipeline narrows. Fewer young professionals gain the apprenticeship experience that produces the non-routine cognitive capacity that the AI economy values. The profession retains its current experts while failing to produce their successors.
Reich would identify this as a classic case of the market producing an efficient short-term outcome that is catastrophic in the long term. In the short term, it is efficient to replace junior labor with AI. The output is comparable. The cost is lower. The senior professionals who direct the AI are more productive. Every quarterly metric improves. In the long term, the profession that fails to develop its next generation of practitioners will find that the expertise it depends on is aging, retiring, and not being replaced. The efficiency of the current quarter becomes the fragility of the next decade.
Now consider geography. The symbolic analyst class is not evenly distributed across the national landscape. It is concentrated in metropolitan areas — in the coastal cities, the university towns, the technology hubs where the knowledge economy's institutions are densest. The AI transition affects these areas first and most intensely, because these are the areas where the highest concentration of routine symbolic work is performed. The law firms in Manhattan. The technology companies in San Francisco. The consulting firms in Chicago. The financial institutions in Boston. These are the places where the disruption is most visible, most immediate, and most consequential.
But the secondary effects radiate outward. The junior associate who is not hired in Manhattan does not disappear from the economy. She enters a different labor market — perhaps a smaller city, perhaps a different profession, perhaps the "woods" that Segal describes in *The Orange Pill*. Her displacement creates competitive pressure in the labor markets she enters, depressing wages and displacing workers who were previously insulated from the AI transition. The disruption propagates, like a wave moving outward from the point of impact, affecting communities and labor markets that have no direct relationship to AI but that feel its effects through the displacement of the workers who do.
Segal describes this propagation from the perspective of the builder. In *The Orange Pill*, he documents the democratization of capability — the expansion of who gets to build — and celebrates the developer in Lagos who can now access the same coding leverage as an engineer at Google. The celebration is warranted. The expansion of access is real, and its potential for reducing global inequality is genuine. But Reich's framework insists on asking the distributional question: who captures the value that the developer in Lagos creates? If the developer builds a product using AI tools provided by an American company, under terms of service written by that company, on a platform owned by that company, the value she creates flows upward to the platform owner while the value she captures is limited to whatever the platform's terms permit.
This is not a new dynamic. It is the platform economy applied to symbolic analysis. The ride-share driver creates value by providing transportation, but the platform captures a significant share of that value through its control of the marketplace. The AI-augmented developer creates value by building software, but the AI platform captures a significant share of that value through its control of the tools. The democratization of capability is real. The democratization of value capture is not guaranteed, and the distinction between the two is the distinction that determines whether the AI transition reduces inequality or deepens it.
Reich has been explicit about this danger. "It comes down to who has the power," he told CNBC. The observation applies with particular force to the distributional dynamics of the AI transition. The power to determine how the gains are distributed lies with the actors who control the AI platforms — the handful of companies that own the most powerful AI systems, set the terms of access, and capture the revenue that the systems generate. These companies are not villains. They are rational economic actors operating within the rules as they currently exist. But the rules as they currently exist do not require them to distribute the gains from AI productivity broadly, and in the absence of rules that require broad distribution, the gains will concentrate — because concentration is what unregulated market power produces.
The concentration is visible in the stock market data that Segal documents in *The Orange Pill*. The trillion dollars of market value that vanished from software companies in early 2026 did not vanish from the economy. It migrated — to the AI platform companies whose products displaced the software companies' human labor. The value moved from a large number of companies employing a large number of workers to a small number of companies employing a small number of workers augmented by AI. The total economic value may have increased. Its distribution became more concentrated.
Within the symbolic analyst class, the distribution follows a pattern that Reich has documented across previous transitions: the gains flow upward. The senior practitioners whose judgment AI amplifies capture more value. The companies that deploy AI to replace junior labor capture the savings. The AI platform companies that provide the tools capture a share of every transaction. The junior practitioners whose labor is displaced, the educational institutions that trained them, the communities that depended on their economic contributions — these bear the costs.
The political economy of this distribution is not determined by the technology. It is determined by the rules. Rules about intellectual property determine who owns the output that AI produces using training data generated by human symbolic analysts. Rules about labor determine whether displaced workers receive severance, retraining support, or nothing. Rules about competition determine whether the AI platform market remains concentrated in a handful of companies or is opened to broader participation. Rules about taxation determine whether the gains from AI productivity are taxed and redistributed or whether they accumulate untaxed in the balance sheets of the platform companies.
"The incipient AI industry has already become a major political force in the United States," Reich wrote in 2026, "supporting candidates who pledge not to regulate it and opposing candidates who intend to regulate it." The observation locates the distributional question where it belongs: in the political arena, where the rules are written. The distribution of the AI transition's costs and benefits will be determined not by the technology's capabilities but by the political choices that govern its deployment. The symbolic analysts who engage with those political choices — who organize, advocate, and participate in the rule-writing process — have a chance to shape the distribution in their favor and in the public interest. The symbolic analysts who disengage, who rely on individual adaptation rather than collective action, who flee to the woods or retreat into credential-hoarding, leave the rule-writing to the actors whose interests are served by concentrated gains and distributed costs.
The distribution of disruption is the political question of the AI transition. Every other question — how fast the technology will advance, which professions will be most affected, what new capabilities will emerge — is secondary to the question of who captures the gains and who bears the costs. The answer to that question is being written now, in legislative chambers and corporate boardrooms and the terms of service that govern access to AI platforms. The symbolic analysts have a seat at the table, if they choose to take it. The question is whether they will.
---
Every economic era has a social contract — a tacit agreement between the major actors in the economy about the terms on which they will cooperate. The social contract is not written in any document. It is not signed by any party. It is expressed in the institutions that govern economic life: the labor laws, the tax codes, the educational systems, the safety nets, the regulatory frameworks that determine the rules of the game.
The knowledge economy had a social contract. Its terms were implicit but widely understood. Workers would invest in education. They would develop symbolic-analytical skills. They would work hard, adapt to changing conditions, and compete on the basis of merit. In return, the economy would provide them with secure employment, rising wages, and the opportunity for upward mobility. Employers would invest in their workers — through training, through career development, through the kind of long-term employment relationship that gave workers a stake in the firm's success. The government would invest in education, maintain a safety net for those who fell through the cracks, and enforce the rules that kept the competition fair.
The contract worked, imperfectly but recognizably, for roughly three decades. The symbolic analysts who fulfilled their end of the bargain — who invested in education, developed expertise, worked hard, and competed on merit — were rewarded with the security and mobility the contract promised. Not everyone benefited equally. The contract was always better for the symbolic analysts than for the routine production workers and in-person service workers who occupied the lower tiers of Reich's taxonomy. But the contract existed, and its existence provided a framework within which the major actors in the economy could cooperate with reasonable confidence that the terms would be honored.
AI has broken this contract. It has not broken it deliberately or maliciously. It has broken it the way a rising river breaks a levee — through the sheer pressure of capability that the existing structure was not designed to contain. The terms of the old contract assumed a stable relationship between human investment and economic reward. AI has destabilized that relationship by making the returns on certain kinds of investment approach zero while creating demand for capabilities that the old contract did not anticipate and the old institutions are not designed to develop.
The question is not whether a new social contract is needed. It is. The question is what the new contract should contain, and who should write it.
Reich has been explicit about the terms he believes the new contract should include. In essay after essay, interview after interview, he has laid out a specific agenda. The items are concrete. They are actionable. They are politically contentious. And they are, in Reich's view, non-negotiable if the AI transition is to serve the common good rather than merely the interests of the few.
First: the workers whose labor produced the training data on which AI systems were built must be compensated. The code, the legal documents, the financial analyses, the written text, the design files — all of the symbolic output that the knowledge economy generated and that AI companies used to train their models — was produced by human workers who were not consulted and were not compensated when their output was incorporated into training data. The intellectual property rules that govern this incorporation are inadequate. They were written for a world in which the reproduction of human output was limited by the cost of copying. In the AI world, reproduction is unlimited and nearly costless. The rules must be rewritten to ensure that the people whose work makes AI possible share in the value that AI creates.
Second: the gains from AI-driven productivity must be broadly distributed. This means, in practical terms, a tax on AI that funds a universal basic income sufficient to provide economic security for the workers who are displaced. Reich has predicted that the professional class itself — the symbolic analysts whose displacement the previous chapters have examined — will become the constituency that demands UBI, because the professionals will be the ones who need it. As he wrote in 2023: "My prediction: it will be the high-level professional class, including top business executives and the wizards of finance, who push for the most obvious solution: a guaranteed universal basic minimum income for everyone, financed by a tax on AI." The prediction is counterintuitive. It is also structurally sound. The professional class has the political power to demand redistribution. The working class has been demanding it for decades and has been largely ignored. When the professionals join the demand, the political calculus changes.
Third: antitrust enforcement must prevent the concentration of AI capability in a handful of companies. The current AI market is dominated by a small number of firms — Anthropic, OpenAI, Google, Meta — whose control of the most powerful AI systems gives them unprecedented influence over the terms on which AI capability is accessed and deployed. This concentration is not inevitable. It is the product of specific policy choices — about intellectual property, about market regulation, about public investment in AI research — that could be made differently. Reich has called for robust antitrust enforcement that prevents monopolistic control of the tools of symbolic production, just as antitrust enforcement in the early twentieth century prevented monopolistic control of the tools of physical production.
Fourth: public investment in AI tools that are open, accessible, and democratically governed. The current AI landscape is dominated by proprietary platforms whose terms of service are set by the companies that own them. The developer who builds on a proprietary AI platform is dependent on that platform's continued availability, pricing, and terms — a dependency that gives the platform owner enormous power over the developer's livelihood. Public investment in open AI systems — systems that are governed by democratic institutions rather than corporate boards — would reduce this dependency and ensure that the tools of symbolic production are accessible to the broad public rather than controlled by a narrow elite.
Fifth: educational reform that prepares the next generation for the AI economy rather than the knowledge economy. The educational system must shift from training symbolic analysts — people who can execute — to developing directorial capacity: people who can judge, discern, question, and determine what is worth doing. This shift requires not merely curriculum changes but institutional transformation: new forms of credentialing that evaluate judgment rather than knowledge, new pedagogical methods that cultivate taste and ethical reasoning alongside technical competence, new institutional structures that maintain the apprenticeship pipeline even as AI automates the routine work through which apprenticeship has traditionally operated.
These five items — compensation for training data, broad distribution of gains, antitrust enforcement, public AI investment, and educational reform — constitute the skeleton of a social contract for the AI economy. The skeleton needs flesh. Each item requires detailed policy design, political negotiation, and institutional construction. The details will vary across nations, across industries, across the specific circumstances that each community confronts. But the skeleton provides a framework — a set of commitments that the major actors in the AI economy can organize around.
"None of this will happen automatically," Reich wrote. "Markets do not regulate themselves. Rules do not write themselves. Power does not redistribute itself. These changes will happen only if citizens demand them."
The demand must come, in significant measure, from the symbolic analysts themselves. They possess the institutional knowledge, the cultural authority, and the organizational capacity to shape the rules governing the AI transition. They staff the universities that produce the research on which policy is based. They populate the professional associations that set standards for their industries. They hold positions in the media organizations that shape public discourse. They have, in short, the apparatus of professional influence — and the question is whether they will deploy it in the service of a social contract that serves the broad public or whether they will retreat into the individual adaptation strategies that the meritocratic ideology prescribes.
The retreat is tempting. Individual adaptation has served the symbolic analysts well for three decades. The meritocratic bargain told them that their success was earned through individual merit, and the bargain was largely honored. The instinct to respond to the AI transition with another round of individual adaptation — another degree, another skill, another pivot — is powerful, because it is the instinct that the professional class's entire culture reinforces.
But the AI transition is structural, not individual. The disruption cannot be navigated by individual adaptation alone, because the conditions that determine whether individual adaptation succeeds or fails — the rules governing intellectual property, labor, competition, taxation, and education — are collective conditions that require collective responses. The symbolic analyst who reskills discovers that the new skill is itself subject to AI displacement. The symbolic analyst who pivots discovers that the new position is being restructured around AI capabilities. The individual solutions fail not because the individuals are insufficiently adaptive but because the problem is structural.
Collective action is unfamiliar territory for the symbolic analysts. The professional class has historically preferred individual over collective responses, market over political solutions, competition over solidarity. These preferences are deeply embedded in the professional class's culture, and they will not be easily overcome. But they must be overcome, because the alternative — a transition governed by rules written by the actors who benefit most from unregulated deployment — will produce an AI economy that concentrates gains at the top while distributing costs broadly, that replaces broad-based prosperity with narrowly held wealth, that hollows out the professional class while enriching the platform owners.
The social contract for the AI economy will be written. It is being written now, in every legislative session, every regulatory proceeding, every corporate decision about AI deployment. The question is not whether a contract will exist but what its terms will be — and the terms will be determined by who has the power and the will to negotiate them.
The symbolic analysts have the power. The question is whether they have the will.
"The technology is not the problem," Reich wrote. "The rules are the problem. And rules can be changed."
---
The number that haunts me is not from any economic model. It is three.
Three categories. That was Robert Reich's original count — the sorting that organized an entire generation's understanding of who would win and who would lose in the global economy. Routine production workers, in-person service workers, symbolic analysts. Clean boxes. Clear predictions. And for thirty years, correct ones.
I built my career inside the third box. Every person reading this book probably did too, or is raising a child they hope will. We invested in education. We developed expertise. We followed the path. We trusted the bargain.
What I did not expect, when I first sat down with Reich's framework during the writing of this project, was the vertigo of recognition. Not the pleasant recognition of seeing your experience described in someone else's words. The other kind — the kind where you realize the framework you have been living inside has a crack in it, and the crack runs all the way down.
Reich saw the crack before most of us did. As early as 2023, he was writing that AI would replace "almost all professional work." He updated his own taxonomy — from symbolic analysts to "thinking jobs" — and placed those thinking jobs at the center of the target. The very class he had once identified as the economy's most valuable was now the class he identified as most exposed. That intellectual honesty, the willingness to follow the evidence past the point where it undermines your own prior conclusions, is what earned Reich his place in this project.
But what arrested me was not the prediction of displacement. I have been living that prediction. I described it in *The Orange Pill* — the twenty-fold productivity multiplier, the engineers in Trivandrum who could suddenly build what entire teams could not, the trillion dollars of market value evaporating from software companies in weeks. I have seen the displacement from the builder's side. What I had not seen, until I worked through Reich's analysis, was the distributional architecture underneath it.
Who captures the gains? Who bears the costs? Who writes the rules?
These questions sound abstract until you sit with them long enough to feel their weight. Then they become the most concrete questions in the world. They are the questions that determine whether my engineers in Trivandrum see their new capabilities translate into expanded opportunity or into their own obsolescence. They are the questions that determine whether the developer in Lagos captures the value she creates or watches it flow upward to the platform that provided her tools. They are the questions that determine whether my son's generation enters an economy that rewards their humanity or one that has no use for it.
The irony-of-success argument is the one I cannot shake. The symbolic analysts — we — built the knowledge economy. The knowledge economy funded the AI that is disrupting us. We generated the training data. We staffed the institutions. We created the conditions for our own displacement. This is not irony in the literary sense. It is irony in the structural sense — the kind where the system you built operates exactly as designed, and the result is your own exposure.
I wrote in *The Orange Pill* that the river of intelligence does not care about your preferences. Reich taught me something harder: the river is not natural. The river was engineered. It flows through channels that were dug by specific actors for specific purposes, and the actors who dug the channels are not the same actors who are now being swept downstream.
Rules can be changed. That sentence — so simple, so direct, so characteristic of a man who has spent his life insisting that nothing about our economic arrangements is inevitable — is the sentence I want to leave with you. Not because it is comforting. Because it is true, and because the truth it contains is the foundation on which everything else must be built.
The dams we need are not just personal. They are not just organizational. They are political. They are the rules about who owns AI-generated output, about how the gains from AI productivity are taxed and distributed, about whether the platforms that control the tools of symbolic production are accountable to the public or only to their shareholders. These rules will be written. They are being written now. And the symbolic analysts — the class with the most to lose and the most institutional power to shape the outcome — must be in the room where they are written.
I am still building. I cannot stop building. But Reich taught me that building is not enough. You have to ask who captures what you build. You have to ask who wrote the rules under which you are building. And you have to be willing to fight for rules that serve more than your own quarterly numbers.
The three categories have collapsed into a question. Not "which box are you in?" but "who decides what the boxes look like next?"
We do. If we choose to.
— Edo Segal
Robert Reich told an entire generation that symbolic analysts — coders, lawyers, designers, consultants — would inherit the knowledge economy. He was right for thirty years. Then AI learned to do exactly what the winners spent decades training to do, using the very output they created as training data. The class that built the system funded its own disruption.
This volume applies Reich's political-economic framework to the AI revolution unfolding now. It traces how the symbolic analyst premium is compressing, how the distribution of gains and costs is being shaped by rules that most technologists never examine, and why the question "who captures the value?" matters more than the question "what can the model do?"
When markets shift this fast, the instinct is to adapt individually — reskill, pivot, hustle harder. Reich's framework reveals why individual adaptation is necessary but insufficient, and why the rules governing AI's deployment will determine whether this transition expands opportunity or concentrates it. The dams are political. The builders must become citizens.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Robert Reich — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →