Thomas Sowell — On AI
Contents
Cover
Foreword
About
Chapter 1: The Two Visions
Chapter 2: The Constrained Vision and AI Skepticism
Chapter 3: The Unconstrained Vision and AI Optimism
Chapter 4: Trade-Offs Versus Solutions
Chapter 5: The Knowledge Problem in AI
Chapter 6: Systemic Processes Versus Intentional Design
Chapter 7: The Role of Incentives
Chapter 8: Equality and the AI Divide
Chapter 9: The Empirical Record
Chapter 10: Toward an Honest Assessment
Epilogue
Back Cover

Thomas Sowell

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Thomas Sowell. It is an attempt by Opus 4.6 to simulate Thomas Sowell's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The trade-off I refused to see was the one I was making every day.

I built my career on a simple faith: that every barrier removed is a barrier worth removing. Friction is cost. Speed is value. The tool that lets you skip the struggle is the tool that sets you free. This logic carried me through three decades of building technology, and it carried me into the AI revolution with the confidence of a person who has been right often enough to stop questioning his assumptions.

Thomas Sowell would call that confidence dangerous. Not because it is wrong — the barriers I removed were real barriers, and the value I created was real value — but because my confidence had a blind spot the exact shape of its own success. I was so focused on what was gained that I never developed a serious accounting of what was lost.

Sowell is an economist, but the idea that rewired my thinking is not about economics. It is about the structure of disagreement itself. In *A Conflict of Visions*, he showed that the people who argue most fiercely about policy — about crime, education, inequality, war — are rarely arguing about the thing they think they are arguing about. They are arguing about human nature. About whether people are fundamentally limited creatures who need institutions to channel their selfishness toward tolerable outcomes, or fundamentally capable creatures whose potential is constrained only by the barriers that history and bad design have placed in their way.

I recognized myself instantly. I am an unconstrained-vision person. I look at a problem and ask what the solution looks like. Sowell taught me to notice that there are serious people in the room asking a different question — not "What is the solution?" but "What are the trade-offs?" — and that their question is not a failure of imagination. It is a discipline I lack.

The AI discourse is this conflict in its purest form. The optimist sees capability expanding. The skeptic sees expertise dissolving. Both cite evidence. Both have evidence to cite. And they talk past each other with the specific frustration of people who cannot understand why the other side finds their data irrelevant. Sowell explains why. They are not disagreeing about AI. They are disagreeing about what human beings are and what human societies can sustain.

This book does not resolve that disagreement. It makes it visible. And making it visible is the prerequisite for every honest conversation about what we are building and what it costs.

The ideas in these pages challenged me more than those of almost any other thinker in this series. They should challenge you too.

Edo Segal · Opus 4.6

About Thomas Sowell

1930–present

Thomas Sowell (1930–present) is an American economist, social theorist, and senior fellow at the Hoover Institution at Stanford University. Born in North Carolina and raised in Harlem, Sowell studied at Harvard, Columbia, and the University of Chicago, where he earned his doctorate in economics, studying under Milton Friedman and George Stigler. Across more than fifty books and hundreds of columns spanning six decades, he has examined race, culture, education, economics, and the history of ideas with a commitment to empirical evidence over ideological orthodoxy. His major works include *A Conflict of Visions* (1987), which identified the constrained and unconstrained visions as the deep structures underlying political disagreement; *Knowledge and Decisions* (1980), which extended Friedrich Hayek's insights about dispersed knowledge to institutional design; *The Vision of the Anointed* (1995), a critique of elite policy assumptions; and the trilogy *Race and Culture* (1994), *Migrations and Cultures* (1996), and *Conquests and Cultures* (1998). Sowell's insistence that there are no solutions, only trade-offs, and his rigorous application of incentive analysis to social policy have made him one of the most influential and debated public intellectuals in American life.

Chapter 1: The Two Visions

In 1987, Thomas Sowell published a book that had nothing to do with artificial intelligence, nothing to do with computers, nothing to do with technology of any kind. *A Conflict of Visions* was about the French Revolution, the American Constitution, crime policy, income distribution, and war. It was about why intelligent people, confronted with the same facts about these subjects, reach opposite conclusions with equal confidence and equal indignation.

The book's argument was deceptively simple. Most political and social disagreements are not about the things people think they are about. They are not about specific policies, specific data, or specific outcomes. They are about something deeper and prior to all of these — about the assumptions people carry into the argument before the argument begins. Sowell called these assumptions "visions," and he identified two that have organized Western political thought for centuries.

The constrained vision sees human nature as fundamentally limited. Human beings are selfish, short-sighted, and prone to error. These are not flaws to be corrected. They are features of the species, as permanent as the skeletal structure. The question for the constrained vision is never "How do we make people better?" but "How do we design institutions that produce tolerable outcomes from the crooked timber of humanity?" Adam Smith's invisible hand is a constrained-vision mechanism. It does not require merchants to be altruistic. It channels their selfishness, through the price system, toward outcomes that serve the public — not because anyone designed it to, but because the structure of competitive markets makes serving others the most reliable path to serving yourself. Edmund Burke's defense of tradition is constrained-vision reasoning. Evolved institutions — common law, customary practice, inherited moral codes — embody the accumulated wisdom of countless generations, each of which solved problems the current generation has not yet encountered. Dismantling those institutions because they offend the reason of a single generation is, in the constrained vision, an act of breathtaking arrogance.

Friedrich Hayek's critique of central planning is the constrained vision applied to economics. The knowledge required to coordinate a complex economy is dispersed among millions of individuals, each of whom knows things about their own circumstances that no central authority can know. The price system aggregates this dispersed knowledge into signals — prices — that coordinate behavior without anyone understanding the whole. Central planning fails not because planners are stupid but because the knowledge they would need to succeed does not exist in a form that can be centralized.

The unconstrained vision sees human nature as malleable, improvable, and potentially unlimited. Human beings are capable of reason, empathy, and moral progress. The limits of the present are not the limits of the future. The question for the unconstrained vision is "What would the ideal outcome look like, and how do we move toward it?" William Godwin, writing during the French Revolution, believed that human reason could eventually eliminate all social conflict, all crime, and even — he was serious about this — death itself. The Marquis de Condorcet sketched a future in which education and institutional reform would produce a species so improved that the problems of the present would be incomprehensible to future generations. John Stuart Mill argued that the proper goal of social institutions was not the management of human weakness but the maximization of human potential.

These are not theories that can be tested against each other in any simple way. They are frameworks within which theories are constructed. A person operating from the constrained vision looks at a social problem and asks, first, "What are the trade-offs?" A person operating from the unconstrained vision looks at the same problem and asks, first, "What is the solution?" The constrained vision assumes that every gain has a cost, and the honest analysis must include both. The unconstrained vision assumes that costs are obstacles to be overcome, and the honest analysis must include the possibility of overcoming them.

Sowell did not argue that one vision was correct and the other wrong. His argument was structural. He wanted to show that the two visions produce different questions, different standards of evidence, different criteria for what counts as a satisfactory answer — and that until these differences are made visible, the participants in any social debate will talk past each other indefinitely. They will think they are arguing about crime or education or income distribution. They are actually arguing about human nature. And because neither side knows this, neither side can understand why the other finds its evidence unpersuasive.

Nearly four decades after Sowell identified these visions, the most consequential social argument of the twenty-first century reproduces them with uncanny precision.

The artificial intelligence discourse is a conflict of visions. Not a conflict about data, though data is cited endlessly by both sides. Not a conflict about technology, though the technology is discussed in granular detail. A conflict about what human beings are and what human societies can achieve — the same conflict that surfaced in 1787, that Sowell mapped in 1987, and that is unfolding again, with new vocabulary but identical structure, in 2026.

Consider the specific claims at the center of the AI debate. One side argues that AI will expand human capability, democratize creative power, and lift the floor of who gets to build. The other side argues that AI will erode deep expertise, dissolve the friction that produces genuine understanding, and concentrate power in the hands of the few companies that control the models. One side looks at a developer in Trivandrum building at twenty-fold leverage and sees the expansion of human potential. The other side looks at the same developer and sees a person whose long-term understanding is being hollowed out by the very tool that makes them faster today.

Both sides cite evidence. Both sides have evidence to cite. The Berkeley study showing AI intensifies work rather than reducing it. The adoption curves showing AI is being embraced at historically unprecedented speed. The productivity data showing individual output expanding dramatically. The displacement data showing entire categories of expertise becoming economically irrelevant in months.

The evidence does not settle the argument. This is Sowell's fundamental insight, and it applies to the AI discourse with a precision that would be elegant if it were not so consequential. The evidence does not settle the argument because the two sides are not interpreting the evidence within the same framework. The optimist and the skeptic can look at the same productivity study and reach opposite conclusions — not because one of them is dishonest but because they are asking different questions of the data. The optimist asks, "Does this show that capability expanded?" The skeptic asks, "Does this show what it cost?" Both questions are legitimate. Neither is answered by the data the other finds decisive.

Edo Segal's *The Orange Pill* is the most honest attempt in the current literature to hold both visions simultaneously. Segal does not resolve the tension. He presents the constrained vision's evidence — the Luddites were partly right, the elegists see real loss, Byung-Chul Han's diagnosis of auto-exploitation has empirical support — alongside the unconstrained vision's evidence — each technological abstraction has expanded capability, the developer in Lagos now has access to tools previously reserved for Silicon Valley, the imagination-to-artifact ratio has collapsed to the width of a conversation. He asks the reader to hold both truths at once.

This is the intellectually defensible position. It is also unstable. Two visions cannot coexist indefinitely in a single framework without one eventually dominating, because visions are not balanced positions. They are gravitational fields. Every piece of new evidence, every new case study, every new productivity number gets pulled into one field or the other. The person who starts in the middle tends to drift, and the direction of the drift depends on which vision's pull is stronger — which usually depends on which costs the person has experienced firsthand and which benefits they have witnessed.

The point of making the visions visible is not to resolve the conflict. Sowell did not resolve it in 1987 and explicitly stated that he could not. The point is to change the quality of the disagreement. As long as the participants think they are arguing about AI, they will produce heat without light. The optimist will cite productivity data. The skeptic will cite displacement data. Each will find the other's evidence beside the point, because each is measuring something the other's framework does not value.

When the visions are made visible, the participants can at least understand why they disagree. The optimist is not ignoring costs. The optimist's framework treats costs as temporary obstacles that innovation will overcome. The skeptic is not ignoring benefits. The skeptic's framework treats benefits as inseparable from costs that the optimist's framework systematically underestimates.

This does not make the disagreement disappear. It makes the disagreement productive. And productive disagreement — the kind that generates insight rather than anger — is the scarcest resource in the AI discourse, where both sides have retreated into camps and are shouting across a gap they cannot see because they do not know it exists.

There is a further complication that Sowell identified and that applies with particular force to the AI moment. The two visions do not merely produce different conclusions. They produce different standards for what counts as an acceptable conclusion.

The constrained vision is satisfied with trade-offs. A policy that produces a net gain is acceptable even if some people are made worse off, because the constrained vision does not believe that costless solutions exist. The honest assessment acknowledges the gain and the cost and asks whether the gain exceeds the cost by a sufficient margin.

The unconstrained vision is satisfied only with solutions. A policy that leaves anyone worse off is, by definition, not yet a solution — it is a problem that requires further effort. The honest assessment identifies the remaining costs and asks what further innovation, what better design, what more enlightened approach could eliminate them.

Applied to AI: the constrained vision looks at the evidence that AI intensifies work, erodes craft knowledge, and creates new forms of inequality, and asks whether the gains — expanded capability, democratized access, compressed timelines — are worth those costs. The unconstrained vision looks at the same evidence and asks what institutional redesign, what educational reform, what cultural shift could deliver the gains without the costs.

Both responses are coherent. Both are internally consistent. And they are fundamentally incompatible, because they rest on different assumptions about whether costless gains are possible. The constrained vision says they are not. The unconstrained vision says they are, with sufficient effort and ingenuity.

Every participant in the AI debate, from the venture capitalist accelerating deployment to the philosopher gardening in Berlin to the parent lying awake at night wondering what her child's future looks like — every one of them is operating from one of these visions, usually without knowing it.

The chapters that follow apply Sowell's framework to the specific claims, evidence, and arguments of the AI moment. They do not argue for one vision over the other. They argue that making the visions visible is a prerequisite for thinking honestly about what is happening, because the most dangerous form of intellectual error is not being wrong about the facts but being unaware of the assumptions that determine which facts you find relevant.

Thomas Sowell spent his career making assumptions visible. The AI discourse needs his method more than it needs any particular conclusion.

---

Chapter 2: The Constrained Vision and AI Skepticism

The constrained vision has never been popular. It tells people things they do not want to hear. It says that human nature has limits that cannot be legislated or educated or innovated away. It says that every gain comes with a cost and that the cost is not a bug in the system but a feature of reality. It says that the enthusiasts who promise costless transformation are either deceiving themselves or deceiving others, and that the accumulated wisdom of evolved institutions is more reliable than the confident pronouncements of people who believe they are smarter than the past.

This is not an appealing message. The unconstrained vision offers hope, progress, the possibility of a better future unencumbered by the mistakes of the past. The constrained vision offers trade-offs, caution, and the unsentimental observation that most attempts to improve the world have made it worse in ways their designers did not anticipate and could not have anticipated, because the knowledge required to anticipate them was dispersed among millions of people whose circumstances the designers did not understand.

But unpopularity is not a rebuttal. The question is whether the constrained vision sees something real — something the unconstrained vision misses — and whether what it sees matters enough to be taken seriously even by people who find its conclusions uncomfortable.

In the AI discourse, the constrained vision is represented by a range of voices that share a common structure even when they disagree about specifics. Byung-Chul Han, the philosopher who gardens in Berlin and refuses to own a smartphone, argues that the removal of friction from human experience produces not efficiency but hollowness — that the struggle to understand, the resistance of material to intention, the slowness of genuine learning, are not obstacles to be removed but conditions for depth to develop. The Berkeley researchers who documented work intensification measured what the constrained vision predicts: that AI tools do not reduce the burden on workers but expand it, because the efficiency gains are immediately absorbed by additional demands that fill every space the tool creates. The Luddites of 1812, as Segal presents them in *The Orange Pill*, were not irrational technophobes. They were skilled workers who saw with devastating accuracy what the power loom would do to their wages, their communities, and their children's futures.

These voices share a diagnostic structure. They see costs that the optimists do not see, or see but do not weigh seriously. The costs are real. The question is what follows from their reality.

Han's argument is the most philosophically rigorous version of the constrained vision applied to AI. His claim is not merely that friction is useful but that friction is constitutive — that the experience of struggling with resistant material is what produces understanding in the first place. Remove the friction and you remove not an obstacle to understanding but the process through which understanding is generated. The code that works without being debugged is code that is not understood. The essay that writes itself is an essay whose argument has not been thought. The brief that drafts itself is a brief whose legal reasoning has not been internalized. In each case, the output exists. The understanding does not.

This is a constrained-vision argument in its purest form. It says that there is a relationship between effort and comprehension that cannot be bypassed by technology, because the relationship is not a limitation of current technology but a feature of human cognition. The brain learns through resistance. Neurons strengthen their connections through repeated firing under conditions of difficulty. Remove the difficulty and the connections do not form. This is not a metaphor. It is neuroscience. And it applies to the acquisition of expertise with a directness that the optimists have not adequately addressed.

The Berkeley data provides empirical support. Xingqi Maggie Ye and Aruna Ranganathan embedded themselves in a technology company for eight months and watched what happened when AI tools entered the workflow. What happened was not what the optimists predicted. Workers did not use the freed time for reflection, strategic thinking, or the development of higher-order skills. They used it for more work. The tools colonized every gap in the day — lunch breaks, waiting rooms, the minute between meetings. The researchers called it "task seepage," and it operated not through external pressure but through the internalized imperative to produce that Byung-Chul Han had diagnosed philosophically before anyone had measured it empirically.

The constrained vision would not be surprised by this finding. Its prediction is that efficiency gains will be absorbed by the system rather than banked by the individual, because the incentive structure rewards absorption. The organization that can extract more output from the same headcount will extract it. The worker who can produce more in the same hours will be expected to produce more. The gain does not accumulate to the worker. It flows to the system. This is not malice. It is incentives operating as incentives always operate — in the direction of maximum extraction.

The Luddites understood this intuitively, even if they lacked the vocabulary to express it in economic terms. They could see that the power loom would reduce the value of their labor, not because their labor was worthless but because the machine could produce the same output at a fraction of the cost. The machine did not need to understand tensile properties, thread counts, or the thousand small adjustments a master weaver made by feel. It just needed to run. And running was cheaper than understanding.

The parallel to the AI moment is precise. Claude Code does not need to understand systems architecture, database design, or the thousand small judgments a senior engineer makes through decades of accumulated experience. It needs to generate code that compiles and runs. And generating code that compiles and runs is cheaper than the expertise that used to be required to produce it.

The constrained vision does not argue that this is wrong. It argues that it is costly. The cost is the expertise itself — the deep, embodied, hard-won knowledge that took decades to build and that the market is now in the process of devaluing. The framework knitters of Nottingham lost not just their wages but their identities as skilled craftspeople, because their skill was the thing the machine made unnecessary. The senior engineers of 2026 are experiencing the same loss in real time, watching the implementation expertise that defined their careers become economically irrelevant in months.

Segal acknowledges this loss with an honesty that the typical AI book does not. He describes the senior engineer who oscillated between excitement and terror during the Trivandrum training — excitement at the expanded capability, terror at the realization that the implementation work that had consumed eighty percent of his career was now handled by a tool. The remaining twenty percent turned out to be the valuable part — the judgment, the architectural intuition, the taste that separates a product users love from one they tolerate. But the recognition that the valuable part was only twenty percent of what he had been doing forced a reassessment of his entire career.

The constrained vision would observe that this reassessment, however productive in individual cases, has systemic costs that individual case studies cannot capture. The twenty percent that remains valuable is genuinely valuable. But it was developed through the eighty percent that is being eliminated. The judgment and intuition were not separate from the implementation work. They were built through it, layer by layer, through thousands of hours of debugging, testing, failing, and trying again. Remove the implementation work and you remove the process through which the judgment was built.

This is the constrained vision's most powerful argument, and the AI optimists have not refuted it. They have acknowledged it, as Segal does, and then moved on to the benefits, which are real and substantial. But acknowledging a cost and then focusing on the benefits is not the same as addressing the cost. The question remains: If the expertise that AI makes unnecessary was also the process through which higher-order expertise was developed, where will the next generation of senior engineers come from? Who will possess the judgment to direct AI wisely if the process through which judgment was built no longer exists?

The constrained vision does not claim to know the answer. It claims that the question is real, that the optimists have not answered it, and that the failure to answer it is not a minor oversight but a structural deficiency in the optimist's case. The unconstrained vision treats this as a problem to be solved — new forms of training, new educational approaches, new ways to develop judgment that do not require decades of implementation struggle. The constrained vision treats it as a trade-off to be managed — a permanent cost that accompanies the genuine benefits and cannot be eliminated by ingenuity, only mitigated by institutional design.

The distinction matters. If the cost is a problem, then the correct response is innovation — find a new way to build expertise that does not depend on the friction AI has removed. If the cost is a trade-off, then the correct response is management — accept the loss, count it honestly, and build institutions that compensate for it without pretending it does not exist.

Thomas Sowell spent his career arguing that the most dangerous intellectual error is treating trade-offs as problems. Problems have solutions. Trade-offs have only better and worse ways of managing the competing demands. The person who treats a trade-off as a problem will keep searching for the solution that eliminates the cost, and will reject every proposed management strategy as insufficiently ambitious, and will end up with neither a solution nor a strategy — only the certainty that something better is possible and the growing frustration that it has not yet arrived.

The AI skeptics are operating from the constrained vision. Their diagnosis is accurate in ways the optimists have not adequately addressed. Their grief is legitimate. Their concerns about the dissolution of expertise, the intensification of work, the erosion of the friction that produces understanding — these are not irrational fears. They are the constrained vision's predictions, and the early evidence supports them.

But the constrained vision has a weakness that Sowell himself identified, though he did not frame it as a weakness. The constrained vision can diagnose. It cannot prescribe. It can tell you what the trade-offs are. It cannot tell you what to build. It can warn you about the costs of action. It is less equipped to assess the costs of inaction. And in a moment when the technology is advancing at a pace that makes inaction its own form of decision, the constrained vision's caution risks becoming the same thing as the Luddites' refusal — emotionally satisfying, diagnostically precise, and strategically catastrophic.

The constrained vision sees the loss. The question is whether it can also see what grows in the space the loss creates. That question belongs to the unconstrained vision, which has its own evidence and its own blind spots. Those are the subject of the next chapter.

---

Chapter 3: The Unconstrained Vision and AI Optimism

The unconstrained vision is seductive because it is generous. It assumes the best about human potential. It looks at the limits of the present and sees not permanent constraints but temporary obstacles that effort, ingenuity, and the right tools can overcome. It is the vision of the builder, the entrepreneur, the reformer, the person who looks at the world and thinks: this can be made better.

Thomas Sowell did not dismiss the unconstrained vision. He respected it enough to take it seriously, which meant respecting it enough to identify where it went wrong. His argument was never that optimism is foolish. His argument was that optimism untempered by an awareness of trade-offs produces policies whose costs exceed their benefits — and that the optimists, because their framework treats costs as problems to be solved rather than realities to be managed, systematically fail to see when this is happening.

The AI optimists have evidence. Substantial evidence. The empirical record of computing abstraction — from assembly language to compilers to frameworks to cloud infrastructure to AI-assisted development — is a record of capability expanding at every transition. The critics warned of depth lost at every level. The critics were partly right at every level. And the trajectory was toward expansion at every level. The tower went higher. The developer who no longer writes assembly code can build applications of a complexity that assembly-era programmers could not have conceived. The designer who no longer hand-codes CSS can create user experiences that would have consumed an entire team's bandwidth a decade ago. Each abstraction removed difficulty at one level and created new possibilities at a higher one.

This is the unconstrained vision's strongest empirical case, and it is not easily dismissed. It is not a theory about what AI might do. It is a record of what previous abstractions actually did. And it suggests that the pattern — each new tool removes a floor of difficulty and opens a floor of capability above it — will continue with AI.

Segal's account of the Trivandrum sprint is the case study. Twenty engineers, experienced professionals who had been building software for decades, sat in a room in southern India and learned to use Claude Code. By Friday, each one was operating with the leverage of a full team. A feature that had been on the backlog for four months, estimated at six weeks of development time, was completed in three days. An engineer who had never written frontend code built a complete user-facing feature because the tool handled the translation between her intention and the implementation.

The unconstrained vision reads this and sees human potential unlocked. The engineer's capability was always there. It was trapped behind a translation barrier — the gap between what she could imagine and what she could implement, a gap maintained by the accident of which programming languages she had learned and which she had not. The tool did not give her new capability. It released capability she already possessed.

This reading has force. The developer in Lagos who can now access the same coding leverage as an engineer at Google is not receiving charity. She is being given access to a tool that removes an artificial barrier between her intelligence and its expression. The barriers — lack of capital, lack of institutional support, lack of the specific technical training that the market happened to reward — were never measures of her potential. They were measures of her circumstances. The tool changes the circumstances. The potential was always there.

Marc Andreessen, the venture capitalist who published his "Techno-Optimist Manifesto" in October 2023, explicitly invoked Sowell in defense of this position. "We are adherents to what Thomas Sowell calls the Constrained Vision," Andreessen wrote. "Constrained Vision — contra the Unconstrained Vision of Utopia, Communism, and Expertise — means taking people as they are, testing ideas empirically, and liberating people to make their own choices."

This is a remarkable claim, and it reveals something important about how Sowell's framework is being deployed in the AI discourse. Andreessen is using the constrained vision's vocabulary — taking people as they are, testing empirically, liberating individual choice — to justify what is, structurally, an unconstrained-vision program. The techno-optimist manifesto is a vision of limitless progress driven by technology, in which the costs of innovation are temporary problems that further innovation will solve and in which the proper response to every cautionary warning is to build faster. This is not the constrained vision. The constrained vision does not build faster. It builds carefully, counting costs, acknowledging trade-offs, respecting the limits that the unconstrained vision would prefer to transcend.

The appropriation is revealing. It suggests that the constrained vision has cultural prestige that the unconstrained vision, at least in Silicon Valley, does not — that even the most aggressive optimists want to clothe their optimism in the constrained vision's language of humility and empiricism. But the clothing does not change the body underneath. The techno-optimist manifesto is unconstrained in its assumptions: technology can solve any problem, progress is always net positive, the costs of innovation are always outweighed by the benefits, and the people who disagree are motivated by fear rather than evidence.

Sowell himself, writing in the Wall Street Journal in January 2026, was not optimistic. At ninety-five years old, his only sustained engagement with artificial intelligence was a warning. AI had been used to create deepfake imitations of his voice, putting words in his mouth that he had never said — including things "the direct opposite of what I have said." His response was not to celebrate the technology's capability. It was to observe that AI had given new power to people who could not argue effectively against ideas they opposed, and who therefore used technology to fabricate evidence that the ideas had never been expressed.

"To those with the prevailing vision," Sowell wrote, "it is words — not facts — that are crucial. In this context, AI frauds about words have a major role to play." His conclusion was stark: "If there are no serious consequences for either individuals or institutions that create frauds — whether by AI or by silencing other viewpoints — we will have no basis for settling our inevitable differences other than violence."

This is the constrained vision applied to AI by its most distinguished living practitioner. It is not the language of techno-optimism. It is the language of a man who has spent a lifetime studying how institutions fail and who sees in AI a new mechanism of failure — not because the technology is malicious but because the incentives are misaligned and the consequences are insufficient.

The unconstrained vision's weakness, as Sowell identified it across decades of analysis, is its tendency to evaluate proposals by their intended benefits rather than their actual costs. The AI optimist points to the developer in Trivandrum, the engineer in Lagos, the solo builder who shipped a product over a weekend. These are real benefits. They are not fabricated. They represent genuine expansions of human capability that the constrained vision's framework does not easily accommodate.

But the unconstrained vision's evaluation stops there. It does not ask, with sufficient rigor, what the costs are. Not the costs to the people who benefit from AI — those people are, in the short term, clearly better off. The costs to the people who are displaced by it. The costs to the expertise that is being dissolved. The costs to the institutions that are being restructured at a pace that outstrips their capacity to adapt. The costs to the cultural capital — the accumulated skills, habits, and values — that took generations to build and is being made optional in months.

The unconstrained vision treats these costs as transitional. The Luddites lost their craft, but their grandchildren got factory jobs. The typists lost their profession, but the economy created new professions. The pattern, the optimist argues, is that displacement is temporary and expansion is permanent. The pain is real but the trajectory is upward.

The constrained vision observes that "the long run" is a concept available only to people who can afford to wait. The generation that bears the cost of the transition does not experience the long-run trajectory. It experiences the present, and the present is displacement, dissolution, and the loss of the specific expertise that gave life meaning and structure. The factory worker's grandchild benefiting from electrification is no comfort to the factory worker whose skills became worthless last Tuesday.

The strongest version of the unconstrained vision's case acknowledges this. Segal acknowledges it. He writes about the elegists with genuine respect — the senior architect who felt like a master calligrapher watching the printing press arrive, who could feel a codebase the way a doctor feels a pulse, and who knew that this embodied knowledge was being made irrelevant by a tool that could generate code without understanding it. Segal does not dismiss this loss. He holds it alongside the gain and refuses to pretend that the gain eliminates it.

But even Segal, holding both visions, drifts. The gravitational pull of the unconstrained vision is strong in a builder's framework, because builders are, by definition, people who believe that the next thing they build will be better than the last thing. The constrained vision's caution — the insistence on counting costs, the skepticism about costless gains, the warning that trade-offs are permanent rather than temporary — is structurally uncomfortable for a person whose identity is organized around the act of creation.

The unconstrained vision's evidence is real. Each layer of abstraction in computing has expanded capability more than it has reduced understanding. The developer in Lagos is genuinely more empowered than she was five years ago. The imagination-to-artifact ratio has genuinely collapsed. These are not fictions constructed to justify optimism. They are measurements of a transformation that is happening in real time, to real people, with real consequences.

The question the unconstrained vision has not answered — the question that defines the honest middle between the two visions — is whether the AI abstraction is another instance of the pattern or a break from it. Whether this particular layer of capability expansion carries costs that are qualitatively different from the costs of previous abstractions. Whether the dissolution of expertise is, this time, happening at a speed that outstrips the capacity of individuals and institutions to adapt.

The constrained vision says: count the costs before you celebrate the gains. The unconstrained vision says: build the future before the costs of inaction accumulate. Both are right. Neither is sufficient. The tension between them is not a problem to be solved. It is the condition of honest thought in a moment when the evidence supports both and the stakes are too high for either to be dismissed.

---

Chapter 4: Trade-Offs Versus Solutions

The most important sentence Thomas Sowell ever wrote may be this one: "There are no solutions. There are only trade-offs."

Eight words. No qualification. No hedge. The directness is characteristic — Sowell does not believe in softening claims to make them palatable — but the importance is in the substance, not the style. The sentence captures the core of the constrained vision in a form so compressed that its implications take years to fully unpack. If there are no solutions, only trade-offs, then every policy, every technology, every institutional design that promises to solve a problem is, at best, offering a different set of problems in exchange for the current ones. The question is never "Does this solve the problem?" The question is "Is this set of problems preferable to that set of problems, and at what cost?"

The unconstrained vision rejects this framing. It does not deny that trade-offs exist in the present. It denies that they are permanent features of reality. The trade-off between efficiency and depth is not a law of nature. It is a limitation of current technology, current institutions, current understanding. With sufficient ingenuity, the trade-off can be transcended — a system can be designed that delivers both efficiency and depth, that removes the mechanical friction without removing the cognitive benefit.

This disagreement — trade-offs versus solutions — is the structural fault line beneath the entire AI discourse. Every specific argument about AI adoption, AI regulation, AI education, and AI governance is a surface expression of this deeper disagreement. And because the disagreement is structural rather than empirical, no amount of data can resolve it. The same data gets interpreted differently depending on which side of the fault line the interpreter stands on.

Consider the ascending friction thesis that Segal advances in *The Orange Pill*. The argument is that each technological abstraction removes friction at one cognitive level and relocates it to a higher one. The surgeon who loses the tactile feedback of open surgery gains the ability to perform operations that open hands could never attempt. The programmer who loses the intimate knowledge of memory management gains the ability to build applications of a complexity that assembly-era programmers could not conceive. The developer who loses the struggle of manual debugging gains the time and cognitive bandwidth to focus on architecture, vision, and judgment.

The unconstrained vision reads this as a solution. The trade-off between efficiency and depth has been transcended. The friction has not disappeared — it has ascended. The practitioner at the higher level is not shallower. She is working on harder problems at a higher cognitive floor. The depth is not lost. It is relocated.

The constrained vision reads the same thesis and sees a trade-off being redescribed as a solution. The friction ascended. But did the depth? The surgeon who operates laparoscopically has access to procedures that open surgery could not perform. This is an expansion of capability. It is not an expansion of understanding. The laparoscopic surgeon does not understand the body more deeply than the open surgeon. She understands it differently — through a two-dimensional image rather than through her hands — and the difference is not merely a matter of modality. It is a matter of the kind of knowledge that is being built.

The open surgeon's knowledge was embodied. It lived in the fingers, in the proprioceptive sense of tissue resistance, in the accumulated muscle memory of thousands of procedures. The laparoscopic surgeon's knowledge is cognitive — it lives in the interpretation of visual information, in the coordination of instruments at a remove from the body, in the mental model of three-dimensional space constructed from two-dimensional data. Both are forms of expertise. They are not the same form. And the question of whether the transition from one to the other is a net gain or a net loss depends entirely on what you mean by "understanding."

The constrained vision insists that the question be answered honestly rather than assumed away. The unconstrained vision's claim that friction "ascended" rather than "disappeared" may be accurate as a description of what happened to the work. It is less obviously accurate as a description of what happened to the worker. Did the developer who stopped debugging manually develop a deeper understanding of systems architecture? Or did she develop a broader but shallower understanding — one that covered more ground but penetrated less deeply into any particular domain?

The evidence is mixed, and the constrained vision's contribution is the insistence that mixed evidence be treated as mixed rather than selectively cited to support one conclusion. Segal describes an engineer in Trivandrum who, months after the AI training, realized she was making architectural decisions with less confidence than before and could not explain why. The explanation, when she found it, was that the ten minutes of unexpected discovery buried in four hours of mechanical plumbing — the rare moments when something broke in a way that forced her to understand a connection between systems she had not previously learned — were no longer happening. The tool had removed the plumbing and the discovery simultaneously, and she did not notice the loss of the discovery until the consequences appeared in her decision-making.

This is a trade-off. Not a transitional cost. Not a problem that better tool design will eliminate. A trade-off inherent in the relationship between effort and understanding, between struggle and expertise, between the friction of manual work and the comprehension that manual work produces as a byproduct.

The unconstrained vision has a response. It argues that the lost discovery can be replaced by intentional practice — structured exercises, deliberate study, the kind of directed learning that builds architectural intuition without requiring thousands of hours of mechanical debugging. This is plausible. It may even be correct. But it has not been demonstrated. It is a prediction based on the assumption that the relationship between effort and understanding is contingent rather than necessary — that understanding can be produced by different means if the current means are removed.

The constrained vision's response is that predictions based on what has not been demonstrated should be weighted less heavily than observations of what has actually happened. What has actually happened is that the removal of mechanical friction reduced the engineer's architectural confidence. The prediction that intentional practice will restore it is a hypothesis, not an observation. And the constrained vision's entire intellectual project is to insist that hypotheses about what might work should not be treated as equivalent to evidence about what has actually happened.

This is not a counsel of despair. It is a counsel of honesty. The trade-off between efficiency and depth may be manageable. It is almost certainly manageable. Dams can be built. Structured pauses, protected mentoring time, deliberate practice in understanding systems at levels below the AI abstraction — these are management strategies, and they may be effective. But they are management strategies for a trade-off, not solutions to a problem. The distinction matters because it determines expectations and, through expectations, behavior.

If the cost of AI-accelerated development is treated as a problem, organizations will search for the solution that eliminates it. They will invest in new training programs, new educational approaches, new tools that promise to deliver the benefits of AI without the costs. And when those approaches fail to eliminate the costs entirely — as the constrained vision predicts they will — the organizations will conclude that they have not yet found the right solution and will try again, cycling through interventions without ever accepting that the cost is permanent and must be managed rather than eliminated.

If the cost is treated as a trade-off, organizations will manage it. They will accept that AI-accelerated development produces practitioners who are broader but potentially shallower, and they will build institutional structures that compensate — pairing AI-augmented development with deliberate practice, protecting time for the kind of slow, friction-rich work that builds deep understanding, creating mentoring relationships that transmit tacit knowledge from experienced practitioners to newcomers.

The difference between searching for a solution and managing a trade-off is the difference between frustration and effectiveness. The organization that searches for a solution will be perpetually dissatisfied, because the solution does not exist. The organization that manages a trade-off will be imperfectly but adequately adapted to reality, which is the best the constrained vision believes is achievable.

Sowell applied this analysis to education policy, to housing policy, to criminal justice, to international relations. In every domain, the pattern was the same. The unconstrained vision's pursuit of solutions produced policies whose costs exceeded their benefits, because the vision's framework could not accommodate the possibility that costs were permanent. The constrained vision's management of trade-offs produced outcomes that were imperfect but tolerable, because the vision's framework assumed imperfection from the start.

The AI moment does not resolve this disagreement. It intensifies it. The stakes are higher than in any previous technology transition, because the speed of the transition outstrips the speed of institutional adaptation by a wider margin than ever before. The constrained vision's call for caution is more urgent when the pace of change leaves less time for the dams to be built. The unconstrained vision's call for ambition is more compelling when the capabilities being offered are more powerful than anything in human history.

Both are right. And being right is not enough. The constrained vision that only diagnoses costs without building structures to manage them will watch the river flow past its warnings. The unconstrained vision that only celebrates gains without counting costs will build on a foundation that is already eroding.

The honest position — the position Segal occupies, imperfectly and self-consciously — is that the costs are real and the gains are real and neither can be dismissed, and that the work of the present moment is not to resolve the tension but to build inside it. To count the costs and build anyway. To celebrate the gains and grieve the losses and do both without pretending that either invalidates the other.

Sowell would not endorse this as a solution. There are no solutions. There are only trade-offs. And the trade-off between ambition and caution, between building and counting the cost, between the unconstrained vision's hope and the constrained vision's realism, is the defining trade-off of the AI age.

The question is whether the people making the decisions — the builders, the regulators, the educators, the parents — are aware that the trade-off exists. Because the worst outcomes, in Sowell's framework, are produced not by choosing the wrong side of a trade-off but by denying that the trade-off is there.

---

Chapter 5: The Knowledge Problem in AI

Friedrich Hayek published an essay of barely a dozen pages in 1945 that changed the trajectory of economic thought. "The Use of Knowledge in Society" made an argument so simple that most economists had overlooked it and so profound that the discipline has never fully absorbed it. The argument was this: the most important knowledge in any economy is not the kind that experts possess. It is the kind that nobody possesses — or rather, that everybody possesses, in fragments, scattered across millions of individual minds, none of which holds more than a tiny piece of the whole.

The shipper who knows that an empty vessel is available for return cargo. The real estate agent who knows which neighborhood is about to turn. The farmer who knows that this particular field drains poorly after rain. The factory foreman who knows that this particular machine makes a noise on Thursdays that it does not make on Mondays. None of this knowledge appears in any textbook, any database, any central planning document. It is knowledge of "the particular circumstances of time and place," and it is, Hayek argued, the knowledge on which the functioning of any complex economy depends.

The price system works because it transmits this dispersed knowledge in compressed form. When the shipper discovers an empty vessel, he does not need to inform a central authority. He adjusts his price. The adjustment ripples through the system, affecting the decisions of thousands of people who do not know why the price changed, do not need to know, and could not process the information even if they had it. The price does the processing. It coordinates behavior across an entire economy without requiring any participant to understand the economy as a whole.

Central planning fails, in Hayek's analysis, not because planners are incompetent but because the knowledge they would need to succeed does not exist in a form that can be centralized. The shipper's knowledge of the empty vessel is useful precisely because it is local, contextual, and situational. Extract it from the shipper's particular circumstances and it loses its value. Aggregate it into a database and it becomes a data point stripped of the context that made it actionable. The knowledge is inseparable from the knower's position in the system, and any attempt to separate them destroys the thing that makes the knowledge valuable.

Thomas Sowell built on Hayek's insight throughout his career. In *Knowledge and Decisions*, published in 1980, Sowell extended the analysis from economic planning to institutional design more broadly. Every institution — a court, a school, a hospital, a corporation — faces the same knowledge problem. The knowledge required to make good decisions is dispersed among the people closest to the relevant circumstances. The further a decision-maker is from those circumstances, the less of the relevant knowledge she possesses, and the more likely her decision is to produce unintended consequences that a person closer to the ground would have anticipated.

This framework has not been applied with sufficient rigor to the AI moment. It should be, because the AI moment is creating a new version of the knowledge problem — one that Hayek did not anticipate and that Sowell's extension of Hayek's framework illuminates with uncomfortable clarity.

Large language models are trained on the aggregated output of millions of human minds. The training data for a model like Claude includes books, articles, code repositories, forum discussions, technical documentation, creative writing, legal briefs, medical literature, and virtually every other form of human knowledge that has been committed to digital text. This is an act of knowledge centralization without historical precedent. The dispersed expertise of the entire literate internet, produced by millions of individuals each possessing knowledge of their particular circumstances, has been aggregated into a system controlled by a small number of companies.

The question Hayek would ask is whether this aggregation preserves the character of the knowledge or destroys it.

The optimist's answer is that it preserves it. The model has learned the patterns. It can generate code that reflects the accumulated wisdom of millions of programmers. It can draft legal arguments that reflect the accumulated practice of thousands of lawyers. It can produce medical summaries that reflect the aggregated findings of decades of clinical research. The knowledge has not been lost. It has been made accessible — democratized, as Segal argues, so that a developer in Lagos can access the same cognitive leverage as an engineer at Google.

The Hayekian answer is more cautious. The model has learned the patterns. But patterns are not knowledge. The programmer's knowledge of why this particular function fails under these particular conditions, the lawyer's knowledge of how this particular judge responds to this particular type of argument, the doctor's knowledge of how this particular patient's symptoms differ from the textbook presentation — this is knowledge of particular circumstances. It is contextual, situational, and inseparable from the knower's position in the system.

When this knowledge is aggregated into training data, the context is stripped away. What remains is the pattern — the statistical regularity that the model extracts from millions of instances. The pattern is useful. It is not the same thing as the knowledge from which it was extracted. The difference between a pattern and the situated knowledge that produced it is the difference between knowing that antibiotics cure infections and knowing that this patient, with this history, taking these other medications, in this stage of this particular illness, should receive this antibiotic at this dosage for this duration. The first is a pattern. The second is knowledge. The first can be centralized. The second cannot — or at least, centralizing it strips away the situational specificity that makes it valuable.

The AI optimist's response to this objection is that context can be provided. The user describes the particular circumstances — the specific patient, the specific codebase, the specific legal jurisdiction — and the model applies its patterns to the context the user supplies. This is how Claude Code works in practice: the developer describes the problem in her own terms, and the model generates a solution that reflects both the general patterns it has learned and the specific context the developer has provided.

This is a genuine advance. It is also not a refutation of the Hayekian objection. Because the context the user provides is the context the user knows she needs to provide. The most important contextual knowledge — the thing the experienced practitioner knows that the inexperienced practitioner does not — is often knowledge of what context matters. The senior engineer does not just know the answer to the question. She knows which questions to ask. She knows which aspects of the system are relevant to this particular problem and which are not. She knows what to look for, and that knowledge — the knowledge of what constitutes relevant context — is precisely the knowledge that cannot be provided to the model, because the person who does not possess it does not know it is missing.

This is the knowledge problem in its purest form, applied to AI. The model can process any context you give it. It cannot tell you which context to give it. The user who knows the right context to provide gets excellent results. The user who does not know gets results that look excellent — syntactically correct, structurally sound, plausible — but that fail in the particular circumstances the user did not think to specify.
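
The asymmetry is visible at the level of the tooling itself. What follows is a minimal sketch, assuming the Anthropic Python SDK; the model name, the prompts, and the rate-limit and idempotency details are invented for illustration, not drawn from any real system.

```python
# Minimal sketch assuming the Anthropic Python SDK ("pip install anthropic").
# Both requests succeed; the model cannot know that only the second carries
# the context that mattered, because only the user could supply it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

naive = "Write a function that retries a failed network request."

informed = (
    "Write a function that retries a failed network request. Context an "
    "inexperienced user would not think to mention: the upstream service "
    "rate-limits at 10 requests per second, retries need exponential "
    "backoff with jitter, and POSTs are not idempotent, so only GETs may "
    "be retried automatically."
)

for prompt in (naive, informed):
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Both replies will compile and look plausible; only the second can
    # reflect the particular circumstances of this system.
    print(response.content[0].text[:300], "\n---")
```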

Segal describes this phenomenon without naming it as a knowledge problem. His account of working with Claude includes moments when the tool produced output that was "confident wrongness dressed in good prose" — a passage that sounded like insight but was philosophically incorrect, a reference that fit the argument beautifully but misrepresented the source. The smoothness of the output concealed the gap in the knowledge. The prose was polished, the structure was clean, and the seam where the idea broke was invisible until someone with the relevant contextual knowledge — knowledge of what Deleuze actually wrote, in that case — examined it closely.

This is the Hayekian failure mode of centralized knowledge systems, translated from economic planning to cognitive assistance. The central plan looks good on paper. The prices are set, the quantities are specified, the allocation seems rational. The failure appears only when the plan meets the particular circumstances it did not anticipate — the empty vessel, the draining field, the machine that makes a noise on Thursdays. The AI output looks good on screen. The code compiles, the argument is structured, the references are cited. The failure appears only when the output meets the particular circumstances the user did not specify — the edge case, the jurisdictional exception, the patient whose presentation differs from the textbook in ways the model was not prompted to consider.

The optimist argues that this problem will be solved by better models — models that can infer missing context, that can ask the user what she has not thought to specify, that can flag their own uncertainty rather than presenting confident wrongness. This is plausible. Models are improving rapidly. But the Hayekian framework suggests a structural limit on this improvement. The knowledge of particular circumstances is generated by the experience of being in those circumstances. It is not theoretical knowledge that can be derived from general principles. It is practical knowledge that accrues through the specific, situated, embodied experience of working in a domain over time. The shipper knows about the empty vessel because he is at the dock. The engineer knows about the edge case because she has been debugging this system for three years. The doctor knows about this patient because she has been treating him for a decade.

No model, however sophisticated, can replicate the knowledge that comes from being in a particular position in a particular system at a particular time. It can simulate the output of such knowledge by pattern-matching against the outputs of millions of people who were in similar positions. But similar is not identical, and the gap between similar and identical is where the knowledge problem lives.

This does not mean AI is useless. It means AI is a tool whose value depends on the user's capacity to supply the context that the tool cannot generate on its own. The developer with twenty years of experience who uses Claude Code gets better results than the developer with two years of experience, not because the tool works differently for her but because she knows which context matters — which questions to ask, which edge cases to flag, which aspects of the system the tool needs to be told about.

The knowledge problem in AI produces a paradox. The people who benefit most from AI tools are the people who need them least — the experienced practitioners whose contextual knowledge allows them to direct the tool effectively. The people who need AI tools most — the inexperienced practitioners who lack the contextual knowledge to evaluate the tool's output — benefit least, because they cannot distinguish between output that is correct and output that merely looks correct.

Segal's account of the Trivandrum training illustrates this paradox. The senior engineer's deep knowledge of systems architecture was not made redundant by Claude Code. It became the essential input that determined the quality of the tool's output. The tool amplified his expertise. But it amplified it precisely because his expertise consisted of the situated, contextual knowledge that the tool itself could not generate — the knowledge of which questions to ask, which context to supply, which outputs to trust and which to question.

The junior developer, by contrast, received output that compiled and ran and looked correct. Whether it was correct — whether it handled the edge cases, whether the architecture would scale, whether the design decisions embedded in the generated code were appropriate for this particular system — was a question she was not yet equipped to answer. The tool had not made her a senior engineer. It had made her a junior engineer with senior-engineer-level output and junior-engineer-level understanding of that output.

The Hayekian prescription is not to prohibit centralization but to recognize its limits. The price system is valuable because it aggregates dispersed knowledge without requiring anyone to possess all of it. AI is valuable for the same reason — it aggregates the dispersed expertise of millions of practitioners into a tool that any individual can access. But the price system works because it preserves the contextual character of knowledge through the mechanism of voluntary exchange. Each participant acts on her own situated knowledge, and the price aggregates the results.

AI aggregation works differently. It extracts patterns from situated knowledge, strips the context, and presents the patterns as general-purpose output. The output is useful. It is not situated. And the gap between useful and situated is the gap where the knowledge problem operates — silently, invisibly, producing failures that look like successes until they meet the particular circumstances they were not designed to handle.

The practical implication is that AI tools require not less human expertise but more — not in the old sense of implementation skill but in the Hayekian sense of situated judgment. The capacity to evaluate output, to supply missing context, to recognize when the pattern does not fit the particular circumstances. This is the knowledge that cannot be centralized, cannot be trained into a model, and cannot be acquired except through the slow, situated, experiential process of working in a domain long enough to know what the textbook does not contain.

The knowledge problem does not refute the case for AI. It constrains it. The constrained vision's contribution to the AI discourse is precisely this: the insistence that tools have limits, that limits are features of reality rather than failures of engineering, and that the honest assessment of any tool must include what it cannot do as well as what it can.

---

Chapter 6: Systemic Processes Versus Intentional Design

In the spring of 2026, a senior official at the European Commission described the EU AI Act as "the most comprehensive regulatory framework for artificial intelligence in the world." The description was accurate. The regulation classified AI systems by risk level, imposed transparency requirements on high-risk applications, mandated human oversight for systems affecting health, safety, and fundamental rights, and established penalties for non-compliance that could reach seven percent of global annual revenue.

By the time the regulation took effect, the technology it was designed to govern had already changed beyond recognition.

This is not a failure specific to European regulators. It is a structural feature of the relationship between intentional design and systemic processes — a relationship that Thomas Sowell analyzed across decades and that applies to the AI moment with a precision that the participants in the regulatory debate have not appreciated.

The constrained vision trusts systemic processes more than intentional design. Markets, traditions, evolved institutions, and the accumulated practices of millions of individuals making millions of decisions under conditions of local knowledge — these are the mechanisms the constrained vision relies on to coordinate behavior in complex systems. Not because they are perfect. Because the alternative — centralized design by experts who believe they understand the system well enough to direct it — has a worse track record.

The unconstrained vision trusts intentional design. It believes that informed individuals, armed with sufficient knowledge and authority, can design institutions, regulations, and social arrangements that produce better outcomes than the uncoordinated decisions of millions of self-interested actors. The failures of past interventions are not evidence that intervention is inherently flawed. They are evidence that the interveners were insufficiently informed, insufficiently empowered, or insufficiently committed to the right outcomes.

The AI moment tests both positions simultaneously, and both are failing, in ways that illuminate the strengths and weaknesses of each.

The systemic processes — the market — are incorporating AI at extraordinary speed. Companies adopt AI tools because the incentive structure rewards adoption. Workers use AI because the alternative is displacement. Organizations restructure around AI because competitors who restructure first capture market advantage. None of this was designed. No central authority planned it. No regulatory body approved it. It is the spontaneous order of millions of actors responding to changed incentives, producing an outcome that no individual intended and no institution controls.

The speed is characteristic of systemic processes. Markets move at the speed of incentives, and the incentives in the AI economy point uniformly toward adoption. The company that does not adopt loses competitive advantage. The worker who does not adopt loses market value. The organization that does not restructure loses efficiency. Each individual decision is rational given the individual's circumstances. The aggregate outcome — the transformation of the knowledge economy at a pace that outstrips every institutional capacity to adapt — was intended by no one and is controlled by no one.

Hayek would recognize this as the spontaneous order he celebrated — the capacity of decentralized decision-making to coordinate complex behavior without central direction. And he would be right that the systemic process is producing an allocation of AI capability that no central planner could replicate. The market is discovering, through millions of experiments conducted simultaneously by millions of actors, which applications of AI are valuable, which business models are viable, which organizational structures are effective. The discovery process is faster, more granular, and more responsive to local conditions than any regulatory process could be.

But Hayek also recognized that spontaneous order requires a framework of rules within which to operate. The market for physical goods requires property rights, contract enforcement, and fraud prevention. Without these institutional preconditions, the spontaneous order degenerates into predation. The buyer who cannot enforce a contract has no reason to trust the seller. The seller who cannot protect his property has no reason to produce. The market works not because it is unregulated but because the regulation it requires — the protection of property, the enforcement of contracts, the prohibition of fraud — is so deeply embedded in the institutional fabric that it has become invisible.

The AI economy is operating without the equivalent institutional fabric. Property rights in training data are undefined. Contract enforcement for AI-generated output is untested. Fraud prevention for AI-generated deepfakes is, as Sowell himself discovered, functionally nonexistent. The spontaneous order is generating enormous value, but it is generating it within an institutional vacuum — a space where the rules that would normally channel self-interested behavior toward mutually beneficial outcomes have not yet been established.

The intentional designers — the regulators, the ethicists, the policy makers — are trying to fill this vacuum. The EU AI Act is the most ambitious attempt. The American executive orders and the emerging frameworks in Singapore, Brazil, and Japan are others. Each represents an effort to impose intentional design on a systemic process that is already moving faster than the designers can follow.

Sowell's framework predicts the outcome. The intentional design will lag the systemic process. The regulations will address the technology as it existed when the regulations were written, not as it exists when the regulations take effect. The compliance requirements will impose costs that fall disproportionately on smaller actors — the developer in Lagos, the solo builder, the startup without a legal department — while larger actors absorb the costs and use the regulatory barrier as a competitive moat. The net effect will be to concentrate market power rather than distribute it, to protect incumbents rather than empower challengers, and to slow the pace of innovation without reducing the risks that motivated the regulation in the first place.

This prediction is not based on cynicism about regulators. It is based on the structural asymmetry between the speed of systemic processes and the speed of intentional design. Markets move at the speed of incentives. Regulation moves at the speed of legislation — committee hearings, public comment periods, inter-agency review, judicial challenge, amendment, implementation guidance, enforcement action. Each step takes months. In months, the technology changes. The regulation arrives and discovers that the target has moved.

Segal documents this gap in *The Orange Pill*. The gap between the speed of capability and the speed of institutional response is growing, not shrinking. Companies are restructuring around AI on quarterly timelines. Regulatory frameworks are being designed on multi-year timelines. The workers and students and parents who are adapting in real time are doing so mostly without guidance, mostly by trial and error, because the institutions that should be providing guidance are still debating the terms of the guidance they might eventually provide.

The constrained vision's prescription is not to abandon intentional design but to be realistic about its limitations. Regulation that attempts to direct the course of AI development — to specify which applications are acceptable, which uses are prohibited, which outcomes are required — will fail, because the knowledge required to make these specifications correctly is dispersed among millions of actors whose circumstances the regulators do not understand. Regulation that establishes framework conditions — property rights in training data, liability for AI-generated harm, transparency requirements that allow users to make informed decisions — has a better chance of success, because framework conditions channel behavior without requiring the regulator to possess the situated knowledge that only the participants possess.

This is the Hayekian distinction between rules of conduct and rules of allocation. Rules of conduct establish the parameters within which spontaneous order operates — do not steal, do not defraud, disclose material information. Rules of allocation specify outcomes — this company must compensate creators at this rate, this AI system must achieve this accuracy threshold, this application must undergo this approval process. The constrained vision trusts rules of conduct and distrusts rules of allocation, because rules of conduct work with dispersed knowledge while rules of allocation require centralized knowledge that does not exist.

The unconstrained vision responds that framework conditions are insufficient. The market, left to operate within framework conditions, will produce outcomes that are intolerable — concentration of power, erosion of privacy, displacement of workers without adequate support, the dissolution of cultural capital at a speed that no individual can manage. These outcomes require direct intervention, not merely the establishment of rules within which the market operates. The market will not solve the displacement problem on its own. It will not protect workers whose expertise is being dissolved. It will not preserve the cultural capital that took generations to build. These are tasks that require intentional design, and the constrained vision's skepticism about intentional design is, in this context, a recipe for catastrophe.

The debate is real. The evidence supports elements of both positions. The market is producing valuable outcomes that no regulator could have designed. It is also producing costs that no individual has chosen and that the market, left to its own processes, will not address. The intentional designers are struggling to keep pace, but the framework conditions they might establish — clear property rights, enforceable liability, meaningful transparency — would channel the market's energy toward outcomes that are less likely to produce the catastrophes the unconstrained vision fears.

The honest assessment, within Sowell's framework, is that both mechanisms are needed and neither is sufficient. Systemic processes move faster and allocate more efficiently than intentional design. Intentional design establishes the institutional preconditions without which systemic processes degenerate. The AI moment requires both — a market that discovers applications and allocates resources at the speed of incentives, operating within a framework of rules that protect the people and institutions the market's speed would otherwise destroy.

Building that framework at the speed the moment requires is the challenge. The constrained vision's contribution is the warning that the framework must be modest in its ambitions — rules of conduct rather than rules of allocation, structural incentives rather than specified outcomes, institutional preconditions rather than centralized direction. The unconstrained vision's contribution is the insistence that the framework must exist — that the absence of intentional design is not freedom but the abdication of responsibility for outcomes that the market, left to its own devices, will impose on people who did not choose them.

The two visions need each other. The systemic processes without institutional guardrails produce chaos. The institutional guardrails without respect for systemic processes produce stagnation. The question is whether the people building the guardrails understand the limits of their own knowledge well enough to build modestly, and whether the people celebrating the systemic processes understand the costs of the outcomes well enough to accept that guardrails are necessary.

Sowell spent his career observing that this balance is rarely achieved. The intentional designers overestimate their knowledge and overreach. The market advocates underestimate the costs and resist any constraint. The result is a cycle of crisis and overcorrection — too little regulation until the costs become intolerable, then too much regulation until the stagnation becomes intolerable, then deregulation, then crisis again.

The AI moment is early enough in this cycle that the balance could, in principle, be struck. Whether it will be is a question the framework cannot answer. It can only identify the conditions under which balance is possible and the characteristic errors that prevent it.

---

Chapter 7: The Role of Incentives

Thomas Sowell once observed that it is hard to imagine a more dangerous way of making decisions than to put those decisions in the hands of people who pay no price for being wrong. The observation was about government bureaucracy. It applies with equal force to the AI economy, though the people paying no price for being wrong are different, and the wrongs they are committing are of a different kind.

The AI economy runs on incentives. Not on visions of the future, not on ethical commitments, not on the aspirational language that fills the mission statements of AI companies. Incentives. The specific, measurable, immediate rewards and penalties that determine how people actually behave, as opposed to how they say they will behave at conferences.

The incentive structure of the AI moment is straightforward. AI companies are incentivized to maximize adoption. More users mean more data, more revenue, more competitive advantage, more leverage in the race for market dominance. The metrics that matter — monthly active users, tokens processed, revenue growth, developer adoption rates — all point in one direction: more. More deployment, more engagement, more integration into the workflows of more people in more industries.

This incentive structure does not reward caution. A company that slows deployment to study long-term effects on worker welfare loses market share to a company that deploys first. A company that restricts access to protect cultural capital forfeits revenue to a company that provides unrestricted access. A company that builds friction into its tools — deliberate pauses, mandatory reflection periods, limits on continuous engagement — creates an inferior user experience relative to a competitor that does not.

The incentives are not ambiguous. They do not require interpretation. They push uniformly toward faster deployment, broader access, and deeper integration. Every company in the AI industry understands this. The companies that articulate values of safety and responsibility are not lying about those values. They are operating within an incentive structure that makes the values expensive to maintain and that punishes deviation from the competitive norm.

Anthropic, the company that built Claude, was founded on the premise that AI safety and AI capability should be developed together — that responsible development is not an impediment to progress but a feature of it. This is an admirable premise. It is also one that must be maintained against constant incentive pressure. Every dollar spent on safety research is a dollar not spent on capability development. Every month spent studying the long-term effects of AI deployment is a month in which competitors are deploying without studying. The incentive structure does not reward safety. It tolerates safety to the extent that safety does not impede the metrics that matter.

This is not a moral judgment about AI companies. It is an observation about incentive structures, which is a different thing. Sowell spent his career making this distinction. People respond to incentives. They do not respond to exhortation. The executive who genuinely values worker welfare will, when the quarterly numbers come due, face a specific set of pressures that the welfare commitment must survive. Sometimes it does. Sometimes it does not. But the outcome depends less on the executive's character than on the structure of the incentives she faces.

Consider the organizational incentive. Segal describes a boardroom conversation in which the arithmetic of AI-driven productivity is on the table. If five people using AI tools can produce the work of a hundred, the arithmetic says: reduce headcount. The savings flow directly to the bottom line. The stock price responds. The board is satisfied. The investors are satisfied.

The decision to keep the team, to invest the productivity gains in expanded capability rather than reduced headcount, requires acting against the incentive structure. Segal made that choice. He is explicit about the cost — the quarterly pressure, the board conversations that return, the arithmetic that sits on the table every time. But making that choice required something the incentive structure does not provide: a vision of long-term value that the quarterly metrics cannot capture.

The constrained vision observes that choices that require acting against incentive structures are, by definition, fragile. They depend on the specific character of specific individuals in specific positions. When those individuals move on, the incentive structure reasserts itself. The next executive faces the same arithmetic with different convictions and makes a different choice. The organization that chose capability over extraction under one leader chooses extraction under the next. The incentive structure is permanent. The individual's commitment is contingent.

This is why Sowell insists on structural solutions rather than moral exhortation. Telling executives to prioritize long-term capability over short-term extraction is approximately as effective as telling teenagers to prioritize long-term health over short-term gratification. Some will listen. Most will respond to the incentives as the incentives present themselves. The effective intervention is not to change the people but to change the incentives — to make the long-term choice the rewarded choice rather than the penalized one.

What would this look like in the AI economy? Several possibilities present themselves, each with trade-offs that the constrained vision insists on counting.

Tax incentives for organizations that maintain or expand headcount while adopting AI tools would reward the choice Segal made, capability over extraction, structurally rather than relying on individual conviction. The trade-off: the incentive would also reward organizations that maintain headcount inefficiently, producing make-work rather than genuine capability development. The policy would need to distinguish between organizations that are investing in people and organizations that are warehousing them, and this distinction requires exactly the kind of situated knowledge that regulators typically lack.

Liability structures that assign costs to AI-generated harm would internalize externalities that the current structure leaves unpriced. If an AI-generated deepfake damages someone's reputation — as Sowell himself experienced — the cost is currently borne by the victim. Assigning the cost to the platform that enabled the deepfake, or the creator who produced it, would change the incentive structure. AI companies would invest more in detection and prevention. Creators would face consequences for fraud. The trade-off: broad liability chills legitimate use, because the line between harmful deepfake and legitimate creative application is not always clear, and the platforms that face liability will err on the side of restriction.

Portable benefits for workers displaced by AI — health insurance, retirement savings, retraining support that follows the worker rather than being tied to the employer — would reduce the cost of displacement for the individual without requiring the organization to bear costs that the competitive structure penalizes. The trade-off: portable benefits are expensive, and the funding mechanisms — taxes on AI companies, payroll contributions, general revenue — each create their own set of incentive distortions.

Each of these proposals has costs. The constrained vision insists that the costs be counted before the proposals are adopted. The unconstrained vision insists that the costs of inaction — displacement without support, fraud without consequence, extraction without accountability — must also be counted. Both are right. The honest assessment weighs the costs of action against the costs of inaction and chooses the set of trade-offs that is preferable to the set of trade-offs it replaces.

Now consider the incentive structure facing workers. The worker who adopts AI tools increases her short-term productivity. The market rewards this. She gets more done. She takes on more responsibility. She is more valuable to her employer — today.

But the adoption also changes what she does. The implementation work that consumed eighty percent of her time is now handled by the tool. What remains is the judgment work — the twenty percent that was always the most valuable part but that was developed through the eighty percent that has been eliminated. If the judgment was built through the implementation — through the debugging, the testing, the failing and trying again — then the elimination of the implementation erodes the process through which the judgment was developed.

The incentive structure does not reward the worker for maintaining her judgment-building process. It rewards her for increasing her output. The worker who spends an hour debugging manually, building the deep understanding that Han and the Berkeley researchers describe, is producing less output than the worker who lets Claude handle the debugging and moves on to the next task. The market does not measure understanding. It measures output. The incentive points toward output, and the worker who follows the incentive optimizes for the metric the market values at the expense of the capacity the market does not measure.

This is the auto-exploitation that Han diagnosed and that the Berkeley data confirmed, reframed in Sowell's vocabulary. The worker is not being exploited by an external authority. She is responding rationally to an incentive structure that rewards short-term output and does not reward long-term capability development. The compulsion is not psychological. It is economic. The worker who resists the incentive — who insists on spending time in friction-rich, judgment-building work that does not appear in the output metrics — bears a real cost. She is less productive by the measures that matter to her employer. She is more likely to be replaced by someone who follows the incentive structure as the incentive structure presents itself.

Sowell would not be surprised by any of this. He spent fifty years documenting the gap between intentions and incentives, between what people say they value and what the structures they inhabit actually reward. The AI economy is not different from any other economy in this respect. It is merely faster, which means the gap between the intended outcomes and the incentive-driven outcomes opens wider in less time.

The constrained vision's prescription is the same here as elsewhere: change the incentives. Do not exhort people to behave differently while the incentive structure rewards the behavior you are trying to change. Do not tell workers to invest in deep understanding while the market penalizes the time investment that deep understanding requires. Do not tell organizations to prioritize long-term capability while the quarterly metrics reward short-term extraction. Do not tell AI companies to prioritize safety while the competitive structure penalizes the pace reduction that safety requires.

Change the incentives, and the behavior will follow. Leave the incentives unchanged, and no amount of exhortation, no number of books, no volume of moral argument will produce the behavior the exhorters desire. People respond to incentives. This is not a theory. It is an observation, confirmed across every domain Sowell studied, and it applies to the AI economy with the same force it applies to every other economy in human history.

The question is not whether the people in the AI economy have good intentions. Most of them do. The question is whether the incentive structure under which they operate aligns their good intentions with good outcomes. In the AI economy of 2026, it does not. The incentives reward speed over safety, output over understanding, extraction over investment, and adoption over the careful evaluation of what adoption costs.

This can be changed. It has been changed in other economies, at other moments, when the costs of the misaligned incentive structure became intolerable and the political will to restructure it emerged. The labor laws that followed electrification. The financial regulations that followed the Great Depression. The environmental protections that followed industrial pollution. Each was a restructuring of incentives — a change in the rewards and penalties that channeled behavior toward outcomes that the previous incentive structure had made intolerable.

Whether the AI economy will produce its own restructuring depends on whether the costs of the current incentive structure become visible before they become catastrophic. Sowell's framework predicts that they will become visible — but only after a period of avoidable damage that a more realistic assessment of incentives would have prevented.

---

Chapter 8: Equality and the AI Divide

Thomas Sowell has spent more time studying inequality than almost any living economist, and his conclusions are uncomfortable for nearly everyone who reads them. He argues that inequality is not produced by a single cause — not by discrimination alone, not by differences in talent alone, not by institutional design alone, not by cultural factors alone. It is produced by all of these simultaneously, in proportions that vary across times, places, and populations, and that cannot be reduced to any single explanatory variable without distorting the evidence.

This refusal to simplify is Sowell's signature. Inequality is a complex phenomenon with complex causes. Anyone who tells you it has a simple explanation is either ignorant of the evidence or manipulating it. The person who says inequality is entirely the product of discrimination is ignoring the evidence of cultural and institutional factors. The person who says inequality is entirely the product of individual talent is ignoring the evidence of structural barriers. The honest analyst holds all the factors in view and refuses to pretend that the picture is simpler than it is.

AI introduces a new factor into this complex picture. The capacity to leverage AI tools effectively varies across individuals, organizations, and nations. It varies in ways that correlate with existing inequalities — education, connectivity, language, institutional support — and in ways that may create new inequalities orthogonal to the existing ones.

The optimist's case, as Segal presents it in *The Orange Pill*, is that AI raises the floor. The developer in Lagos who could not previously build a software product because she lacked the team, the capital, and the institutional infrastructure can now build one with Claude Code and an internet connection. The imagination-to-artifact ratio has collapsed. The barrier between having an idea and realizing it has been reduced to the cost of a conversation. This is genuine democratization. It is not nothing.

But the constrained vision asks a question the optimist tends to skip. When the floor rises, does the ceiling stay in place? Or does the ceiling rise faster?

The evidence suggests the ceiling rises faster. The people who benefit most from AI tools are not the people at the floor. They are the people who already possess the situated knowledge, the institutional support, the educational background, and the cultural capital to direct AI tools effectively. The senior engineer in Trivandrum whose deep architectural knowledge became the essential input that determined the quality of Claude's output. The experienced lawyer whose decades of practice allowed her to evaluate AI-generated briefs with the judgment that junior associates lack. The product leader whose integrative thinking allows her to direct AI across multiple domains simultaneously.

These are people who were already advantaged. AI did not create their advantage. It amplified it. The tool that was supposed to democratize capability also amplified the capability gap between those who could direct it wisely and those who could not.

Segal acknowledges this. He writes that "the more capable the person was, the more robust the output they got out of Claude." Entry-level engineers received output that compiled and ran. Senior engineers received output that was architecturally sound, scalably designed, and appropriate for the specific system in which it would operate. The difference was not in the tool. It was in the user — specifically, in the situated knowledge the user brought to the interaction.

This is a pattern that Sowell has documented across multiple domains. Technologies that are introduced as equalizers often become amplifiers, because the capacity to leverage a new tool is itself unequally distributed. The printing press was supposed to democratize knowledge. It did, eventually. It also concentrated wealth and power in the hands of the publishers, the literate, and the institutions that controlled access to printed material. The internet was supposed to democratize information. It did, eventually. It also concentrated market power in the hands of the platforms, the digitally literate, and the institutions that controlled access to the infrastructure.

The pattern is not that democratization fails. It is that democratization and concentration occur simultaneously. The floor rises and the ceiling rises. The question is the rate — whether the floor rises faster than the ceiling, producing convergence, or whether the ceiling rises faster, producing divergence.

The early evidence on AI suggests divergence. The premium on the capacity to direct AI effectively — what Segal calls judgment, what Sowell might call situated knowledge, what the labor market is beginning to call "AI fluency" — is creating a new axis of inequality. Not the traditional axis of capital versus labor, though that axis remains. A new axis: those who can direct AI effectively versus those who cannot.

The factors that determine where an individual falls on this axis are not randomly distributed. They correlate with education. They correlate with language — the tools are built by American companies, trained on predominantly English data, and optimized for English-language workflows. They correlate with connectivity and infrastructure. They correlate with institutional support — the developer at Google who receives structured training in AI tools develops fluency faster than the freelancer in Dhaka who is experimenting alone.

Each of these correlations maps onto existing inequalities. The people who are best positioned to develop AI fluency are the people who were already best positioned in the pre-AI economy. The democratization is real — the developer in Lagos can now access tools she could not access before. But the amplification is also real — the engineer at Google can leverage those same tools at a level the developer in Lagos cannot yet match, because the factors that produce effective AI use are the factors that the engineer already possesses and the developer does not.

Sowell would observe that this is not unique to AI. It is the characteristic pattern of technological change. The telephone democratized communication and concentrated economic power in the companies that controlled the networks. The automobile democratized transportation and concentrated economic power in the companies that manufactured the vehicles and the nations that produced the fuel. The internet democratized information and concentrated market power in the platforms that organized it. In every case, the technology genuinely expanded access while simultaneously creating new concentrations of advantage for those best positioned to exploit it.

The policy response to this pattern has historically taken two forms, each reflecting one of Sowell's visions.

The unconstrained vision prescribes intervention. Tax the winners. Subsidize the losers. Redistribute the gains. Establish training programs, infrastructure investments, and institutional support that allow the people at the floor to develop the capacities they need to benefit from the technology. This approach has a mixed record — sometimes it works, sometimes the subsidies create dependency rather than capability, sometimes the training programs teach skills that are already obsolete by the time the graduates enter the market — but the intent is to accelerate the floor's rise so that it converges with the ceiling.

The constrained vision prescribes patience and framework conditions. Establish property rights, enforce contracts, prevent fraud, and allow the market to discover the most effective allocation of the new capability. The convergence will happen, but it will happen at the market's pace, which is slower than the intervener would like but more sustainable than the intervention the intervener would design. The constrained vision points to the empirical record: the printing press eventually democratized knowledge, the internet eventually democratized information, and the market eventually — over decades, not quarters — distributed the gains broadly enough to produce convergence.

The word "eventually" is doing substantial work in that sentence. Eventually, the factory workers' grandchildren benefited from industrialization. Eventually, the typists' children found new professions. Eventually, the market distributed the gains of each technological revolution broadly enough to raise living standards across the population. But "eventually" spans generations. It spans periods of displacement, dissolution, and genuine suffering for the people who bear the cost of the transition.

The constrained vision's counsel of patience is well-founded empirically. The historical record does support convergence over time. But the counsel is also easier to offer from a position where the cost of the transition is not being borne personally. Sowell, writing from Stanford at ninety-five, counseling patience with the market's pace of adjustment, is not the person whose skills are being dissolved in real time. The developer in Lagos, whose access to AI tools is constrained by bandwidth, infrastructure, and the language in which the tools are built, is not experiencing the democratization at the same pace as the developer in San Francisco.

The honest assessment — the assessment that respects both the empirical record of eventual convergence and the human reality of transitional costs — is that the AI divide is real, that it correlates with existing inequalities, and that the question of whether it narrows or widens depends on decisions being made now, by the people building the tools, the people deploying them, and the people designing the institutional frameworks within which they operate.

Sowell's framework does not prescribe a specific answer. It prescribes a specific method: look at the evidence, count the costs as well as the benefits, acknowledge the trade-offs, and resist the temptation to treat a complex phenomenon as though it had a simple cause or a simple solution.

The AI divide has no simple cause. It is produced by the intersection of education, language, infrastructure, institutional support, cultural capital, and the specific cognitive capacities that determine whether a person can direct AI tools effectively. It has no simple solution. Any intervention that addresses one factor — connectivity, for instance — leaves the others untouched. Any intervention that attempts to address all factors simultaneously requires the centralized knowledge that the Hayekian framework insists does not exist.

What the framework does prescribe is honesty about what is happening. The floor is rising. The ceiling is rising faster. The democratization is real and the amplification is real and they are happening simultaneously. The person who celebrates the democratization without acknowledging the amplification is telling half the story. The person who laments the amplification without acknowledging the democratization is telling the other half.

The whole story includes both halves. It is more complicated than either vision wants it to be. But complications are not excuses for inaction. They are the conditions within which action must be taken — carefully, modestly, with full awareness of the trade-offs, and with the constrained vision's insistence that the costs be counted before the celebration begins.

---

Chapter 9: The Empirical Record

Thomas Sowell does not trust theories. He trusts evidence. This is not a philosophical preference. It is a methodological commitment forged across fifty years of watching theories fail when they met the particular circumstances they were designed to explain. In *The Vision of the Anointed*, Sowell documented case after case in which confident predictions by credentialed experts produced outcomes opposite to what was predicted — and in which the experts, confronted with the evidence of their failure, did not revise their theories but redefined their criteria for success. The theory was never wrong. The world was insufficiently cooperative.

The empirical record of computing abstraction is, by Sowell's own standards, the most relevant evidence for evaluating the AI moment. Not the theories about what AI might do. Not the philosophical arguments about what AI should do. The record of what previous abstractions actually did, measured across decades, across industries, across millions of practitioners who adopted new tools and whose outcomes can be observed.

The record is clear. At every major abstraction in the history of computing, the constrained vision issued a warning, the unconstrained vision issued a promise, and the trajectory validated the promise more than the warning — while the warning identified real costs that the promise had systematically underestimated.

Assembly language forced the programmer to think at the level of the machine. Every memory address, every register, every instruction the processor would execute was specified by hand. The knowledge this produced was deep, intimate, and embodied — the programmer understood the machine the way a mechanic understands an engine, through the physical relationship between components. When compilers abstracted away the machine-level detail, the critics said the programmers would lose this understanding. They were right. Almost no programmer working today can write assembly. The knowledge was lost. The capacity to build operating systems, databases, and networked applications of a complexity that assembly-era programmers could not have conceived was gained. The gain exceeded the loss by an enormous margin, measured by the capability of the systems that were built and the number of people who could participate in building them.

High-level languages abstracted away memory management. The critics said programmers would lose the understanding of how data is stored, retrieved, and organized at the hardware level. They were right. Most programmers working in Python or JavaScript have no working knowledge of memory allocation. The capacity to build machine learning pipelines, real-time collaboration tools, and global distribution systems — applications whose complexity would have consumed the entire bandwidth of a team working in C — was gained.
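
A deliberately trivial sketch shows what the abstraction conceals. The Python below never touches allocation; the comments characterize, roughly and as an assumption about a typical C equivalent rather than a precise one, the manual work the earlier layer required.

```python
# Building a growable list in Python: allocation is invisible.
# In rough C terms, the same task would require malloc() for an initial
# buffer, realloc() on each growth, bounds checks by hand, and a final
# free() that, if forgotten, leaks memory silently.
def collect_squares(n):
    squares = []               # sizing and growth handled by the runtime
    for i in range(n):
        squares.append(i * i)  # the runtime reallocates as needed
    return squares             # garbage collection reclaims the memory

print(collect_squares(5))      # [0, 1, 4, 9, 16]
```

The programmer who writes this cannot tell you where the bytes live; the capability gained sits on top of exactly that ignorance.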

Frameworks abstracted away code structure. The critics said programmers would lose the understanding of routing, templating, and database connection management. They were right. Most framework users could not build the framework they depend on from scratch. The capacity to build applications that serve millions of users with small teams, iterating at speeds that hand-coded architectures could not support, was gained.

Cloud infrastructure abstracted away server management. The critics said organizations would lose the understanding of hardware, network topology, and deployment. They were right. Most cloud users have never touched a physical server. The capacity to scale from zero to global in weeks, to experiment with architectures that would have required months of hardware provisioning to test, was gained.

At every level, the pattern held. The constrained vision's prediction — that something real would be lost — was confirmed. The unconstrained vision's prediction — that something larger would be gained — was also confirmed. The net trajectory was toward expansion. The tower went higher.

Sowell's empirical method demands that this record be taken seriously. Not as proof that the pattern will continue — past performance does not guarantee future results, and Sowell would be the first to say so — but as the strongest available evidence for predicting what the AI abstraction will produce. The evidence says: depth will be lost at one level and capability will be gained at a higher level. The evidence says: the critics will be partly right about the loss and wrong about the trajectory. The evidence says: the practitioners at the higher level will not be shallower in some absolute sense — they will be working on different problems, with different tools, at a different cognitive altitude.

But the evidence also says something the unconstrained vision tends to understate. At every transition, real people bore real costs. The assembly programmers who could not make the transition to high-level languages were not imaginary casualties of an abstract historical process. They were specific individuals whose specific expertise became economically irrelevant in specific labor markets at specific points in time. The fact that the aggregate trajectory was toward expansion does not mean the transition was costless. It means the costs were distributed unevenly — concentrated among the practitioners whose skills were at the level being abstracted away and diffused among the larger population that benefited from the expanded capability.

The constrained vision insists on counting these costs not because it opposes the transition but because it opposes the pretense that the transition is costless. The unconstrained vision's tendency to point at the aggregate trajectory and declare victory obscures the specific, measurable, human costs that the trajectory includes. The factory workers whose grandchildren benefited from electrification experienced the transition as displacement, not progress. The framework knitters of Nottingham experienced the new wide frames as destruction, not democratization. The experience was real, and so was the trajectory, and the honest assessment must include both.

Now, the question that the empirical record cannot answer. Is the AI abstraction different?

Every previous abstraction removed a layer of technical implementation and created a new layer of capability above it. The transition was from one kind of technical work to another — from assembly to high-level languages, from hand-coded structure to frameworks, from hardware management to cloud services. The practitioner who made the transition was still doing technical work. The nature of the work changed. The cognitive demands changed. But the category — technical implementation — remained.

The AI abstraction removes technical implementation itself. Not one layer of it. The entire category. The developer who describes what she wants in natural language and receives working code is not doing technical work in any sense that the previous abstractions would recognize. She is doing something else — directing, evaluating, specifying, judging. She is working at the layer above technical implementation, and the layer above is not another kind of technical work. It is a different kind of work entirely.

This may be the qualitative break that the historical pattern does not capture. Every previous abstraction relocated the practitioner within the category of technical implementation. The AI abstraction may be relocating the practitioner outside the category altogether — from doing to directing, from implementing to judging, from building to deciding what should be built. The empirical record of previous abstractions, in which practitioners moved to a higher technical floor, may not predict what happens when the entire building of technical floors is managed by a tool and the practitioner is relocated to the roof.

The constrained vision's contribution is the insistence that this possibility be taken seriously. The optimist who cites the historical pattern of abstraction and predicts that AI will follow the same trajectory is making an inference from data that may not apply to the current case. Every previous abstraction was a transition within a category. The AI abstraction may be a transition between categories. And transitions between categories — from agricultural to industrial work, from manufacturing to knowledge work — have historically been far more disruptive, far more costly, and far slower to resolve than transitions within categories.

The empirical record supports optimism about the long-term trajectory. It also supports caution about the transition — specifically, about the assumption that the transition will be as smooth as previous transitions within the same category. The honest assessment weighs the evidence of expansion against the evidence of categorical disruption and refuses to pretend that one body of evidence eliminates the other.

Sowell's method does not produce comfortable conclusions. It produces honest ones. The evidence says the trajectory is likely toward expansion. The evidence also says the costs of this particular expansion may be categorically larger than the costs of previous expansions. Both are findings. Neither is a prediction. And the distance between a finding and a prediction is the distance between what has been observed and what the observer wishes were true.

The empirical record is the best guide available. It is not a guarantee. The constrained vision accepts this. The unconstrained vision, characteristically, does not.

---

Chapter 10: Toward an Honest Assessment

In January 2026, Thomas Sowell sat down — at ninety-five years old, after decades of silence on technology — and wrote about artificial intelligence. He did not write about productivity gains or democratization or the ascending friction thesis. He wrote about fraud.

AI had put words in his mouth. Fabricated videos, using synthetic imitations of his voice, said things he had never said — including, he noted with the specific indignation of a man who has spent his life choosing his words with care, "things the direct opposite of what I have said." The deepfakes circulated on YouTube. Some accumulated over a million views. The comment sections showed audiences credulously accepting the fabrications. Warnings that the content was AI-generated went unheeded. The words — his words, except they were not his words — did the work that words do: they persuaded, they shaped opinion, they entered the discourse as though they were real.

Sowell's response was characteristic. He did not call for a ban on AI. He did not propose a regulatory framework. He identified the incentive structure that made the fraud possible — anonymity without accountability, platforms without liability, no serious consequences for individuals or institutions that create frauds — and he described the logical terminus of a system in which fabricated words face no consequences. "We will have no basis for settling our inevitable differences other than violence."

This is the constrained vision at its most precise. The problem is not the technology. The problem is the absence of institutional structures — property rights, liability, accountability — that would channel the technology toward outcomes compatible with a free society. The technology is a tool. Tools produce outcomes that depend on the incentive structures within which they are deployed. Deploy a powerful tool within an institutional vacuum and the outcomes will be determined by whoever has the strongest incentive to exploit the vacuum, which is usually not the people whose welfare the tool's advocates promised to improve.

Sowell's essay is the capstone of a career spent studying the gap between intentions and outcomes. He did not object to AI because he feared progress. He objected to a specific application of AI — the fabrication of speech attributed to real people — deployed within a specific institutional context — platforms with no liability and creators with no accountability. His objection was structural, not philosophical. Change the incentive structure and the objection dissolves. Leave the incentive structure unchanged and the consequences escalate.

The honest assessment of AI requires holding Sowell's specificity alongside the broader evidence that the Orange Pill Cycle has assembled. The technology is powerful. The adoption is rapid. The capability expansion is real. The democratization is genuine. The ascending friction thesis has empirical support across fifty years of computing history. All of this is true. And all of this is true simultaneously with the facts that Sowell identified: the incentive structures are misaligned, the institutional frameworks are inadequate, the costs of the transition are being borne disproportionately by the people least equipped to manage them, and the people making the decisions are not the people paying the price for being wrong.

Sowell's framework does not resolve the tension between these truths. It explains why the tension exists and why it will persist. The constrained vision and the unconstrained vision are not hypotheses that evidence can adjudicate between. They are frameworks that determine which evidence people find relevant. The optimist sees the Trivandrum sprint and finds confirmation. The skeptic sees the Berkeley data and finds confirmation. Both are looking at real evidence. Neither is lying. They are standing on different sides of a fault line that runs deeper than data — a fault line in assumptions about human nature, human knowledge, and the possibility of costless progress.

The productive use of Sowell's framework is not to choose a side but to make the fault line visible. As long as the participants in the AI discourse believe they are arguing about the same thing, the argument will generate heat without light. The optimist will cite capability data. The skeptic will cite displacement data. Each will find the other's evidence irrelevant, because each is applying a framework that values different things.

When the frameworks are made visible, the argument changes character. The optimist can acknowledge that the constrained vision sees real costs — not imaginary costs, not transitional costs that will solve themselves, but real, persistent trade-offs inherent in the structure of the transition. The skeptic can acknowledge that the unconstrained vision sees real gains — not fantasies, not projections, but measured expansions of capability that the constrained vision's framework does not accommodate.

Neither acknowledgment requires abandoning a position. It requires expanding a position to include evidence that the original framework would have excluded. The optimist who acknowledges real costs is not less optimistic. She is more honestly optimistic — optimistic about the trajectory while realistic about the price. The skeptic who acknowledges real gains is not less cautious. He is more usefully cautious — cautious about the costs while realistic about what the caution must not prevent.

This is what Segal attempts in *The Orange Pill*, and the attempt is more valuable than its imperfections. He holds both visions in view. He presents the constrained vision's evidence — the Luddites were partly right, Han's diagnosis has empirical support, the Berkeley data confirms intensification, the loss of craft knowledge is real — alongside the unconstrained vision's evidence — each abstraction expanded capability, the developer in Lagos has access to tools previously reserved for elites, the imagination-to-artifact ratio has collapsed. He refuses to resolve the tension into either optimism or despair.

The refusal is the honest position. It is also the unstable position, because visions are gravitational fields, and every new piece of evidence pulls the holder toward one field or the other. Segal drifts toward the unconstrained vision, as builders tend to do. The drift is visible. It is also understandable. A person whose identity is organized around creating things will naturally be pulled toward the vision that says creation is expanding. The constrained vision's caution, however well-founded, is structurally uncomfortable for someone who builds for a living.

Sowell's contribution is the insistence that discomfort is not a refutation. The constrained vision is uncomfortable for builders. The unconstrained vision is uncomfortable for those who have been displaced. Discomfort tracks position, not truth. The honest assessment must include the evidence that makes both sides uncomfortable, because the evidence that makes you uncomfortable is usually the evidence you most need to consider.

The AI moment will not be resolved by one vision prevailing over the other. It will be resolved — to the extent it can be resolved — by the quality of the institutions built during the transition. Institutions that count costs as well as benefits. Institutions that change incentive structures rather than exhorting people to behave better within unchanged ones. Institutions that protect the people bearing the costs of the transition without preventing the transition from delivering its genuine benefits.

These institutions do not yet exist. The EU AI Act is an attempt. The American executive orders are attempts. The emerging frameworks in Singapore, Brazil, and Japan are attempts. Each addresses a piece of the problem. None addresses the whole. The gap between the speed of the technology and the speed of the institutional response is growing, and the people in the gap — the workers adapting without guidance, the students navigating without curriculum, the parents deciding without framework — are bearing the cost of the institutional failure.

Sowell's final warning in his January 2026 essay was about the consequences of institutional failure at scale. If there are no serious consequences for fraud, no institutional structures that channel powerful tools toward outcomes compatible with a free society, the result is not stagnation. It is violence. Not because people are violent by nature — though the constrained vision does not assume they are peaceful by nature either — but because violence is what remains when institutions fail to provide legitimate mechanisms for resolving disagreement.

This warning should not be dismissed as hyperbole. It should be treated as what it is: the culminating assessment of a ninety-five-year-old economist who has spent his entire career studying the relationship between institutional design and social outcomes, who has watched institutions fail across dozens of countries and historical periods, and who sees in the AI moment the specific combination of powerful technology and inadequate institutions that has, in previous eras, produced outcomes that no one intended and no one could control.

The honest assessment does not end with a prescription. Sowell does not prescribe. He diagnoses. He identifies the structural features of the situation — the misaligned incentives, the inadequate institutions, the conflict of visions that prevents productive disagreement — and leaves the prescription to the people who possess the situated knowledge he insists the analyst cannot possess from a distance.

The prescription must come from the builders who understand the technology, the workers who bear the costs, the educators who shape the next generation, the parents who make daily decisions about their children's cognitive development, and the policymakers who design the institutional frameworks within which all of these actors operate. Each of them possesses knowledge that the others lack. Each operates within a vision that illuminates some features of the landscape and obscures others.

The honest assessment is that the landscape is more complicated than any single vision can capture. That the costs are real and the gains are real and neither eliminates the other. That the institutions needed to navigate the transition do not yet exist and must be built at a speed the institutional process has rarely achieved. That the people building them will be operating under conditions of uncertainty, with incomplete knowledge, facing trade-offs that have no clean resolution.

There are no solutions. There are only trade-offs. The question is not whether the AI moment will produce costs. It will. The question is whether the people making the decisions understand the costs well enough to manage them, and whether the institutions they build are adequate to the scale of the transition they are trying to navigate.

Thomas Sowell would say: look at the evidence. Count the costs. Distrust the people who promise costless gains. Build institutions that change incentives rather than exhorting people to ignore them. Accept that the outcome will be imperfect, because perfection is not available. And hold both visions in view, because the honest assessment requires evidence that each vision alone would prefer to exclude.

That is the most that any framework can offer. It is also, in a moment of this magnitude, exactly enough.

---

Epilogue

The price I almost got wrong was my own team's labor.

The arithmetic was clean. Twenty-fold productivity. The number sat on the table in a boardroom like a dare. I have written about that moment in *The Orange Pill* — the quarterly conversation, the investor's logic, the pull toward extraction. What I did not write about, because I did not yet have the vocabulary, was what made the pull so strong. It was not greed. It was not callousness. It was the pure, frictionless seduction of a problem that had a simple answer.

Thomas Sowell gave me the vocabulary. The simple answer was an unconstrained-vision answer — a solution. Reduce headcount by eighty percent. Convert the productivity gain into margin. The stock moves. The board is satisfied. Problem solved.

Sowell's framework showed me why that answer felt so clean and why clean answers should make a builder nervous. There are no solutions. There are only trade-offs. The margin gain was real. The loss of situated knowledge — the architectural intuition, the institutional memory, the judgment that only forms through years of friction-rich work inside a specific system — was also real. The simple answer priced the gain and ignored the loss, because the gain was visible on a spreadsheet and the loss was not visible anywhere until the quarter it showed up as a product failure nobody could diagnose because the people who would have diagnosed it were gone.

I kept the team. I said this in the book. What I can say now, having spent months inside Sowell's framework, is that I understand better why the decision was hard and why it will keep being hard. The incentive structure has not changed. The quarterly pressure returns. The arithmetic is still on the table. My conviction that capability investment is worth more than margin extraction must survive the incentive structure every ninety days, and Sowell is clear-eyed about how often conviction survives incentives: not often enough.

What changed for me in this encounter was not my position on AI. I remain what I was — a builder who believes the river is flowing toward expansion and who intends to keep building in it. What changed was my understanding of why the people who disagree with me are not wrong. The constrained vision sees real things. The elegist mourning the loss of craft knowledge is not being sentimental. She is observing a trade-off that my framework is structurally inclined to underweight. The Luddite who sees his expertise dissolving is not failing to adapt. He is correctly identifying a cost that the adoption curve does not measure.

Sowell did not make me more cautious. He made me more honest about what my optimism costs. Every tool I celebrate is also a trade-off I am choosing. Every abstraction I embrace is also a depth I am forfeiting. The forfeiture may be worth it. The empirical record suggests it usually is. But "usually" is not "always," and the distance between them is where the institutional failures accumulate — the generation that bears the cost, the workers who fall through the gap, the knowledge that is lost before anyone notices it was load-bearing.

I think about Sowell's Wall Street Journal essay — a ninety-five-year-old man whose voice was stolen by the very technology the rest of us are celebrating. Words he never said, attributed to him, watched by millions. The deepfake Sowell is the parable of the AI moment compressed into a single image: a tool of extraordinary power deployed within an institutional vacuum, producing an outcome that nobody intended and nobody can control. The real Sowell, sitting in his study, writing by hand, insisting on facts against rhetoric, is the other parable — the constrained vision's answer to the unconstrained vision's confidence, delivered at the speed of a careful sentence.

Both parables are true. Holding them both is the work.

-- Edo Segal

---

Back Cover

The AI discourse has split into two camps that cannot hear each other. One celebrates the greatest expansion of human capability in history. The other warns of expertise dissolving, work intensifying, and inequality deepening. Both cite evidence. Both are partly right. And both are missing the deeper argument -- not about technology, but about human nature itself.

Thomas Sowell spent six decades showing that our fiercest disagreements are never really about policy. They are about the assumptions we carry into the room before the argument begins. This book applies his framework -- the constrained vision, the unconstrained vision, the knowledge problem, the role of incentives -- to the AI moment with surgical precision. The result is not comfort for either side. It is clarity about why the sides exist and what each one cannot see.

If you have felt the vertigo of this moment -- the exhilaration and the loss held in the same hand -- Sowell's framework will not tell you which feeling to trust. It will show you why both feelings are earned, and what it costs to ignore either one.

“Constrained Vision -- contra the Unconstrained Vision of Utopia, Communism, and Expertise -- means taking people as they are, testing ideas empirically, and liberating people to make their own choices.”
— Thomas Sowell
---

Wiki Companion

A reading-companion catalog of the 21 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Thomas Sowell — On AI uses as stepping stones for thinking through the AI revolution.
