The constrained vision is one of the two fundamental organizing perspectives Sowell identified in A Conflict of Visions. It assumes human beings are constrained by inherent limitations—selfishness, cognitive bounds, moral imperfection—that cannot be eliminated through education, reform, or technological innovation. These limitations are features of the species, not bugs in a system. The constrained vision therefore asks not "How do we make people better?" but "How do we design institutions that channel human nature—as it is—toward outcomes that are tolerable?" The focus is on trade-offs, incentives, unintended consequences, and the limits of deliberate planning. Traditions, markets, and evolved institutions are valued because they embody accumulated problem-solving superior to any individual's or committee's intentional design.
The constrained vision's intellectual genealogy runs through Adam Smith, Edmund Burke, Alexander Hamilton, James Madison, Friedrich Hayek, and Milton Friedman. Smith's invisible hand channeled self-interest toward public benefit without requiring merchants to be altruistic. Burke defended tradition as the repository of wisdom greater than any single generation's reason. Madison's constitutional architecture distributed power to prevent its concentration, assuming officials would abuse concentrated authority. Hayek argued that central planning failed because the knowledge required to plan an economy was dispersed and could not be centralized. Each thinker began from the premise that human beings could not be perfected and that institutional design must accommodate this permanent fact.
Applied to AI, the constrained vision generates a specific set of predictions and prescriptions. It predicts that friction removal will have costs—atrophy of deep expertise, intensification of work, erosion of the struggle through which understanding develops. It predicts that productivity gains will be captured by systems and organizations rather than accruing to workers, because incentive structures reward extraction. It predicts that regulatory efforts will lag technological change and that well-intentioned interventions will produce unintended consequences because regulators lack the situated knowledge required to design effective rules. The constrained vision's prescription is modest institutions—framework conditions, liability structures, property rights—rather than comprehensive planning.
The constrained vision's AI skeptics include Byung-Chul Han, who diagnoses smoothness as cultural pathology; the Berkeley researchers documenting work intensification; and the contemporary Luddites who see expertise dissolving. These voices do not oppose technology but insist on counting its costs. The constrained vision does not claim AI is net-negative—it claims the optimists systematically underestimate trade-offs, overestimate the speed of adaptation, and ignore the human costs borne by those displaced during transitions. Sowell's January 2026 Wall Street Journal essay—warning that AI-enabled fraud without institutional consequences leads toward violence—exemplifies constrained-vision analysis: identify misaligned incentives, predict institutional failure, refuse to assume good intentions produce good outcomes.
Sowell developed the constrained-vision concept through decades of reading Enlightenment debates and observing twentieth-century policy failures. His training under Friedman provided the economic toolkit—incentives, dispersed knowledge, spontaneous order. His independent reading in Burke, Smith, and Hayek revealed the philosophical continuity beneath their economic arguments. The crystallization occurred in the mid-1980s when Sowell recognized that his disagreements with academic colleagues were not about data interpretation but about prior assumptions regarding human nature and social possibility. A Conflict of Visions formalized this recognition into a systematic framework that has structured political and social analysis ever since.
Human nature is fixed. Selfishness, cognitive limits, and moral imperfection are permanent; institutions must work with these constraints, not against them.
Trade-offs are unavoidable. Every policy choice involves costs; honest assessment requires counting costs alongside benefits, refusing the fantasy of costless gains.
Knowledge is dispersed. Critical information about particular circumstances resides with individuals in those circumstances; centralization destroys context that makes knowledge actionable.
Intentions don't determine outcomes. Behavior responds to incentive structures, not exhortation; well-intentioned policies with misaligned incentives reliably produce perverse results.
Evolved institutions embody wisdom. Markets, traditions, and common law accumulate problem-solving through millions of trials; dismantling them on the basis of the current generation's reason is hubris.