Herbert Alexander Simon was the most important polymath of twentieth-century social science. His career spanned political science, economics, cognitive psychology, and computer science, and he made foundational contributions to each. His dissertation, published in 1947 as Administrative Behavior, established behavioral organization theory. His 1955 paper 'A Behavioral Model of Rational Choice' introduced bounded rationality and challenged the optimizing foundations of neoclassical economics. His mid-1950s collaboration with Allen Newell produced the Logic Theorist, widely considered the first artificial intelligence program. His 1962 paper 'The Architecture of Complexity' articulated near-decomposability as the universal structural principle of complex systems. His 1969 Sciences of the Artificial established design as a rigorous form of knowledge. His 1971 paper on organizational information systems identified attention as the binding constraint of the information age. His 1972 Human Problem Solving, with Newell, founded cognitive science as a discipline. He won the Turing Award in 1975 for his contributions to AI and the Nobel Memorial Prize in Economic Sciences in 1978 for his work on decision-making — one of very few scholars to earn both. His legacy persists in every field that takes seriously the question of how bounded minds should design the systems they inhabit.
There is a parallel reading that begins not with attention as an abstract cognitive resource but with the physical substrate that makes AI's information wealth possible. The data centers consuming municipal water supplies, the cobalt mining operations in the Democratic Republic of the Congo, the energy grids straining under computational load — these material realities suggest that AI doesn't create attention poverty so much as it redistributes material scarcity upward into cognitive registers. Simon's framework treats information as weightless, but every AI-generated image, every large language model query, every recommendation algorithm cycle draws electricity, much of it fossil-generated, and depends on supply chains that immiserate precisely those populations least able to participate in the attention economy.
Read through this lens, the "poverty of attention" becomes a luxury problem for those with sufficient material wealth to experience information abundance in the first place. The Berkeley researchers studying task seepage were studying knowledge workers with stable internet, powerful devices, and the educational capital to navigate AI tools. Meanwhile, the lithium miners whose labor enables these systems experience not attention poverty but wage poverty, not information wealth but surveillance capitalism's extractive gaze. Simon's satisficing model assumes agents with genuine choices, but the political economy of AI development concentrates choice in precisely those institutions — tech monopolies, surveillance states, financial capital — least accountable to human flourishing. The problem isn't that we can't pay attention; it's that the infrastructure of AI attention requires forms of environmental and human exploitation that render the entire framework of "information wealth" a euphemism for systemic extraction.
Simon's intellectual style was relentlessly integrative. He moved between disciplines not by abandoning his previous work but by extending it — his economics was informed by his psychology, his psychology by his computer science, his computer science by his organizational theory. The unifying thread was his commitment to understanding how real decision-makers, with their actual cognitive limitations, navigate complex environments. Every discipline he touched was reshaped by this commitment.
Simon's career at Carnegie Mellon spanned more than half a century. He joined the faculty in 1949, when it was still Carnegie Institute of Technology, and remained until his death in 2001. He played a founding role in establishing Carnegie Mellon as one of the world's leading centers for AI and cognitive science research, and his influence on the university's intellectual culture was extensive enough that the cognitive science community there continues to work within frameworks he established.
Simon's late-career writing returned repeatedly to questions he had been developing for decades: the nature of expertise, the architecture of complex systems, the design of institutions for bounded agents. He was a prolific correspondent, an active reviewer, and an unusually generous intellectual collaborator. The research program he built — through his own work, through the students he trained, through the collaborators with whom he partnered — represents one of the most productive sustained intellectual efforts of the twentieth century.
Simon was born in Milwaukee, Wisconsin, in 1916, to a family that valued intellectual achievement. His early interests were political and mathematical — he studied political science at the University of Chicago and did his dissertation research in municipal administration. The combination of empirical political science with formal analytical tools would become the signature of his subsequent work.
His dissertation, completed in 1943 and published as Administrative Behavior in 1947, was his first major statement of what would become a six-decade intellectual project. Every subsequent book, paper, and collaboration extended the framework the dissertation had articulated: that real decision-makers operate under cognitive constraints, that organizations exist to manage those constraints, and that the design of institutions for bounded agents is the central problem of the social sciences.
Bounded rationality. Simon's most influential concept — that real decision-makers operate under constraints of information, computation, and time — earned him the 1978 Nobel Memorial Prize in Economic Sciences.
Satisficing. The search procedure bounded agents actually perform, in place of the optimization that classical economics assumed.
Near-decomposability. The architectural principle that complex systems tend toward hierarchical forms with strong within-subsystem interactions and weak between-subsystem interactions.
The science of the artificial. The argument that designed things — organizations, software, policies, curricula — deserve rigorous study as a distinct science.
Human problem solving. The framework, developed with Allen Newell, through which structured problem-solving in any domain can be analyzed as heuristic search through formally represented problem spaces.
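The contrast between satisficing and optimizing can be made concrete. The sketch below is a minimal illustration, not Simon's own formulation: the function names, the numeric aspiration level, and the toy scoring function are all invented for the example. The essential difference it shows is that a satisficer stops at the first option meeting an aspiration level, while an optimizer must examine every option.

```python
import random

def satisfice(options, evaluate, aspiration, rng=random.Random(0)):
    """Examine options in arbitrary order and stop at the first one whose
    evaluation meets the aspiration level (satisficing), rather than
    scanning everything for the maximum (optimizing)."""
    shuffled = list(options)
    rng.shuffle(shuffled)
    examined = 0
    for option in shuffled:
        examined += 1
        if evaluate(option) >= aspiration:
            # "Good enough" found: stop searching immediately.
            return option, examined
    # No option met the aspiration; fall back to the best seen so far.
    return max(shuffled, key=evaluate), examined

# Toy example: 100 candidate plans scored by their own value, aspiration 80.
choice, looked_at = satisfice(range(100), evaluate=lambda p: p, aspiration=80)
print(choice, looked_at)
```

An optimizer would always examine all 100 options; the satisficer typically stops much earlier, trading away the guaranteed maximum for a drastic saving in search effort — exactly the trade a bounded agent must make.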
The tension between Simon's cognitive framing and the materialist critique resolves differently depending on which layer of the system we examine. At the individual user level, Simon's framework dominates (90/10) — knowledge workers genuinely experience attention as their scarcest resource, and AI genuinely intensifies this scarcity through the mechanisms he identified. The Berkeley study's findings on task seepage validate his predictions with remarkable precision. But shift the frame to infrastructure, and the material critique gains force (70/30) — data centers do consume watersheds, mineral extraction does destroy ecosystems, and these costs fall on populations excluded from AI's benefits.
The synthetic insight emerges when we recognize that both scarcities operate simultaneously across different system layers. AI creates a double movement: concentrating attention poverty among the information-wealthy while concentrating material poverty among those whose labor and resources support the infrastructure. This isn't contradiction but stratification — different populations experience different scarcities based on their position in the AI production chain. The satisficing mechanisms Simon identified still govern individual behavior, but they operate within political economies that predetermine which options appear satisfactory.
The proper framework, then, treats AI as a scarcity transformation engine rather than a simple abundance creator. It converts material resources (energy, minerals, water) into computational capacity, which generates information wealth, which creates attention poverty among those with access while deepening material poverty among those without. Simon's insight remains foundational — information wealth does create attention poverty — but requires supplementation with infrastructure analysis to capture the full system dynamics. The question isn't whether AI creates attention scarcity (it does) but rather which scarcities it creates for whom, and whether the trade-offs between material and cognitive poverty are ones we would choose if the choice architecture itself weren't predetermined by the existing distribution of resources.