By Edo Segal
The cost that nobody talks about is not the cost of the tool. It is the cost of reaching the tool.
One hundred dollars a month for Claude Code. That number sits in this book like a beacon — proof that the frontier is accessible, that the democratization is real, that anyone with an idea and a subscription can build. I believe that. I have seen it happen in a room in Trivandrum, watched engineers cross boundaries that had defined their careers for decades, watched capability expand in ways that made me want to call everyone I knew and tell them what was possible.
But there is a different cost, older and more stubborn, that the subscription price conceals. The cost of knowing what to ask for. The cost of understanding your domain deeply enough to direct a tool that will build whatever you describe, including the wrong thing, with cheerful competence. The cost of the years it took to develop the judgment that separates a product someone needs from a product that merely works.
That cost has not dropped. If anything, it has risen. And the gap between the collapsing cost of execution and the stubbornly high cost of knowing what execution should produce — that gap is where the next decade lives.
Joel Mokyr has been mapping that gap for forty years. Not in the language of AI, but in the language of economic history, tracing how knowledge moves from the minds that generate it to the hands that apply it. His framework — the distinction between knowing *that* and knowing *how*, the institutional channels that connect the two, the feedback loops that accelerate when the channels widen — is the clearest lens I have found for understanding what the AI revolution actually is beneath the hype and the terror.
It is not a revolution of intelligence. It is a revolution of access. The knowledge was always there. What changed is who can reach it, and at what cost, and through what channel. Mokyr spent his career studying every previous moment when a channel like this opened — the printing press, the scientific society, the patent system, the public university — and what he found is both hopeful and sobering. The expansion always comes. The benefits always materialize. And the institutions that determine whether those benefits reach the many or stay with the few are always, at the moment of transition, inadequate.
They were inadequate when the power loom arrived. They were inadequate when the railroad arrived. They are inadequate now.
This book applies Mokyr's framework to our moment with the rigor his work demands. It will not tell you what to build. It will show you why what you build around the technology matters more than the technology itself.
The river is wider than it has ever been. The dams are the work.
-- Edo Segal ^ Opus 4.6
Joel Mokyr (b. 1946) is a Dutch-born Israeli-American economic historian and the Robert H. Strotz Professor of Arts and Sciences and Professor of Economics and History at Northwestern University. Born in Leiden, the Netherlands, and raised in Israel, Mokyr has spent more than four decades investigating the relationship between technological innovation, institutional development, and long-run economic growth. His major works include *The Lever of Riches: Technological Creativity and Economic Progress* (1990), which surveyed the history of invention from antiquity through the twentieth century; *The Gifts of Athena: Historical Origins of the Knowledge Economy* (2002), which introduced his influential distinction between propositional knowledge (knowing *that*) and prescriptive knowledge (knowing *how*) and argued that the channels connecting the two are the primary drivers of sustained growth; and *A Culture of Growth: The Origins of the Modern Economy* (2016), which traced the cultural and intellectual conditions that made the Industrial Revolution possible, centering the concept of the "Industrial Enlightenment" — the transformation of the relationship between natural philosophy and practical craft in eighteenth-century Europe. Mokyr was awarded the Nobel Memorial Prize in Economic Sciences in October 2025, shared with Philippe Aghion and Peter Howitt, for his contributions to understanding the role of knowledge and institutions in economic development. He has been a persistent voice on the importance of institutional adaptation during periods of rapid technological change, warning that technology creates possibility but institutions determine whether that possibility produces broadly shared benefit or concentrated extraction.
Around 1712, the London instrument maker John Rowley built an orrery for Charles Boyle, the Fourth Earl of Orrery — a mechanical model of the solar system, adapted from a design by the clockmaker George Graham, with brass planets revolving around a central sun, driven by gears and clockwork. The device was beautiful. It was also, in Joel Mokyr's framework, something far more consequential than a piece of aristocratic furniture. The orrery was a channel. It took propositional knowledge — the Copernican understanding that planets orbit the sun according to mathematical laws — and made it accessible to anyone who could turn a handle and watch the spheres move. A person who could not read Newton's *Principia* could grasp, through the orrery's mechanism, the essential structure of the solar system. The knowledge existed before Rowley built the device. What changed was who could reach it.
Mokyr's central contribution to economic history rests on the insight that the most consequential developments in the story of human prosperity were not inventions but channels — institutional, cultural, and technical infrastructures that allowed knowledge to flow from the people who generated it to the people who could apply it. The Industrial Revolution did not begin with the steam engine. It began with the creation of what Mokyr calls the "Industrial Enlightenment," a transformation in the relationship between those who understood the natural world and those who made things. Before this transformation, natural philosophy and practical craft occupied separate social worlds. The gentleman who studied optics and the lens grinder who polished glass inhabited different institutions, spoke different languages, moved in different circles. Knowledge existed in abundance. Application existed in abundance. The bridge between them was narrow, unreliable, and often impassable.
The Industrial Enlightenment widened that bridge. Scientific societies — the Royal Society of London, the Lunar Society of Birmingham, the literary and philosophical societies that proliferated across provincial England — created physical spaces where natural philosophers and practical men could meet, exchange ideas, and discover that each possessed something the other needed. Patent law created economic incentives for translating scientific insight into commercial application. The Encyclopédie of Diderot and d'Alembert attempted, with extraordinary ambition, to compile the entirety of useful knowledge into a form accessible to anyone who could read. Technical education expanded, first through informal apprenticeship networks, later through mechanics' institutes and eventually through polytechnic schools. Each of these developments was a channel — a reduction in the cost of moving knowledge from where it was understood to where it could be used.
The critical insight, and the one that distinguishes Mokyr's analysis from simpler accounts of the Industrial Revolution, is that the knowledge itself was not the binding constraint. Humanity understood more about the natural world in 1700 than it could apply. The binding constraint was the cost of access — the friction, in the language of The Orange Pill, between knowing and doing. The orrery reduced that friction for astronomy. The patent system reduced it for commercial invention. The scientific society reduced it for the exchange of experimental results. Each channel, by reducing the cost of access, expanded the population of people who could participate in the application of knowledge to practical problems. And the expansion of that population — not the expansion of knowledge itself — was what produced the sustained economic growth that transformed the world.
This framework, developed across four decades of scholarship, from The Lever of Riches through The Gifts of Athena and A Culture of Growth, illuminates the AI moment with a precision that no purely technical analysis can match. The large language model that crossed the threshold described in the opening chapters of The Orange Pill — the moment when the machine learned to meet humans on their terms, in natural language, without requiring them to translate their intentions into the machine's preferred syntax — is not merely a tool. It is a channel. And measured by the criteria Mokyr has spent his career developing, it is the most powerful channel for the transmission of useful knowledge ever created.
Consider what the channel actually does. Before December 2025, a person who wanted to build software needed to possess prescriptive knowledge — the specific technical skills of programming languages, frameworks, deployment systems, debugging methods — that took years to acquire and was costly to maintain as the technical landscape shifted beneath the practitioner's feet. The knowledge of what could be built existed widely. The knowledge of how to build it existed narrowly. The channel between the two — formal education, bootcamps, documentation, Stack Overflow, mentorship — was wide compared to previous eras but still imposed significant costs in time, money, and cognitive effort. A developer in Lagos, as The Orange Pill describes, might possess extraordinary propositional knowledge — a deep understanding of what her users needed, what the market lacked, what problems were worth solving — and still be unable to build, because the prescriptive knowledge required to convert her vision into working software was gated behind barriers of training, infrastructure, and institutional support.
The natural language interface abolished those barriers for a significant and rapidly expanding class of problems. Not all barriers. Not for all problems. But for enough problems, and with enough speed, that the change was felt as a phase transition rather than an incremental improvement. The imagination-to-artifact ratio that The Orange Pill describes — the collapsing distance between what a person can conceive and what a person can build — is, in Mokyr's framework, a measure of channel efficiency. The ratio collapsed not because new knowledge was created but because the cost of converting existing knowledge into capability approached zero.
The parallel to the Industrial Enlightenment is structural, not merely analogical. The Industrial Enlightenment created channels through which the accumulated scientific knowledge of the seventeenth century could flow to practical application. The computational enlightenment — if the term is permitted — has created a channel through which the accumulated knowledge of human civilization, encoded in the training data of large language models, can flow to anyone who can describe a problem in the language they already speak. The orrery made planetary mechanics accessible to anyone who could turn a handle. The large language model makes software engineering, legal analysis, medical reasoning, financial modeling, and a hundred other domains of prescriptive knowledge accessible to anyone who can form a sentence.
Mokyr himself, in the months following his Nobel Prize in October 2025, described AI in terms remarkably consistent with this framework. Speaking to the Wall Street Journal, he compared AI to the personal computer of the 1980s and 1990s — "a very convenient tool" that "will aggregate information at a dazzling rate" and "give us much better access to knowledge." In his extended interview with Aventine in November 2025, he went further, identifying personalization as AI's most distinctive capability: "In many fields in our world, we have, almost by necessity, to take a one-size-fits-all approach to delivering certain services. But if you can look at each case and fine-tune the service you're delivering to that person, you are changing human life enormously." This is a channel argument. The one-size-fits-all approach is a consequence of the high cost of converting general knowledge into specific application. When AI reduces that cost, the channel widens — not just for software, but for medicine, education, law, and every other domain where the gap between what is known in general and what can be applied to a specific case has constrained human flourishing.
But Mokyr's framework also contains a warning that the triumphalists tend to miss. The Industrial Enlightenment did not produce broadly shared prosperity automatically. The channels it created were exploited first by those who were already positioned to use them — factory owners, merchants, engineers with access to capital and connections. The gains of the first Industrial Revolution took roughly sixty years to translate into improved living standards for working people, a period economic historians call "Engels' pause." During that pause, aggregate productivity rose while wages stagnated, inequality widened, and the social costs of the transition fell disproportionately on the people least equipped to bear them.
The channel was open. The knowledge was flowing. But the institutions that would determine who benefited from the flow — labor law, educational reform, the extension of political franchise — had not yet been built. The technology was running ahead of the institutional infrastructure, and the gap between them was filled with human suffering.
Mokyr identified this pattern with characteristic precision at his Nobel press conference in October 2025: "In the past, we've had major technological changes, but that change was relatively slow, and so institutions had the time to adjust. So labor relations changed, the organization, the work changed, and it all worked out reasonably well. But if technological change is very, very quick, then institutions will fall behind. And once that disequilibrium occurs, societies could be in trouble, and things could happen that nobody expects."
The statement is diagnostic, not fatalistic. Mokyr does not argue that institutional failure is inevitable. He argues that it is the default outcome when the speed of technological change exceeds the speed of institutional adaptation. The Industrial Enlightenment eventually produced institutional responses adequate to its technological revolution — but "eventually" meant generations, and those generations paid the cost of the lag. The question for the AI transition is whether the institutional response can be accelerated, whether the dams can be built before the flood.
The Orange Pill frames this question through the metaphor of the beaver building dams in the river of intelligence. Mokyr's framework translates the metaphor into the analytical language of economic history: the dam is the institution, and the institution is the structure that determines whether a technological channel produces broadly distributed benefit or concentrated extraction. The knowledge is flowing. The channel is wider than any channel in human history. The question — Mokyr's question, the one his entire career has prepared him to ask — is whether the institutions that surround the channel are adequate to the flood.
The answer, in the spring of 2026, is plainly no. The regulatory frameworks being developed in Europe, the United States, and Asia address the supply side of AI — what the companies that build these systems may and may not do. They do not address the demand side — what citizens, workers, students, and parents need to navigate the transition wisely. The educational institutions that would prepare people to work productively with AI are still organized around the skill hierarchies of the previous era. The labor institutions that would protect workers during the transition are weaker in most developed economies than they have been in half a century. The cultural norms that would help people distinguish between productive intensity and compulsive overwork — the distinction The Orange Pill draws between flow and auto-exploitation — are still forming, still contested, still fragile.
The channel is open. The knowledge is flowing faster than it has ever flowed before. And the institutions that will determine whether the flow irrigates or floods are, by any historical standard, inadequate to the volume.
This is not a reason for despair. It is a reason for urgency. The Industrial Enlightenment produced its institutional response eventually — imperfectly, painfully, over generations, but eventually. The question is whether a society that understands the historical pattern can shorten the lag, can build the dams before the flood rather than after it. Mokyr's framework does not guarantee success. It guarantees that the attempt matters more than anything else.
The orrery on Charles Boyle's desk made the solar system legible to anyone who could turn a handle. The large language model makes the accumulated knowledge of human civilization legible to anyone who can form a question. The knowledge was always there. The channel has changed. And the institutions that will determine who benefits from the channel — that will determine whether the computational enlightenment produces an expansion of human capability or a concentration of human extraction — are the work that remains to be done.
Joel Mokyr has spent more than three decades refining a distinction that sounds, at first encounter, like the kind of taxonomic exercise that occupies academics without troubling the real world. The distinction is between propositional knowledge — knowing that something is the case — and prescriptive knowledge — knowing how to do something about it. Propositional knowledge is the understanding that heating iron ore with carbon at sufficient temperatures produces a stronger metal. Prescriptive knowledge is the sequence of specific operations — the temperatures, the timing, the tools, the techniques — required to actually produce that metal in a forge. The first is science. The second is craft. And the distance between them, Mokyr argues, is where the economic history of the modern world actually lives.
The distinction is not original to Mokyr. Philosophers have debated the relationship between "knowing that" and "knowing how" since Gilbert Ryle drew the line in The Concept of Mind in 1949. What Mokyr brought to the distinction was an economic historian's attention to costs. The question he asked was not whether propositional and prescriptive knowledge are different in kind — they are — but what it costs to convert one into the other. And his central finding, documented across the sweep of technological history from ancient metallurgy to the microprocessor, is that the cost of that conversion is the single most important variable in explaining why some societies achieve sustained economic growth and others do not.
Before the Industrial Enlightenment, the cost of conversion was enormous. A natural philosopher might understand, in propositional terms, why a particular chemical reaction produced a useful result. But transmitting that understanding to the craftsman who needed to apply it required a chain of translations — from mathematical formulation to verbal description to practical demonstration to embodied skill — each of which introduced noise, delay, and loss. The philosopher and the craftsman spoke different languages, inhabited different institutions, and operated on different timescales. The knowledge existed at both ends. The channel between them was narrow, expensive, and unreliable.
Every major institutional innovation of the Industrial Enlightenment can be understood as a reduction in this conversion cost. Scientific societies created spaces where philosophers and practitioners could meet in person, bypassing the written channel entirely. The Encyclopédie attempted to codify prescriptive knowledge — the actual techniques of dozens of trades — in a form that could be distributed at the cost of printing rather than the cost of apprenticeship. Patent law created economic incentives for practitioners to make their prescriptive knowledge public rather than hoarding it as trade secrets. Technical education formalized the conversion process itself, creating institutions whose explicit purpose was to take propositional knowledge generated by science and convert it into prescriptive knowledge usable by industry.
Each reduction in conversion cost expanded the population of people who could participate in technological innovation. And the expansion of that population — not the generation of new scientific knowledge, which was proceeding at a pace largely independent of these institutional developments — was what produced the acceleration in technological creativity that economic historians call the Industrial Revolution.
The printing press illustrates the pattern at the level of propositional knowledge. Before Gutenberg, the cost of distributing a single book was roughly equivalent to the cost of a skilled laborer's annual wages. After Gutenberg, it fell by more than an order of magnitude within a generation. The knowledge in those books was not new. The Bible existed before the printing press. The works of Aristotle existed. What changed was who could reach them. The expansion of access did not merely distribute existing knowledge more widely. It created conditions for new knowledge to be generated, because the larger the population of people engaging with existing knowledge, the higher the probability that someone in that population would combine existing ideas in a novel way. The printing press was a channel that widened the river.
The university system illustrates the pattern at the level of knowledge acquisition. Before the modern university, acquiring propositional knowledge required either independent wealth (to purchase books and tutors), ecclesiastical connections (to access monastic libraries), or the rare good fortune of personal apprenticeship to a learned individual. The university reduced the cost of acquisition by concentrating knowledge, teachers, and students in a single institution and distributing the fixed costs across a larger population. Again, the knowledge was not new. What changed was the cost of reaching it.
Against this historical backdrop, Mokyr's framework reveals what the AI transition actually accomplished with a specificity that purely technical descriptions miss. The large language model did not create new knowledge. The training data — the vast corpus of text, code, technical documentation, scientific literature, and accumulated human expression — existed before the model was trained on it. What the model did was reduce the cost of converting propositional knowledge into prescriptive knowledge to near zero for a significant and rapidly expanding class of problems.
The engineer in Trivandrum described in The Orange Pill — the woman who had spent eight years on backend systems and had never written a line of frontend code — possessed extensive propositional knowledge about what user interfaces should do, how they should behave, what users expected. She possessed the understanding. What she lacked was the prescriptive knowledge — the specific syntax, framework conventions, rendering logic, and deployment procedures — required to implement that understanding as working code. In the previous regime, acquiring that prescriptive knowledge would have cost her months of formal study, practice, and the specific kind of embodied learning that only comes through repeated failure.
Claude Code provided the conversion. She described what the interface should do in natural language — the language she already possessed, the language in which her propositional knowledge already lived — and the model converted her description into working implementation. The conversion cost dropped from months of human capital investment to minutes of computation. The knowledge was not new. The channel was.
The Nobel Committee itself recognized this implication. In the 2025 Popular Science Background accompanying Mokyr's Nobel Prize in Economics, the committee wrote that "Mokyr's work shows that AI could reinforce the feedback between propositional and prescriptive knowledge, and increase the rate at which useful knowledge is accumulated." The statement is remarkable for its precision. It does not say that AI creates knowledge. It says that AI reinforces the feedback loop between the two types of knowledge — the cycle in which propositional understanding generates prescriptive techniques, which generate new data and new problems, which generate new propositional understanding, which generates further prescriptive refinement. Mokyr's career-long argument is that this feedback loop is the engine of sustained economic growth. The Nobel Committee was saying, in effect, that AI had supercharged the engine.
The feedback loop has a specific mechanism that the AI transition has accelerated. When prescriptive knowledge is cheap to produce, more experiments get run. When more experiments get run, more data is generated. When more data is generated, propositional knowledge expands. When propositional knowledge expands, new prescriptive possibilities emerge. The loop is self-reinforcing, and its speed is governed by the bottleneck at each stage. For most of human history, the bottleneck was the conversion stage — the expensive, slow, skill-dependent process of turning understanding into technique. AI attacked the bottleneck directly, and the consequence is an acceleration of the entire cycle.
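The loop's arithmetic can be made concrete with a toy model. The sketch below is emphatically not Mokyr's own formalism (his argument is historical, not algebraic); it is a minimal simulation with invented parameters, in which a single conversion-cost term governs how fast propositional knowledge becomes prescriptive technique, and technique feeds back into new understanding.

```python
# Toy simulation of the propositional/prescriptive feedback loop.
# Illustrative only: every parameter here is invented, not drawn from Mokyr.

def run_loop(conversion_cost: float, years: int = 50) -> float:
    """Return the stock of prescriptive knowledge after `years` cycles."""
    propositional = 1.0  # "knowing that" (arbitrary units)
    prescriptive = 1.0   # "knowing how"
    for _ in range(years):
        # Conversion stage: understanding becomes technique. A cheaper
        # conversion (a wider channel) means more of it happens per cycle.
        prescriptive += 0.1 * propositional / conversion_cost
        # Feedback stage: abundant technique means more experiments run,
        # more data generated, and therefore new propositional knowledge.
        propositional += 0.02 * prescriptive
    return prescriptive

# Identical starting knowledge, two different channel widths.
print(f"narrow channel: {run_loop(conversion_cost=10.0):10.1f}")
print(f"wide channel:   {run_loop(conversion_cost=0.1):10.1f}")
```

Under these invented numbers, the drop in conversion cost compounds through every turn of the loop, so the gap between the two runs widens with each cycle. That is the claim in miniature: the bottleneck governs the cycle, and the cycle, not any single stage, is the engine of growth.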
But the cost reduction is not uniform, and the non-uniformity matters enormously for the distributional question that Mokyr's framework insists on asking. The conversion cost has dropped most dramatically for problems that can be fully specified in natural language — software development, document drafting, data analysis, pattern recognition in structured datasets. It has dropped less for problems that require tacit knowledge — the surgeon's feel for tissue, the therapist's reading of a patient's unspoken distress, the teacher's intuition about which student is lost and which is bored. And it has dropped hardly at all for problems that require what Mokyr calls "epistemic base" expansion — the generation of genuinely new propositional knowledge through scientific experimentation, philosophical inquiry, or artistic creation.
This non-uniformity creates a distributional landscape. The workers whose prescriptive knowledge was most fully specifiable in language — programmers, paralegals, data analysts, technical writers — face the sharpest displacement. The workers whose prescriptive knowledge is most deeply tacit — surgeons, therapists, master craftspeople, and teachers at their best — face less immediate displacement but are not immune to the long-term trajectory. And the workers whose value lies in expanding the propositional knowledge base — scientists, philosophers, artists, the people who ask questions rather than answer them — occupy the most durable position, because the feedback loop depends on new knowledge being generated, and generation is the stage AI has affected least.
The Orange Pill captures this landscape intuitively when it argues that the premium is shifting from execution to judgment, from knowing how to build to knowing what should be built. Mokyr's framework makes the intuition precise. The premium is shifting because the cost of prescriptive knowledge has collapsed for specifiable problems, making execution abundant, while the cost of propositional knowledge generation — the cost of genuine insight, genuine novelty, genuine understanding of problems worth solving — remains high. The scarce factor commands the premium. And scarcity has migrated from the hands to the mind, from the craftsman to the architect, from the person who knows how to the person who knows why.
The democratization argument follows from the same logic. When the cost of prescriptive knowledge acquisition is high, access to that knowledge is gated by the institutions that provide it — universities, bootcamps, corporate training programs, mentorship networks. These institutions are geographically concentrated, economically exclusive, and culturally specific. The developer in Lagos, the student in Dhaka, the self-taught tinkerer in rural India — each possessed propositional knowledge that could have generated technological innovation. Each was blocked by the cost of acquiring the prescriptive knowledge required to implement their ideas. When AI reduces that cost to the price of a subscription, the gate opens. Not fully. Connectivity, hardware, language barriers, and economic precarity remain real constraints. But the gate has opened wider than any previous institutional innovation managed to open it, and the population of potential innovators has expanded correspondingly.
Mokyr, characteristically, refused to let the optimism run unchecked. At his Nobel press conference, when asked directly about AI's labor market implications, he expressed concern not about technological unemployment but about the speed of institutional adjustment: "If technological change is very, very quick, then institutions will fall behind. And once that disequilibrium occurs, societies could be in trouble." The cost of knowledge conversion has dropped. The cost of institutional adaptation has not. And the gap between them — the gap between how fast the technology moves and how fast the institutions that channel its benefits can respond — is where the human cost of the transition will accumulate.
The printing press reduced the cost of distributing propositional knowledge. The university reduced the cost of acquiring it. The patent system reduced the cost of incentivizing its production. AI has reduced the cost of converting it into capability. Each reduction expanded the population of participants in the knowledge economy. Each expansion produced gains that were eventually shared broadly. And each transition — without exception — produced a period of institutional lag during which the gains were captured narrowly and the costs were borne by those least equipped to bear them.
The question is not whether AI will expand the knowledge economy. The feedback loop is already accelerating. The question is what institutions will be built to ensure that the expansion benefits more than the people who happened to be positioned at the channel's mouth when it opened.
In 1712, Thomas Newcomen installed a steam engine at a coal mine near Dudley Castle in Staffordshire. The engine did one thing: it pumped water out of the mine. It did this thing badly — consuming enormous quantities of coal for modest mechanical output, breaking down frequently, requiring constant attendance by skilled operators. No one who watched Newcomen's engine wheezing and clanking near Dudley Castle could have predicted the railroad, the steamship, the factory system, or the transformation of human civilization that would follow. The engine was, in Joel Mokyr's taxonomy, a macro-invention: not an incremental improvement on existing technology but a qualitative discontinuity, a device that operated on principles sufficiently different from anything that preceded it that its potential applications could not be deduced from its initial form.
Mokyr distinguishes macro-inventions from micro-inventions with a precision that illuminates the AI transition more clearly than any competing framework. A macro-invention is radical and discontinuous. It cannot be predicted from the trajectory of prior technology. It opens a new frontier of possibility that takes decades or centuries to fully explore. The steam engine was a macro-invention. The printing press was a macro-invention. Electrification was a macro-invention. Each arrived in a crude initial form, performed a narrow initial function, and was dismissed or underestimated by observers who could not see past the limitations of the first instantiation.
A micro-invention, by contrast, is an incremental improvement that exploits the potential opened by a macro-invention. James Watt's separate condenser, which dramatically improved the steam engine's efficiency, was a micro-invention. So were the hundreds of subsequent refinements — higher-pressure boilers, better metallurgy, improved valve mechanisms, new applications from pumping to locomotion to power generation — that transformed Newcomen's crude pump into the engine of the Industrial Revolution. The macro-invention created the possibility space. The micro-inventions explored it. And the exploration took more than a century.
The pattern is consistent across every major technological transition Mokyr has studied. The initial macro-invention is always limited, always crude, always applied first to the most obvious problem it can solve. The printing press initially reproduced the same texts that scribes had been copying — Bibles, classical works, devotional literature. The electric light initially replaced gas lamps in the same fixtures, in the same buildings, for the same purposes. The personal computer initially automated the same calculations that had been done with ledgers and adding machines. In every case, the truly transformative applications — the ones that changed human civilization rather than merely accelerating it — came later, from people who grew up inside the new possibility space and could see opportunities invisible to those who remembered the old paradigm.
The large language model that crossed the threshold in December 2025 was, by Mokyr's criteria, a macro-invention. Not because it was new — large language models had existed for several years, improving incrementally with each generation — but because a specific generation crossed a qualitative boundary that previous generations had only approached. The boundary was the natural language interface: the moment when the cost of communicating with the machine dropped below the cost of learning a specialized language, when the machine began meeting humans on their terms rather than requiring humans to meet it on the machine's terms.
This is the distinction The Orange Pill captures in its description of the "phase transition from water to ice: the same substance, suddenly organized according to different rules." The metaphor is apt precisely because phase transitions are discontinuous. Water does not become gradually more ice-like as it cools. It remains water until it reaches a specific threshold, and then it reorganizes abruptly. The capabilities that produced the December 2025 threshold — transformer architectures, scaling laws, reinforcement learning from human feedback — had been developing incrementally for years. But the experience of using the resulting system was discontinuous. The machine could now hold a conversation. It could interpret ambiguous natural language instructions. It could infer intent from context. It could maintain coherent interaction across thousands of exchanges. The user no longer needed to meet the machine halfway. The translation cost that every previous computing interface had levied — from command lines to graphical interfaces to touchscreens — had been abolished.
Mokyr's macro/micro framework predicts what happens next, and the prediction is both exhilarating and sobering. The exhilarating part: the applications visible in 2025 and early 2026 are the equivalent of Newcomen's pump. They are the first, most obvious uses of the new capability applied to existing problems. Coding assistants. Writing tools. Image generators. Document summarizers. Each valuable. Each impressive. Each a conservative application of a technology whose possibility space has barely been explored.
The truly transformative micro-inventions — the railroads and steamships and factory systems of the AI revolution — have not yet been conceived. They will come from people who spend years immersed in the new capability and discover applications that no one working in 2026 can anticipate, just as no one watching Newcomen's pump in 1712 could anticipate the railroad. The Napster Station described in The Orange Pill — a product built in thirty days that would have taken six to twelve months under the previous paradigm — is an early micro-invention, suggestive of the new possibility space but not yet representative of its full extent. It applied AI capability to an existing problem (building a product) in an existing domain (consumer technology) at an existing level of ambition (a conference demonstration). The applications that will define the AI era will apply AI capability to problems that cannot currently be specified, in domains that do not yet exist, at levels of ambition that the current generation of builders cannot yet imagine.
Mokyr documented this pattern with particular clarity in The Lever of Riches, where he traced the cascade of micro-inventions that followed every major macro-invention in the history of technology. The pattern has a specific structure: an initial period of conservative application (the new technology solving old problems), followed by a period of creative adaptation (the new technology being modified to solve problems it was not designed for), followed by a period of systematic exploitation (institutions reorganizing around the new technology's capabilities), followed by a period of combinatorial explosion (the new technology being combined with other technologies and institutional innovations to produce applications that could not have been predicted from any single component).
The AI macro-invention is currently in the first period — conservative application. Coding assistants are the new technology solving the old problem of software development. Writing tools are the new technology solving the old problem of document production. The second period — creative adaptation — is beginning. The engineer in Trivandrum who used a coding assistant not to write code faster but to write code she had never been trained to write represents creative adaptation: the technology being applied to a problem (cross-domain skill transfer) that it was not designed to solve. The thirty-day product build represents creative adaptation: the technology enabling a timeline and team structure that the previous paradigm could not support.
The third and fourth periods — systematic exploitation and combinatorial explosion — are where the truly consequential changes will occur, and they are still largely in the future. Systematic exploitation will occur when organizations restructure themselves around AI capabilities — when the org chart changes, when the hiring criteria shift, when the definitions of a "team" and a "project" and a "product" are reconceived from the ground up rather than retrofitted. The Orange Pill's "vector pods" — small groups whose purpose is to decide what should be built rather than to build it — are early experiments in systematic exploitation. Combinatorial explosion will occur when AI capabilities are combined with other technological and institutional innovations — biotechnology, advanced manufacturing, new educational models, new forms of political organization — to produce applications that no single technology could have generated alone.
Mokyr's framework insists on a temporal discipline that the technology industry habitually violates. The cascade from macro-invention to full exploitation took more than a century for the steam engine, roughly seventy years for electrification, approximately forty years for the personal computer. Each successive transition has been faster than the last, but none has been instantaneous, because the limiting factor is not the technology but the institutional, organizational, and cultural adaptation required to exploit it. Humans do not reorganize their institutions, their educational systems, their career structures, and their cultural norms at the speed of Moore's Law. They reorganize at the speed of politics, pedagogy, and generational turnover — which is to say, slowly.
The implication is that the most important AI applications of the next decade are not the ones being built today. They are the ones that will emerge from the interaction between AI capabilities and institutional innovations that do not yet exist — educational models that integrate AI from the ground up rather than bolting it onto industrial-age curricula, organizational structures that treat AI as a constituent element rather than a productivity tool, economic arrangements that distribute the gains of AI-augmented productivity rather than concentrating them among early deployers. These institutional innovations are the micro-inventions that will determine whether the AI macro-invention produces an expansion of human flourishing or a concentration of human extraction.
At his Nobel press conference, Mokyr compared the difficulty of predicting AI's trajectory to asking Thomas Newcomen how his invention would change the world. "Given AI is just in its early stages," he said in his Aventine interview, "it would be rash and irresponsible of me to say how it will change things." The humility is not false modesty. It is the hard-won conclusion of a historian who has spent decades studying what happens after macro-inventions and has learned that the most consequential applications are always the ones the inventors could not imagine.
What can be said is that the possibility space is vast, the cascade has barely begun, and the institutions that will determine whether the cascade produces broadly shared benefit or concentrated extraction are still being formed. The macro-invention has occurred. The micro-inventions are beginning. The institutional response — the dams, the channels, the norms, the laws — is the work of the next generation. And that work, not the technology itself, will decide what the AI revolution ultimately means.
James Watt did not invent the steam engine. This fact, elementary to any economic historian, is routinely misunderstood by the popular imagination, which compresses the Industrial Revolution into a tidy narrative of singular genius. Newcomen invented the atmospheric engine in 1712. Watt improved it — decisively, brilliantly, but incrementally — by adding the separate condenser in 1769, fifty-seven years later. The separate condenser reduced fuel consumption by roughly seventy-five percent, transforming the engine from a device that could only be operated economically at coal mines (where fuel was essentially free) into a device that could be deployed anywhere. That single micro-invention — an improvement to an existing macro-invention — did more to transform the economic geography of Britain than the original engine had done.
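The economics of that transformation reward a moment of arithmetic. The back-of-envelope sketch below uses invented figures throughout; only the roughly seventy-five percent fuel saving comes from the historical record. It shows why a fuel-hungry engine is viable at a pithead, where coal is nearly free, and prohibitive anywhere coal must be hauled.

```python
# Back-of-envelope: why the separate condenser changed where a steam
# engine could be run. All figures are invented for illustration;
# only the ~75% fuel saving is historical.

COAL_PRICE = {"colliery (pithead)": 0.5, "town mill": 8.0}  # per ton, hypothetical
NEWCOMEN_BURN = 100.0                 # tons of coal per year, hypothetical
WATT_BURN = NEWCOMEN_BURN * 0.25      # separate condenser: ~75% less fuel

for site, price in COAL_PRICE.items():
    print(f"{site:20s}  Newcomen fuel bill: {NEWCOMEN_BURN * price:6.1f}"
          f"   Watt fuel bill: {WATT_BURN * price:6.1f}")
```

At the pithead, neither bill is decisive. In town, the Newcomen bill is ruinous and the Watt bill merely substantial, which is the whole geographic argument compressed into two rows.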
But Watt's condenser was only the beginning. Over the following century, hundreds of engineers, tinkerers, entrepreneurs, and craftsmen produced thousands of micro-inventions that explored the steam engine's possibility space with a thoroughness that no single mind could have achieved. Richard Trevithick built the first high-pressure engine, making steam power portable. George Stephenson applied it to locomotion, creating the railroad. Robert Fulton applied it to navigation, creating the steamship. Boulton and Watt's rotative engines carried it into the textile mills, transforming factory production. Each micro-invention was a creative application of the original macro-invention's capability to a problem the original inventor had not imagined solving. And each, in Mokyr's framework, required not just technical ingenuity but institutional support — patent protection, access to capital, markets for the new products, an educated workforce capable of operating and maintaining the new machines.
The pattern has repeated with every major macro-invention in the historical record. Gutenberg's press was followed by centuries of micro-inventions in typography, papermaking, binding, illustration, and distribution that transformed a device for reproducing biblical text into the infrastructure of modern knowledge. Edison's electric light was followed by decades of micro-inventions in power generation, transmission, motor design, and electrical engineering that transformed a novelty for illuminating parlors into the power system that runs civilization. The transistor, invented at Bell Labs in 1947, was followed by micro-inventions in circuit design, fabrication, software, networking, and user interface that transformed a device for amplifying telephone signals into the information technology that reshaped every aspect of human life.
In every case, three features of the micro-invention cascade are consistent. First, the initial applications are conservative — the new capability applied to existing problems in existing domains. Second, the transformative applications emerge later, often decades later, from practitioners who grew up inside the new possibility space and could see opportunities that the original inventors could not. Third, the pace and direction of the cascade are determined not by the technology alone but by the institutional infrastructure that supports experimentation, rewards creativity, distributes risk, and channels gains.
The AI macro-invention — the natural language interface to accumulated human knowledge that crossed the threshold in December 2025 — is currently in the earliest stage of its micro-invention cascade. The applications visible in 2026 are overwhelmingly conservative: AI applied to existing problems in existing domains. Coding assistants help programmers write code faster. Writing tools help writers produce text more efficiently. Image generators produce visual content for existing design workflows. Document summarizers compress existing text for existing readers. Each application is valuable. None represents a fundamental reconception of what technology can do.
Mokyr's historical framework suggests that the conservative stage is both inevitable and temporary. It is inevitable because the first people to use any new capability are the people whose existing problems it most obviously solves — and those people, by definition, are thinking in terms of the old paradigm. A programmer who uses Claude Code to write code faster is still thinking like a programmer. A writer who uses AI to draft text more efficiently is still thinking like a writer. The tool has changed. The conceptual framework through which the tool is deployed has not.
The temporary nature of the conservative stage is equally predictable, because the new capability attracts new practitioners — people who did not previously participate in the domains the tool serves, who bring different problems, different perspectives, and different ambitions. The engineer in Trivandrum who used Claude Code to build frontend interfaces she had never been trained to build was not using the tool conservatively. She was using it to cross a domain boundary that the previous paradigm had made impassable. She was not doing old work faster. She was doing new work — work that had been inaccessible to her despite her intelligence and motivation, gated behind a prescriptive knowledge barrier that AI dissolved.
This is the leading edge of the second stage: creative adaptation. The technology being applied to problems it was not designed to solve, by people who see possibilities that the original deployers did not. The Orange Pill's account of the Napster Station build — a complete product conceived, designed, and shipped in thirty days — represents a more advanced instance of creative adaptation. The product was not merely code written faster. It was a reconception of what a small team could achieve, a demonstration that the thirty-day product cycle was now feasible for a class of products that had previously required quarters. The timeline changed. The team structure changed. The definition of what constituted a "minimum viable product" changed. These are not efficiency gains. They are paradigm shifts — small ones, early ones, but paradigm shifts nonetheless.
What comes next, if the historical pattern holds, is a period of systematic exploitation: organizations, industries, and institutions restructuring themselves around the new capability rather than bolting it onto existing structures. Economic historians have documented this transition with particular precision in the case of electrification; Paul David's study of the dynamo is the classic account, and it fits Mokyr's framework exactly. When factories first adopted electric power, they replaced their central steam engines with central electric motors, keeping the same factory layout — the same long shafts, the same belt-driven machines, the same physical arrangement that the steam engine had required. Productivity gains were modest, because the organizational structure had not changed. The electric motor was being used to do what the steam engine had done, only slightly more conveniently.
The productivity revolution came a generation later, when a new cohort of factory designers — people who had grown up with electricity and did not carry the mental model of the steam-powered factory — realized that electric motors could be distributed throughout the factory, one per machine, eliminating the central shaft entirely. This allowed factories to be redesigned from scratch: organized by workflow rather than by proximity to a power source, with flexible layouts that could be reconfigured as products changed. The productivity gains from this reorganization dwarfed the gains from the simple substitution of electric for steam power. But the reorganization required a generation of learning, experimentation, and institutional adaptation.
The AI transition shows every sign of following the same trajectory. Most organizations in 2026 are using AI the way the first electrified factories used electric motors: as a substitute for the previous power source, bolted onto existing organizational structures. The coding assistant replaces the junior developer. The AI writing tool replaces the first draft. The chatbot replaces the customer service representative. In each case, the organizational structure — the hierarchy, the division of labor, the definition of roles and responsibilities — remains unchanged. The technology has been substituted. The organization has not been redesigned.
The organizations that will capture the full value of the AI macro-invention will be the ones that redesign themselves around AI's capabilities — just as the factories that captured the full value of electrification were the ones that redesigned themselves around distributed electric power. The Orange Pill's vector pods — small groups of three or four people whose purpose is to decide what should be built rather than to build it — are early experiments in this redesign. They represent a recognition that when execution becomes abundant, the scarce resource is direction, and organizational structure should reflect the new scarcity rather than the old one.
But systematic exploitation is only the third stage. The fourth — combinatorial explosion — is where the most consequential changes occur, and it is the stage that is hardest to predict, because it involves the combination of AI capabilities with other technological and institutional innovations to produce applications that no single component could have generated alone. The railroad was a combinatorial innovation: the steam engine combined with iron rail technology, new methods of civil engineering, new forms of corporate finance (the joint-stock company), new regulatory frameworks (railroad commissions), and new labor arrangements. No single component produced the railroad. The railroad emerged from the combination of components that had developed independently and found each other in the new possibility space the steam engine had opened.
The AI combinatorial explosion is still in the future, but its contours are beginning to emerge. AI combined with biotechnology is producing new methods of drug discovery, protein structure prediction, and genomic analysis that neither technology could have achieved alone. AI combined with advanced manufacturing is producing new materials, new production methods, and new supply chain configurations. AI combined with educational innovation — which is still in its infancy — may eventually produce personalized learning systems that adapt to each student's cognitive style, pace, and interests in ways that the one-size-fits-all classroom could never achieve. Mokyr identified this last possibility as AI's most distinctive potential contribution: "If you can look at each case and fine-tune the service you're delivering to that person, you are changing human life enormously."
Each of these combinatorial applications requires not just technical capability but institutional infrastructure — regulatory frameworks for AI-assisted drug development, quality standards for AI-designed materials, pedagogical research on AI-integrated education, economic models that distribute the gains of AI-augmented productivity. The micro-inventions that will define the AI era are not purely technical. They are sociotechnical: combinations of technical capability and institutional innovation that neither could produce alone.
Mokyr's insistence on the institutional dimension of micro-invention is the corrective that purely technical forecasts most desperately need. The technology industry's predictions about AI's future are almost exclusively technical: what the models will be able to do, how fast they will improve, what benchmarks they will pass. These predictions may be accurate. They are also insufficient. The steam engine's technical trajectory — from Newcomen's atmospheric pump to Watt's condensing engine to Trevithick's high-pressure locomotive — was determined by physics and engineering. The economic and social trajectory — from mine pump to factory power to railroad to the transformation of global commerce — was determined by institutions. Patent law shaped who could build on Watt's improvement. Capital markets shaped who could finance Stephenson's railroad. Labor law shaped who bore the cost of factory production. Educational institutions shaped who could operate and maintain the new machines.
The micro-inventions that will follow the AI macro-invention will be shaped by the same institutional forces. The question is not what the models will be able to do. It is what the institutions surrounding the models will permit, encourage, finance, regulate, and distribute. The possibility space is vast. The cascade is just beginning. And the direction it takes will be determined not by the engineers who build the models but by the institutional architects — the legislators, the educators, the cultural entrepreneurs, the builders of dams — who determine the conditions under which the models are deployed.
Newcomen could not have imagined the railroad. The engineers building AI systems in 2026 cannot imagine the applications that will define the AI era. What Mokyr's framework insists on is that those applications will emerge not from the technology alone but from the interaction between the technology and the institutions that surround it. Build the right institutions, and the cascade will flow toward broadly shared human flourishing. Fail to build them, and the cascade will flow — as it always has, in the absence of institutional direction — toward those who happen to stand closest to the source.
For most of human history, knowing how to do something was inseparable from the body that knew it. A master weaver's understanding of thread tension lived in fingers that had pulled ten thousand warps. A glassblower's sense of the precise moment when molten silica could be shaped resided not in any formula but in the heat on his face, the color of the glow, the resistance of the blowpipe against his breath. A navigator's capacity to read the sea — the color shifts that signaled shallow water, the wave patterns that indicated a distant landmass, the particular quality of light before a storm — was accumulated across decades of voyaging and transmitted, imperfectly and slowly, through apprenticeship that required physical proximity for years at a stretch. Prescriptive knowledge was embodied knowledge. It lived in hands and eyes and the particular architecture of a nervous system trained by repetition. It could not be written down without catastrophic loss of fidelity. It could not be transmitted at the speed of print. It moved at the speed of demonstration, which is to say, at the speed of human physical presence.
Joel Mokyr identified this embodiment as the central bottleneck in the history of technological progress. Propositional knowledge — the understanding of why things work — could be codified, printed, distributed, debated, and refined at increasing speed from the invention of writing onward. But prescriptive knowledge — the understanding of how to make things work — resisted codification with a stubbornness that no previous information technology could overcome. The Encyclopédie of Diderot and d'Alembert attempted, with heroic ambition, to codify the prescriptive knowledge of dozens of trades in detailed illustrations and written instructions. The attempt was valuable but ultimately inadequate. A person who studied the Encyclopédie's plates on glassblowing did not emerge from the library able to blow glass. The plates conveyed the sequence of operations. They could not convey the feel — the tacit dimension that Michael Polanyi later identified as the component of skilled knowledge that resists articulation.
The cost of prescriptive knowledge acquisition remained high throughout the Industrial Revolution and well into the twentieth century. Apprenticeship systems, vocational schools, corporate training programs, university laboratories — each institution developed specific methods for transmitting prescriptive knowledge, and each method required significant investment of time, money, and physical co-presence. A medical student did not learn surgery from a textbook. She learned it in an operating theater, watching a senior surgeon's hands, then practicing under supervision, then gradually assuming independent responsibility over years of training that could not be compressed without unacceptable risk. The knowledge was real. The transmission channel was narrow, slow, and expensive.
The prescriptive knowledge revolution that Mokyr's framework illuminates — the revolution catalyzed by large language models in 2025 and 2026 — did not eliminate tacit knowledge. Surgeons still need hands trained by practice. Glassblowers still need the feel of the pipe. Therapists still need the capacity to read emotional signals that no language model can detect. What the revolution did was reclassify a vast quantity of knowledge that had been treated as tacit — not because it was inherently resistant to articulation, but because no previous technology could process the articulation at sufficient resolution. Programming knowledge, for decades, was treated as though it were tacit. It was transmitted through apprenticeship-like structures — mentorship, pair programming, code review, years of practice with escalating complexity. A senior engineer's "feel" for code architecture, the intuition about which design patterns would scale and which would collapse, was regarded as embodied knowledge acquired through experience that could not be shortcut.
The large language model revealed that a significant portion of this knowledge was not tacit at all. It was articulable — expressible in natural language — but the expression required a listener capable of interpreting ambiguous, incomplete, contextual language and converting it into precise technical implementation. Previous listeners — compilers, interpreters, documentation — could not perform this conversion. They required the human to pre-process the knowledge into their preferred formal syntax. The language model could receive the knowledge in its natural form — messy, contextual, half-specified, reliant on implication and inference — and produce working implementation.
The reclassification was enormous in scope. Across software development, legal drafting, financial modeling, data analysis, technical writing, and dozens of other knowledge domains, vast stores of prescriptive knowledge turned out to be articulable once the right listener existed. The prescriptive knowledge had not changed. The channel had. And the economic consequences of the reclassification — the redistribution of who could access which capabilities, at what cost, with what institutional support — are still unfolding.
The engineer in Trivandrum whom The Orange Pill describes — the backend specialist who built a complete frontend feature in two days without frontend training — is the case study that makes the reclassification concrete. Her propositional knowledge of what the interface should do was extensive. She understood user experience principles, interaction patterns, the relationship between visual design and functional behavior. This understanding had been acquired through years of working on systems that served users, even though her specific technical work had been confined to backend infrastructure. What she lacked was the prescriptive knowledge — the specific syntax of frontend frameworks, the rendering logic, the CSS conventions, the deployment procedures — that previous paradigms required for implementation.
Under the old regime, acquiring that prescriptive knowledge would have required months of study and practice. The knowledge was available — in documentation, tutorials, bootcamps, and the collective experience of millions of frontend developers — but accessing it at the resolution required for implementation demanded the specific, time-intensive, experience-dependent learning process that characterizes prescriptive knowledge acquisition. Claude Code compressed that process from months to minutes. Not by making her a frontend developer — she could not, after the experience, have explained the rendering pipeline or debugged a CSS layout conflict from first principles. But by converting her natural language description of what the interface should do into working implementation that she could evaluate, modify, and deploy.
Mokyr's framework identifies both the power and the peril of this compression. The power is distributional: when prescriptive knowledge becomes accessible through natural language, the population of people who can participate in technological creation expands dramatically. The developer in Lagos, the student in Dhaka, the career-changer in São Paulo — each gains access to capabilities that were previously gated behind years of specialized training. The floor rises. The expansion of who gets to build is, in Mokyr's terms, the expansion of who gets to participate in the feedback loop between propositional and prescriptive knowledge — the loop that drives sustained economic growth.
The peril is also distributional, but in the opposite direction. The practitioners who invested years in acquiring the prescriptive knowledge that AI has now made cheaply accessible face a structural devaluation of their human capital. This is not a new phenomenon — Mokyr documented it extensively in the context of the Industrial Revolution, where skilled artisans watched decades of craft expertise become economically redundant as machines performed the same operations at lower cost. But the speed of the current devaluation is historically unprecedented. The handloom weaver's displacement took decades. The knowledge worker's displacement — for those whose prescriptive knowledge turns out to be articulable rather than genuinely tacit — is measured in months.
The senior engineer described in The Orange Pill, who spent his first two days in the Trivandrum training oscillating between excitement and terror, embodies the devaluation precisely. His prescriptive knowledge — the deep understanding of systems architecture, debugging methodology, code organization, deployment procedures — had been the foundation of his professional identity and economic value for twenty-five years. The discovery that a significant portion of that knowledge could be transmitted through natural language to a machine, bypassing the years of embodied learning that he had undergone, was not merely an economic threat. It was an identity crisis of the kind Mokyr documented among displaced artisans two centuries earlier.
But the senior engineer's story did not end in displacement. By Friday of the training week, he had arrived at a recognition that Mokyr's framework predicts: the prescriptive knowledge that AI could replicate was the lower layer. The higher layer — the judgment about what to build, the architectural instinct about what would scale and what would break, the taste that separated an adequate system from an elegant one — remained his, and its value had increased precisely because the lower layer had been automated. The tool had not made him redundant. It had exposed what he was actually good at by stripping away the manual labor that had been masking it.
This is the ascending friction that The Orange Pill describes, and Mokyr's knowledge taxonomy makes its mechanism precise. The prescriptive knowledge that AI replicated was the knowledge of how to implement. The knowledge that remained — and appreciated in value — was a hybrid: partly propositional (understanding why certain architectural choices produce better outcomes), partly prescriptive (knowing how to evaluate and direct AI-generated work), and partly something that resists either category — the judgment, cultivated through decades of experience, about which problems are worth solving and which solutions will serve real human needs. This hybrid knowledge is harder to acquire, harder to transmit, and harder to automate than the implementation knowledge it rests upon. Its scarcity increased as the knowledge beneath it became abundant.
The prescriptive knowledge revolution has a geography, and the geography matters. Mokyr has argued throughout his career that the distribution of useful knowledge — who has it, who can access it, what institutions gate or enable access — is the primary determinant of which societies achieve sustained economic growth. The Industrial Enlightenment succeeded in Britain partly because British institutions were unusually effective at distributing prescriptive knowledge: the apprenticeship system, while imperfect, was more accessible than Continental alternatives; the scientific societies were open to practical men in ways that French academies were not; the patent system incentivized disclosure rather than secrecy.
The AI prescriptive knowledge revolution has a global geography that is both more egalitarian and more precarious than any previous knowledge distribution. More egalitarian because the natural language interface is, in principle, accessible to anyone with connectivity and a subscription — an expansion of access that dwarfs the Industrial Enlightenment's most ambitious efforts. More precarious because the infrastructure is controlled by a small number of companies, overwhelmingly based in the United States, whose decisions about pricing, access, language support, and capability deployment will shape the geography of knowledge access for billions of people who have no voice in those decisions.
Mokyr's warning about institutional lag applies here with particular force. The technology has made prescriptive knowledge globally accessible. The institutions that determine who benefits from that access — educational systems, intellectual property regimes, labor market structures, social safety nets — remain organized around the previous paradigm, in which prescriptive knowledge was scarce, geographically concentrated, and transmitted through institutions that took decades to build. The gap between the technology's reach and the institutions' capacity to channel it is the gap in which the human cost of the transition will accumulate — just as it accumulated in the mills of Manchester, in the tenements of industrial cities, in every previous transition where technological capability outran institutional response.
The prescriptive knowledge revolution is real. Its distributional consequences — who gains, who loses, who is positioned to capture the new opportunities and who is stranded by the devaluation of old skills — are still being determined. And the determination, as Mokyr's career has demonstrated with exhaustive historical evidence, will be made not by the technology but by the institutions that surround it.
In 1812, the British Parliament passed the Frame Breaking Act, making the destruction of stocking frames and lace machines a capital offense. At the height of the unrest that followed, more British soldiers were deployed against Luddite machine-breakers in the Midlands than were fighting Napoleon in the Iberian Peninsula. The government's institutional response to technological disruption was, in its first iteration, the deployment of lethal force against the people who bore the disruption's costs.
The response was not inevitable. It was a choice — a choice shaped by the specific institutional configuration of early nineteenth-century Britain, in which Parliament was dominated by property owners, labor had no formal political representation, and the prevailing legal framework treated property damage as a greater offense than human immiseration. A different institutional configuration — one in which workers had political voice, in which the costs of transition were understood as a collective responsibility rather than an individual misfortune — would have produced a different response. The Factory Acts, the Ten Hours Act, the extension of suffrage, the legalization of trade unions, the creation of public education — all of these institutional innovations eventually arrived, decades later, redirecting the gains of industrialization toward broader distribution. But they arrived after a generation of suffering that was not technologically determined. It was institutionally determined. The machines did not choose to immiserate the weavers. The institutions chose not to protect them.
This is Joel Mokyr's central argument, the thread that runs from The Lever of Riches through The Gifts of Athena to A Culture of Growth, and it is the argument that the AI transition has made more urgent than at any moment since the Industrial Revolution that first generated it. Technology does not determine outcomes. Institutions determine outcomes. The steam engine did not decide who would prosper and who would starve during the Industrial Revolution. The patent system, the labor laws, the educational institutions, the political franchise, the cultural norms about fair dealing and social obligation — these decided. The power loom did not choose to concentrate gains among factory owners for sixty years before distributional institutions caught up. The absence of those institutions — and the specific political choices that delayed their construction — produced the concentration.
The same logic applies, with full force, to the AI transition. The question is not whether AI will increase productivity. Every serious assessment confirms that it will, massively. The question is whether the gains will be captured broadly or narrowly, whether the transition costs will be borne equitably or dumped on the most vulnerable, and whether the institutions that emerged from the previous transition are adequate to the new one. Mokyr's historical analysis provides an unambiguous answer to the last question: they are not. They have never been adequate at the moment of transition. Institutional adequacy is built during the transition, through political struggle, cultural innovation, and the slow, unglamorous work of designing structures that redirect technological power toward broadly shared benefit.
The Orange Pill arrives at the same conclusion through a different route. "The dams are not adequate," the book states in its chapter on the historical pattern of technological transitions. "Not even close." The book's beaver metaphor — the small creature building structures in a river too powerful to stop but not too powerful to redirect — is, translated into Mokyr's analytical vocabulary, a description of institutional construction. Every dam the book describes is an institution: AI Practice frameworks that protect time for human judgment; attentional ecology that studies and manages the cognitive effects of AI-saturated environments; educational reform that teaches questioning over answering; organizational redesign that values direction over execution. Each is a structure built in the river's current. Each requires continuous maintenance against the river's pressure to erode it.
Mokyr identified five institutional domains that determine the distributional outcome of any major technological transition. Each domain is now being tested by the AI transition, and in each domain, the current institutional infrastructure is failing the test.
The first domain is intellectual property. The patent system, which Mokyr credits as one of the critical institutional innovations of the Industrial Enlightenment, was designed to solve a specific problem: incentivizing the disclosure of useful knowledge by granting temporary monopoly rights to inventors. The system worked — imperfectly, under constant contest, but effectively enough — for three centuries because the relationship between an invention and its inventor was relatively legible. A person designed a machine. The machine was patentable. The patent protected the inventor's investment in development.
AI destabilizes this relationship at every point. Who invented the output of an AI system — the person who wrote the prompt, the company that trained the model, the millions of creators whose work constituted the training data? The current intellectual property framework has no coherent answer, and the absence of an answer is not a theoretical inconvenience. It is a distributional crisis. If the output of AI systems is unpatentable and uncopyrightable, the incentive to invest in AI-augmented creation diminishes. If the output is fully owned by the deployer, the people whose creative work trained the model receive nothing. The institutional gap is real, consequential, and unresolved.
The second domain is labor. The labor institutions that eventually redirected the gains of industrialization — collective bargaining, minimum wage laws, workplace safety regulations, the eight-hour day — were built over decades of organizing, legislation, and political struggle. They were designed for a world in which the boundary between work and non-work was relatively clear, in which the employer-employee relationship was the primary structure of economic life, and in which the pace of skill obsolescence was slow enough for retraining to be feasible.
The AI transition violates every one of these assumptions. The Berkeley study documented in The Orange Pill found that AI-accelerated work colonized previously protected time — lunch breaks, commutes, the small gaps between tasks that had served, invisibly, as cognitive rest. The boundary between work and non-work dissolved not because employers demanded it but because the tool was always available and the internalized imperative to achieve converted availability into compulsion. Existing labor institutions have no mechanism for protecting workers from self-exploitation — from the voluntary intensification that occurs when a powerful tool meets a culture that equates productivity with worth.
The third domain is education. Mokyr has argued that the educational response to the Industrial Revolution was both the most consequential and the slowest institutional adaptation. Universal public education, the mechanics' institutes, the polytechnic schools, the reform of university curricula — each took decades to develop and decades more to deploy at scale. The educational institutions that eventually emerged were adequate to the industrial economy they served. They are not adequate to the knowledge economy that succeeded it, and they are emphatically not adequate to the AI economy that is now succeeding the knowledge economy.
The Orange Pill identifies education as "one of the most urgent institutions requiring reform" and warns that educational establishments are "staffed with calcified pedagogy." Mokyr's historical analysis gives this warning its full weight. The current educational system — organized around the transmission of prescriptive knowledge through lecture and assessment, structured around disciplinary silos that reflect the specializations of the industrial economy, credentialed through degrees that measure time served rather than capability acquired — was designed for a world in which prescriptive knowledge was scarce and expensive. In a world where AI makes prescriptive knowledge cheaply accessible, the educational institution's primary value proposition — providing access to knowledge that cannot be obtained elsewhere — has been undermined. The institution must be rebuilt around a different value proposition: developing the judgment, taste, curiosity, and integrative capacity that AI cannot provide.
The fourth domain is social insurance. The welfare states that emerged in the twentieth century — unemployment insurance, disability benefits, public health systems, retirement pensions — were designed for an economy of stable employment, in which most people worked for organizations that provided benefits, in which career transitions were infrequent and manageable, and in which the pace of economic change was slow enough for safety nets to catch most of those who fell. The AI transition is producing career disruptions that are faster, more frequent, and less predictable than the systems were designed to handle. A software engineer whose skills are devalued by AI in six months does not fit the unemployment insurance model, which assumes a temporary interruption in otherwise stable employment. She faces a structural transformation of her profession that may require a fundamental reconception of her career — a reconception that existing social insurance institutions are not equipped to support.
The fifth domain is cultural norms. This is the domain Mokyr explored most deeply in A Culture of Growth, and it is the domain least visible to policymakers and technologists. Institutional construction does not begin with legislation. It begins with the cultural frameworks — the shared beliefs, expectations, and norms — that make certain institutional forms thinkable. The eight-hour day was not thinkable until a generation of cultural entrepreneurs created the moral framework within which limiting the workday could be understood as justice rather than laziness. Universal education was not thinkable until cultural entrepreneurs created the framework within which educating every child could be understood as investment rather than charity.
The AI transition requires cultural frameworks that do not yet exist. The framework for distinguishing between productive flow and compulsive overwork. The framework for evaluating AI-generated output with appropriate skepticism. The framework for understanding that the quality of the question matters more than the speed of the answer. The framework for distributing the gains of AI-augmented productivity rather than accepting their concentration as natural or inevitable. The Orange Pill's insistence on "worthiness" — the argument that humans must develop themselves to be worthy of amplification — is itself a cultural framework in formation, an attempt to establish norms around the responsible use of a technology that amplifies whatever signal it receives.
Mokyr told the Marketplace interviewer in October 2025 that his concern about AI was not the technology but the institutions: "I don't think any of the pessimistic predictions about AI will come true, but I wish I could say the same about institutions and politics." The statement distills his career's argument into a single sentence. The technology will work. The technology will produce gains. The technology will expand the frontier of human capability. Whether the gains are shared, whether the expansion benefits the many or the few, whether the transition costs are borne equitably or dumped on the vulnerable — these are institutional questions. And institutional questions are answered by the quality of the institutions we build, maintain, and defend.
The institutions are not adequate. They have never been adequate at the moment of transition. The question is how fast they can be built, and that question is the most consequential one the AI transition has produced.
In 1830, the Liverpool and Manchester Railway opened for commercial service. The railroad was, by any measure, one of the most transformative micro-inventions of the Industrial Revolution — the steam engine applied to transportation, compressing the journey between England's industrial heartland and its primary port from several hours by coach, and a day or more for goods by canal, to under two hours. Within two decades, railroad construction had become one of the largest sectors of the British economy, employing hundreds of thousands of workers and consuming iron and coal on a scale few other industries approached.
The gains were enormous. They were also radically concentrated. The railway companies were financed through joint-stock corporations that sold shares to investors — overwhelmingly the same class of merchants, industrialists, and landowners who had captured the gains of mechanized textile production in the preceding decades. The workers who laid the track, operated the engines, and maintained the infrastructure earned wages that barely exceeded those of agricultural laborers. The communities through which the railways passed found their property values reshaped by decisions in which they had no voice — some enriched by proximity to a station, others impoverished by the destruction of coaching inns, canal traffic, and the economic ecosystems that had developed around slower forms of transport.
The distributional pattern was not accidental. It was structural. The people who deployed the technology first — who had the capital, the connections, the institutional position to build and operate the railways — captured the gains. The people whose skills the technology displaced — coachmen, canal operators, innkeepers, the entire economic ecology of overland transport — bore the costs. And the institutional mechanisms that would eventually redistribute the gains — railroad regulation, worker safety legislation, progressive taxation — did not exist at the moment of deployment. They were built later, under political pressure, over decades.
Joel Mokyr's career-long insistence on this pattern — that every major technological transition concentrates gains initially, and that redistribution requires active institutional construction rather than passive market adjustment — is the framework within which the AI transition's distributional dynamics must be understood.
The AI transition's early distributional landscape is already visible, and it follows the historical pattern with uncomfortable fidelity. The Software Death Cross described in The Orange Pill — the collapse of SaaS valuations as AI commoditizes code — is a distributional event. Value is migrating from one set of economic actors to another. The companies whose value resided primarily in code — thin applications solving singular problems, differentiated by implementation rather than ecosystem — are losing value as the cost of code approaches zero. The companies whose value resides in ecosystems — accumulated data, institutional relationships, compliance certifications, workflow patterns embedded in the muscle memory of entire industries — are retaining or increasing value, because ecosystems cannot be replicated in an afternoon regardless of how powerful the coding assistant.
The migration of value from code to ecosystem is a migration from labor to capital, from the people who write software to the people who own the platforms on which software runs. A programmer's value was her prescriptive knowledge — her ability to implement. When AI commoditizes implementation, her value diminishes. A platform owner's value was his ecosystem — the network effects, the data, the institutional relationships that constitute a moat around his business. When AI commoditizes code but not ecosystems, his moat deepens. The programmer is displaced. The platform owner is enriched. The pattern is precisely the one Mokyr documented in the railroad era: the deployers capture the gains, the displaced bear the costs, and the redistribution mechanisms lag behind.
The same pattern is visible at the organizational level. The Orange Pill describes a twenty-fold productivity multiplier achieved by engineers in Trivandrum using Claude Code. The gains are real and measurable. The distributional question — who captures the twenty-fold gain — is the question Mokyr's framework insists on asking. If the organization converts the productivity gain into headcount reduction, the gains flow to shareholders in the form of higher margins. If the organization converts the gain into expanded capability — more ambitious products, new markets, higher-quality output — the gains flow partly to workers (in the form of more interesting, more valuable work) and partly to customers (in the form of better products). If the organization converts the gain into price reduction, the gains flow to consumers. The technology does not determine the distribution. The organizational decision does. And the organizational decision is shaped by market incentives, cultural norms, and institutional constraints that vary across firms, industries, and nations.
The Orange Pill is honest about this tension. The author describes the conversation in the boardroom where the twenty-fold gain sits on the table beside the obvious arithmetic: if five people can do the work of a hundred, why not just keep five? The author chose to keep and grow the team. But the author is candid that the choice was costly, that the arithmetic was seductive, that the market rewards efficiency more reliably than it rewards vision. The choice to expand rather than contract was an institutional choice — a decision about organizational norms that shaped the distribution of gains. A different norm — the shareholder-value maximization that has dominated corporate governance for four decades — would have produced a different distribution.
At the national level, the distributional question is even more consequential. Mokyr's research on the Industrial Revolution documented that the nations that built effective distributional institutions — labor law, public education, progressive taxation, social insurance — captured the long-term benefits of industrialization more broadly than those that did not. Britain eventually built these institutions, though the process took decades and produced intense political conflict. The nations that industrialized later — Germany, Japan, the Scandinavian countries — were able to build distributional institutions simultaneously with industrial development, learning from Britain's mistakes. The result was a shorter Engels' pause (the lag between productivity gains and wage growth) and a faster transition to broadly shared prosperity.
The AI transition presents a similar opportunity for institutional learning. The nations that are deploying AI now — primarily the United States, China, and a handful of European and Asian economies — are repeating the early pattern: rapid deployment with inadequate distributional institutions. The nations that will deploy AI later — much of Africa, South Asia, Latin America, and Southeast Asia — have the opportunity to build distributional institutions in parallel with deployment, if they study the early deployers' mistakes and construct appropriate safeguards.
The opportunity is real but fragile. The speed of the AI transition compresses the timeline for institutional learning. The Industrial Revolution's distributional lessons were learned over generations. The AI transition is asking institutions to learn in years. And the institutions that need to learn fastest — educational systems, labor markets, social insurance programs — are precisely the institutions that are most resistant to rapid change, because they are embedded in political structures, cultural norms, and vested interests that make reform slow, contested, and uncertain.
Mokyr's argument that the two pessimistic predictions about AI "cannot both be right" — that AI cannot simultaneously be so powerful it destroys all jobs and so weak it does not transform the economy — applies to the distributional question as well. If AI is powerful enough to commoditize prescriptive knowledge across dozens of domains, as the evidence increasingly suggests, then it is powerful enough to generate gains sufficient to support broadly shared prosperity. The gains exist. The question is purely institutional: will the gains be distributed, and if so, through what mechanisms?
The historical record is clear on one point: the mechanisms do not emerge spontaneously. The market, left to its own devices, concentrates gains among early deployers. Redistribution requires active construction — legislation, regulation, cultural norm-setting, educational reform. The eight-hour day was not a market outcome. It was a political achievement, won through decades of organizing by people who understood that the market's default distribution was unjust and constructed institutions to alter it.
The AI transition's distributional question will be answered in the same way: through the quality of the institutions that societies choose to build. The technology provides the gains. The institutions determine the distribution. And the distribution determines whether the transition produces a civilization worthy of the tools it possesses — or one more chapter in the long, repetitive history of technological power flowing to those who already have it, while the people who need it most are left to watch the river from the bank.
The gains are coming. The question is not whether they will arrive. It is who will be standing at the channel's mouth when they do, and what structures will exist to ensure the water reaches the fields as well as the mill.
In 1978, a typical American accounting firm employed rows of clerks whose primary function was computation. They sat at desks, pencils in hand, adding columns of figures, cross-referencing ledgers, performing the arithmetic that transformed raw financial data into the reports, statements, and analyses that the firm's clients required. The work was tedious, exacting, and well-compensated relative to other forms of clerical labor, because the capacity for accurate, sustained computation was scarce. A person who could add reliably for eight hours without errors possessed a skill the market valued. The skill premium — the additional compensation earned by the arithmetically capable over the arithmetically average — was real and substantial.
VisiCalc appeared in 1979. Within three years, the electronic spreadsheet had made manual computation not merely less efficient but structurally unnecessary. A single person with a personal computer could perform in minutes the calculations that had occupied a floor of clerks for days. The skill that had commanded a premium — accurate, sustained manual computation — was worthless overnight. Not gradually devalued. Not slowly eroded. Worthless. The machine did it faster, cheaper, and with fewer errors than any human could achieve.
The accountants predicted catastrophe. The professional associations warned of mass unemployment. The clerks whose livelihoods depended on the scarcity of computational skill saw the spreadsheet the way the Nottinghamshire weavers saw the power loom: as a machine that made their hard-won expertise irrelevant.
Joel Mokyr's framework predicts what happened next, and what happened next is the most important data point in the history of skill premium inversions. The number of people employed in accounting did not decline. It increased. Substantially. By the mid-1990s, the American accounting profession employed more people than it had before the spreadsheet, and the people it employed earned more, on average, than their pre-spreadsheet predecessors. The spreadsheet had not eliminated the need for accountants. It had eliminated the need for computation and, in doing so, had exposed a vast landscape of analytical work — strategic tax planning, forensic analysis, financial modeling, risk assessment, advisory services — that had always existed in principle but had been inaccessible in practice because the profession's bandwidth was consumed by arithmetic.
The skill premium inverted. Before the spreadsheet, the premium was on computational accuracy — a prescriptive skill, in Mokyr's taxonomy, that required training and practice. After the spreadsheet, the premium was on analytical judgment — a hybrid of propositional understanding and contextual evaluation that required broader education, deeper experience, and a kind of integrative thinking that the spreadsheet could not replicate. The premium migrated upward. The work became harder at a higher level. And the people who thrived were not the most computationally skilled but the most analytically capable.
This inversion is the structural pattern that Mokyr's framework identifies in every major technological transition, and it is the pattern now repeating, at vastly greater speed and across vastly more domains, in the AI transition. The printing press inverted the premium on memorization. Before print, a scholar's value was partly measured by what he could hold in memory — the texts he had internalized, the passages he could recite, the cross-references he could produce from his own mental library. After print, memorization became less valuable than interpretation, because any scholar could access any text, and the scarce capability was the capacity to read critically, synthesize across sources, and produce original analysis.
The compiler inverted the premium on low-level programming. Before compilers, a programmer's value was measured by her mastery of machine language — the ability to think in binary, to manage memory addresses manually, to produce instructions that the hardware could execute directly. After compilers, that mastery became less valuable than the capacity to design at higher levels of abstraction — to think about algorithms, data structures, system architecture, and user needs rather than register allocations and memory maps. The premium migrated from hardware fluency to design judgment.
Each inversion followed the same three-stage process. First, the technology commoditized a specific form of prescriptive knowledge that had previously been scarce and therefore valuable. Second, the commoditization exposed a higher-order capability — judgment, synthesis, evaluation, direction — that had been masked by the labor-intensive lower-order work. Third, the premium migrated to the higher-order capability, and the people who possessed it found their value increasing even as the people who possessed only the lower-order skill found their value declining.
The AI transition is executing this three-stage process across dozens of domains simultaneously, at a speed that has no historical precedent. In software development, the premium on implementation skill — the ability to write syntactically correct, functionally adequate code in specific programming languages — is declining as AI coding assistants demonstrate competence across the full range of implementation tasks. The premium on architectural judgment — the ability to evaluate what should be built, how systems should be structured, which trade-offs are acceptable, and what the user actually needs — is increasing, because the abundance of implementation capability has made the direction of that capability the binding constraint.
The three shifts that The Orange Pill identifies in its chapter on leadership — the dissolving specialist silo, the rise of integrative thinking, the question becoming the product — are the organizational surface manifestations of this deeper premium inversion. The specialist silo dissolves because the specialist's prescriptive knowledge is no longer scarce. Integrative thinking rises because the capacity to connect across domains — to see how a technical decision affects user experience, business model, and competitive position simultaneously — is the capability the market now lacks. The question becomes the product because when execution is abundant, the binding constraint is knowing what execution should produce.
Mokyr's historical analysis provides both comfort and warning for the people living through this inversion. The comfort is that every previous inversion eventually produced more employment at higher compensation in the affected domain. The spreadsheet produced more accountants, not fewer. The compiler produced more programmers, not fewer. The printing press produced more scholars, not fewer. In every case, the commoditization of a lower-order skill exposed a vast landscape of higher-order work that had gone unperformed because the lower-order work consumed the available bandwidth. The demand for human capability did not decline. It migrated upward. And the people who followed the migration found themselves doing more interesting, more valuable, more uniquely human work than their predecessors had performed.
The warning is that the migration was neither automatic nor painless. The accountants who thrived after the spreadsheet were not the same accountants who had been performing manual computation. They were a different cohort — younger, differently educated, selected for analytical capability rather than computational endurance. The clerks who had been adding columns were, in many cases, not retrained. They were displaced, and the displacement was permanent for those who could not or would not acquire the new skills the market demanded. The transition produced winners and losers, and the losers were disproportionately the people who had invested most heavily in the skill that was commoditized — the people whose identity and economic security were most deeply bound to the capability the machine had replicated.
The AI inversion is producing the same distributional dynamic at greater scale and greater speed. The developers who invested decades in mastering specific programming languages and frameworks — the people The Orange Pill describes as watching "the lower floors fill with AI" — face a structural devaluation of their most practiced skills. The migration path is clear: upward, toward judgment, direction, integration, the capacity to ask the right questions rather than implement the given answers. But the migration requires capabilities that the previous career structure did not develop. A person trained for twenty years to execute with precision is not automatically equipped to direct with vision. The skills are different in kind, not merely in degree.
The institutional question — the question Mokyr insists on asking — is who bears the cost of the transition. The historical record is unambiguous: without institutional intervention, the cost falls on the displaced. The market does not spontaneously retrain workers whose skills have been commoditized. It does not provide income support during the transition period. It does not restructure educational institutions to develop the higher-order capabilities the new economy demands. These things happen — eventually, imperfectly, through political struggle and institutional construction. But the "eventually" is the Engels' pause, and the pause is measured in decades during which real people bear real costs.
Mokyr emphasized at his Nobel press conference that his primary labor market concern was not unemployment but labor scarcity — the demographic reality that aging populations are producing fewer workers, not more. The inversion of the skill premium, in this context, is not purely a displacement story. It is also a reallocation story: AI handling the prescriptive work that aging societies cannot staff, freeing human workers for the higher-order work that aging societies desperately need — elder care, education, creative problem-solving, the exercise of judgment in complex and ambiguous situations. The optimistic scenario is not that AI replaces workers but that it redirects them upward, toward work that is more valuable, more human, and more necessary.
The optimistic scenario is possible. Mokyr's historical analysis confirms that it has precedents. But the historical analysis also confirms that the optimistic scenario requires institutional construction — educational reform, retraining systems, social insurance, labor market structures that support career transitions rather than punishing them. Without these institutions, the inversion produces displacement on the downside and concentration on the upside: the people who already possessed the higher-order capabilities capturing the gains, while the people who needed institutional support to develop those capabilities are left behind.
The skill premium is inverting. The migration path is visible. The question is whether the institutions that enable the migration — that help people move from the commoditized lower floors to the premium-bearing upper floors — will be built in time. The accounting profession eventually built them: new curricula, new certification requirements, new career structures that rewarded analytical capability rather than computational stamina. The question is whether the AI transition's institutional response can be faster than the accounting profession's, because the technology is moving faster, the displacement is broader, and the people in the gap do not have a generation to wait.
In 1764, a Lancashire weaver named James Hargreaves reportedly watched a spinning wheel topple onto its side and continue to turn, the spindle now vertical rather than horizontal. The observation — if the anecdote is true, and historians debate it — led him to conceive the spinning jenny, a device that could operate multiple spindles simultaneously. The jenny was not the product of scientific knowledge. Hargreaves could not have written a treatise on the physics of angular momentum or the material properties of cotton fiber. His creativity was practical, observational, rooted in decades of embodied experience with the materials and processes of textile production. He saw something that thousands of other weavers had seen — a fallen wheel — and perceived in it a possibility that no one else had perceived.
Joel Mokyr has argued throughout his career that this kind of creativity — the capacity to perceive novel possibilities in familiar circumstances — is the ultimate engine of economic growth, and that it cannot be reduced to any single input. It is not a product of education alone, though education helps. It is not a product of incentives alone, though incentives matter. It is not a product of knowledge alone, though knowledge is necessary. Creativity is an emergent property of specific conditions: sufficient knowledge of a domain to recognize what is possible, sufficient security to take risks, sufficient exposure to diverse influences to make unexpected connections, and sufficient freedom from routine to allow the mind to wander into territory the routine does not visit.
The Industrial Enlightenment, in Mokyr's account, succeeded not because it produced more knowledge or more inventions but because it created conditions for creativity on a scale that no previous civilization had achieved. Patent protection reduced the risk of creative investment by guaranteeing temporary returns. Open publication norms expanded the knowledge base available to potential creators. Scientific societies and coffeehouses created physical spaces where people from different domains could collide and discover that each possessed pieces of a puzzle neither could solve alone. Technical education expanded the population of people with sufficient domain knowledge to perceive novel possibilities within their fields.
Each institutional innovation addressed a specific barrier to creativity. Patents addressed the risk barrier. Open science addressed the knowledge barrier. Social institutions addressed the collision barrier. Education addressed the competence barrier. Together, they produced not just more inventions but a culture of invention — a sustained, self-reinforcing cycle in which creativity was valued, supported, rewarded, and expected.
The AI transition has commoditized execution. When a person can describe what she wants and receive a working implementation in minutes, the act of implementation — which consumed the majority of creative energy in software development, product design, legal drafting, financial modeling, and dozens of other knowledge domains — is no longer the binding constraint. The binding constraint has migrated to the act of conception: knowing what is worth building, what problem deserves solving, what question needs asking.
This migration makes creativity the scarce resource in the AI economy, just as computational accuracy was the scarce resource in the pre-spreadsheet accounting economy and implementation skill was the scarce resource in the pre-AI software economy. The premium attaches to scarcity. And when execution is abundant, the scarce capability is the one that determines what the execution produces — which is to say, the creative judgment that directs it.
But the economic logic that identifies creativity as the new premium also reveals the institutional challenge that the premium creates. Creativity is not a commodity. It cannot be produced on demand, stockpiled, or scaled linearly with investment. It emerges from conditions, and the conditions are specific, fragile, and poorly understood. Mokyr's historical research identifies at least four conditions that have consistently supported creative flourishing, and each is now under pressure from the same forces that have made creativity more valuable.
The first condition is domain knowledge deep enough to perceive novel possibilities. Hargreaves could see the spinning jenny in a fallen wheel because he had spent decades working with spindles, fibers, and the physics of spinning. His creativity was grounded in expertise. The tension with the AI transition is immediate: if AI reduces the need for deep domain expertise by making prescriptive knowledge cheaply accessible, does it also reduce the reservoir of embodied understanding from which creative perception draws? The engineer who uses Claude Code to build across domains she has not mastered may gain breadth at the cost of the depth from which genuinely novel ideas emerge. The Orange Pill's discussion of ascending friction addresses this tension — the friction has not disappeared but relocated — but the question of whether relocated friction produces the same depth of creative foundation is empirically open.
The second condition is economic security sufficient to support risk-taking. Creativity requires experimentation, and experimentation requires the tolerance of failure, and the tolerance of failure requires that failure not be catastrophic. A person who is one bad quarter from bankruptcy does not experiment. She executes the safest possible strategy, which is to say, the least creative one. The AI transition is producing economic insecurity among precisely the population most likely to generate creative applications of the new capability — the experienced practitioners whose domain knowledge gives them the foundation for creative perception but whose economic position is threatened by the commoditization of their implementation skills. If these practitioners are consumed by the anxiety of displacement, their creative capacity is suppressed at the moment it is most needed.
The third condition is exposure to diverse influences that enable unexpected connections. Mokyr's account of the Industrial Enlightenment emphasizes the role of cross-pollination — the collision of ideas from different domains, different traditions, different perspectives — in producing the novel combinations that drive creative progress. The coffeehouses of Enlightenment London, the scientific societies of provincial England, the international correspondence networks that Mokyr calls the "Republic of Letters" — each created conditions for cross-pollination by bringing together people who would not otherwise have encountered each other's ideas.
AI both enhances and threatens this condition. It enhances it by making knowledge from diverse domains accessible through natural language — the engineer can now explore design principles, the designer can explore engineering constraints, and the cross-pollination that previously required physical co-location or extensive reading can occur within a single conversation with a machine that has been trained on the output of every domain. But it threatens it through the same recommendation dynamics that The Orange Pill identifies as a feature of the attention economy: the tendency of algorithmic systems to serve each user more of what they already know and prefer, narrowing the range of exposure rather than expanding it. The large language model, deployed as a research assistant, can expose its user to ideas from unfamiliar domains. The same model, deployed as a productivity tool optimized for speed, can reinforce existing mental models by producing outputs that confirm rather than challenge the user's assumptions.
The fourth condition is time and space for the mind to wander. This is the condition most directly threatened by the AI transition, and it is the one that connects Mokyr's institutional analysis most directly to the critique of smoothness that The Orange Pill develops through Byung-Chul Han's philosophy. Creativity requires what neuroscientists call the default mode network — the brain's background system, active when the mind is not engaged in directed task performance, when it wanders, daydreams, makes connections between apparently unrelated ideas. The Berkeley study documented that AI tools colonized precisely these moments — the pauses, the gaps, the interstices of the workday that had previously served as unstructured time for the mind to process, integrate, and recombine. The AI tool was always available, the gap between impulse and execution had shrunk to nothing, and the result was a workday saturated with directed activity in which the wandering mind had no space to operate.
Mokyr's institutional analysis suggests that each of these conditions must be addressed not at the individual level — telling people to take breaks, read widely, cultivate depth — but at the institutional level, through structures that create and protect the conditions for creative flourishing at scale.
Educational institutions bear the heaviest responsibility. The current educational model — organized around the transmission of prescriptive knowledge through lecture and assessment, optimized for producing competent executors in specific professional domains — was designed for an economy in which execution was scarce. The AI economy requires an educational model designed for an economy in which execution is abundant and creativity is scarce. The specific pedagogical implications are substantial. Teaching questioning over answering — the practice The Orange Pill describes, in which students are graded on the quality of their questions rather than the correctness of their answers — is one example. Interdisciplinary education that exposes students to multiple domains rather than drilling them deeply in one is another. Protected time for unstructured exploration — the academic equivalent of Google's famous "twenty percent time," now largely abandoned in the corporate world — is a third.
Organizational structures bear responsibility as well. The vector pods that The Orange Pill describes — small groups whose purpose is to determine what should be built rather than to build it — are experiments in creating organizational conditions for creativity. They protect a space in which the question "What should exist?" can be explored without the pressure of immediate execution. Whether such experiments can be replicated across organizations of different sizes, in different industries, facing different competitive pressures, is the question that will determine whether the creativity premium benefits many organizations or only the elite few that can afford to experiment with organizational design.
The parallel to Mokyr's account of the Industrial Enlightenment is precise. The Enlightenment did not produce creativity by commanding it. It produced creativity by building institutions that created the conditions in which creativity could flourish — patent protection for the risk-takers, open science for the knowledge-seekers, coffeehouses for the cross-pollinators, mechanics' institutes for the practically minded. The AI transition requires an equivalent institutional effort: not the production of creativity by fiat but the construction of conditions — educational, organizational, economic, cultural — in which creativity can flourish at the scale the new economy demands.
The premium on creativity is rising. The conditions for creativity are under pressure. The institutional response that reconciles the premium with the conditions — that builds structures supporting creative flourishing in a world of abundant execution — is the work that will determine whether the AI economy produces a renaissance or a monoculture. Mokyr's historical analysis confirms that the outcome is not predetermined. It is determined by what gets built around the technology, not by the technology itself.
Hargreaves saw the spinning jenny in a fallen wheel because decades of weaving had taught his eyes what to look for. The question for the AI era is whether the institutions we build will produce people with eyes that practiced — people who have spent enough time in their domains, been exposed to enough diverse influences, been given enough security to take risks and enough space to let their minds wander, that they can see in the abundant output of the machine the possibilities that the machine itself cannot perceive.
The creativity premium is the market's signal that this capacity matters more than it has ever mattered. The institutional response will determine whether the signal is heeded.
The concept that anchors Joel Mokyr's career is not a theory of technology. It is a theory of knowledge — specifically, a theory of how the continuous expansion of useful knowledge and the continuous improvement of the channels through which that knowledge flows to practical application are the engines of sustained economic growth. The theory is developed most fully in *The Gifts of Athena*, where Mokyr traces the relationship between what he calls the epistemic base — the total stock of propositional and prescriptive knowledge available to a society — and the rate of technological progress that society can sustain. His conclusion, supported by evidence spanning three centuries, is that the width of the epistemic base determines the ceiling of technological creativity, and the efficiency of the channels that connect the base to practical application determines how closely a society approaches that ceiling.
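It may help to hold the claim in notation. In *The Gifts of Athena* Mokyr labels propositional knowledge Ω and prescriptive knowledge λ; the rest of the shorthand below is mine, a sketch of the claim rather than a formula Mokyr himself writes down:

$$T^{\max} = f(\Omega), \qquad T = \eta\,T^{\max}, \qquad 0 \le \eta \le 1$$

Here $f$ is increasing in the width of the epistemic base, $T^{\max}$ is the ceiling of technological creativity a society can sustain, and $\eta$ is the efficiency of the channels connecting the base to practical application. Widening Ω raises the ceiling; improving η governs how closely the society approaches it.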
The AI transition is, measured by these criteria, the most significant event in the history of the epistemic base since the invention of the scientific method. Not because it has expanded the stock of propositional knowledge — though it will, as AI-assisted research discovers patterns in data sets too vast for human analysis. But because it has widened the channels between the existing stock of knowledge and the population of people who can apply it with a speed and comprehensiveness that no previous institutional innovation approached.
Consider the magnitude of the channel expansion in Mokyr's terms. Before the AI threshold of December 2025, the channels through which useful knowledge reached practical application were numerous but narrow. Formal education required years of investment and was geographically concentrated in wealthy nations. Apprenticeship required physical proximity and personal relationships. Documentation required literacy in the specific technical language of the domain. Professional networks required social capital accumulated over decades. Each channel had been widened by previous institutional innovations — the university, the textbook, the internet, the search engine — but each still imposed significant costs in time, money, and social position. A person with a brilliant idea and no access to the channels could not convert her idea into capability, regardless of her intelligence or motivation.
The large language model widened every channel simultaneously. A person who could describe a problem in natural language — any natural language, though English remained dominant in 2026 — could now access prescriptive knowledge across dozens of domains at the cost of a monthly subscription. The cost of formal education, the requirement of physical proximity, the need for domain-specific technical literacy, the advantage of social networks — each barrier was reduced, not eliminated, but reduced to such a degree that the effective population of potential knowledge appliers expanded by orders of magnitude.
Mokyr's framework predicts that this expansion will produce a corresponding acceleration in the feedback loop between propositional and prescriptive knowledge. More people applying knowledge means more experiments being run. More experiments means more data being generated. More data means more patterns being discovered. More patterns means more propositional knowledge being added to the epistemic base. More propositional knowledge means more prescriptive possibilities emerging. The loop is self-reinforcing, and its speed is governed by the volume of flow through the channels. When the channels widen, the loop accelerates.
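The shape of that claim, a growth rate governed by the volume of flow through the channels, can be made concrete with a toy simulation. What follows is a sketch of my own with invented parameters, not a model drawn from Mokyr's work; all it demonstrates is that when the feedback from application back into the base is proportional to channel width, widening the channel multiplies the compounding rather than merely adding to it.

```python
# A toy model of the feedback loop between propositional and prescriptive
# knowledge. My construction, not Mokyr's; every parameter is assumed.

def simulate(channel_width: float, years: int = 30, k: float = 1.0) -> float:
    """Return the growth multiple of useful knowledge K after `years`.

    Each year, the people who can reach the base apply it; their
    experiments feed results back into the base, so K compounds at a
    rate set by how wide the channel is.
    """
    feedback_gain = 0.10  # how strongly applications enrich the base (assumed)
    start = k
    for _ in range(years):
        applications = channel_width * k   # experiments actually run this year
        k += feedback_gain * applications  # findings flow back into the base
    return k / start

for width in (0.05, 0.30, 0.90):  # narrow, moderate, wide channels
    print(f"channel width {width:.2f}: K multiplies {simulate(width):.1f}x in 30 years")
```

With these invented numbers, a nearly closed channel yields roughly a 1.2x gain over thirty years and a wide one yields more than 13x. The figures mean nothing in themselves; the compounding is the point, and it is the mechanism behind the observation that the loop's speed is governed by the volume of flow.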
The Nobel Committee's observation that "AI could reinforce the feedback between propositional and prescriptive knowledge, and increase the rate at which useful knowledge is accumulated" is, in this context, a prediction of extraordinary consequence. If the feedback loop accelerates, the rate of useful knowledge accumulation increases. If the rate increases, the ceiling of technological creativity rises. If the ceiling rises, the range of problems that human civilization can address — from disease to climate change to the logistical challenges of an aging global population — expands correspondingly.
Mokyr himself framed the stakes in precisely these terms during his Aventine interview: "I see the human race facing a number of extremely dangerous existential problems, above all climate change, how governments are spending more than they're taking in and building a debt crisis that will crush society, and how that's compounded by demographic change. I'm really worried about these things. My great hope is that, precisely because artificial intelligence is a general purpose technology, we will be able to deploy it in order to prevent the worst of these things from happening."
The optimism is characteristic, grounded in historical analysis rather than wishful thinking. Mokyr has spent decades documenting the pattern: useful knowledge expands, channels improve, the epistemic base widens, technological creativity accelerates, and the problems that seemed insoluble from the previous generation's vantage point yield to the expanded capability of the next. The pattern is real. It has held across three centuries of industrial and post-industrial civilization. There is no reason, from the standpoint of economic history, to assume it will break now.
But there is also no reason to assume it will hold automatically. The pattern held because institutions were built — slowly, imperfectly, through political struggle and cultural innovation — to ensure that the expanding epistemic base produced broadly distributed benefits rather than concentrated extraction. The scientific societies ensured that new knowledge was published rather than hoarded. The patent system ensured that inventors were rewarded for disclosure. The educational system ensured that the population capable of applying knowledge expanded with the knowledge base. The labor institutions ensured that the workers displaced by technological change were not permanently excluded from the expanding economy. Each institution was a dam in the river — a structure that redirected the flow of expanding knowledge toward broad benefit.
The AI transition has produced the most dramatic channel expansion in human history. It has also produced the most dramatic gap between channel capacity and institutional readiness in human history. The channels are wide open. The knowledge is flowing. And the institutions that determine who benefits from the flow are — as this book has argued from its first chapter, drawing on Mokyr's own warnings — inadequate to the volume.
The institutional gaps are specific and identifiable. Educational institutions organized around the transmission of prescriptive knowledge in disciplinary silos are inadequate to an economy that rewards integrative judgment across domains. Labor institutions designed for stable employment relationships are inadequate to a labor market in which skill obsolescence is measured in months. Intellectual property regimes designed for individual inventors are inadequate to a creative economy in which AI-generated outputs draw on the collective work of millions. Social insurance systems designed for temporary unemployment are inadequate to structural career transitions that require fundamental reconceptions of professional identity. Cultural norms that equate productivity with worth are inadequate to a world in which productivity has been automated and worth must be found elsewhere.
Each gap is a section of riverbank eroding in real time. The water flows through regardless. The question is whether the erosion is managed — dams built, channels redirected, the flow guided toward fertile ground — or whether the erosion proceeds uncontrolled, carving channels that serve the geography of least resistance rather than the geography of greatest need.
Mokyr's warning at his Nobel press conference — "if technological change is very, very quick, then institutions will fall behind, and once that disequilibrium occurs, societies could be in trouble" — is the statement that encapsulates the challenge. The technology is moving quickly. The institutions are not. The disequilibrium is already visible in the distributional dynamics of the early AI economy, in the anxiety of displaced workers, in the inadequacy of educational systems, in the cultural confusion about whether the transformation should be celebrated or feared.
The historical pattern provides both reassurance and urgency. Reassurance because every previous disequilibrium was eventually resolved — institutions were built, the epistemic base was channeled, the gains were distributed, the civilization that emerged was more capable and more prosperous than the one that preceded it. Urgency because "eventually" is not a timeline. It is a measure of institutional effort. The Industrial Revolution's disequilibrium lasted sixty years. The people who lived through those sixty years — the weavers, the factory children, the displaced artisans — did not experience reassurance. They experienced the gap between technological capability and institutional response as the defining fact of their lives.
The AI transition is producing its own gap. The people living in the gap right now — the knowledge workers whose skills are being commoditized, the students navigating an educational system that has not adapted, the parents trying to prepare children for a world they do not understand — are not comforted by the historical pattern. They need institutions that work now, not eventually. They need dams that are built at the speed of the river, not at the speed of political consensus.
Mokyr's framework does not provide the institutions. It provides the understanding of why they matter and what happens when they fail. The understanding is necessary but not sufficient. The construction is the work that remains.
The epistemic base is wider than it has ever been. The channels are more efficient than any that preceded them. The feedback loop between propositional and prescriptive knowledge is accelerating. The ceiling of technological creativity is rising. And the institutions that will determine whether the expanding capability produces a civilization worthy of its tools, or one more chapter in the repetitive history of gains captured by the few while the many bear the costs, are being built right now, by the people who choose to build them, in the gap between what the technology makes possible and what the institutions make real.
The growth of useful knowledge in the age of the amplifier is not in question. The growth is happening. The direction of the growth — toward broad human flourishing or narrow extraction — is the question. And the answer, as Mokyr's entire career has demonstrated, will be found not in the technology but in what is built around it.
The river is wider than it has ever been. The dams are the work.
The framework that unsettled me most in this journey was not any single idea of Mokyr's but the gap between two of them.
On one side: the feedback loop. Propositional knowledge generating prescriptive knowledge generating new propositional knowledge, the engine of sustained growth, now supercharged by the widest channel in human history. I have seen that loop accelerate in my own work. I have watched engineers in Trivandrum cross domain boundaries that had stood for decades, watched a product materialize in thirty days that should have taken six months, watched the distance between imagination and artifact collapse in real time. The acceleration is real. It is exhilarating. It makes you want to build.
On the other side: Engels' pause. Sixty years. The period during which aggregate productivity rose and working-class living standards declined. Not because the technology failed — the technology worked spectacularly — but because the institutions that would redirect its benefits toward the many had not yet been built. The people in the gap did not experience a feedback loop. They experienced a mill town.
Sixty years is not an abstraction. It is a human life. It is the span between a child born into displacement and that child's grandchild finally reaching stability. The technology was not the villain of the story. The absence of institutions was.
What Mokyr showed me — what I could not see clearly until I worked through his framework — is that the exhilaration and the danger are products of the same mechanism. The wider the channel, the faster the feedback loop, the greater the gains — and the greater the gap when institutions fail to keep pace. The acceleration I celebrate at three in the morning with Claude is the same force that, without institutional channeling, will concentrate its benefits among those of us already positioned to capture them and leave everyone else watching from the bank.
I cannot build the institutions. That requires legislators, educators, organizers, the slow and unglamorous work of people who will never feel the thrill of a thirty-day product cycle. But I can build the culture that makes the institutions thinkable. Mokyr calls the people who do this "cultural entrepreneurs," and the term reframed what I think this book is trying to do. Not to predict the future. Not to celebrate the technology. To plant in the cultural soil the conviction that the gains must be shared, that the dams must be built, that the river's power is real and the river's indifference is also real, and that the difference between irrigation and flood is never determined by the water.
It is determined by what we build around it.
-- Edo Segal
The AI revolution is not a revolution of intelligence. It is a revolution of access -- the most dramatic widening of the channel between human knowledge and human capability since the printing press. Joel Mokyr spent four decades proving that every previous channel expansion, from scientific societies to patent systems to public universities, followed the same pattern: extraordinary gains, radical concentration, and a painful institutional lag that determined whether the benefits reached the many or stayed with the few.
This book applies Mokyr's framework to the AI transition with the urgency the moment demands. His distinction between propositional and prescriptive knowledge reveals exactly what large language models commoditized, what they did not, and where the human premium has migrated. His concept of the Industrial Enlightenment illuminates why the institutions surrounding AI matter more than the technology itself.
The gains are coming. The question is who will be standing at the channel's mouth -- and what dams will exist to ensure the water reaches the fields.
A reading-companion catalog of the 15 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Joel Mokyr — On AI* uses as stepping stones for thinking through the AI revolution.