Kentaro Toyama — On AI
Contents
Cover
Foreword
About
Chapter 1: The Law
Chapter 2: The Amplifier's Promise
Chapter 3: The Field Worker's Hard Lesson
Chapter 4: Formal Access, Substantive Capability
Chapter 5: What the Student in Dhaka Actually Needs
Chapter 6: The Amplification of Advantage
Chapter 7: When the Tool Amplifies Dysfunction
Chapter 8: Dams Without Foundations
Chapter 9: What Genuine Democratization Requires
Chapter 10: The Necessary Human Investment
Epilogue
Back Cover
Cover

Kentaro Toyama

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Kentaro Toyama. It is an attempt by Opus 4.6 to simulate Kentaro Toyama's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The fulcrum was invisible to me for thirty years.

I built companies. I shipped products. I watched tools evolve from command lines to touchscreens to conversational AI that could hold my half-formed ideas and return them clarified. Every transition felt like the lever getting longer, more powerful, more capable of moving the world. And at every transition, I celebrated the lever.

Toyama made me look down. At the thing the lever rests on. At the foundation that determines whether all that mechanical advantage actually moves anything, or just pivots in empty air.

Here is what rattled me. Toyama is not a philosopher critiquing technology from a garden in Berlin. He is a computer scientist. He won the David Marr Prize. He helped build the systems behind Microsoft's Kinect. He was inside the machine, doing the math, shipping the products. And then he spent years in India watching what happened when those products met the real world — the schools without trained teachers, the clinics without functioning supply chains, the communities where the tool arrived but the capacity to use it did not.

What he found was simple enough to fit in a sentence and uncomfortable enough to keep me awake: Technology amplifies existing human and institutional capacity. It does not substitute for missing capacity.

The Law of Amplification. I had been circling it in *The Orange Pill* without seeing its full shape. I wrote that AI is an amplifier and asked, "Are you worth amplifying?" Toyama accepts that question and then asks the harder one I skated past: What determines whether someone arrives at the amplifier with something worth amplifying? And are we investing in that — the education, the mentoring, the institutional infrastructure — with anything approaching the urgency with which we are distributing the tool?

We are not. I know this because I live inside the industry doing the distributing.

The twenty-fold productivity multiplier I witnessed in Trivandrum was real. But my engineers had fulcrums — years of experience, institutional support, quality standards, trust built through shared difficulty. The tool amplified all of that. The amplification was extraordinary because the foundation was deep. Toyama forced me to ask what happens when the foundation is shallow. The answer, documented across hundreds of deployments in dozens of countries, is that the tool amplifies the shallowness with the same faithful indifference.

This book is not a rejection of the tools. It is a correction of where we point our attention. The lever is the easy investment. The fulcrum is the hard one. Toyama shows why the hard one is the only one that matters.

— Edo Segal · Opus 4.6

About Kentaro Toyama

1971-present

Kentaro Toyama (born 1971) is a Japanese-American computer scientist, technology researcher, and professor at the University of Michigan School of Information. After earning his PhD in computer science from Yale University, Toyama joined Microsoft Research, where he won the David Marr Prize for his work in computer vision and co-developed foundational technology behind Microsoft's Kinect sensor. In 2004 he co-founded Microsoft Research India in Bangalore, where he spent five years studying technology deployments in education, health care, and agriculture across South Asia. That fieldwork led to his formulation of the Law of Amplification — the principle that technology amplifies existing human and institutional capacity rather than substituting for absent capacity — which he articulated fully in his 2015 book *Geek Heresy: Rescuing Social Change from the Cult of Technology*. Toyama has written extensively for publications including *The Atlantic*, *The Boston Review*, *The Conversation*, and *Divided We Fall*, consistently arguing that investment in human development must accompany or precede technology distribution. His work has become a central reference point in debates about technology and global inequality, digital development, and the social impact of artificial intelligence.

Chapter 1: The Law

In 2004, a team of researchers from Microsoft arrived in Bangalore with laptops, projectors, and the quiet confidence of people who believed they were about to change the world. The project was straightforward in conception: deploy personal computers in underserved Indian schools, equip them with educational software, and watch learning outcomes improve. The technology was sound. The software was well-designed. The hardware was, by the standards of rural Karnataka, extraordinary. Each machine represented more computational power than the Apollo program had used to reach the moon, and it was being placed on a wooden desk in front of a child who had never touched a keyboard.

The results were not what anyone expected.

In schools with capable, motivated teachers, the computers transformed the classroom. Students engaged with material they had previously found abstract. Teachers used the machines to illustrate concepts that chalkboards could not convey. Learning accelerated. The technology had done what technology is supposed to do: it had amplified the capacity that was already there, carrying good teaching further and faster than good teaching alone could reach.

In schools without those teachers, the computers gathered dust. Or worse: they became distractions, portals to games and idle browsing, objects of fascination that consumed attention without producing understanding. The software sat unused. The curriculum went untaught. The machines, which had cost thousands of dollars each to procure and deploy, produced no measurable improvement in learning outcomes. In some cases, outcomes declined, because the presence of the technology had displaced the time previously spent on activities that, however imperfect, had at least been producing something.

The technology was identical in both settings. The outcomes were opposite. And the variable that explained the difference was not the machine. It was the human and institutional infrastructure that surrounded the machine.

This pattern, observed not once but dozens of times across schools, health clinics, and agricultural extension services throughout South Asia, crystallized into what Kentaro Toyama would call the Law of Amplification: technology amplifies existing human and institutional capacity. It does not substitute for missing capacity. The law is simple enough to state in a single sentence. Its implications require considerably more space to follow to their conclusions, because those conclusions challenge the foundational assumptions of the most powerful industry on earth.

Toyama's credentials for making this challenge are unusual. He is not a philosopher observing technology from a comfortable distance. He is not a journalist translating complexity into narrative. He is a computer scientist who won the David Marr Prize in computer vision, whose early research helped lay the groundwork for Microsoft's Kinect input device, and who spent years doing precisely the kind of artificial intelligence work that now dominates global headlines. He built the systems. He understood them from the inside, at the level of the mathematics and the architecture, not merely at the level of the output.

And then he walked away.

Not from technology itself, but from the assumption that had governed his career and governed the careers of nearly everyone around him: the assumption that building better technology was equivalent to building a better world. He moved from Microsoft Research in Redmond to Microsoft Research India, and there, in the gap between what the tools could do and what the contexts could absorb, the law emerged. Not as theory first and evidence second. As evidence first, repeated and undeniable, and theory second, extracted reluctantly from a pattern that refused to conform to the narrative he had brought with him.

The narrative he had brought was the standard one. It is the narrative that every technology company tells, that every venture capitalist funds, that every TED talk amplifies: technology is a lever. Build a better lever, and you move the world. The narrative is not entirely wrong, which is what makes it so durable and so dangerous. Technology is indeed a lever. The error is in assuming that the lever operates independently of the fulcrum. Without a fulcrum, a lever is a stick. Without human and institutional capacity, technology is hardware.

Toyama's fieldwork demonstrated this with a specificity that abstract theory cannot match. In a health clinic in Uttar Pradesh, a telehealth system connecting rural patients to urban specialists worked precisely as designed when the clinic was well-staffed, well-managed, and integrated into a functioning health system. Patients received diagnoses they could not have obtained locally. Treatment plans were communicated effectively. Health outcomes improved. The same telehealth system, deployed in a clinic that was understaffed, poorly managed, and disconnected from the broader health infrastructure, produced nothing. Not because the technology failed. The video worked. The connection was stable. The specialist was available. But the institutional capacity to act on the specialist's recommendations, to follow up with patients, to integrate the remote diagnosis into a treatment protocol, was absent. The technology had amplified the existing capacity of the first clinic, which was substantial, and the existing capacity of the second clinic, which was negligible. In both cases, the amplification was faithful. It was the input that differed.

The law operates with the indifference of gravity. It does not care about intentions. It does not reward good motives with good outcomes. It amplifies whatever is there: competence and incompetence, organization and chaos, wisdom and foolishness. A well-intentioned deployment of powerful technology in a dysfunctional context amplifies the dysfunction. The technology does not know the difference. It cannot distinguish between a school that uses it to teach and a school that uses it to babysit. It amplifies the use.

This indifference is what makes the law so relevant to the present moment. In *The Orange Pill*, Edo Segal writes: "AI is an amplifier, and the most powerful one ever built. Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history." This is the Law of Amplification, stated with Segal's characteristic directness and energy. The formulation is nearly identical. The difference, and it is a difference that matters enormously, is in who is held responsible for what gets fed into the amplifier.

Segal frames the question as individual: "Are you worth amplifying?" The question assumes that worthiness is a quality the individual possesses or fails to possess, a product of character, discipline, commitment to craft. And at the individual level, this framing contains real truth. A person who brings genuine care and developed judgment to an AI collaboration will produce better work than a person who brings carelessness and undeveloped instinct. Character matters. Discipline matters. The quality of the input determines the quality of the amplified output.

But Toyama's fieldwork forces a harder question: What determines whether a person arrives at the amplifier with the capacity to use it well? The answer, documented across hundreds of deployments in conditions that strip away the comfortable assumptions of the developed world, is not primarily individual. It is structural. The person who brings care and judgment to the tool is, in most cases, the person whose education developed care, whose institutions provided judgment-building experiences, whose community modeled disciplined engagement with complex problems, and whose economic circumstances permitted the luxury of developing these capacities rather than spending every available hour on survival.

The teacher in the well-functioning school did not become a good teacher through force of will alone. She became a good teacher because she was trained in a functioning teacher education program, supported by a functioning school administration, embedded in a community that valued education, and compensated well enough to remain in the profession. The teacher in the dysfunctional school was not a worse person. She was a person operating within a system that had failed to develop and support her capacity. The technology could not fix what the system had broken. It could only amplify what the system had produced.

This structural dimension does not negate the individual one. Both are real. A person with strong individual drive can sometimes overcome structural disadvantage, and a person with every structural advantage can squander it through individual failure. The law does not deny agency. It contextualizes it. It insists that agency operates within conditions, and that the conditions are not equally distributed, and that pretending otherwise, treating the question as purely individual, is a form of moral evasion that serves the interests of those whose conditions are already favorable.

What makes the Law of Amplification uniquely discomfiting in the current moment is that it accepts every empirical claim the technology optimists make and still arrives at a different conclusion. The tools are powerful. Agreed. The tools expand capability. Agreed. The tools lower the floor of who can build. Agreed. And because the tools amplify existing capacity, and because existing capacity is distributed as unequally as it has ever been, the tools will amplify existing inequality. Not because they are designed to. Not because anyone intends it. Because amplification is structurally incapable of equalizing when the inputs are unequal.
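
The structural point admits a toy arithmetic sketch (every number here is hypothetical, chosen only for illustration): if a tool multiplies whatever capacity it receives by the same gain for everyone, it preserves the ratio between unequal inputs and widens the absolute gap between them. It cannot equalize.

```python
# Toy model of the Law of Amplification. All numbers are hypothetical;
# the point is structural: a uniform multiplier cannot close unequal inputs.

def amplify(capacity: float, gain: float) -> float:
    """Output is existing capacity scaled by the tool's gain."""
    return capacity * gain

gain = 20.0               # e.g. a twenty-fold productivity multiplier
strong_foundation = 10.0  # deep skills, institutions, mentoring (arbitrary units)
weak_foundation = 1.0     # the same tool, a shallow foundation

before_gap = strong_foundation - weak_foundation
after_gap = amplify(strong_foundation, gain) - amplify(weak_foundation, gain)

print(before_gap)  # 9.0
print(after_gap)   # 180.0

# The 10:1 ratio is preserved exactly; the absolute distance grows
# by the full factor of the gain.
```

The sketch is deliberately crude, but it captures why amplification is "structurally incapable of equalizing": the gap after amplification is the gap before it, multiplied by the gain.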

A well-functioning school that receives AI tools becomes a spectacular school. A dysfunctional school that receives the same tools remains a dysfunctional school, now with AI. The pattern has not changed since Bangalore in 2004. Only the power of the amplifier has increased.

Toyama has been explicit about this trajectory. Writing in *The Conversation* in January 2023, he noted: "If history is any guide, it's almost certain that advances in AI will cause more jobs to vanish, that creative-class people with human-only skills will become richer but fewer in number, and that those who own creative technology will become the new mega-rich." In a 2024 piece for *Divided We Fall*, he applied the law directly: "Artificial intelligence is no exception. As we're already seeing, ChatGPT helps honest writers brainstorm and helps bad students cheat. Deepfake technology can boost special effects in movies and it can generate misleading political content."

The law does not predict doom. It does not predict salvation. It predicts amplification, faithful and indifferent, of whatever human forces are already in play. The question it forces upon every person, institution, and society considering the adoption of AI is not "Is this tool powerful?" The answer to that question is obvious and uninteresting. The question is: "What are we amplifying?" And the harder, prior question: "Have we invested in the human and institutional capacity that determines whether the amplification produces flourishing or dysfunction?"

The chapters that follow will trace this question through the specific claims of the AI optimists, through the gap between formal access and substantive capability, through the institutional infrastructure that makes tools useful, and through the uncomfortable arithmetic of who benefits when the amplifier meets a world of unequal foundations. The law is the lens. What it reveals, when held up to the most powerful technology in human history, is both sobering and, if taken seriously, more genuinely hopeful than the optimism it complicates, because it points not toward better tools but toward the investment in human capacity that would make better tools matter.

The laptops in Karnataka are still there, some of them. The ones in the well-functioning schools are worn from use. The ones in the dysfunctional schools are in closets, or broken, or repurposed as furniture. The machines did not change. The schools did not change. The law held, as it has held in every deployment Toyama has documented, as it will hold when the tool is not a laptop but an intelligence that can write code, draft briefs, compose music, and converse in natural language with a fluency that would have been science fiction a decade ago.

The amplifier is more powerful than ever. The question of what it amplifies has never been more urgent.

---

Chapter 2: The Amplifier's Promise

The scene in Trivandrum deserves to be taken at face value before it is subjected to scrutiny.

In February 2026, twenty engineers sat in a room in southern India while Edo Segal, their leader, made a claim that would have sounded delusional twelve months earlier: "By the end of this week, each one of you will be able to do more than all of you together." The tool was Claude Code. The cost was one hundred dollars per person, per month. On Monday they began building. By Friday, the claim had been validated. A twenty-fold productivity multiplier, measured not in theoretical projections but in working, tested, deployable code.

This is not a story that should be dismissed. The instinct to dismiss it, to wave it away as hype or self-promotion or Silicon Valley mythmaking, misses what is genuinely new about the moment. Something real happened in that room. People who had spent years operating within the constraints of their individual technical specializations suddenly found those constraints dissolved. A backend engineer who had never written a line of frontend code built a complete user-facing feature in two days. Not a prototype. A working feature. The boundaries between disciplines, which had seemed as structural as walls, turned out to be artifacts of translation cost. When the cost of moving between domains dropped to the cost of a conversation, people moved.

The democratization thesis that Segal builds from this experience is emotionally powerful and empirically grounded. When the imagination-to-artifact ratio approaches zero, when any person with an idea and the ability to describe it in natural language can produce a working prototype in hours, the class of people who get to build expands dramatically. The developer in Lagos who had the ideas and the intelligence but not the team, the capital, or the institutional infrastructure to realize them now has a tool that collapses the distance between conception and creation. The student in Dhaka can access coding leverage that was previously reserved for engineers at the world's most resource-rich companies.

Segal himself is careful enough to qualify the claim. "I am not claiming AI eliminates inequality," he writes. "Access requires connectivity, and connectivity requires infrastructure that billions of people do not have." But the qualifications occupy a paragraph in a chapter whose emotional arc bends unmistakably toward celebration. The floor has risen. The barriers have lowered. The expansion of who gets to build is, in Segal's framing, "the most morally significant feature of this technological moment."

The celebration contains real truth. The floor has genuinely risen. There are people building today who could not have built yesterday, and the products they are building are real, serving real users, generating real value. Alex Finn's year of solo building, producing a revenue-generating product without writing a line of code by hand, would have required a team of five and twelve months of runway five years earlier. The arithmetic is not in dispute. The capability expansion is measurable and, for the individuals who experience it, transformative.

There is also genuine moral force in the argument. For decades, the ability to build software was gated by years of specialized training, access to expensive tools, and proximity to the institutions that provided both. The gate kept people out not on the basis of their intelligence or their ideas but on the basis of their circumstances: where they were born, what education they could access, what networks they could join. A tool that lowers that gate is doing something that matters, and the impulse to celebrate it is not naivete. It is the recognition that barriers to participation are morally costly, and that their removal, even partial removal, is morally significant.

But the celebration, taken on its own terms without the complicating lens of amplification dynamics, tells only half the story. And the half it tells is the half that flatters the storyteller.

Consider what the Trivandrum room actually contained. Twenty engineers. Not twenty randomly selected individuals. Twenty people who had been hired through a competitive process, who possessed years of professional software development experience, who were embedded in a functioning company with clear goals, quality standards, collegial relationships, and a leader capable of directing their work toward specific outcomes. They had mentors, formal and informal. They had a shared professional culture that defined what good work looked like. They had the accumulated institutional knowledge of their organization, and they had each other, twenty minds that could check each other's output, catch errors, and provide the feedback loops that transform individual effort into reliable production.

The twenty-fold multiplier was real. The question the Law of Amplification forces is: What was being multiplied?

The answer is not "raw tool capability." The answer is "tool capability applied to a deep foundation of human and institutional capacity." The engineers were not beginners. They were experienced professionals whose existing expertise, judgment, and institutional support provided the foundation that the tool amplified. The backend engineer who built a frontend feature in two days did so not because the tool magically granted her frontend knowledge. She did so because her years of backend experience had built a mental model of software architecture robust enough to extend into an adjacent domain when the translation cost was removed. The tool did not create the mental model. It removed the barrier that had prevented the mental model from expressing itself in a new direction.

This distinction, between removing barriers to the expression of existing capacity and creating capacity where none exists, is the distinction on which the entire democratization argument turns. And it is the distinction that the celebration of the tool tends to blur.

Segal describes the senior engineer who spent his first two days in Trivandrum oscillating between excitement and terror, then arrived by Friday at the realization that "the remaining twenty percent, the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they tolerated, turned out to be the part that mattered." This is a precise and honest description of ascending friction, the relocation of difficulty from the mechanical to the cognitive, from the syntactic to the strategic. And it is also, read through Toyama's lens, a precise description of what the amplifier rewards: the prior investment in judgment, instinct, and taste that only years of experience can develop.

The senior engineer's twenty years of practice were not made redundant by the tool. They were made more visible, more valuable, more directly consequential. The tool stripped away the implementation labor that had been consuming eighty percent of his time and revealed the judgment layer underneath. The judgment had always been there. The tool did not create it. It amplified it.

Now consider the person who does not possess twenty years of practice. The person whose education was interrupted, whose institutional support was absent, whose professional development consisted not of mentored practice within a functioning organization but of scattered tutorials, inconsistent internet access, and the heroic autodidacticism that the technology industry loves to celebrate in retrospect but rarely funds in advance. This person receives the same tool. The same Claude Code. The same hundred dollars per month.

The Law of Amplification predicts that the tool will amplify whatever this person brings. If the person brings talent and drive but lacks the judgment that comes from years of mentored practice, the tool will amplify talent and drive operating without judgment. The output may be impressive in volume and speed. It will lack the architectural coherence, the quality instinct, the taste that the senior engineer's amplified output possesses. And in a market that can now distinguish between the two, because the market has access to the senior engineer's amplified output as a benchmark, the person without the foundation finds that the tool has not closed the gap but clarified it.

This is not an argument against the tool. The tool is genuinely powerful. The expansion of who can attempt to build is genuinely significant. A person who could not build at all yesterday can build something today, and that something may be crude and may be brilliant and will certainly be more than nothing. The argument is that the celebration of the tool, without equal attention to the foundations that determine what the tool amplifies, produces a narrative that is emotionally satisfying and structurally incomplete.

Segal writes that the developer population worldwide has crossed forty-seven million, with the fastest growth in Africa, South Asia, and Latin America. He writes that "a student in Dhaka can now access the same coding leverage as an engineer at Google." The first claim is factual. The second is technically true and substantively misleading, because "the same coding leverage" is not "the same capability." Leverage operates on a fulcrum, and the fulcrums are radically different. The Google engineer's fulcrum includes a world-class education, institutional infrastructure that catches errors and enforces standards, professional networks that connect output to markets, and the cultural capital that comes from operating at the center of the global technology ecosystem. The student in Dhaka's fulcrum includes whatever her educational system, her family circumstances, her local institutions, and her economic conditions have provided.

Both have the lever. They do not have the same fulcrum. And the Law of Amplification predicts, with the reliability of a finding replicated across hundreds of deployments in dozens of countries, that the lever will amplify the difference in fulcrums rather than eliminate it.

The Trivandrum room was not a refutation of this prediction. It was a confirmation of it. The tool produced extraordinary results in Trivandrum because the foundations in Trivandrum were extraordinary: experienced engineers, a functioning organization, a capable leader, clear goals, institutional quality standards, and the collegial infrastructure that turns individual capability into reliable production. The celebration of what happened in Trivandrum is entirely warranted. The generalization from Trivandrum to Lagos, to Dhaka, to the rural communities where the foundations are categorically different, requires an additional step that the celebration does not take.

That step is the recognition that the tool is the cheapest, most distributable, most easily replicated component of a system that produces human capability. The expensive components, the components that determine what the tool has to work with, are the educational systems, the institutional infrastructure, the mentoring networks, and the cultural foundations that develop human capacity over years, not hours. The tool costs one hundred dollars per month. The foundations cost orders of magnitude more and operate on timescales measured in years and decades rather than weeks and sprints.

The amplifier's promise is real. The promise is that whatever you bring to the tool will be carried further than any previous tool could carry it. The question, which the promise does not answer and which the celebration tends to obscure, is what you bring. And the answer to that question is determined not by the tool but by everything that happened before the tool arrived.

---

Chapter 3: The Field Worker's Hard Lesson

The lesson that technology cannot substitute for human capacity was not learned in a lecture hall. It was learned in a village in Andhra Pradesh where a carefully designed agricultural information system, built to connect small farmers with market prices, weather forecasts, and crop advisory services, sat unused in a building that the villagers had repurposed for storing grain.

The system worked. The interface was clear. The information was accurate and timely. The kiosk had been placed in a location accessible to the farming community. Every design principle had been followed. Every technical requirement had been met. And within six months of deployment, the system was functionally dead, not because the technology had failed but because the human and institutional infrastructure required to sustain it had never existed.

No one had been trained to maintain the kiosk. When it crashed, which it did with the regularity of all technology deployed in environments with unreliable power, no one could restart it. The local champion who had been designated to run the kiosk, a young man who had shown aptitude with computers, had moved to Hyderabad for a job that paid better than kiosk management. His replacement had neither his aptitude nor his training. The agricultural extension service that was supposed to provide the content for the advisory system had never reliably generated that content even before the kiosk arrived. The technology had been designed to distribute information that the extension service was supposed to produce. The extension service was not producing it. The kiosk amplified the extension service's existing output, which was negligible, and delivered negligible information with beautiful precision to farmers who had already gone back to asking their neighbors.

This is the development worker's hard lesson, and Toyama has told versions of it across dozens of contexts because the pattern is invariant. The technology works. The context does not. And because the context is invisible to the people who design the technology, whose professional world is shaped by the assumption that the hard problem is the engineering and the context will take care of itself, the failure is attributed to user error, cultural resistance, lack of training, anything except the structural absence of the capacity that the technology was supposed to amplify.

The pattern extends well beyond agriculture. In education, Toyama and his colleagues documented deployment after deployment in which computers in classrooms produced measurable improvement only where the teachers were already effective. The landmark finding was not that technology helps good teachers. That is obvious. The landmark finding was the negative case: that technology does not help bad teachers become good ones. The struggling teacher who received a computer did not become a more effective teacher. She became a teacher with a computer, still struggling, now with an additional piece of equipment she had not been trained to integrate into her pedagogy and a set of institutional expectations that the computer would somehow transform her practice. The computer could not teach her how to teach. It could only amplify whatever teaching she was already doing.

In health care, the pattern held with depressing consistency. Telemedicine systems connecting rural clinics to urban specialists produced excellent outcomes in clinics with competent staff, functional supply chains, and administrative systems capable of following through on specialist recommendations. In clinics without these capacities, the same systems produced nothing. The specialist could diagnose. The specialist could recommend treatment. But the diagnosis and the recommendation traveled across a digital connection into a void where no one had the training, the supplies, or the institutional mandate to act on them. The technology had faithfully carried information from a high-capacity environment to a low-capacity one. The low-capacity environment could not absorb it.

These findings were not aberrations. They were the central tendency. Across the full range of Toyama's fieldwork and the broader ICT4D (Information and Communication Technologies for Development) literature, the pattern held with what one reviewer called "disheartening consistency." Technology deployed in well-functioning institutions improved outcomes. Technology deployed in dysfunctional institutions did not. The variable that predicted the outcome was not the quality of the technology but the quality of the institution.

The implications for AI are direct and specific, and they deserve to be stated without the hedging that typically accompanies academic claims. AI is the most powerful amplifier in the history of technology. It processes natural language. It writes code. It drafts legal briefs. It composes music. It diagnoses diseases from medical images with accuracy that matches or exceeds that of specialist physicians. It does all of this at a speed and cost that would have been inconceivable five years ago. None of these capabilities is in dispute. And none of them alters the Law of Amplification.

A hospital with excellent physicians, well-designed protocols, robust quality assurance, and a culture of continuous improvement that adopts AI diagnostic tools will become an even more excellent hospital. The AI will catch things the physicians miss. The physicians will catch things the AI hallucinates. The quality assurance system will integrate the new tool into existing protocols. The culture of improvement will study the tool's errors and learn from them. The amplification will be spectacular, and the hospital will hold it up as evidence that AI transforms health care.

A hospital with overworked physicians, poorly designed protocols, weak quality assurance, and a culture of institutional inertia that adopts the same AI diagnostic tools will find that the AI amplifies the overwork, because the physicians now have more information to process without more time to process it. The poor protocols will incorporate AI output without the quality checks necessary to catch errors. The weak quality assurance will accept the AI's confident-sounding assessments without the institutional capacity to evaluate them. The culture of inertia will treat the AI as a replacement for the judgment it was supposed to augment. And when the AI makes an error, which it will, because all AI systems hallucinate and all diagnostic tools produce false positives and false negatives, the institutional infrastructure that should catch the error will not be there.

The technology is identical. The amplification is faithful. The outcomes are opposite.

Toyama identified this dynamic with unusual bluntness in his 2024 Divided We Fall essay: "A corollary to the Law of Amplification is not to seek solutions to problematic technology in technology itself. The problem is less the amplifying technology, as the underlying human forces. Those forces must be changed through law, culture, and social norms." The corollary is uncomfortable because it redirects attention from the thing the technology industry knows how to build (better tools) to the thing the technology industry does not know how to build (better institutions, better education, better cultural norms).

This redirection is not a rejection of the tools. Toyama is not a Luddite, and applying that label to his work is a misreading that serves the interests of those who would rather celebrate the amplifier than invest in what it amplifies. He is a computer scientist who built AI systems, won prizes for them, and then spent a decade watching what happened when powerful tools met weak foundations. The lesson he extracted is not that the tools should not be built. The lesson is that building the tools without building the foundations is a category error that the technology industry commits with remarkable consistency, celebrates with remarkable enthusiasm, and accounts for with remarkable reluctance.

The agricultural kiosk in Andhra Pradesh, the educational computers in Karnataka, the telemedicine systems in Uttar Pradesh are not relics of a pre-AI past that has been superseded by more powerful technology. They are the first chapters of a story that is now being written with Claude Code and GPT-4 and Gemini and every other AI system currently being deployed with the same assumption that powered every previous deployment: that the hard problem is the engineering, and the context will take care of itself.

The hard problem was never the engineering.

The hard problem is always the context.

Segal describes building Napster Station in thirty days, a product that would normally have required quarters of development. The achievement is real and the engineering is impressive. But the context that made it possible was not thirty days old. It was decades old: a team with deep institutional knowledge, a leader with thirty years of technical experience, a company with established processes, quality standards, and the cultural infrastructure of trust and shared purpose that Segal himself identifies as "the hardest thing to build and the most valuable thing to have." Claude Code did not create that context. Claude Code amplified it. And the amplification was extraordinary because the context was extraordinary.

The field worker's hard lesson is that the context is the thing that actually matters, and the context is the thing that technology cannot build. Education builds context. Mentoring builds context. Institutional design builds context. Cultural norms build context. Political will builds context. Technology amplifies whatever context it finds. And if the technology discourse continues to celebrate the amplifier while neglecting the context, the AI era will repeat the pattern of every previous technology deployment that Toyama documented: spectacular results in well-functioning environments, negligible or negative results in dysfunctional ones, and a widening gap between the two that the amplifier accelerates but did not cause.

The pattern is not a prediction. It is a finding, replicated across a body of evidence large enough and consistent enough to constitute something close to a law.

---

Chapter 4: Formal Access, Substantive Capability

In 1999, Nicholas Negroponte stood on a stage and declared that the internet would be the great equalizer. The logic was seductive in its simplicity: information is power, the internet distributes information, therefore the internet distributes power. Within a decade, every village with a connection would have access to the same libraries, the same markets, the same educational resources as a student at MIT. The playing field would level. The barriers would fall. Geography would cease to be destiny.

Twenty-five years later, the internet has reached approximately five billion users. It has transformed commerce, communication, education, and entertainment on a scale that Negroponte's prediction barely anticipated. And the playing field has not leveled. It has tilted further. Measures of global digital inequality have not converged. The returns to internet access have flowed disproportionately to those who already possessed the education, institutional support, and cultural capital to convert access into capability. The internet did not create inequality. It amplified existing inequality, with the same faithful indifference that Toyama's law describes for every technology that preceded it and every technology that has followed.

The gap between what was promised and what was delivered is not a story of technological failure. The technology succeeded. Five billion people are connected. The story is about a category error so persistent, so deeply embedded in the technology industry's self-conception, that it survives every empirical refutation and reappears, fully formed, with every new technological wave. The category error is the conflation of formal access with substantive capability, the assumption that giving everyone the same tool produces the same outcomes.

Formal access means availability. The tool exists. You can reach it. The subscription costs one hundred dollars per month, or nothing if you use the free tier. The API is documented. The interface is in your language, or close enough. You have a device that can run it and a connection that can reach it. Formal access is binary: you have it or you do not.

Substantive capability means the capacity to use the tool in ways that produce meaningful outcomes: products that serve users, work that meets professional standards, output that translates into economic value or personal development or institutional improvement. Substantive capability is not binary. It exists on a spectrum, and the spectrum is determined by a constellation of factors that the technology itself does not control: education, institutional support, mentoring, market access, cultural capital, professional networks, and the accumulated experience that converts raw tool access into productive practice.

The distinction matters because the democratization narrative, in virtually every version, measures formal access and claims substantive capability. Segal writes that "a student in Dhaka can now access the same coding leverage as an engineer at Google." The statement is true at the level of formal access. The student and the engineer can both open Claude Code, type a prompt in natural language, and receive working code in return. The interface does not discriminate. The subscription does not vary by geography. The computational power behind the response is identical regardless of who is asking.

And the outcomes will be radically different. Not because of the tool. Because of everything the tool does not contain.

The Google engineer brings to the interaction a four-year degree from a competitive university, or its equivalent in years of self-directed learning within a functioning professional ecosystem. She brings institutional context: colleagues who review her work, standards that define what production-quality code looks like, a deployment pipeline that catches errors before they reach users, and a market that connects her output to millions of people. She brings the accumulated judgment of years of practice, the architectural instinct that tells her when code that works is not code that should ship, the quality sense that distinguishes a feature users love from a feature users tolerate. She brings the cultural capital of operating at the center of the global technology ecosystem: the professional vocabulary, the network connections, the understanding of how products are evaluated, funded, and distributed.

The student in Dhaka brings intelligence, which is real and which should not be condescended to. She may bring extraordinary drive, which the technology industry loves to celebrate in retrospect, after the drive has already produced success, while doing little to support it prospectively. She may bring ideas that the Google engineer would not have had, because her position in the world, her specific experience of problems that the global technology ecosystem has not yet addressed, gives her a perspective that is genuinely valuable.

What she may not bring is the institutional infrastructure that converts intelligence and drive into professional capability. Her educational system may not have taught her the problem-decomposition skills that allow complex projects to be broken into manageable components. Her professional context may not provide the quality standards that distinguish working code from production-ready code. Her market access may not connect her output to users at the scale necessary to generate the feedback loops that drive improvement. Her cultural capital may not include the professional vocabulary, the network connections, or the understanding of how the global technology ecosystem evaluates and rewards work.

She has the same lever. She does not have the same fulcrum.

The history of technology-mediated democratization is a history of this distinction being ignored, sometimes through genuine optimism and sometimes through motivated reasoning, and then rediscovered, always at the cost of the people who were promised the equalization.

The One Laptop Per Child initiative, launched in 2005, distributed millions of low-cost laptops to children in developing countries. The logic was identical to the current AI democratization narrative: give children the tool, and they will learn. Some did. In contexts where the educational infrastructure supported the tool, where teachers were trained to integrate laptops into pedagogy, where school administrations maintained the hardware and updated the software, where parents reinforced the educational purpose of the device, outcomes improved. In contexts where this infrastructure was absent, outcomes did not improve, and in some documented cases, declined. A study in Peru found that children who received laptops showed no improvement in math or language skills, a finding consistent with Toyama's law and inconsistent with the assumption that the tool, independent of context, produces learning.

Social media was proclaimed as the great democratizer of voice. Every person a publisher. Every citizen a journalist. The barriers to public expression, which had been controlled by institutional gatekeepers, collapsed overnight. And the resulting landscape of public expression did not produce a level playing field of democratic discourse. It produced algorithmic amplification of existing power: the already-famous became more famous, the already-polarized became more polarized, the already-organized became more effectively organized, and the already-marginalized discovered that having a voice and being heard are not the same thing in an attention economy that rewards engagement over substance.

Mobile banking, M-Pesa in particular, is often cited as the great counterexample: a technology that genuinely transformed financial access in Kenya. And so it did. But closer examination reveals that M-Pesa succeeded not because the technology alone was sufficient but because it was deployed within an institutional context that supported it: Safaricom's existing distribution network of agents, Kenya's relatively high mobile phone penetration, a regulatory environment that permitted mobile money, and a population with sufficient financial literacy to understand and use the service. The technology was necessary. It was not sufficient. The institutional context was equally necessary. And in countries where mobile money has been deployed without the equivalent institutional context, the results have been less transformative and in some cases negligible.

The pattern is robust enough that its persistence in the face of repeated empirical disconfirmation requires explanation. Technology optimists continue to conflate formal access with substantive capability despite decades of evidence that the conflation is false. The explanation, Toyama suggests, is structural rather than intellectual. The technology industry profits from the narrative that tools solve problems. The narrative justifies the industry's existence, attracts investment, generates public goodwill, and provides moral cover for the concentration of wealth and power that the industry produces. Questioning the narrative questions the business model. It is easier, more profitable, and more psychologically comfortable to celebrate the tool than to confront the foundation.

The current AI moment reproduces this pattern with intensified stakes. The tools are more powerful than anything that preceded them. Claude Code can write working software from a natural-language description. GPT-4 can draft legal briefs, medical diagnoses, and academic analyses. These are not marginal capabilities. They are genuine expansions of what is possible. And the narrative that accompanies them is the same narrative that accompanied every previous expansion: the tools will democratize capability, level the playing field, close the gap.

Toyama's framework predicts that the tools will do the opposite. Not because the tools are flawed. Because the tools are amplifiers, and amplifiers amplify existing differences. The Google engineer who uses Claude Code will produce output that reflects her deep foundations: architecturally sound, professionally polished, integrated into systems that reach users at scale. The student in Dhaka who uses the same Claude Code will produce output that reflects whatever foundations she possesses: perhaps brilliant in conception but lacking the institutional infrastructure that converts a working prototype into a viable product.

The gap between the two is not the tool gap. The tool gap has closed. The gap is the foundation gap: the gap in education, institutional support, mentoring, market access, and cultural capital that determines what the tool has to work with. And the foundation gap, unlike the tool gap, cannot be closed by distributing subscriptions. It can only be closed by investing in the human and institutional capacity that the subscriptions amplify.

Segal acknowledges this, partially and in passing. He writes: "Access requires connectivity, and connectivity requires infrastructure that billions of people do not have." He adds that the tools "require English-language fluency, because the tools are built by American companies, trained on predominantly English data, and optimized for the workflows of Western knowledge workers." These are genuine acknowledgments, and they deserve credit. But they are listed as caveats in a narrative whose momentum carries the reader past them toward the celebration of the expanding floor.

Toyama's framework insists that the caveats are not caveats. They are the argument. The infrastructure gap, the educational gap, the institutional gap, the cultural capital gap: these are not asterisks on the democratization thesis. They are the forces that determine whether the democratization is formal or substantive, whether the expanding floor produces new builders or new frustrations, whether the amplifier narrows inequality or widens it.

The distinction between formal access and substantive capability is not a philosophical nicety. It is an empirical prediction that has been tested across hundreds of technology deployments and confirmed with the consistency that the social sciences rarely achieve. The prediction for AI is clear: the tools will be distributed widely, the access will be celebrated loudly, and the outcomes will track existing capacity with the faithful indifference that every amplifier has always exhibited.

The question is not whether to distribute the tools. The tools should be distributed. They are genuinely powerful, and withholding them from those who could benefit is neither practical nor moral. The question is whether the distribution of tools will be accompanied by the investment in foundations that determines whether the tools produce empowerment or frustration. The tool costs one hundred dollars per month. The foundations that make the tool meaningful cost orders of magnitude more and operate on timescales that the technology industry's quarterly reporting cycle is structurally incapable of comprehending.

The level playing field remains the most persistent myth in technology discourse. It will be proclaimed again with the next tool, and the next, and the next. The Law of Amplification will hold each time. The only variable that changes the outcome is the investment in what the amplifier has to work with. Everything else is marketing.

Chapter 5: What the Student in Dhaka Actually Needs

The student in Dhaka has a name. She has a family. She has a neighborhood where the power cuts out for hours at a time, where the internet connection drops during monsoon season, where the nearest person who has shipped a commercial software product lives in a different city or a different country. She has intelligence that no one who has met her would dispute. She has ideas, several of them genuinely original, born from the specific experience of living in a place whose problems the global technology ecosystem has not prioritized and therefore has not solved.

She has Claude Code. One hundred dollars per month, or access to a free tier that provides enough capability to build a working prototype. She has a laptop, purchased with money her family saved over two years. She has English, learned well enough to prompt an AI system but not well enough to navigate the professional networks where opportunity is brokered in idiom, in cultural shorthand, in the unspoken assumptions that separate an insider from someone performing the motions of insidership.

She is the person the democratization narrative was written about. She is the moral center of the argument that AI tools expand who gets to build. And she deserves more than celebration. She deserves an honest accounting of what she actually needs to convert tool access into the capability the narrative promises.

The first thing she needs, and the thing most conspicuously absent from the technology discourse, is education that develops not tool proficiency but cognitive capacity. The distinction is critical and routinely collapsed. Tool proficiency is the ability to use Claude Code effectively: to write good prompts, to evaluate output, to iterate on results. It is a real skill and a teachable one. A competent tutorial can develop basic tool proficiency in days. Cognitive capacity is something categorically different. It is the ability to decompose a complex problem into tractable components. To evaluate whether a technical solution actually addresses the problem it was designed to solve. To anticipate failure modes that the tool's confident output does not flag. To distinguish between code that works and code that should ship, between a prototype that demonstrates a concept and a product that serves a user reliably under conditions the developer did not anticipate.

These capacities are not developed by tutorials. They are developed by years of structured engagement with increasingly complex problems, guided by people who have already developed the capacities and can provide the feedback, the correction, and the modeling that converts raw intelligence into professional judgment. Toyama's fieldwork documented this with painful clarity: the most effective educational technology deployments were the ones embedded in educational systems that were already developing these capacities. The technology amplified the development. It did not replace it.

The student in Dhaka may or may not have had access to this kind of education. Bangladesh's educational system produces extraordinary talent at the top of the distribution, students who compete successfully in international mathematics and programming olympiads, who gain admission to the world's most selective universities, who build careers at the world's most demanding technology companies. It also leaves vast numbers of students with formal credentials that do not correspond to the cognitive capacities the credentials are supposed to certify. The gap between the diploma and the capability it represents is not unique to Bangladesh. It exists in every educational system, in every country, at every level of development. But the gap is wider in systems with fewer resources, and the consequences of the gap are more severe when the alternative pathways to capability development (mentoring, professional apprenticeship, institutional support) are also constrained.

The second thing the student needs is institutional infrastructure that provides quality standards, feedback mechanisms, and the connective tissue between individual effort and professional practice. In the developed technology ecosystem, this infrastructure is so pervasive it becomes invisible, like water to a fish. A junior developer at a well-functioning company writes code that is reviewed by a senior developer before it reaches production. The review catches errors the junior developer would not have caught, not because she is careless but because she has not yet developed the pattern recognition that comes from years of seeing code fail in specific ways. The review is also a teaching mechanism: each correction deposits a thin layer of understanding that accumulates, over months and years, into the judgment that distinguishes a competent developer from an expert one.

The student in Dhaka may not have access to code review. She may not have access to anyone who has shipped a production system and can tell her the difference between code that works in a demo and code that works when ten thousand users hit it simultaneously. She may not have access to the professional norms that define quality in the global technology market, norms that are culturally specific, institutionally transmitted, and largely invisible to people outside the institutions that transmit them. She has the tool. She does not have the ecosystem that the tool was designed to operate within.

Toyama identified this gap as the institutional capacity gap, and he distinguished it sharply from the technology access gap that dominates public discourse. The technology access gap is the gap between those who have the tool and those who do not. It is quantifiable, photogenic, and solvable with money: distribute devices, build networks, subsidize subscriptions. The institutional capacity gap is the gap between those who have the human and organizational infrastructure to use the tool productively and those who do not. It is harder to quantify, impossible to photograph, and unsolvable with technology alone. It requires investment in educational systems, professional mentoring networks, quality standards, and the cultural norms that define what good work looks like in a given domain.

The third thing the student needs is market access, the connection between what she builds and the people who might use it, pay for it, and provide the feedback that drives improvement. The global technology market is not a level playing field accessible to anyone with a product. It is a network of relationships, platforms, distribution channels, and gatekeeping mechanisms that favor incumbents, reward proximity to capital, and operate according to cultural norms that the student in Dhaka did not grow up with and may not have access to.

A developer in San Francisco who builds a product can walk into a coffee shop and meet a potential investor, a potential user, a potential mentor. She can attend meetups, join accelerator programs, tap professional networks built over years of institutional participation. Her product exists within an ecosystem that is designed, imperfectly but substantially, to connect builders with users and capital. The student in Dhaka's product exists outside that ecosystem. She can post it online, and the internet provides formal access to a global market. But formal access to a market, like formal access to a tool, is not substantive market participation. Substantive participation requires understanding the market's norms, speaking its language, navigating its gatekeepers, and maintaining the persistent presence that converts a one-time interaction into an ongoing relationship.

The fourth thing the student needs, and the thing most difficult to discuss without condescension, is cultural capital: the accumulated set of behaviors, vocabularies, assumptions, and professional reflexes that mark a person as an insider in the global technology ecosystem. Cultural capital includes knowing how to write an email that gets a response from a potential partner. Knowing how to present a product in terms that resonate with users who have different needs and different aesthetic expectations than the developer. Knowing how to negotiate, how to price, how to position, how to tell the story of what the product does and why it matters in a vocabulary the market recognizes. Cultural capital is not intelligence. It is not even skill. It is fluency in a specific professional culture, and like all fluency, it is developed through immersion, not instruction.

The technology industry's relationship to cultural capital is paradoxical. The industry prides itself on meritocracy, on the idea that the best product wins regardless of who built it. And at the same time, the industry is profoundly shaped by cultural capital: by the norms of Silicon Valley, by the vocabulary of venture capital, by the professional networks that cluster around specific universities and specific companies and specific geographic locations. The student in Dhaka can build a product that is technically equivalent to a product built in San Francisco. The two products will not compete on equal terms, because the San Francisco product is embedded in an ecosystem of cultural capital that the Dhaka product is not.

None of these needs (education, institutional infrastructure, market access, cultural capital) is met by the tool. The tool is one component of a system that produces capability. It is the most visible component, the most distributable, the most celebrated. It is also the least expensive and the least determinative. The foundations that the tool amplifies are expensive, slow to build, institutionally complex, and resistant to the kind of rapid scaling that the technology industry prizes and the venture capital model demands.

Toyama's prescription, developed through years of watching the gap between tool distribution and capability development, is direct: invest in the foundations first, distribute the tools second. The formulation sounds backward to an industry conditioned to lead with the technology. But the evidence supports it with the consistency of a well-replicated finding. The deployments that produced transformative outcomes were the ones where the human and institutional capacity preceded the technology. The deployments that produced nothing, or worse, were the ones where the technology preceded the capacity.

Applied to AI, this means that the most impactful investment for the student in Dhaka is not a Claude Code subscription. The subscription helps if the foundations are in place. It helps enormously if the foundations are strong. But the subscription alone, in the absence of educational development, institutional support, mentoring, market access, and cultural capital, amplifies primarily the frustration of a person who can see what the tool can do and cannot convert that capability into the outcomes the democratization narrative promised.

The technology discourse frames this as a temporary problem, a gap that will close as the tools improve and the costs decline. Toyama's evidence suggests otherwise. The gap between formal access and substantive capability has not closed with any previous technology. It has widened, because the people with strong foundations extract more value from each improvement in the tool, and each improvement amplifies the existing difference in foundations. The dynamic is self-reinforcing. The tool gets better. The well-founded extract more. The gap grows. The next improvement in the tool accelerates the cycle.

The student in Dhaka deserves better than a narrative that celebrates her tool access while ignoring the foundations she lacks. She deserves investment in the educational systems, institutional infrastructure, mentoring networks, and market connections that would make her tool access meaningful. She deserves the recognition that a subscription is not a solution, that access is not capability, and that the celebration of the tool without investment in the foundation is not generosity but a particularly sophisticated form of neglect.

She deserves, in other words, what the Trivandrum engineers already had before Claude Code arrived: the foundation that the tool amplified.

---

Chapter 6: The Amplification of Advantage

In 1968, the sociologist Robert K. Merton described a pattern he had observed in the distribution of scientific recognition. When two scientists made the same discovery independently, the one who was already famous received the credit, the citations, the invitations, and the funding. The one who was unknown received nothing, or a footnote. The advantage of prior recognition amplified itself: recognition produced resources, resources produced further discovery, further discovery produced further recognition. Merton called it the Matthew Effect, after the Gospel of Matthew: "For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath."

The Matthew Effect operates in every domain where cumulative advantage is possible, which is to say every domain. In education, students who read well in first grade read more, which develops their reading further, which opens more complex material, which develops their cognition further. Students who read poorly in first grade read less, fall further behind, and the gap widens with every year. In economics, capital generates returns, returns generate more capital, and the compounding produces the wealth distributions that characterize every market economy. In professional networks, connections generate opportunities, opportunities generate more connections, and the network effects produce the concentrations of social capital that determine who gets heard and who gets ignored.

The Matthew Effect is not a conspiracy. It is not the result of deliberate exclusion, though deliberate exclusion certainly exists and amplifies the dynamic. It is a structural property of any system in which returns compound. Prior advantage produces further advantage. Prior disadvantage produces further disadvantage. The system does not need to intend inequality. The mathematics of compounding produces it automatically.
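The claim that the mathematics of compounding produces inequality automatically can be made concrete with a minimal simulation (a hypothetical sketch, not data from Merton or Toyama): give two actors the same proportional return on different starting capacities, and the absolute gap between them widens every period, with no exclusion or intent required.

```python
# Cumulative advantage (the Matthew Effect) as bare arithmetic:
# identical growth rates applied to unequal starting points
# produce an absolute gap that widens every period.

def compound(start: float, rate: float, periods: int) -> list[float]:
    """Return capacity at each period under proportional returns."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * (1 + rate))
    return values

# Hypothetical actors: same 10% return per period, unequal foundations.
strong = compound(start=100.0, rate=0.10, periods=10)
weak = compound(start=50.0, rate=0.10, periods=10)

gap_start = strong[0] - weak[0]    # 50.0
gap_end = strong[-1] - weak[-1]    # ~129.7: same rate, much wider gap
```

No actor in the simulation behaves unfairly; the widening follows from the compounding alone, which is precisely the structural point.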

Toyama's Law of Amplification is the Matthew Effect applied to technology. When a tool amplifies existing capacity, it amplifies existing differences in capacity. The well-resourced become more well-resourced. The under-resourced remain under-resourced, now with the additional psychic cost of watching the well-resourced pull further ahead using the same tool that was supposed to level the playing field.

The dynamics of this amplification in the AI era are specific enough to trace.

Consider two developers. One is employed at a major technology company. She has a computer science degree from a strong university, five years of professional experience, colleagues who review her code, a manager who provides career development, access to internal documentation and proprietary tools, and a salary that allows her to invest time in learning and experimentation without economic anxiety. She adopts Claude Code.

Her existing capacity is substantial. The tool amplifies it. She produces more complex systems, faster. Her output attracts recognition within her company. The recognition produces better assignments, which develop her skills further. She publishes a blog post about her AI-augmented workflow. The post attracts attention. A recruiter contacts her. A conference invites her to speak. Her professional network expands. Each expansion creates new opportunities. The cycle accelerates.

The second developer is self-taught. He lives in a city without a technology hub. His education was interrupted. He learned to code through free online tutorials, which taught him syntax but not architecture, algorithms but not systems design, how to make code work but not how to make it work reliably at scale. He has no colleagues to review his code. No mentor to identify his blind spots. No institutional infrastructure to connect his output to a market. He adopts the same Claude Code, at the same price, with the same interface.

The tool amplifies his existing capacity, which is real but narrower. He produces working software, which is more than he could produce before. But the software reflects his foundations: functional but architecturally fragile, lacking the quality signals that professional markets reward. He posts his work online. It attracts modest attention. The attention does not compound into opportunities, because the ecosystem that converts attention into opportunity, the professional networks, the conference circuits, the recruiter pipelines, operates in a different world than the one he inhabits.

Both developers experienced genuine benefit from the tool. The tool expanded what each could do. But the expansion was not equal, because the starting points were not equal, and the tool amplified the starting points. The first developer's benefit compounded. The second developer's benefit was real but isolated, a one-time improvement rather than the beginning of a compounding cycle.

This is not a hypothetical. The data on technology adoption consistently shows that the returns to technology investment are higher for those who already possess greater capacity. The World Bank's 2016 World Development Report: Digital Dividends documented this pattern across dozens of countries and technology types: "Digital technologies have spread rapidly in much of the world. Digital dividends — the broader development benefits from using these technologies — have lagged behind." The report found that the dividends flowed disproportionately to those who were already skilled, already connected, and already embedded in functioning institutions.

Erik Brynjolfsson and Andrew McAfee, in The Second Machine Age, documented the same dynamic in the American economy: technology-driven productivity gains were captured disproportionately by high-skill workers and capital owners, producing what they called the "great decoupling" between productivity and median wages. The economy grew. The average worker did not benefit proportionally. The gains accumulated at the top, where the capacity to leverage the technology was greatest.

The AI era accelerates this dynamic because AI is a more powerful amplifier than any technology that preceded it. The previous amplifiers, personal computers, the internet, mobile devices, amplified specific capabilities: computation, information access, communication. AI amplifies the meta-capability, the ability to produce complex intellectual work. It amplifies the thing that was previously the human bottleneck. And because the human bottleneck was the thing that differentiated high-capacity workers from low-capacity workers, amplifying it amplifies the differentiation.

A concrete illustration from the domain Segal knows best: software development. Before AI coding assistants, the productivity gap between a senior engineer and a junior engineer was substantial but bounded. The senior engineer might produce three to five times the output of the junior, adjusted for quality. The gap reflected differences in experience, judgment, and architectural knowledge, all of which had been developed through years of practice within functioning institutions.

Claude Code does not eliminate this gap. Early evidence suggests it widens it. The senior engineer uses Claude Code to express architectural judgment that previously required days of implementation. The tool removes the mechanical bottleneck and reveals the judgment layer, which is where the senior engineer's advantage lives. The junior engineer uses the same tool to produce more code, faster, but the code reflects junior-level judgment: functional but not architecturally sound, working but not robust, solving the stated problem without anticipating the unstated problems that the senior engineer's experience has taught her to see.

The tool made both faster. It made the senior engineer disproportionately more effective, because the senior engineer's advantage was in the judgment that the tool liberated rather than in the implementation that the tool replaced. The junior engineer's advantage was primarily in implementation speed, which the tool commoditized. The net effect: the gap widened.

This pattern, the amplification of advantage through disproportionate returns to those who already possess greater capacity, is what Toyama's framework predicts. It is what the history of technology consistently demonstrates. And it is what the AI moment is beginning to produce, visible already in the early data on AI-augmented productivity and earnings differentials, certain to intensify as the tools improve and the compounding dynamics have more time to operate.

The implications extend beyond individual careers to national and regional development. Nations with strong educational systems, functioning institutions, and deep reservoirs of technical talent will extract disproportionate value from AI. Nations without these foundations will fall further behind. The technology will be available to both. The amplification will be faithful to both. And the outcomes will diverge, because the inputs diverge, and the tool amplifies the inputs.

Toyama has been explicit about this trajectory. Writing in The Conversation, he predicted that AI would cause "creative-class people with human-only skills" to "become richer but fewer in number" while "those who own creative technology will become the new mega-rich." The prediction is structurally identical to the pattern documented by Brynjolfsson and McAfee: technology-driven gains accumulating at the top, where capacity is highest and where the compounding dynamics are strongest.

The Matthew Effect does not require malice. It does not require conspiracy. It requires only a tool that amplifies existing capacity in a world where existing capacity is unequally distributed. The mathematics handles the rest. And the only force that has ever counteracted the Matthew Effect is deliberate, sustained, costly investment in building the capacity of those who have the least. Not distributing tools, though tools should be distributed. Building the foundations that tools amplify: educational systems that develop cognitive capacity, institutions that provide quality standards and mentoring, market mechanisms that connect effort to opportunity, and the cultural infrastructure that makes all of it cohere.

The amplifier is neutral. The amplification is not. The neutrality of the tool and the non-neutrality of the outcome form the central paradox of technology-mediated development, and it is the paradox that the celebration of the tool was designed, consciously or otherwise, to obscure.

---

Chapter 7: When the Tool Amplifies Dysfunction

At three in the morning, somewhere over the Atlantic, Edo Segal catches himself. He is not writing because the book demands it. He is writing because he cannot stop. "The muscle that lets me imagine outrageous things," he writes, "the muscle I celebrate, the muscle I train my teams to develop, had locked." The exhilaration has drained away. What remains is "the grinding compulsion of a person who has confused productivity with aliveness."

This is an extraordinary confession, made more extraordinary by the fact that the person making it recognizes the pathology while remaining inside it. Segal does not close the laptop. He keeps writing. And he tells the reader he keeps writing, which means the confession is honest but the behavior is unchanged. The diagnosis is precise. The treatment is absent. The tool is still open, still responding, still amplifying whatever he feeds it at three in the morning over an ocean where no one can remind him that sleep is a biological requirement rather than a productivity failure.

Toyama's law has a corollary that the technology industry finds especially uncomfortable: technology amplifies dysfunction as faithfully as it amplifies function. The amplifier does not evaluate what it is given. It does not distinguish between a signal that represents genuine creative urgency and a signal that represents compulsive overwork dressed in the vocabulary of inspiration. It amplifies both. The output in both cases is more: more code, more text, more product, more of whatever the person feeds into the tool, rendered at a speed that outpaces the person's capacity to evaluate whether the output deserves to exist.

The Berkeley study that Segal discusses in The Orange Pill documented this dynamic with empirical precision. Workers who adopted AI tools worked faster, took on more tasks, expanded into domains that previously belonged to other people. Work seeped into pauses. Lunch breaks became prompting sessions. Elevator rides became iteration cycles. The micro-rests that had previously punctuated the workday, the brief cognitive respites that neuroscience has shown are essential for consolidation and creative incubation, were colonized by a tool that was always available and always responsive.

The researchers called it task seepage, and the term captures something important about the mechanism. It is not that the tool forces work into rest. No manager is standing over the employee demanding that she prompt during lunch. The seepage is internal. It is the achievement subject that Byung-Chul Han describes, the internalized imperative to optimize, to produce, to convert every available minute into output. The tool did not create this imperative. The imperative was already there, deposited by decades of professional culture that equates visible productivity with value. The tool amplified it, removing every friction between impulse and execution, making it possible to act on the compulsion in the time it takes to type a sentence.

This is the amplification of dysfunction. Not the dysfunction of the tool, which is working exactly as designed. The dysfunction of the human pattern it is amplifying: the pattern of compulsive work, the inability to rest, the confusion of output with meaning, the substitution of productivity metrics for the harder question of whether the thing being produced is worth producing.

Segal recognizes this. He writes about the "specific grey fatigue" of compulsion, distinguishing it from the energizing exhaustion of genuine flow. He describes learning to read the signal: "When I am in flow, I ask generative questions. When I am in compulsion, I am answering demands, clearing the queue, optimizing what already exists." The distinction is real and important. But it is also a distinction that requires a level of self-awareness that is itself a form of developed capacity. Not everyone who is caught in the amplification of compulsion has the vocabulary to name it, the self-knowledge to recognize it, or the structural permission to stop when they identify it.

The amplification of individual dysfunction is troubling. The amplification of organizational dysfunction is dangerous. Toyama's fieldwork documented this with a consistency that left little room for alternative interpretation: organizations with dysfunctional processes that adopted technology produced the same dysfunctional outcomes, faster and at larger scale. A bureaucracy that processed applications slowly and inequitably, given a digital processing system, processed applications faster and just as inequitably. The technology did not fix the inequity. It accelerated it. A school administration that evaluated teachers using superficial metrics, given data analytics tools, produced more sophisticated superficial evaluations. The tools generated impressive dashboards. The dashboards measured the wrong things. The measurements were now more precise, more granular, more beautifully visualized, and no closer to capturing what actually mattered about teaching quality.

Applied to the AI moment, the implications are direct. Consider an organization that has poor product judgment, that builds features no one needs, that prioritizes engineering elegance over user value, that ships fast without understanding who it is shipping to or why. This organization adopts AI tools. What happens?

The organization builds the wrong things faster. It ships features no one needs at unprecedented speed. It generates more code, more products, more output, all of it reflecting the same poor judgment that characterized the organization before the tools arrived, now amplified by a factor of twenty. The quarterly metrics look spectacular. The dashboard shows more commits, more deployments, more velocity. The underlying pathology, the absence of product judgment, the disconnection between what is being built and what users need, is invisible to the metrics and amplified by the tools.

This is not a theoretical concern. It is the pattern that Toyama documented across every category of technology deployment, and there is no reason to believe AI will be the exception. AI is a more powerful amplifier. A more powerful amplifier applied to dysfunction produces more powerful dysfunction. The mathematics is straightforward. The consequences are not.

The amplification of educational dysfunction deserves particular attention, because education is the domain where the foundations that determine all subsequent amplification are built. Segal calls for educational reform, for teaching students to ask questions rather than produce answers, for developing judgment rather than proficiency. The call is right. But the institutions being called upon to reform are themselves subject to the Law of Amplification.

An educational system that teaches to the test, that evaluates students on their capacity to produce correct answers rather than their capacity to ask generative questions, that rewards compliance over curiosity, will adopt AI tools and amplify every one of these tendencies. The tests will become more sophisticated. The students will use AI to produce more correct answers, faster, on more tests. The evaluation system will generate more data, more precisely, about the wrong things. The educational system will become a more efficient machine for producing the same shallow outcomes it was producing before, now with AI-enhanced throughput and AI-generated dashboards showing impressive metrics that measure everything except learning.

Toyama observed this in educational deployments throughout South Asia: technology adopted by educational systems that did not know what good education looked like did not produce good education. It produced faster, better-measured, more impressively presented bad education. The technology amplified the system's existing definition of quality, which was flawed, and the amplification made the flaw harder to see, because the outputs looked more sophisticated.

The amplification of dysfunction is not limited to developing countries. Toyama has been careful to note that the Law of Amplification is universal, not geographically specific. A dysfunctional school in suburban America that adopts AI tools will amplify its dysfunction just as faithfully as a dysfunctional school in rural Bihar. The dysfunction may take different forms: in one case, teaching to standardized tests; in another, chronic understaffing; in a third, administrative bloat that consumes resources without improving outcomes. But the mechanism is identical. The tool amplifies what is there. If what is there is dysfunctional, the tool amplifies the dysfunction.

Segal's own confession about compulsive work is, in Toyama's framework, evidence for the law rather than against it. The tool amplified Segal's creative energy, which produced extraordinary output. The tool also amplified Segal's compulsive tendencies, which produced three-in-the-morning writing sessions where the exhilaration had drained away and what remained was the grinding continuation of a pattern that had become self-sustaining. Both amplifications were faithful. Both were real. The tool did not distinguish between them, because amplifiers do not distinguish. They amplify.

The response to the amplification of dysfunction is not to reject the tools. Toyama has been clear about this, and the point deserves emphasis because his argument is routinely caricatured by those who find it inconvenient. The response is to fix the dysfunction before or alongside the deployment of the tools. To invest in the organizational processes, the educational quality, the individual self-awareness, and the institutional structures that determine whether the amplification produces flourishing or compounded failure.

This investment is harder, slower, more expensive, and less photogenic than distributing tools. It requires engaging with the human and institutional complexity that the technology industry's business model is designed to bypass. It requires admitting that the hard problem is not the engineering. The hard problem is the human system that the engineering amplifies. And admitting this requires a humility that is structurally incompatible with the narratives of disruption and transformation that the technology industry uses to raise capital, attract talent, and justify its social position.

Toyama acquired this humility the hard way: through years of watching deployments fail not because the technology was insufficient but because the human system was insufficient and no amount of technological improvement could compensate for the deficiency. The humility is not comfortable. It does not fit on a slide deck. It does not inspire venture capital. But it is what the evidence demands, and ignoring the evidence in favor of a more inspiring narrative is the geek heresy that Toyama named and that the AI moment is repeating at greater scale and greater speed than any previous technological wave.

---

Chapter 8: Dams Without Foundations

In The Orange Pill, Segal calls for the building of dams, institutional structures that redirect the flow of AI capability toward human flourishing. The metaphor is vivid: the beaver in the river, sixty pounds of determination against a current that would otherwise sweep everything downstream. The beaver does not stop the river. It builds structures that create pools of still water where life can take root. The dam requires constant maintenance. The river tests every joint. The beaver repairs what the current loosens.

The call is necessary. Without institutional structures, without regulatory frameworks, without professional norms and educational reforms and cultural practices that shape how AI is used, the technology will be governed entirely by the market, which is to say by the interests of the people who build and sell the tools. Segal understands this. He calls for AI Practice in organizations, for educational reform, for attentional ecology, for the kind of sustained institutional investment that turns a powerful technology into a beneficial one. The call is sincere, and it is right.

But Toyama's framework forces a question the call does not adequately answer: Who builds the dams? With what resources? On whose behalf? And what happens to the people who live downstream of the dam-building process but have no voice in its design?

The history of technology governance is a history of dams built by the powerful for the powerful. The observation is not cynical. It is empirical. Every major technology governance framework of the past three decades has been designed by institutions that represent the interests of developed nations, large corporations, and the professional classes that staff both. The EU AI Act, the most comprehensive regulatory framework for artificial intelligence in the world, was designed by European policymakers for European citizens and European companies. Its protections are real. Its jurisdiction is limited. The student in Dhaka is not protected by the EU AI Act. The developer in Lagos is not represented in the regulatory process that produced it. The communities whose foundations are weakest, whose need for the dam is greatest, are the communities with the least voice in the dam's design.

This is not a failure of intention. The EU regulators were not trying to exclude the global South from the benefits of AI governance. They were responding to the interests and needs of their constituents, which is what democratic governance is supposed to do. The structural problem is not malice. It is scope. The dams are built within jurisdictions. The river flows across them.

Corporate AI governance frameworks exhibit the same structural limitation at a different scale. Companies that adopt responsible AI practices, that build ethics boards, that conduct impact assessments, that establish use policies and deploy technical safeguards, are building dams within their organizations. These dams serve the company's interests, which may align with broader social interests but are not identical to them. A company that prohibits discriminatory uses of its AI tools within its platform is building a real dam. The dam does not extend to the uses of AI that occur outside the platform, in the hands of users and organizations whose contexts and capacities the company does not control and cannot monitor.

The most sophisticated corporate AI governance frameworks are, in Toyama's framing, amplifications of existing institutional capacity. Companies with strong ethical cultures, experienced compliance teams, and genuine leadership commitment to responsible practices produce governance frameworks that are substantive. Companies that adopt governance frameworks as performative exercises, checking boxes to satisfy regulators or public relations requirements without changing the underlying practices, produce governance that amplifies the existing superficiality.

The dam is only as strong as the institution that builds it. And the institutions that are most capable of building strong dams are the institutions that least need them, because their existing capacity would have produced responsible practices with or without the formal framework. The institutions that most need the dams, the ones with weak internal capacity, poor ethical cultures, and leadership that treats governance as a cost center rather than a strategic function, produce dams that are cosmetic rather than structural.

Toyama encountered this dynamic repeatedly in his development work. The most common response to a failed technology deployment was to add another layer of technology: if the kiosk is not working, add a monitoring system. If the monitoring system is not working, add a reporting dashboard. Each layer was a dam of sorts, an institutional structure designed to ensure that the technology was being used as intended. And each layer failed for the same reason the original deployment failed: the human and institutional capacity to maintain the layer was absent. The monitoring system required someone to check it. The reporting dashboard required someone to read it and act on what it showed. The dams required the same capacity they were supposed to substitute for.

Segal's prescription for organizations, to build "AI Practice" through structured pauses, sequenced workflows, and protected mentoring time, is sound. But the prescription assumes an organizational culture that is already capable of valuing these practices. An organization that already understands the importance of deep thought, mentoring, and reflective practice will implement AI Practice effectively, because the values that make AI Practice work are the values the organization already holds. The implementation will amplify existing organizational wisdom.

An organization that does not value these things, that rewards visible productivity over reflective practice, that treats mentoring as a cost rather than an investment, that equates speed with quality, will implement AI Practice as another mandate to be checked off and forgotten. The structured pauses will be colonized by urgent tasks. The sequenced workflows will be overridden by deadline pressure. The protected mentoring time will be sacrificed to quarterly objectives. The implementation will amplify existing organizational dysfunction.

The prescription is right. The preconditions for the prescription to work are the preconditions that the prescription was supposed to create. The circularity is not a flaw in Segal's reasoning. It is a structural feature of every institutional reform that relies on the institutions being reformed to implement the reform. And it is the structural feature that Toyama's work illuminates with the most uncomfortable clarity.

National AI strategies exhibit the same pattern at a larger scale. Nations with strong educational systems, well-functioning regulatory institutions, and deep reservoirs of technical talent develop AI strategies that are sophisticated, well-resourced, and likely to produce positive outcomes, because the foundations that the strategies amplify are strong. Nations without these foundations develop AI strategies that are ambitious in rhetoric and limited in execution, because the institutional capacity to implement the strategy is the institutional capacity the strategy was supposed to build.

The United States, China, the United Kingdom, and a handful of other nations with deep technological infrastructure and strong institutional capacity are positioning themselves to capture the lion's share of AI's benefits. Their national strategies are backed by educational systems that produce world-class researchers, by corporate ecosystems that translate research into products, and by capital markets that fund the transition from laboratory to market. Their dams are structural. They will hold because the foundations are deep.

Nations in the global South are developing AI strategies with fewer resources, weaker institutions, and less capacity to execute. Their strategies often focus on tool access, on building data centers and training programs, on distributing the technology as widely as possible. These are reasonable steps. They are also insufficient, because the tool access is the cheap part. The institutional capacity that makes tool access productive is the expensive part, and it is the part that is hardest to build quickly and most consistently underfunded.

The global AI governance landscape, taken as a whole, is a landscape of unequal dam-building capacity. The dams that get built are the dams that the most capable institutions build. The communities that most need dams, the communities where the foundations are weakest and the risk of harmful amplification is greatest, are the communities with the least capacity to build them. The river flows through all of them. The dams protect some of them.

Toyama would not reject the call to build dams. The call is necessary and urgent. What Toyama's framework adds is the insistence that the call, without investment in the dam-building capacity of those who need the dams most, reproduces the inequality it is supposed to address. Building dams for the already-well-served while leaving the under-served to the unmediated force of the river is not governance. It is the amplification of existing governance capacity, which is distributed as unequally as every other form of capacity.

The response is not despair. The response is the recognition that the dam and the foundation must be built together, that the investment in institutional capacity is not a prerequisite that must be completed before the technology arrives but a parallel process that must proceed alongside it, and that the voices of those who most need the dams must be present in their design. The technology is already flowing. The dams must be built in the water, not on dry land. But they must be built with the participation of everyone who will live behind them, not only those who already possess the resources to build for themselves.

The beaver metaphor is powerful. The beaver builds in the river, not above it. The beaver maintains what it builds, constantly, against the current. But the metaphor leaves one thing unexamined: the beaver builds for its own pool. The ecosystem that benefits is the ecosystem immediately behind the dam. Downstream, the river flows on, unimpeded. The question for the AI era is whether the dams will be built only by those who can build for themselves, creating pools of flourishing surrounded by unimpeded current, or whether the dam-building itself will be a shared project, invested in and designed by all the communities the river runs through.

The answer to that question is not a technology question. It is a political question, an educational question, a question of resource allocation and institutional design and the willingness of the powerful to invest in the capacity of the less powerful. Technology cannot answer it. Technology can only amplify whatever answer human beings provide.

Chapter 9: What Genuine Democratization Requires

The word "democratization" has been applied to technology so many times that it has lost most of its meaning. Every platform launch, every price reduction, every new tool that reaches a wider audience than the previous version is called a democratization. The word has become a marketing term, deployed to generate goodwill and deflect scrutiny, a linguistic signal that the speaker is on the right side of history without requiring any evidence about what the history actually produces.

Toyama's framework allows the word to be recovered, because the framework provides a precise distinction between what democratization means and what it requires. Democratization does not mean that everyone has the tool. Democratization means that everyone has the capacity to use the tool in ways that produce meaningful outcomes. The first is distribution. The second is development. The technology industry is very good at the first. It is structurally indifferent to the second. And the gap between the two is the gap through which every previous democratization claim has leaked its moral substance.

What would genuine democratization of AI capability require? The answer, derived from Toyama's decades of field research and the broader evidence on technology and development, is specific enough to be actionable and expensive enough to be uncomfortable.

The first requirement is educational investment at a scale and depth that the current discourse does not contemplate. Not training in tool use. Training in the cognitive capacities that make tool use productive: problem decomposition, evaluative judgment, systems thinking, the ability to distinguish between an output that looks correct and an output that is correct, the capacity to ask questions that no tool can generate. These capacities are developed through years of structured engagement with increasingly complex problems, guided by skilled educators who possess the capacities themselves and can model them for students who are developing them.

The scale of this investment dwarfs the cost of the tools. A Claude Code subscription costs one hundred dollars per month. Training a teacher who can develop evaluative judgment in students requires years of education, ongoing professional development, a salary sufficient to retain her in the profession, and an institutional context that supports her work. Training a generation of students in the cognitive capacities that make AI tools productive requires an educational system that values those capacities, that has the institutional infrastructure to develop them, and that is staffed by people who possess them. This is not a twelve-week bootcamp. It is a generational investment in human capital.

Toyama's fieldwork documented the consequences of attempting the shortcut: distributing tools without building the educational capacity to use them. The consequences were consistent. The tools amplified whatever the educational system had already produced. Where the system had produced genuine capability, the tools were transformative. Where the system had produced credentials without corresponding capacity, the tools amplified the gap between the credential and the capability. The students who could already think critically used the tools to think more critically. The students who could not think critically used the tools to produce more sophisticated-looking output that reflected the same absence of critical thought.

The second requirement is institutional infrastructure that extends beyond the educational system into the professional ecosystems where tool use translates into productive capability. This means mentoring networks that connect people who are developing their capacity with people who have already developed it. It means quality standards that define what good work looks like in AI-augmented practice, standards that are communicated not through documentation alone but through the kind of apprenticeship relationship that has historically been the most effective mechanism for transmitting professional judgment. It means feedback loops that allow practitioners to learn from their errors, which requires someone who can identify errors and explain why they are errors, which requires the institutional infrastructure that produces such people.

In the developed technology ecosystem, this infrastructure exists and is largely invisible. A junior engineer at Google learns professional standards not primarily through documentation but through code review, through mentoring, through the ambient professional culture of an organization that has spent decades codifying what good practice looks like. The institutional infrastructure is the medium through which professional capacity is transmitted from one generation of practitioners to the next. It is not a product. It cannot be shipped. It requires human beings in sustained relationship with other human beings, which requires institutions capable of supporting those relationships over time.

The student in Dhaka who receives a Claude Code subscription without access to this institutional infrastructure is in the position of a person given a surgical instrument without access to a medical school, a hospital, or a senior surgeon who can demonstrate what good surgery looks like. The instrument is real. The incision it makes is real. The question of whether the incision is made in the right place, at the right depth, with the right follow-through, is a question the instrument cannot answer. Only the institutional infrastructure that develops surgical judgment can answer it, and that infrastructure does not fit in a subscription.

The third requirement is market infrastructure that connects capability to opportunity. Building a product is one step. Reaching users is another. Reaching users at a scale that generates sustainable revenue is another still. Each step requires infrastructure that is not included in the tool: distribution channels, payment systems, customer support mechanisms, the legal and regulatory knowledge necessary to operate in different markets, and the professional networks through which opportunities are brokered.

The technology industry tends to assume that good products find their markets. This assumption reflects the experience of people who operate at the center of a market that is designed to find them. For a developer in San Francisco, the market is ambient. It is in the conferences and meetups she attends, the online communities she participates in, the professional networks that bring opportunities to her attention without her having to seek them. For a developer in Dhaka, the market is remote. It exists, but the connections between her work and the market's demand run through channels she may not have access to, may not know exist, and may not be equipped to navigate.

Market infrastructure is not equally distributed, and the distribution of market infrastructure is not addressed by distributing tools. A genuine democratization agenda would invest in the connective tissue between capability and opportunity: in platforms that connect builders in underserved markets with users and customers, in mentoring programs that teach the non-technical skills necessary to navigate global markets, in funding mechanisms that recognize potential in contexts where the signals of potential look different from what Silicon Valley has been trained to reward.

The fourth requirement, and the one that receives the least attention because it is the most difficult to operationalize, is investment in the cultural foundations that support sustained creative engagement. Culture, in this context, means the shared norms, values, and practices that shape how people approach work, learning, and professional development. It means the expectation of quality. It means the tolerance for failure that is necessary for learning. It means the balance between ambition and reflection that sustains long-term development rather than burning out in a sprint.

These cultural foundations are not universal. They vary across communities, nations, and professional ecosystems. And they are not created by technology. They are created by families, schools, communities, and institutions that model the norms and transmit them across generations. A genuine democratization of AI capability would invest in the cultural contexts that produce the human qualities the tools amplify: curiosity, discipline, critical thinking, the willingness to persist through difficulty, and the judgment to know when persistence has become compulsion.

Toyama's prescription is clear and has been consistent across his career: invest in the human foundations first, distribute the tools alongside but not instead of the foundations, and measure success not by the number of people who have access to the tool but by the number of people who can use the tool to produce outcomes that improve their lives and the lives of their communities.

The prescription is expensive. It is slow. It does not scale in the way the technology industry understands scaling, because human development does not scale the way software scales. Each person's development requires individual attention, institutional support, and time. The technology can be copied at zero marginal cost. The human capacity it amplifies cannot.

This is the uncomfortable truth at the center of Toyama's work, and it is the truth the technology industry most consistently avoids: the most impactful investment in the AI era is not the AI. It is the human being the AI amplifies. And investing in human beings, really investing, at the depth and scale that genuine democratization requires, is harder, slower, and more expensive than any technology deployment the industry has ever undertaken.

The alternative, distributing tools without investing in foundations, is the path of least resistance and the path of maximum inequality. It produces formal democratization and substantive stratification. It celebrates access while neglecting capability. And it repeats, at greater scale and greater speed, the pattern that Toyama documented across every previous technology transition: the tools reach everyone, the benefits accrue to those who were already best positioned to benefit, and the gap between the celebration and the reality widens with each iteration.

Genuine democratization requires the courage to invest in the hard thing. Not the tool. The person.

---

Chapter 10: The Necessary Human Investment

Toyama's body of work converges on a single, uncomfortable conclusion: the most important variable in any technology deployment is not the technology. It is the human and institutional capacity that the technology amplifies. This conclusion, arrived at through years of field experience rather than theoretical speculation, has implications for the AI moment that extend well beyond the question of whether to adopt the tools or how to govern them. The implications reach into the question of what kind of society is required to absorb the most powerful amplifier in human history without being distorted by it.

Segal asks the right question in The Orange Pill: "Are you worth amplifying?" The question contains the essential insight of the amplification framework. The tool carries whatever it is given. The quality of the output depends on the quality of the input. Character matters. Judgment matters. The care and craft that a person brings to their work determines whether the amplification produces something worth producing.

But the question, framed as individual, conceals a structural reality. The individual who is "worth amplifying" did not arrive at that condition spontaneously. She was educated, formally or informally, by systems and people who invested in her development. She was mentored by practitioners who shared their judgment, their mistakes, their hard-won understanding of what quality looks like and what shortcuts cost. She was supported by institutions that provided the feedback loops, the quality standards, and the professional norms that converted her raw potential into developed capacity. She was embedded in a community that valued the qualities the amplifier rewards: curiosity, discipline, the willingness to persist, the judgment to know when to stop.

None of these conditions are given. All of them are built. They are built by families, schools, communities, institutions, and the cultural norms that shape how societies invest in their members. They are built over years and decades, not quarters and sprints. And they are built unevenly, distributed according to the same structural inequalities that shape every other form of capacity in every society on earth.

The question "Are you worth amplifying?" is therefore not merely a personal challenge. It is, at a deeper level, a question about the society that produced you. Did the society invest in the educational systems that develop the capacity to think critically, decompose problems, evaluate outputs, and ask generative questions? Did it build the institutional infrastructure that transmits professional judgment from one generation to the next? Did it create the economic conditions that allow people to invest in their own development rather than spending every available hour on survival? Did it cultivate the cultural norms that value depth, quality, and reflection alongside speed, output, and growth?

If the answer is yes, the amplification will be spectacular. Toyama's fieldwork documented this with the same consistency as the negative cases. Where the foundations were strong, the technology produced outcomes that exceeded anyone's expectations. The well-functioning schools became extraordinary schools. The well-staffed clinics became transformative health care providers. The organizations with strong cultures became organizations that operated at a level their previous capacity could not have reached. The amplification, in these cases, was everything the technology optimists claimed it would be.

The optimists were not wrong about the amplification. They were wrong about the distribution of the foundations that the amplification requires.

The investment in human capital that genuine democratization requires operates at every level of the social system. At the level of the individual, it means educational experiences that develop the cognitive capacities the tools reward: not programming skills, which the tools increasingly provide, but the judgment, creativity, and evaluative capacity that determine whether the tool's output deserves to exist. At the level of the institution, it means organizational cultures that value mentoring, quality, and reflective practice, and that invest in these values even when the quarterly metrics reward speed and volume instead. At the level of the nation, it means educational systems designed to develop human capacity rather than credentialed compliance, and governance frameworks that invest in the demand side, in the capacity of citizens to use the tools wisely, not only the supply side, in the regulation of what the tools may do.

Toyama has been explicit about the scale of this investment. In his Harvard Center for Research on Computation and Society talk in 2024, he argued that the Law of Amplification "leads to recommendations for anyone intent on ensuring that AI has positive impact," and that those recommendations center on building the human and institutional capacity that the technology amplifies. The recommendations are not about the technology. They are about the people and institutions the technology will serve.

The technology industry's current approach to the AI transition inverts this priority. It invests overwhelmingly in the supply side: in building more powerful models, in distributing tools more widely, in reducing costs, and in expanding access. These investments are not wrong. More powerful tools, wider distribution, lower costs, and expanded access are all genuinely valuable. But they are the easy investments, the investments the industry knows how to make, the investments that produce the metrics the market rewards, and the investments that, absent corresponding investment in the demand side, will amplify existing inequality rather than reducing it.

The hard investment is in the demand side. In the educational systems that develop human capacity. In the institutional infrastructure that transmits professional judgment. In the cultural norms that value depth and quality. In the mentoring relationships that convert raw talent into developed capability. These investments do not scale in the way the technology industry understands scaling. They cannot be copied at zero marginal cost. They require sustained, patient, expensive commitment to human development over timescales that the quarterly reporting cycle cannot accommodate.

And yet these are the investments that determine whether the amplifier produces flourishing or dysfunction. The tool is the cheapest component of the system. The human being is the most expensive and the most determinative. A world that invests billions in the tool and pennies in the person will produce a world where the tool amplifies existing inequality with unprecedented efficiency. A world that invests in both, in the tool and in the person the tool amplifies, earns the amplification it receives.

Toyama compared AI to nuclear power, and the comparison illuminates the investment question. Nuclear technology is powerful. It can generate clean energy. It can also produce weapons and contaminate environments for millennia. The technology does not determine the outcome. The institutional infrastructure that surrounds the technology determines the outcome: the regulatory frameworks, the safety standards, the training programs, the inspection regimes, the cultural norms of the nuclear industry. These institutional investments are orders of magnitude more expensive than the technology itself, and they are the investments that determine whether nuclear power produces electricity or catastrophe.

AI requires the same ratio of institutional investment to technological investment. The technology is here. It will improve. It will become more powerful, more accessible, more capable. The question that will determine whether the AI era produces broad human flourishing or concentrated advantage is not a technology question. It is an investment question: Will societies invest in the human and institutional capacity that determines what the technology amplifies?

The tools are not the answer. They never were. Toyama learned this in Karnataka, in Andhra Pradesh, in Uttar Pradesh, in every deployment where the technology worked perfectly and the outcomes diverged according to the human and institutional capacity that surrounded it. The tools are amplifiers. The answer is what we choose to amplify. And that choice, which is a choice about investment in human beings rather than investment in technology, is the choice on which the entire promise of the AI era depends.

The amplifier is ready. The question is whether we will invest in making human beings worth amplifying, not as individuals who must pull themselves up by their bootstraps, but as members of societies that take responsibility for developing the capacity of all their members. The tool is one hundred dollars per month. The foundation is a generation's worth of educational, institutional, and cultural investment. The tool is the easy part.

It was always the easy part.

---

Epilogue

The fulcrum is what I keep thinking about.

Not the lever. The lever is magnificent. I said so in The Orange Pill and I meant it. The twenty-fold multiplier in Trivandrum was real. The feeling of building Napster Station in thirty days was real. The exhilaration of watching my engineers reach across disciplinary boundaries they had never crossed was real, and I would not trade it, and I would not want to live in a world where that capability had not arrived.

But Toyama forced me to look at the other half of the equation. Not the lever. The fulcrum. The thing the lever rests on. The thing that determines whether the lever moves the world or just pivots in empty air.

My engineers in Trivandrum had fulcrums. Years of professional experience. Institutional infrastructure that caught their errors and refined their judgment. A company that provided goals, quality standards, feedback loops, and the trust that comes from building together through difficulty. The tool amplified all of that. The amplification was extraordinary because the fulcrum was deep.

The student in Dhaka, the developer in Lagos — I wrote about them as though the lever was the story. Toyama made me see that the fulcrum is the story, and it is the story I told too briefly, in a paragraph of qualifications inside a chapter of celebration.

What unsettles me most is not that Toyama disagrees with my thesis. He does not disagree with it. He accepts every empirical claim I make. The tools are powerful. The floor has risen. The capability expansion is real. He accepts all of it and then asks the question I skated past: What determines who can stand on that risen floor and build, and who stands on it and finds that the surface, though higher, offers no more traction than the ground they were standing on before?

The answer is everything that happened before the tool arrived. The education. The mentoring. The institutional support. The cultural capital. The accumulated human investment that the tool did not create and cannot substitute for. The amplifier amplifies what it finds. If what it finds is deep, the amplification is transformative. If what it finds is shallow, the amplification is cosmetic. The tool does not know the difference. Only the foundation does.

I said in The Orange Pill that the question is "Are you worth amplifying?" I still believe that. But Toyama taught me that the question has a prior question, one I was not asking: What conditions produce people worth amplifying, and are we investing in those conditions with anything approaching the urgency with which we are distributing the amplifier?

We are not. I know this because I live inside the industry that is distributing the amplifier, and I watch the investment flows, and the investment flows are overwhelmingly toward the tool and away from the person. Billions for models. Pennies for the educational systems that determine whether the models produce empowerment or frustration. The ratio is wrong, and correcting it requires a kind of patience that my industry does not possess and that the quarterly reporting cycle structurally prohibits.

I am not going to pretend that reading Toyama made me a different person. I am still the builder who cannot stop building. I still believe the tools are extraordinary. I still believe the expansion of who gets to build is morally significant. But I now believe something I did not fully believe before: that the expansion, without corresponding investment in the foundations the tools amplify, will produce the exact pattern Toyama documented across every previous technology transition. The tools will reach everyone. The benefits will accumulate where the foundations are deepest. The gap will widen. And the celebration of the tool will obscure the widening, because the celebration is louder and more photogenic than the foundation-building that would make the celebration true.

The lever is real. I built with it. I watched it work.

The fulcrum is what makes it matter.

-- Edo Segal

The AI revolution promises democratization -- tools so powerful they level the playing field for anyone with an idea and a subscription. Kentaro Toyama spent a decade watching identical promises fail. Not because the technology failed. Because the playing field was never level, and amplifiers do not level. They amplify.

Through ten chapters grounded in Toyama's Law of Amplification and the arguments of Edo Segal's The Orange Pill, this book traces what happens when the most capable tools in human history meet the real world -- where education is uneven, institutions vary wildly, and the distance between formal access and substantive capability is the distance between a subscription and a life. The lever is extraordinary. The fulcrum determines everything.

This is not a case against AI. It is a case for investing in the only thing that makes AI matter: the human being on the other end of the prompt.

-- Kentaro Toyama, Geek Heresy

