In 2004, Toyama's team at Microsoft Research India deployed personal computers with educational software in schools across rural Karnataka. The technology was identical across schools. The software was well-designed. The deployment followed best practices. And the outcomes diverged sharply along a single axis: the capacity of the teachers who received the machines. In schools with capable, motivated teachers, learning outcomes improved measurably. In schools without such teachers, the computers gathered dust, became distractions, or in some cases produced measurable declines in learning as time previously spent on instruction was displaced by poorly supervised machine use. The pattern, observed across dozens of schools, crystallized into the law that has structured Toyama's subsequent career.
The study's importance lies not in the finding itself — that good teachers plus technology produces better outcomes than weak teachers plus technology — but in the specific structure of the finding. Before the deployment, the optimistic hypothesis had been that technology could compensate for weak teaching, that the computer could become a pedagogical partner that raised the floor of what students learned regardless of the teacher's capacity. The deployment refuted this hypothesis definitively. The computer did not raise the floor. It amplified the ceiling. Where the ceiling was high, the computer made it higher. Where the ceiling was low, the computer did nothing, and in some schools it pushed outcomes lower still.
The study's methodology was robust enough to resist the usual objections. It was not a single anecdote but a pattern across dozens of schools. It controlled for the obvious confounders: hardware identical, software identical, training identical. The variable that explained the outcome was the teacher capacity that preceded the deployment. Subsequent deployments in Uttar Pradesh (health care), Andhra Pradesh (agriculture), and across South Asia produced the same pattern in different domains. The generalization was not made from a single case but from a replicated pattern.
The study's implications for AI are direct. An AI tool distributed to schools with capable teachers will amplify their teaching; distributed to schools without them, it will not produce teachers. The same logic applies to every other sector. An AI diagnostic tool in a well-staffed hospital will improve diagnosis; in an understaffed hospital, it will produce diagnoses that cannot be acted upon. An AI advisory tool in a functioning agricultural extension service will improve advisory quality; in a non-functioning service, it will produce recommendations that reach no farmer. The machine is faithful. The context determines the outcome.
The study also illustrates a methodological point that has become central to Toyama's work: the failure of technology deployments is often invisible to the institutions that deploy them. The Karnataka computers were counted, their distribution was reported, their specifications were documented. The null or negative outcomes in the weak schools did not register in the metrics the funders tracked. The optimistic story — computers deployed, access expanded, technology reaching rural India — coexisted with the null results because the metrics were chosen to measure distribution, not outcomes. The AI industry's current metrics exhibit the same structure and carry the same risk: celebrating distribution while missing the outcomes.
The study was conducted through Microsoft Research India from 2004 onward, following Toyama's relocation from Microsoft Research Redmond to the newly established Bangalore lab. Results were published in a series of academic papers and synthesized in Geek Heresy (2015). The finding has been replicated across subsequent ICT4D research and is now considered foundational in critical development studies.
Same technology, different outcomes. The variable that predicted outcomes was not the technology but the institutional and human capacity that received it.
No floor-raising. The computer did not compensate for weak teaching; it amplified whatever teaching was already present.
Replicated across sectors. The pattern appeared in health care, agriculture, and other domains, not only in education.
Invisible failures. The metrics that funders tracked measured distribution, not outcomes, allowing the null results to coexist with optimistic reporting.
The law's empirical foundation. The study is the origin case of the Law of Amplification and remains its clearest empirical demonstration.