Packaged interventions are the operational embodiment of solutionism. They consist of a pre-designed product or program that has been shown to work in one context, packaged for distribution at scale, and deployed in other contexts with the expectation that the packaging will preserve the effect. The logic is compelling: if it worked there, it should work here; if it worked for some, it should work for all. The logic is also, in Toyama's documentation, wrong with enough consistency to warrant a term. The intervention that worked in one context did so because of the context, not despite it. Extracting the intervention from the context and distributing it elsewhere is the category error that produces the failures Toyama's fieldwork catalogued.
The pattern appears across development sectors. In education, packaged curricula are designed in research contexts and distributed to classrooms whose teachers, students, and institutional conditions differ from those in which the curriculum was developed. The curriculum that transformed one classroom produces nothing in another, and the failure is attributed to implementation problems rather than to the category error of expecting a package to carry the context with it. In health care, packaged protocols developed in well-resourced hospitals are distributed to clinics that lack the supporting infrastructure — the pharmacy supply chains, the staff training, the follow-up mechanisms — and the protocols produce measurable harm. In agriculture, packaged farming techniques developed for research conditions are distributed to smallholders whose soil, climate, inputs, and market access differ, and the techniques fail or, in some cases, destroy livelihoods.
The AI era is producing packaged interventions at unprecedented scale. AI tools trained and validated in high-resource contexts are deployed globally with the expectation that their performance will generalize. In many domains it does; in many others, it fails in ways that track precisely the contextual differences that the packaging ignored. A medical imaging AI trained on scans from American hospitals performs worse on scans from hospitals with different imaging equipment, different patient demographics, and different imaging protocols. A language model trained primarily on English produces systematically worse outputs in low-resource languages — not because the underlying task is harder but because the training distribution does not include the contextual information the task requires.
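The degradation described above is, in machine-learning terms, distribution shift: a model "packaged" from one context carries its decision rule but not the data distribution that made the rule work. A minimal sketch with hypothetical synthetic data (not drawn from the book's cases) shows the effect: a simple threshold classifier fit in one context loses accuracy when the measurement process shifts, even though the underlying task is unchanged.

```python
# Minimal sketch (hypothetical data): a classifier "packaged" from one
# context degrades when the input distribution shifts underneath it.
import random

random.seed(0)

def sample(n, neg_mu, pos_mu, sigma=1.0):
    """Generate (feature, label) pairs from two Gaussian classes."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        mu = pos_mu if label else neg_mu
        data.append((random.gauss(mu, sigma), int(label)))
    return data

def fit_threshold(data):
    """'Train': place the threshold at the midpoint of the class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(data, thr):
    return sum((x > thr) == bool(y) for x, y in data) / len(data)

# Context A: the setting in which the intervention was validated.
train_a = sample(2000, neg_mu=0.0, pos_mu=2.0)
test_a = sample(2000, neg_mu=0.0, pos_mu=2.0)

# Context B: same task, but the measurement process adds an offset
# (different equipment, demographics, protocols).
test_b = sample(2000, neg_mu=1.5, pos_mu=3.5)

thr = fit_threshold(train_a)
print(f"accuracy in context A: {accuracy(test_a, thr):.2f}")
print(f"accuracy in context B: {accuracy(test_b, thr):.2f}")
```

The threshold itself never changes; only the context does. Accuracy in context B falls well below context A, which is the packaged-intervention failure restated as arithmetic.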
The response to these failures repeats the pattern Toyama documented: more technology. Better fine-tuning, better prompt engineering, better evaluation frameworks. Each of these may be valuable. None substitutes for the insight that the failures are not technical but categorical. The intervention does not package a context; it distributes a product. The product arrives. The context does not. The outcomes track the context.
Toyama's prescription is not to stop building packaged interventions — they have legitimate uses in contexts where the packaging conditions hold — but to stop expecting them to substitute for intrinsic growth. The intervention can amplify existing capacity; it cannot create capacity where none exists. Distinguishing between these two functions is the analytical work that determines whether a given deployment will produce benefit or waste.
The term and the framework were introduced in Geek Heresy (2015), drawing on case studies from Toyama's fieldwork and from the broader development literature. The analytical structure has roots in James C. Scott's Seeing Like a State and its account of how high-modernist planning fails when it attempts to impose abstract schemas on contexts whose specific realities the schemas cannot capture.
The packaging illusion. The assumption that a successful intervention can be extracted from its context and distributed elsewhere; extraction preserves the package but loses the context that made it work.
Failure as categorical, not technical. The failures of packaged interventions are not caused by implementation problems but by the category error of expecting packages to substitute for contexts.
AI as the current iteration. The AI industry's deployment practices repeat the packaged-intervention pattern at higher amplification, with the same structural prediction of uneven outcomes.
The alternative: amplification only. Packaged interventions can legitimately amplify existing capacity, but they cannot substitute for missing capacity, and expecting substitution is the error to avoid.
The response pattern. When packaged interventions fail, the industry default is more technology — a response that repeats the original error rather than correcting it.