The term is borrowed from medicine, where iatrogenic describes harm caused by the treatment itself: the hospital-acquired infection, the side effect of the drug. Applied to education and cognitive development, it names a specific failure mode: a learning intervention that produces the appearance of growth while undermining the development it was designed to produce. The scaffold that never withdraws is iatrogenic in precisely this sense. It produces impressive performance while potentially preventing the independent development that performance should reflect. The Bruner volume proposes the concept as the sharpest available diagnostic for AI partnership's most distinctive risk: not the failure of tools, but harm produced by their success.
The possibility of iatrogenic harm does not imply AI scaffolding should be abandoned. The Bruner volume is explicit about this. The productivity gains are real. The democratization of capability is genuine. The expansion of who gets to build matters morally and practically. But the gains and the harm are not mutually exclusive. A treatment can be genuinely beneficial and genuinely iatrogenic simultaneously, if benefits accrue in one dimension (production) and harms accumulate in another (development).
The patient feels better. The underlying condition progresses. The treatment works and the treatment damages, simultaneously, because the working and the damaging operate at different levels. This is what makes iatrogenic harm so difficult to detect. The metrics that would reveal it — independent capability over time, transfer to novel problems, metacognitive awareness — are not the metrics that drive product development or user satisfaction.
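A minimal sketch of that divergence, with hypothetical names and numbers (nothing here comes from the volume): suppose we log two series per builder, supported output, which is what satisfaction and productivity metrics track, and unaided capability, which is what a withdrawal probe would measure. A dashboard watching only the first series reports steady improvement while the second declines.

```python
# Hypothetical illustration: benefit metrics and development metrics
# live in different dimensions, so tracking one can miss the other.
# All names and numbers are invented for this sketch.

weeks = list(range(1, 9))
supported_output = [62, 68, 73, 79, 84, 88, 91, 93]    # scaffolded performance, rising
unaided_capability = [60, 59, 57, 56, 54, 51, 49, 46]  # withdrawal-probe scores, falling

def dashboard_view(series):
    """What a product metric reports: the direction of one series only."""
    return "improving" if series[-1] > series[0] else "declining"

def withdrawal_view(supported, unaided):
    """What a withdrawal probe reveals: the growing gap between
    supported performance and unaided capability."""
    gap = [s - u for s, u in zip(supported, unaided)]
    return {
        "supported": dashboard_view(supported),
        "unaided": dashboard_view(unaided),
        "scaffold_gap_growth": gap[-1] - gap[0],
    }

print(dashboard_view(supported_output))
# "improving"
print(withdrawal_view(supported_output, unaided_capability))
# {'supported': 'improving', 'unaided': 'declining', 'scaffold_gap_growth': 45}
```

The point of the sketch is structural, not numerical: both views are computed from honest data, but only the second view has access to the dimension in which the harm accumulates.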
Ivan Illich developed the concept of iatrogenesis in his 1976 Medical Nemesis, distinguishing three levels: clinical (harm from individual treatments), social (dependency on medical institutions eroding self-care), and cultural (medicalization replacing the human capacity to experience illness meaningfully). The Bruner volume's iatrogenic learning operates primarily at the first two levels: clinical harm from individual AI interactions that atrophy specific capabilities, and social harm from institutional dependency on AI infrastructure that erodes populations' unaugmented competence.
The test for iatrogenic learning is the withdrawal test. If scaffold removal reveals capability that has been internalized (the builder discovers she can do alone what she previously needed support for), the intervention was not iatrogenic. If removal reveals that the capability depended on the scaffold, or worse, that it has degraded over time under AI partnership, the intervention was iatrogenic, however impressive the supported performance.
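The test can be stated as a simple decision procedure. The sketch below is one possible formalization, with hypothetical field names and an arbitrary tolerance; the volume specifies the logic, not this interface.

```python
from dataclasses import dataclass

@dataclass
class WithdrawalResult:
    """Scores on the same task family at three points around an intervention."""
    baseline: float      # unaided performance before the scaffold was introduced
    supported: float     # performance with the scaffold in place
    post_removal: float  # unaided performance after the scaffold is withdrawn

def classify(r: WithdrawalResult, tolerance: float = 0.05) -> str:
    """Apply the withdrawal test. What matters is post-removal capability,
    not how impressive the supported performance was."""
    if r.post_removal > r.baseline + tolerance:
        return "internalized"           # capability survived withdrawal: not iatrogenic
    if r.post_removal < r.baseline - tolerance:
        return "iatrogenic (degraded)"  # worse than before the intervention began
    return "scaffold-dependent"         # gains existed only while supported

# Hypothetical scores on a 0-1 scale. Both builders look identical
# while supported; only withdrawal distinguishes them.
print(classify(WithdrawalResult(baseline=0.55, supported=0.90, post_removal=0.70)))
# internalized
print(classify(WithdrawalResult(baseline=0.55, supported=0.90, post_removal=0.45)))
# iatrogenic (degraded)
```

Note that `supported` never enters the classification: the supported score is exactly the quantity the test is designed to discount.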
The Bruner — On AI volume coins the term iatrogenic learning explicitly, drawing on the medical concept (documented since Hippocrates, formalized in twentieth-century epidemiology) and on Illich's 1976 extension of iatrogenesis to institutional harm. Related concepts in adjacent literatures include the ironies of automation and moral deskilling.
Iatrogenic, not malicious. The harm arises from the treatment's operation, not from any designer's intent.
Invisible during treatment. Because benefits and harms register in different dimensions, iatrogenic patterns are not detected by the metrics that track benefits.
Compatible with genuine benefit. Iatrogenic harm does not negate real benefits; the two coexist at different levels.
Detection requires withdrawal. The only reliable test for iatrogenic learning is observing what the learner can do when the treatment is removed.
Stakes scale with adoption. At individual scale, iatrogenic learning means personal skill atrophy; at societal scale, it threatens populations' unaugmented competence.
Whether current AI scaffolding produces iatrogenic learning is the central empirical question the framework poses. No longitudinal study has yet settled it. Early findings from MIT Media Lab's cognitive debt research (2025) and related work suggest the concern is warranted; definitive demonstration awaits the kind of large-scale withdrawal studies the industry has no incentive to conduct.