Developmental time is biological: the rate at which synapses form, the span of critical periods, the trajectory of prefrontal myelination. It proceeds on its own schedule, indifferent to the pace of human deliberation. Research time is institutional: the funding cycles, the IRB approvals, the cohort recruitment, the years of follow-up, the peer-review process, the publication timelines. Christakis's television research required decades because it was built on the NLSY cohort, which had been tracking children since 1979. By the time the 2004 paper was published, the children whose television exposure it documented were adults. The AI timeline permits no such retrospection. ChatGPT launched in November 2022; Claude Code crossed its capability threshold in late 2025. The first AI-native cohort is passing through the sensitive period now. By the time longitudinal studies produce results, the cohort will have completed the sensitive period — their calibrations fixed, the window closed.
The gap is not new, but it is now acute. Every previous media transition produced some version of the research-lag problem: television, video games, social media. In each case the research eventually caught up, and in each case only after the first exposed cohort had passed through critical periods. The AI case compresses the exposure timeline (faster adoption) while escalating the capability curve (more potent stimulation), widening the gap rather than narrowing it.
The gap's ethical weight lies in forcing a decision under uncertainty that cannot be deferred. The default of waiting for definitive evidence is itself a decision: the decision to permit unstudied exposure during the sensitive period. The precautionary principle is the framework that acknowledges the impossibility of deferral and structures action on the available evidence.
Research strategies to narrow the gap exist but are underfunded and underutilized: prospective cohort studies that begin tracking exposure now, using instruments that capture the qualitative dimensions of interaction; experimental studies in controlled settings, which can produce results faster than longitudinal designs; animal-model work on developmental overstimulation; and natural experiments exploiting cross-institutional variation in AI policies. None of these produces definitive evidence within the first cohort's sensitive period, but collectively they narrow the gap enough to make clinical guidance less speculative.
The institutional response must parallel the research strategy. Clinical guidelines issued now, transparently provisional and revisable as evidence accumulates, are more useful than perfect guidelines that arrive after the sensitive period has closed. The AAP television guidelines were issued before every question was answered because the developmental window did not wait; the AI guidelines must follow the same logic.
The framework articulated in this volume builds on Christakis's repeated public observation that the screen-time literature was always catching up to the technology; its formal application to AI and children is the volume's contribution.
Asymmetric schedules. Biology proceeds at two million synaptic connections per second; research proceeds at the pace of grants and peer review.
Non-deferrable decision. Waiting for definitive evidence is itself a decision — the decision to permit unstudied exposure.
Gap is widening. AI's combination of faster adoption and greater potency widens the gap rather than narrowing it.
Research strategies to narrow it. Prospective cohorts, experimental work, natural experiments, and animal models collectively reduce the gap, if funded.
Institutional implication. Provisional clinical guidance issued under uncertainty is more useful than perfect guidance arriving after the sensitive period has closed.