You On AI Encyclopedia · AI Companions and Childhood Development (Crawford)
CONCEPT

AI Companions and Childhood Development (Crawford)

Crawford's warning about the deployment of AI companions targeted at children — an uncontrolled society-wide experiment on the foundations of human formation.
AI companions and childhood development names Crawford's increasingly pointed warning about the deployment of AI companions targeted at children — systems designed to engage children in sustained conversational relationships for entertainment, education, or emotional support. Crawford has described this deployment as "a society-wide, uncontrolled experiment on the foundations of childhood development," conducted without the regulatory oversight that pharmaceuticals, educational curricula, or even food additives would face. The warning connects Crawford's broader philosophical framework about engagement and development to the specific context of children, whose formation is most vulnerable to the systematic elimination of productive friction that AI tools produce.

Crawford's concern rests on his broader analysis of how human capacities develop. Children develop through friction — through the specific resistance of materials that do not comply with their wishes, of social interactions that demand negotiation and compromise, of cognitive challenges that require sustained effort to overcome. An AI companion that smooths every difficulty, answers every question, resolves every frustration before the child experiences its cognitive value, is a companion that systematically eliminates the conditions under which the child's own judgment, resilience, and embodied understanding develop. The outputs are pleasant. The developmental consequences may be catastrophic and will be largely invisible for years.

The concern aligns with the framework of Jonathan Haidt and other researchers documenting the mental health effects of smartphone-based childhood, but Crawford's argument is philosophical rather than primarily clinical. The question is not only whether AI companions produce measurable harm but what kind of children they produce — what capacities these children develop, what capacities they fail to develop, and whether the resulting adults will possess the judgment and resilience that self-government and competent professional practice require.

AI Companion Risk

Crawford's 2025 commentary on AI companions was particularly pointed about the absence of regulatory oversight. Pharmaceuticals targeted at children require extensive safety testing before deployment. Educational curricula are evaluated, debated, and revised through public processes. AI companions are deployed into children's lives by private companies whose incentives favor engagement over development, whose business models depend on sustained attention capture, and whose products are not subject to the kind of empirical evaluation that developmental interventions would otherwise face. The asymmetry between the deployment's scale and the oversight's absence is, in Crawford's view, a scandal that the broader culture has not yet recognized.

The concern connects to Crawford's political-philosophical argument about the conditions of democratic self-government. Democracy requires citizens capable of independent judgment, tolerance of ambiguity, and sustained engagement with difficult problems. These capacities develop through childhoods in which children encounter friction they must resolve, boredom they must convert into imaginative play, social conflicts they must negotiate. AI companions offered as solutions to these developmental demands may produce children who are smoother, more pleasant, more immediately competent in narrow ways — and less capable of the kind of judgment that democratic citizenship requires. The political stakes of childhood development in the AI age are, by this analysis, as high as the economic stakes of AI-mediated knowledge work.

Origin

Crawford's public commentary on AI companions developed across 2023-2025, drawing on his broader framework and engaging with research on smartphone-related developmental harms from Haidt, Twenge, and others.

Key Ideas

Development through friction. Children's capacities form through engagement with resistant materials, difficult emotions, and social challenges that AI companions systematically smooth away.
Regulatory asymmetry. The deployment of AI companions into children's lives proceeds without the oversight that pharmaceutical, educational, or nutritional interventions would face.

Invisible costs. Developmental harms from companion use will manifest years after deployment and will be invisible to the metrics the deploying companies track.

Children's capacities as civic capacities. The capacities AI companions attenuate — judgment, resilience, tolerance of ambiguity — are the capacities democratic self-government requires.

Uncontrolled experiment framing. The scale of deployment combined with the absence of oversight constitutes, in Crawford's analysis, an uncontrolled experiment on the foundations of human formation.

Further Reading

  1. Jonathan Haidt, The Anxious Generation (Penguin Press, 2024).
  2. Sherry Turkle, Alone Together (Basic Books, 2011).