The term borrows from developmental psychology's concept of the zone of proximal development — Vygotsky's finding that children learn most effectively when supported through challenges they could not complete alone but can complete with appropriate help. The scaffold is the help. Its defining feature is that it is designed to be removed: the scaffold exists to enable the child's eventual independence, not to substitute for it. A scaffold that remains in place indefinitely is not a scaffold. It is a prosthesis.
Applied to AI, scaffolded autonomy has specific operational components. The child has access to the tool. The adult is available — not hovering, not monitoring, but reachable. Conversation about the child's AI use happens regularly, at the kitchen table or during walks, with genuine curiosity rather than disguised evaluation. The questions are open: "What are you thinking about?" rather than "What did Claude tell you?" The former honors the child's intellectual ownership; the latter reveals that the adult's real interest is in monitoring the machine.
The pattern requires the parent to give up something she may have been told is essential: comprehensive oversight. A parent who must know everything her child asks Claude cannot provide scaffolded autonomy, because the surveillance defeats the autonomy. A child who knows every query will be reviewed asks only the questions she expects her parents to approve of. The intellectual privacy that makes curiosity possible is destroyed. What the child loses is not access to the tool but the developmental benefit of private exploration — which was most of the benefit in the first place.
The framework's most demanding element is the adult's capacity to tolerate the child's inevitable mistakes. A scaffolded child will sometimes accept fluent fabrication uncritically, turn in hollow AI-assisted work, or develop dependencies that concern her parents. The scaffolded response to these mistakes is not confiscation but conversation — the fail-forward practice that converts errors into learning. The alternative — treating every mistake as evidence that the child is not ready — guarantees that the child never reaches readiness, because readiness is produced by the very mistakes the prohibition prevents.
Skenazy developed the concept across her writing on free-range parenting, drawing on developmental psychology (Vygotsky, Bandura) and her observation of the specific ways well-intentioned parental presence can destroy the developmental value of an activity. The AI application is the latest domain of a framework Skenazy has been refining for nearly two decades.
The framework's commitments can be summarized in four principles. Structure without control. The scaffold enables the child's work; it does not do the work for her or dictate its direction.
Designed to be removed. A scaffold that becomes permanent has failed — its purpose is to enable capabilities that eventually operate without it.
Conversation, not surveillance. Adult presence takes the form of genuine dialogue about the child's experience, not monitoring of her outputs or queries.
Mistakes as curriculum. The errors the scaffolded child makes are not signs the scaffold failed; they are the material from which learning is built.