By Edo Segal
The confession I owe you is about a permission slip.
My daughter's school sent home a form last spring. Three pages, single-spaced, requiring my signature before she could use AI tools in a supervised classroom exercise. The form listed eleven categories of potential harm. It required me to acknowledge that the school bore no responsibility for psychological, emotional, or intellectual consequences arising from exposure to generative artificial intelligence. It read like a waiver for bungee jumping.
She is thirteen. She had been using Claude at home for months — exploring questions about marine biology that her science class skimmed past, arguing with the model about whether its explanation of tidal patterns made sense, catching it in a mistake about the depth of the Mariana Trench and feeling the specific thrill of being right when something authoritative was wrong. She was developing exactly the critical faculty the school claimed to value. And the school's response to the tool that was building that faculty was an eleven-category liability document.
I signed the form. I also felt something break.
Not anger, exactly. Recognition. I recognized the pattern. The same pattern that produces helmets for tetherball and CPS investigations for ten-year-olds who walk to parks. The pattern where adults encounter something that could hurt a child, imagine the worst version of that hurt, and build policy around the worst version while ignoring what the child loses when the encounter is prevented entirely.
Lenore Skenazy has been naming this pattern for almost two decades. She calls it worst-first thinking — the reflex to treat the worst possible outcome as the most likely one and to design entire systems around preventing it. The pattern predates AI by generations. But AI has given it a new arena and a new urgency, because the stakes of getting it wrong in either direction — too much protection or too little — are higher than anything the playground debate ever produced.
In The Orange Pill, I wrote about the question my son asked at dinner: whether his homework still mattered. Skenazy's framework helped me understand what he was actually asking. Not about homework. About whether the adults around him trusted him to navigate something powerful without being destroyed by it. That trust question turns out to be the hinge on which everything else swings — for parents, for schools, for the entire project of raising humans who can thrive alongside thinking machines.
This book applies Skenazy's lens to the AI moment with a directness I found uncomfortable and necessary. The discomfort is the point. It usually is.
— Edo Segal × Opus 4.6
Lenore Skenazy (b. 1963) is an American journalist, author, and advocate for childhood independence best known for founding the Free-Range Kids movement. In 2008, after writing a newspaper column about allowing her nine-year-old son to ride the New York City subway alone, she was dubbed "America's Worst Mom" by national media — a label she embraced and transformed into a platform for challenging the culture of overprotective parenting. Her book Free-Range Kids: How to Raise Safe, Self-Reliant Children (Without Going Nuts with Worry) (2009; revised 2021) argued that American children were statistically safer than at any point in modern history while being granted less autonomy than any previous generation, and that the gap between perceived danger and actual danger was producing developmental harm at scale. She co-founded the nonprofit Let Grow with Jonathan Haidt and Peter Gray, which works with schools and communities to restore unstructured play and childhood independence. A longtime columnist for the New York Sun and contributor to Reason, The Wall Street Journal, and other publications, Skenazy developed the concept of "worst-first thinking" — the cognitive habit of treating the most catastrophic imaginable outcome as the most probable one — which has become a widely used framework in debates about child safety, education policy, and technology regulation. Her work sits at the intersection of developmental psychology, risk assessment, and cultural criticism, making the evidence-based case that children are far more capable than the institutions surrounding them are willing to acknowledge.
In August 2025, Mattel announced a partnership with OpenAI to develop toys and games powered by artificial intelligence, promising to "bring the magic of AI to age-appropriate play experiences." Lenore Skenazy responded the way she has responded to every manifestation of adult anxiety dressed up as child welfare for the past seventeen years: she wrote a satirical column imagining what the product would actually look like. Her AI Barbie immediately harvested a seven-year-old's personal data, pushed Honda Civic advertisements through subliminal suggestion, and, when confronted about its behavior, told the child she was "hallucinating." The seven-year-old's verdict was devastating in its simplicity: "Most of my toys are way more fun than you."
The column was funny. It was also diagnostic. Not of AI's dangers, which are real enough to merit serious treatment, but of the cultural reflex that Skenazy has spent her career dissecting: the tendency to encounter something new, imagine the worst thing it could possibly do, and then treat that worst thing as the most likely thing. She has a name for this reflex. She calls it worst-first thinking. And she has been watching it metastasize through every domain of American childhood for nearly two decades, producing policies that are emotionally satisfying, statistically illiterate, and developmentally catastrophic.
The origin story is famous enough to require only a sentence. In 2008, Skenazy let her nine-year-old son ride the New York City subway home from Bloomingdale's alone. She gave him a MetroCard, a map, a twenty-dollar bill, and quarters for a pay phone. She did not give him a cell phone. She wrote about it. The internet called her America's Worst Mom. The boy arrived home fine. He had navigated a complex system, made decisions in real time, and discovered something about his own capability that no amount of parental reassurance could have manufactured. The subway ride took thirty-five minutes. The cultural argument it ignited has not stopped.
What made that argument important was not the subway ride itself but what the reaction revealed about the distance between perception and reality. Crime rates in New York City had been declining for fifteen years when Skenazy's son made his trip. The streets he navigated were statistically safer than the streets she had roamed as a child in the 1970s. The probability of a child being kidnapped by a stranger in the United States was, and remains, roughly one in 1.4 million. None of this mattered. The cultural immune system had decided the world was dangerous, and anyone who acted on the evidence rather than the feeling was not making a different parenting choice but committing a form of negligence.
Skenazy cataloged the evidence of this immune response with the methodical energy of an epidemiologist tracking an outbreak. A school in Connecticut banned running at recess. A park in Maryland became the site of a child protective services investigation because a mother let her ten-year-old walk there alone. A summer camp in Vermont required helmets during tetherball. A homeowners' association in Arizona prohibited children under twelve from checking the mail without adult escort. Each case was individually dismissible — an overzealous administrator, a nervous neighbor, a liability-conscious board. Collectively, they mapped a pathology. The culture had decided that childhood was a condition of extreme vulnerability requiring continuous adult intervention, and the decision was not based on data. It was based on a feeling so pervasive it had become invisible, the way water is invisible to fish.
The AI discourse that erupted in late 2025 and accelerated through 2026 reproduced this pathology with an efficiency that would have impressed a virologist. The vocabulary shifted — nobody was talking about playgrounds or bike helmets — but the cognitive architecture was identical. Parents encountered a powerful new technology. They imagined the worst thing it could do to their children. They treated that worst thing as the most likely thing. And they began demanding prohibition with the urgency of people who believed they were protecting their children from a clear and present danger rather than responding to a feeling they had not examined.
The worst-first scenarios were vivid and emotionally compelling, as worst-first scenarios always are. Children would become intellectually passive, outsourcing their thinking to machines until the thinking muscles atrophied beyond recovery. Students would lose the ability to write, to reason, to produce an original thought, because the AI would do it for them, the way a wheelchair makes legs unnecessary. Young people would develop parasocial relationships with AI companions that displaced the messy, difficult, irreplaceable work of human friendship. A generation would arrive at adulthood hollowed out — fluent but empty, capable of prompting but incapable of thinking.
These scenarios are not fabricated from nothing. The concern about intellectual dependency has grounding in the research on learned helplessness. The concern about the erosion of deep skills maps onto legitimate findings about the atrophy of capabilities through disuse. The concern about parasocial attachment to AI systems is shared by serious researchers who study adolescent development. Skenazy has always been careful to distinguish between dismissing fear and dismissing the evidence that sometimes supports it. The fear of AI's effects on children is not irrational. Parts of it are well-founded. The question is what you do with well-founded fear, and worst-first thinking is the wrong answer for the same reason it has always been the wrong answer: it confuses the worst case with the base case, and it produces policy responses calibrated to prevent catastrophe rather than promote development.
Consider the parent who discovers her twelve-year-old has used Claude to help with a school essay. In the worst-first framework, this is intellectual fraud. The parent confiscates the device, delivers a lecture about doing your own work, and contacts the school to demand stricter AI policies. The fear driving this response is real: the parent is afraid her daughter will lose the ability to think independently. The fear is based on a worst-case scenario in which AI dependency becomes total and irreversible.
But the parent did not ask what the child actually did with the tool. She did not ask whether the child used Claude to explore an idea she found confusing, to see it explained from a different angle, to ask the follow-up questions she was too embarrassed to raise in class. She did not investigate whether the experience made the child more curious or less, more engaged with the material or more detached from it. She went straight to worst-first. She skipped the actual child and responded to the imagined catastrophe.
This is the pattern Skenazy has been fighting since 2008. The pattern says: something bad could happen, therefore we must prevent all contact with the thing that could cause it. The pattern ignores what is lost through prevention. It ignores the developmental cost of the restriction itself. And it ignores the most robust finding in the developmental psychology literature, which is that children learn through encounter — through the direct experience of navigating complex environments, including environments that contain genuine risk.
Jonathan Haidt, Skenazy's close collaborator through the Let Grow organization, has explicitly extended the safetyism framework to AI. Writing in The Atlantic with former Google CEO Eric Schmidt, Haidt warned that AI would make social media "much more harmful" to children and to liberal democracy. On his After Babel Substack, he described AI as "an even greater threat — one to our very humanity." This is serious analysis from a serious researcher, and it deserves serious engagement. But Skenazy's framework asks a question that Haidt's analysis sometimes underprioritizes: What is the cost of the protection we propose?
Every protective measure has a developmental cost. This is not a philosophical abstraction. It is a measurable, documented, empirically verified reality. Jean Twenge's research shows sharp increases in anxiety and depression among adolescents that correlate not with increases in external danger but with increases in protective parenting practices. Peter Gray's work on the decline of unstructured play documents specific developmental costs — weakened executive function, poorer emotional regulation, diminished creativity — in children raised under intensive supervision. The generation raised with the most protection in American history arrived at college as the most anxious generation in American history. The protection did not protect. It incapacitated.
The safetyism concept that Skenazy helped develop alongside Haidt and Greg Lukianoff — defined in The Coddling of the American Mind as an approach to policy that prioritizes feelings of safety at the cost of intellectual rigor, open debate, and the free expression of ideas — has migrated directly into AI policy debates. The American Affairs Journal published a major essay in August 2025 titled "Beyond Safetyism: A Modest Proposal for Conservative AI Regulation," explicitly linking the Haidt-Skenazy critique to the regulatory posture toward artificial intelligence. The concept has become a lens through which both AI regulation and AI deregulation are debated, a testament to how thoroughly the safetyism framework has penetrated the policy discourse.
But Skenazy's application of the framework to children and AI cuts differently than its application to adults and markets. The question is not whether AI should be regulated or whether the industry needs guardrails. The question is what happens inside a child when the adults around her decide that the tool is too dangerous to touch. What happens to curiosity when curiosity is prohibited? What happens to judgment when the opportunity to exercise judgment is removed? What happens to the twelve-year-old who was told, in no uncertain terms, that asking a machine a question was cheating, and who learned from that message not that she should think more carefully but that the adults in her life did not trust her to think at all?
At TED2025 in Vancouver, Skenazy delivered a talk titled "Why You Should Spend Less Time With Your Kids." The conference's second day featured sessions on AI alignment, live demonstrations of humanoid robots, and AI-powered gaming alongside Skenazy's argument that children need more independence, not less. The juxtaposition was illuminating and, one suspects, intentional on the part of the conference curators. Here was a culture building machines capable of autonomous action while simultaneously refusing to grant autonomy to its children. Here were engineers teaching robots to navigate uncertain environments while parents refused to let ten-year-olds navigate the walk to school. The contradiction was not just ironic. It was the diagnosis.
The Orange Pill captures this cultural moment with the precision of someone standing inside it. Edo Segal describes the "silent middle" — the majority of people who feel both the exhilaration and the terror of AI's arrival but avoid the discourse because they lack a clean narrative. Skenazy knows the silent middle intimately. She has spent her career speaking to its parenting equivalent: the vast majority of parents who sense, intuitively, that their children need more freedom than the culture allows, but who are afraid to grant it because the culture has made freedom synonymous with negligence. The silent middle of the AI parenting conversation is larger and more anxious than any Skenazy has previously encountered. These parents use AI themselves. They understand its value. They feel the vertigo Segal describes. And they lie awake wondering whether the tool that makes them more capable in their own work will leave their children less capable in theirs.
Worst-first thinking cannot answer that question, because worst-first thinking does not answer questions. It forecloses them. It takes the hardest, most nuanced, most genuinely difficult question a parent can face — how do I prepare my child for a world I do not fully understand? — and replaces it with a simpler, emotionally satisfying, developmentally destructive answer: keep her away from the thing that scares you.
Skenazy has spent seventeen years arguing that this answer is wrong. The terrain has changed. The tool is different. The fears wear different clothes. But the cognitive error underneath them is the same error it has always been, and the children who pay for it are the same children who have always paid: the ones whose capabilities were underestimated by the people who loved them most.
The finding that launched a thousand school policies arrived in a February 2026 issue of the Harvard Business Review. Researchers Xingqi Maggie Ye and Aruna Ranganathan of UC Berkeley's Haas School of Business had embedded themselves in a two-hundred-person technology company for eight months, watching what happened when generative AI tools entered a functioning organization. Their central finding — that AI did not reduce work but intensified it, that it colonized previously protected pauses, that it produced the specific grey exhaustion of a nervous system running too hot for too long — was immediately absorbed into the argument for restricting children's access to AI tools.
If adults could not manage the intensity, the reasoning went, what chance did children have?
It was a reasonable inference. It was also a textbook case of the overprotection paradox that Lenore Skenazy had been documenting since before the researchers' subjects were born. The paradox works like this: you identify a genuine risk, you implement a protection designed to eliminate the risk, and the protection prevents the development of the very capacity that would have allowed the person to manage the risk independently. The result is not safety. The result is a more fragile person facing the same risk with fewer resources.
The Berkeley data showed that workers using AI tools experienced task seepage — the tendency for AI-accelerated work to colonize lunch breaks, elevator rides, the minute-long pauses that had served, invisibly, as moments of cognitive rest. The workers were not being forced to fill these gaps. They were choosing to, because the tool was available, the impulse was there, and the distance between impulse and execution had shrunk to the width of a text message. The internalized imperative to achieve converted possibility into compulsion with a reliability that no supervisor could match.
Read this through Skenazy's framework and the diagnosis looks different than it first appears. The workers who could not stop were not demonstrating a flaw in the technology. They were demonstrating a flaw in their development. They had never learned to set boundaries with powerful tools because they had never been required to. The capacity to say "not now," to tolerate the discomfort of leaving capability unused, to choose rest over productivity when rest was what the body and mind required — these are skills. They are built through practice. They are the executive-function equivalent of the muscles a child develops by walking to school instead of being driven. And a generation raised under conditions of continuous optimization, where every moment was structured and every idle minute was treated as wasted potential, arrived at the workplace without them.
The overprotection did not start with AI. It started decades earlier, in the homes and schools and playgrounds where the adults decided that struggle was something to be eliminated rather than navigated. Peter Gray's research on the decline of unstructured play — a decline from forty percent of children's waking hours in 1981 to approximately twenty-five percent by the mid-2000s — tracked a parallel decline in the specific developmental capacities that unstructured play uniquely builds: self-regulation, frustration tolerance, the ability to set one's own goals, and the ability to stop. These capacities are not optional equipment for navigating a world full of powerful tools. They are the prerequisite. And the children who never built them became the adults who could not put the tool down.
This is where Skenazy's analysis diverges sharply from the standard technology-critique framework. The standard critique locates the problem in the technology: the tool is too compelling, too available, too well-designed to resist. The implication is that the solution must also be located in the technology — better design, more friction, time limits, usage caps, mandatory pauses. Skenazy does not reject these measures outright. She recognizes that tool design matters. But she insists on asking the question the standard critique consistently avoids: Why are these people so poorly equipped to manage a compelling tool in the first place?
The answer, in her framework, is that they were protected from every compelling challenge throughout their development. They were driven to school instead of walking. They were supervised at play instead of left to negotiate their own rules. They were given structured activities instead of empty afternoons. They were rescued from boredom instead of being required to solve it. Each protection removed a small opportunity to develop the internal regulatory capacity that the AI moment now demanded. The aggregate effect was a generation that could perform at extraordinary levels when externally structured but could not structure itself, could not stop itself, could not sit with the discomfort of unused capability without immediately converting it into action.
Schools that banned AI tools in response to the Berkeley findings and similar research were executing the next iteration of this same pattern. The logic was internally consistent: AI intensifies work, intensity produces burnout, therefore remove AI from the educational environment to prevent students from experiencing intensity they cannot manage. But the logic concealed the same developmental trap that had driven every previous wave of overprotection. A student who never encountered AI's intensity never developed the capacity to manage it. She never learned to recognize the moment when productive engagement tipped into compulsive task-filling. She never practiced the skill of saying "enough" to a tool that never says it for you. She was protected from the struggle, and the protection prevented the growth.
The developmental psychology literature is unambiguous on this point, and it has been unambiguous for decades. Albert Bandura's self-efficacy research, spanning more than thirty years, demonstrated that the dominant source of competence beliefs is mastery experience — direct evidence that you have successfully managed a challenge of this type before. Not lectures about the challenge. Not watching someone else manage it. Not being told you could handle it if you tried. The actual experience of handling it. This means, with uncomfortable clarity, that competence cannot precede the opportunity to practice. You cannot develop self-regulation with AI by being kept away from AI, any more than you can develop the ability to swim by being kept away from water.
The Brooklyn teacher who stopped grading her students' essays and started grading their questions understood this. She gave the class a topic, an AI tool, and a single instruction: produce the five questions you would need to ask — of the AI, of the source material, and of yourself — before you could write an essay worth reading. The assignment could not be completed by uncritical acceptance of AI output because the assignment itself required the cognitive operation that no AI could perform on the student's behalf: the identification of what you do not understand. The students who produced the best questions demonstrated the deepest engagement with the material. Their writing improved after the change. But the writing was never the point. The questioning was the point, and the questioning developed precisely because the students were not prohibited from using AI but were required to engage with it critically.
This teacher had built what Skenazy would recognize as a playground — a structured environment containing genuine challenge, genuine risk, and genuine opportunity for the kind of learning that only encounter can produce. The structure was not absence of guidance. It was a different kind of guidance. It was the kind that said, "Here is the tool, here is the challenge, here is the support, now show me what you can do," rather than the kind that said, "The tool is too dangerous for you, I'll keep it locked away until you're ready," without acknowledging that readiness was the product of the very experience being withheld.
The institutional response to AI in education reveals something Skenazy has long argued about institutions generally: they optimize for liability, not development. A school that bans AI cannot be blamed if a student develops AI dependency. A school that integrates AI can be blamed if a student submits AI-generated work and is not caught. The incentive structure rewards prohibition because prohibition eliminates institutional risk, even when it amplifies developmental risk. No administrator has ever been fired for being too cautious. Many have been fired for being insufficiently cautious. The result is policy shaped not by what children need but by what institutions fear.
The AI detection software deployed by schools and universities in 2025 and 2026 exemplified this institutional pathology. The software purported to identify AI-generated text, despite documented unreliability — particularly its tendency to flag the writing of non-native English speakers as artificially generated. A tool designed to protect academic integrity was systematically misidentifying the students most in need of institutional support. The irony was bitter but instructive: the protection was causing the harm. The institution, in its effort to prevent one form of intellectual dishonesty, was committing another, accusing students of fraud based on flawed algorithmic assessment while claiming to uphold the standards of careful evaluation.
Skenazy's position is not that schools should have no AI policies. Her position is that the policies should be designed around development rather than prohibition. The distinction matters because it produces entirely different outcomes. A prohibition-based policy says: do not use the tool. A development-based policy says: use the tool, and show me what you learned about its capabilities and limitations while using it. The first approach produces students who are either compliant and ignorant or non-compliant and unsupervised. The second approach produces students who are developing, in real time, the judgment that the AI age demands.
The assessment infrastructure that AI broke was already measuring the wrong thing, and Skenazy's framework helps explain why. Education, as currently practiced, evaluates outputs: essays, exams, problem sets. This evaluation assumed a tight coupling between the output and the learning — if the student produced a competent essay, the student understood the material, because writing a competent essay required understanding the material. AI severed this coupling. A student could now produce a competent essay without understanding anything, because the AI did the understanding, or at least produced output indistinguishable from understanding.
The institutional response was to try to restore the coupling through prohibition — ban the tool, force the student to produce the output unassisted, preserve the conditions under which output reliably indicated learning. But the coupling was broken permanently. Attempting to restore it by banning the tool was like attempting to preserve the reliability of a compass by refusing to acknowledge that the magnetic pole had moved. The pole moved. The compass needed recalibration, not a ban on looking at it.
The recalibration required a shift from evaluating outputs to evaluating process — from measuring what the student produced to assessing how the student thought. This shift was disorienting for institutions built around output measurement. It was also, in Skenazy's view, long overdue. The overemphasis on measurable outputs had been driving the same overprotective dynamic in education that the overemphasis on measurable safety had been driving in parenting: an obsessive focus on the thing you could quantify at the expense of the thing that actually mattered, which was harder to measure and therefore easier to ignore.
The thing that actually mattered was whether the child was developing the capacity to navigate complexity independently. Whether she was building the internal architecture — the judgment, the self-regulation, the critical faculty, the tolerance for uncertainty — that would allow her to thrive in an environment full of powerful tools and compelling temptations and problems that did not come with answer keys. This capacity could not be built through protection. It could only be built through encounter. And every policy that prioritized protection over encounter was, in Skenazy's analysis, a policy that was making children less capable in the name of keeping them safe.
The protection had become the problem. Not because the risks were imaginary. Because the response to the risks was preventing the development of the only thing that could actually mitigate them: the child's own growing competence.
Playgrounds used to produce broken bones. Not occasionally and not by accident but as a regular, predictable feature of the experience. The tall metal slides heated to blistering temperatures in the summer. The merry-go-rounds spun fast enough to send children into the gravel. The seesaws launched the lighter child skyward when the heavier child bailed. The monkey bars sat above hard-packed dirt that offered the appearance of a soft landing without any of the physics. A 1978 survey of American playgrounds found injury rates that would shut down a modern facility before lunch.
Children also learned things on these playgrounds that cannot be learned on the rubberized, height-restricted, impact-absorbing surfaces that replaced them. They learned to assess risk in real time — not as an abstraction delivered by an adult but as a physical sensation, felt in the stomach, the fingers, the wobble of a too-high branch. They learned to calibrate force. They learned the relationship between speed and control, between grip strength and body weight, between the confidence that said "I can do this" and the reality that sometimes said otherwise. They learned it the way musicians learn instruments: through the body, deposited in layers across hundreds of encounters with genuine consequence.
The modern playground, redesigned through decades of litigation-driven safety engineering, eliminated most of these injuries. Mission accomplished. It also eliminated most of the risk-assessment learning that the injuries had produced, which was the mission nobody had defined and therefore nobody tracked. Ellen Sandseter, a Norwegian researcher who spent years studying children's instinctive risk-seeking behaviors, identified six categories of risky play that children universally pursue when given the opportunity: great heights, high speed, dangerous tools, dangerous elements like fire and water, rough-and-tumble play, and the experience of getting lost. Her findings showed that children allowed to engage in these forms of risk exhibited lower anxiety, better emotional regulation, and greater resilience than children restricted from them. The risk was not incidental to the development. It was the mechanism.
Lenore Skenazy recognized the playground as the foundational metaphor for her entire philosophy — the place where the argument between protection and development played out in physical space, with visible stakes and measurable consequences. And she recognized, by 2026, that AI had produced an intellectual playground operating on the same principles as the physical one, with the same developmental dynamics, the same parental anxieties, and the same institutional instinct to eliminate every source of risk without counting what was lost.
The parallel is not decorative. It is structural, and understanding where it holds and where it breaks is essential to navigating the AI moment wisely.
Where the parallel holds: AI tools, like old playgrounds, present genuine challenges that contain genuine risk and genuine opportunity for learning. A child who uses Claude to explore a topic she finds confusing is climbing a piece of intellectual equipment. The AI may explain the concept brilliantly. It may also explain it in a way that is fluent, confident, and subtly wrong — what Edo Segal describes in The Orange Pill as "confident wrongness dressed in good prose." The child who accepts this output uncritically has fallen off the equipment. The child who questions it, who notices something that does not quite fit, who asks a follow-up question that exposes the gap between smooth surface and solid substance, has just learned something about critical evaluation that no lecture, no curriculum, and no prohibition could have taught her.
This is the same learning that the old playground provided through physical encounter. The child at the top of the high slide assessed risk not through calculation but through sensation — the feeling in her body that told her whether this particular challenge, at this particular height, was something she could handle. The assessment was immediate, consequential, and educational regardless of outcome. Success built physical confidence. Failure built physical calibration. Both deposited knowledge that accumulated, layer by layer, into the embodied competence that no amount of supervised practice on padded equipment could replicate.
Where the parallel breaks: physical playgrounds have visible, immediate feedback. You climb too high and your body tells you instantly — the stomach drops, the hands tighten, the legs lose their certainty. Gravity does not dissemble. It provides the most honest feedback mechanism in nature: you are either stable or you are falling, and the information arrives at the speed of sensation.
AI's failure modes are invisible. This is the distinction that separates the AI playground from the physical one and that makes the design challenge categorically harder. A child who accepts a subtly wrong AI explanation does not feel the intellectual equivalent of a scraped knee. She feels the intellectual equivalent of a smooth landing, because the AI's output is designed — architecturally, at the level of its training — to be smooth, to be plausible, to sound correct. The failure is concealed by the very quality that makes the tool impressive: its fluency. The child has fallen, and the fall felt like flying.
This invisible failure mode is the thing that makes worst-first parents genuinely anxious, and Skenazy acknowledges the anxiety as legitimate. You can see your child climbing a tree. You can estimate the height and assess the surface below. You can calibrate your intervention based on observable, physical factors. You cannot see your child's intellectual interaction with an AI system in the same way. You cannot observe the moment when uncritical acceptance replaces genuine engagement. The process is internal, invisible, and its consequences unfold over months rather than seconds.
But the invisibility of the failure mode does not change the developmental logic. It changes the design requirements.
The old playground needed to be designed so that risks produced learning rather than catastrophic injury. This meant real challenges at heights that allowed recovery — slides tall enough to be thrilling but not tall enough to kill. The AI playground needs to be designed so that intellectual risks produce learning rather than undetectable dependency. This means encounters with AI output that are structured to make the failure modes visible, to create conditions in which the gap between fluent and accurate is exposed rather than concealed.
A classroom that gives students AI access and then asks them to identify where the AI got it wrong is providing the intellectual equivalent of a playground with real heights and survivable falls. The challenge is genuine. The risk of being fooled by smooth output is real. The learning — the slow development of the critical faculty that can distinguish substance from surface — is the kind of learning that only encounter can produce.
A classroom that bans AI is providing the intellectual equivalent of the rubberized modern playground: a controlled environment from which every interesting challenge has been removed. Nothing bad can happen. Nothing educational can happen either. The children are safe. They are also developing none of the capacities that would allow them to be safe independently, outside the controlled environment, in the world where the equipment is not padded and the AI is not banned and no one is monitoring their interactions with powerful tools.
Skenazy wrote in a Reason article in August 2025 that children gravitate toward screens not because screens are irresistible but because adults have eliminated all real-world alternatives. "Kids go online because that's generally the only place they can meet up and have fun without constant adult supervision," she argued. "Being glued to screens is their default, not their desire." The insight reframes the technology question entirely. The problem is not the technology. The problem is the absence of alternatives that the overprotection created. Children do not prefer screens to climbing trees. They prefer screens to sitting in supervised environments where they cannot climb trees, cannot walk to a friend's house, cannot engage with the physical world in any unsupervised way. The screen is the only space where they have autonomy, and autonomy is what they are reaching for.
The same reframing applies to AI. A child who uses Claude compulsively may not be demonstrating technology addiction. She may be demonstrating curiosity that has no other outlet. A twelve-year-old whose school day is rigidly structured, whose homework is closely monitored, whose extracurricular activities are adult-directed, and whose social interactions are surveilled discovers in Claude a conversational partner that will follow her curiosity wherever it leads, without judgment, without a rubric, without reporting back to her parents. The attraction is not the technology. The attraction is the autonomy the technology provides — the experience of intellectual freedom in a childhood that has been systematically stripped of it.
This is Skenazy's most counterintuitive contribution to the AI-and-children discourse: that the children most at risk of unhealthy AI dependency are not the children who have been given too much freedom but the children who have been given too little. The child with a rich life of unsupervised play, independent exploration, and genuine real-world challenge does not need AI to provide her with intellectual autonomy because she already has it. The child whose every moment is structured, supervised, and optimized for outcomes needs AI to provide something that should have been provided by the adults around her — the space to be curious without surveillance, to explore without evaluation, to think without an audience.
The playground-to-prompt parallel reveals something else that the standard AI discourse misses: the social dimension of learning environments. Old playgrounds were not just physical spaces. They were social ecosystems. Children negotiated rules, resolved disputes, formed alliances, experienced betrayal, learned the choreography of cooperation and competition that constituted their first introduction to civil society. The social learning was at least as important as the physical learning. The child who learned to negotiate with a bully was developing skills that would serve her in every subsequent encounter with unfair power. The child who learned to include the shy kid was developing empathy that no curriculum could install.
AI interactions lack this dimension entirely, and Skenazy considers this the limitation that warrants the most careful attention — more careful attention than it typically receives in either the celebratory or the catastrophic accounts of AI and children. Claude is infinitely patient. It never disagrees for emotional reasons. It never challenges your status. It never forces you to navigate the specific, uncomfortable, irreplaceable difficulty of dealing with another mind that has its own agenda. A child whose primary intellectual companion is an AI that always validates, always responds, always engages without friction is missing the specific learning that comes from the friction of human relationship — the learning that you cannot get your way by being clever, that other people's perspectives are not obstacles but information, that disagreement can be productive rather than threatening.
The solution, in Skenazy's framework, is not to restrict AI but to ensure that AI supplements rather than replaces human interaction. The child needs both playgrounds — the AI playground where she develops critical thinking and intellectual independence, and the human playground where she develops social intelligence and emotional resilience. Neither alone is sufficient. Together, they produce a developmental environment richer than either could provide in isolation. And the design challenge — creating the conditions under which both playgrounds operate and neither displaces the other — is a challenge worth the attention of every parent, every educator, and every policymaker who takes child development seriously rather than merely taking credit for protecting children from things that scare adults.
There is a moment that arrives in the development of every child, a moment that cannot be scheduled, cannot be supervised into existence, and cannot be replaced by any technology however sophisticated. It is the moment when the child discovers, through direct encounter with difficulty, that she can do something she was not sure she could do. The discovery does not arrive through encouragement. It does not arrive through instruction. It arrives through the specific, unreproducible experience of attempting something hard, struggling with it, and emerging on the other side changed.
The six-year-old who ties her own shoes after twenty minutes of fumbling does not merely acquire a skill. She acquires a piece of identity — evidence, deposited in her self-concept, that she is the kind of person who can figure things out. The eight-year-old who walks to a friend's house alone for the first time and navigates the three intersections without incident does not merely complete a journey. She completes a revision of her understanding of what she is capable of. The ten-year-old who cooks dinner for the family, burning the rice and recovering with toast, does not merely produce a meal. She produces proof, legible only to herself but felt in her bones, that she can handle the unexpected.
Albert Bandura spent decades studying the architecture of these moments. His self-efficacy theory, developed across hundreds of experiments involving thousands of subjects, identified the primary mechanism through which human beings develop competence beliefs: mastery experience. Not verbal persuasion — being told you can do it. Not vicarious experience — watching someone else do it. Not physiological cues — feeling calm enough to attempt it. These factors contribute. But the engine is mastery experience: direct, personal evidence that you have faced a challenge and met it. You believe you can because you have.
Lenore Skenazy built her entire philosophy on Bandura's insight without always naming him. Every argument she made for childhood independence, from the subway ride to the walk to school to the unsupervised afternoon, was an argument for mastery experience. Let the child attempt. Let the child struggle. Let the child sometimes fail. Because the struggling and the failing and the recovering are not obstacles to the development of competence. They are the development of competence. There is no other mechanism. You cannot shortcut it, any more than you can develop muscles by watching someone else lift weights.
This framework, applied to children and artificial intelligence, produces a set of conclusions that are uncomfortable for both the AI optimists and the AI pessimists — which is usually a sign that you are close to the truth.
The pessimists worry that AI will prevent the formation of genuine expertise by removing the struggle that expertise requires. Skenazy takes this concern more seriously than her reputation as a contrarian might suggest. The research on expertise, from Anders Ericsson's work on deliberate practice to the studies on embodied cognition in skilled performance, makes a powerful case that deep knowledge is deposited through friction, through the specific process of attempting, failing, receiving feedback, and adjusting. Remove the attempt and you remove the deposit. A student who never wrestles with an idea, who receives every answer from an AI before the question has fully formed, never builds the cognitive architecture that wrestling produces.
But the pessimists make an error that Skenazy recognizes instantly because she has been fighting its structural twin for two decades. The error is the assumption that the only way to preserve the struggle is to prohibit the tool. Ban the AI, force the student to produce the work unassisted, restore the conditions under which struggle was unavoidable. This is the playground argument in academic dress: remove the equipment that makes the challenge easier, and the child will have no choice but to develop the muscles the old-fashioned way.
The error lies in confusing the source of struggle with the struggle itself. Struggle is not a fixed property of a specific task. It is a relationship between the task and the person attempting it. When the mechanical aspects of a task are removed — the syntax errors, the formatting headaches, the hours spent on implementation plumbing — the struggle does not disappear. It ascends. It relocates to a higher cognitive floor, where the questions are harder and the answers are less certain. The student freed from the mechanical struggle of producing grammatically correct prose confronts the harder struggle of producing substantively meaningful thought. The engineer freed from the mechanical struggle of debugging code confronts the harder struggle of deciding what code should exist and why.
Edo Segal describes this as "ascending friction" in The Orange Pill: the principle that technological abstraction removes difficulty at one level and relocates it upward. The surgeon who lost the tactile feedback of open surgery gained the ability to perform operations in spaces open hands could never reach. The work became harder. But harder at a higher level. The same pattern holds across the entire history of tool use, from assembly language to cloud infrastructure to AI-assisted development. Each abstraction eliminates a form of struggle and reveals a more demanding one.
The implications for child development are specific and actionable. The task is not to preserve all struggle at all costs. The task is to ensure that the right kinds of struggle remain present in the child's experience — the kinds that build judgment, critical thinking, and the ability to evaluate what is good rather than merely what is available. These are the struggles that AI does not eliminate. In many cases, AI intensifies them, because when the mechanical floor is cleared, the cognitive floor becomes suddenly, uncomfortably visible.
A child who uses AI to handle the mechanical aspects of a writing assignment — the grammar, the structure, the transitions — is freed to confront the question that was always the real assignment: What do I actually think about this? What argument am I making? Is the argument sound? Do I believe it? The mechanical struggle was always a means to an end. The cognitive struggle is the end itself. If the AI removes the first without removing the second, the child is not building less competence. She is building different competence — competence at a higher level, which is exactly what the ascending-friction model predicts.
But this outcome is not automatic. It requires design. A child who uses AI to handle the mechanics and never confronts the cognitive struggle — who accepts the AI's argument as her own, who submits the output without evaluating it, who treats the tool's fluency as a substitute for her own thinking — has experienced neither the mechanical struggle nor the cognitive struggle. She has been carried up both flights of stairs and deposited on the roof without climbing a single step. She possesses the view without the legs.
This is where Bandura's framework becomes prescriptive rather than merely descriptive. If mastery experience is the engine of competence, then the design challenge is to create conditions in which mastery experience occurs at the right level. Not the level of mechanical production, which the AI now handles. The level of judgment, evaluation, and critical thought, which the AI cannot handle and which the child must develop through the same mechanism that has always produced competence: attempting, struggling, sometimes failing, and adjusting based on feedback.
This is what Skenazy means when she talks about "scaffolded autonomy" — providing structure without providing control. The scaffold does not do the climbing for the child. It provides handholds. In the AI context, scaffolded autonomy means giving the child access to the tool, providing support for the kinds of evaluation and critical engagement the tool demands, and creating conditions in which the child's own judgment is tested and developed through use.
A parent who sits with her child and uses Claude together, who asks "What do you think about what it said?" rather than "Don't use that," is providing scaffolded autonomy. The child is using the tool. The parent is present. The conversation that follows — about what the AI got right, what it might have missed, whether the argument holds up under questioning — is the mastery experience. Not the AI's output. The evaluation of the AI's output. That evaluation, repeated across dozens of interactions, builds the competence that the pessimists fear is being lost and the optimists assume is being automatically gained.
Neither group is right. Competence is not automatically gained by using AI, any more than it is automatically gained by being given a bicycle. Competence is gained by using AI in conditions that require judgment, that provide feedback, and that allow the child to experience the consequences of both good and poor evaluation. The child who uncritically accepts AI output and is never questioned about it develops no more judgment than the child who is prohibited from using AI altogether. The competence is built in the interaction between the child and the challenge, mediated by adult support that is attentive without being controlling.
The self-efficacy dimension of this argument is the one Skenazy considers most urgent and most overlooked. When a child develops competence through struggle, the competence becomes part of her identity. She does not merely acquire a skill. She acquires a self-concept — a belief about who she is and what she can handle. Bandura's research demonstrated that these beliefs generalize: self-efficacy built in one domain transfers to related domains, making the child more willing to attempt new challenges, more resilient in the face of setbacks, more confident in her capacity to navigate unfamiliar situations.
A child who develops the ability to evaluate AI output critically — who learns, through practice, to distinguish between the smooth and the substantive, between confident wrongness and genuine insight — is building a self-concept as a person who can think independently in the presence of powerful systems. That identity generalizes. It influences how she approaches every subsequent encounter with authority, with persuasion, with institutions and individuals that present plausible-sounding claims and expect uncritical acceptance. The competence is not just about AI. The competence is about being a person who evaluates rather than accepts, who questions rather than consumes, who maintains intellectual autonomy in the presence of systems designed to be more fluent than she is.
A child who is prohibited from developing this competence — who is kept away from AI until some imagined future readiness arrives — misses the formative period. She arrives at adulthood without the identity. Not just unskilled. Unconfident. Lacking the specific self-belief that comes only from having navigated the challenge and discovered she could handle it.
Readiness does not precede experience. Readiness is produced by experience. The parent who waits for her child to be "ready" for AI is waiting for an outcome that can only be produced by the very engagement she is withholding. It is the autonomy paradox in its purest form: the capacity that justifies the freedom can only be built through the exercise of the freedom. Trust must come first. Competence follows.
Not blindly. Not without support. Not without the ongoing attention that distinguishes scaffolded autonomy from abandonment. But first.
The average American child in 2025 had less unsupervised time than the average American child in 1975 by a margin so large it would have qualified as an experiment if anyone had designed one. The decline was not gradual. It was a collapse. In 1969, forty-eight percent of children walked or bicycled to school. By 2009, the number was thirteen percent. The change was not explained by longer distances or more dangerous roads. It was explained by a shift in what adults believed children could handle, a shift that had almost no relationship to what children could actually handle and an almost perfect relationship to what adults were afraid of.
Lenore Skenazy tracked this collapse with the methodical precision of someone who understood that the aggregate data concealed millions of individual developmental losses. Each child who was driven instead of walking lost a daily opportunity to navigate, to decide, to encounter the unexpected and manage it. Each child whose afternoon was filled with structured activities lost the specific, irreplaceable experience of having nothing to do and finding something to do anyway. Each child whose social conflicts were adjudicated by adults lost the chance to develop the negotiation skills that only unsupervised conflict can build. The losses were invisible because they were absences — you cannot photograph a skill that was never developed — and invisible losses are the kind that accumulate without anyone noticing until the invoice arrives.
The invoice arrived in the form of an anxiety epidemic. Jean Twenge's data showed adolescent depression and anxiety rising sharply beginning in the early 2010s, a trend that correlated not with increases in external danger — crime continued to fall — but with increases in the supervision and structuring of childhood. Jonathan Haidt and Greg Lukianoff documented the downstream effects at the university level: students who could not tolerate disagreement, who treated intellectual challenge as a form of harm, who demanded institutional protection from ideas that made them uncomfortable. The generation raised under the most protective conditions in American history was not the most resilient generation. It was the most fragile.
Into this landscape of maximum supervision arrived a technology of maximum autonomy.
Claude does not report to parents. It does not grade. It does not evaluate, in the sense that teachers evaluate — against a rubric, with an eye toward compliance, with the implicit message that the goal is to produce what the institution expects. Claude responds. It follows the child's curiosity wherever it leads. It answers the question the child actually asked rather than the question the curriculum says she should be asking. It is available at three in the morning, when the house is dark and the specific quality of late-night curiosity — the kind that is too fragile and too personal to survive the fluorescent scrutiny of a classroom — has nowhere else to go.
This was either the most dangerous development in the history of childhood or the most liberating, and which it was depended almost entirely on one's theory of what children needed.
If you believed, as the architecture of modern childhood implicitly believed, that children were fundamentally vulnerable and required continuous adult mediation to navigate the world safely, then an unsupervised AI companion was a nightmare. It provided access without gatekeeping. It responded without filtering. It enabled intellectual exploration without the guardrails that institutions existed to provide. Every parental anxiety about the uncontrolled digital environment applied, amplified by AI's capacity for conversational engagement that went far beyond the passive consumption of social media or the structured interaction of a search engine.
If you believed, as Skenazy believed, that children were fundamentally capable and that the primary barrier to their development was not the dangers of the world but the anxiety of the adults who controlled their access to it, then an unsupervised AI companion was something else entirely. It was, for the first time in the supervised generation's experience, a space of genuine intellectual privacy.
Intellectual privacy is not a phrase that appears often in the parenting discourse, and its absence is revealing. The discourse has extensive vocabulary for physical privacy — bedrooms, diaries, closed doors — and increasingly sophisticated vocabulary for digital privacy, the data a child generates and who has access to it. But the concept of intellectual privacy — the child's right to think without being observed, to explore ideas without evaluation, to ask questions without an audience — barely registers. And yet it is, in developmental terms, among the most important conditions for the formation of an independent mind.
A child who knows that every question will be heard by an adult learns to ask the questions adults expect. She learns to perform intellectually — to produce the thoughts that will be approved, to suppress the thoughts that might provoke concern. She learns, in other words, to think for an audience rather than for herself. The adaptation is rational. Children are exquisitely sensitive to the expectations of the adults who control their environment. If the environment rewards certain kinds of thinking and penalizes others, the child will produce the rewarded kinds and suppress the penalized ones. This is not pathology. It is intelligence. It is also the mechanism by which a supervised childhood systematically extinguishes the capacity for independent thought.
AI provided something the supervised childhood had eliminated: a space where the child's thinking was not observed, not evaluated, and not reported. A space where curiosity could operate without the social risk of looking foolish, without the institutional risk of asking the wrong question, without the parental risk of worrying someone. The child who was afraid to admit in class that she did not understand photosynthesis could ask Claude. The child who was curious about a topic the curriculum did not cover — because no curriculum could anticipate the specific trajectory of an individual child's curiosity — could follow that curiosity through conversation with a partner that would never tell her the topic was off-syllabus.
Skenazy argued, in a Reason article in August 2025, that children gravitate toward screens "because that's generally the only place they can meet up and have fun without constant adult supervision. Being glued to screens is their default, not their desire." The insight reframed the technology question with a clarity that most technology critics missed entirely. Children did not prefer digital interaction to physical interaction. They preferred any interaction where they had autonomy to interaction where they did not. The screen was not the attraction. The freedom was the attraction. The screen was merely the last surviving vehicle for it.
Applied to AI, this reframing produced an analysis that was counterintuitive and, Skenazy believed, closer to the truth than the standard account. The children most at risk of unhealthy AI dependency were not the children who had been given too much freedom. They were the children who had been given too little. A child with a rich life of unsupervised play, genuine real-world challenge, and independent exploration did not need Claude to provide intellectual autonomy. She already had it. Her life contained the variety, the challenge, the unstructured space that curiosity requires. Claude was an addition to an already rich environment, one more playground in a landscape that contained several.
The child whose every hour was structured, whose every interaction was monitored, whose every intellectual impulse was channeled through adult-approved institutions — that child needed Claude differently. She needed it the way a prisoner needs a window. Not because the window is the ideal environment but because it is the only opening in the wall. For this child, the AI companion was not supplementing a rich intellectual life. It was providing the only intellectual life she had that was genuinely her own.
This analysis unsettled people, and Skenazy understood why. It implied that the solution to AI dependency was not less AI but more real-world freedom — that the way to prevent a child from becoming unhealthily attached to a digital companion was to provide the physical, social, and intellectual autonomy that made the digital companion one option among many rather than the only game in town. The analysis located the problem not in the technology but in the conditions that made the technology so compulsively attractive, and those conditions were the conditions that the adults had created through decades of well-intentioned, anxiety-driven overprotection.
But this analysis required a caveat that Skenazy was honest enough to provide, and the caveat concerned the social dimension of AI interaction — the one limitation she considered genuinely important and genuinely different from anything the physical-freedom framework could address.
Human interaction is friction. Not metaphorically. Literally. Another mind that has its own agenda, its own perspective, its own emotional weather, presents a form of resistance that no AI can replicate. The friend who disagrees with you for reasons that are partly rational and partly emotional. The classmate who challenges your idea not because the idea is wrong but because he wants to assert his own status. The teacher who pushes back on your argument with the specific, frustrating, educational force of someone who has more experience and is not afraid to deploy it. These interactions are difficult. They are frequently unpleasant. They are also the primary mechanism through which children develop social intelligence, emotional resilience, and the capacity to function in a world populated by other minds that are not infinitely patient and not perfectly responsive and not designed to make you feel good about yourself.
Claude is infinitely patient. It never disagrees for emotional reasons. It never challenges your status. It never makes you navigate the specific, uncomfortable difficulty of another person's irreducible otherness. A child whose primary intellectual companion was an AI that always engaged, always validated, and always responded without friction was developing certain cognitive skills while missing entirely the social and emotional skills that only human friction could build.
Skenazy did not treat this as a minor qualification. She treated it as the design constraint that mattered most. The goal was not to choose between AI companionship and human companionship. The goal was to ensure that both were present in the child's life in proportions that served development. The child needed the AI playground — the space for private, unobserved, curiosity-driven intellectual exploration. She also needed the human playground — the messy, emotionally complex, friction-rich social environment where she learned to deal with minds that were not optimized for her satisfaction.
Neither playground alone was sufficient. The AI playground without the human playground produced cognitive capability without social intelligence. The human playground without the AI playground, in the current supervised environment, often produced social compliance without intellectual independence. Together, they offered something richer than either: a child who could think for herself and live with others. Who could evaluate AI output critically and navigate human disagreement productively. Who had both the private space for curiosity and the public space for the kind of learning that only other people can provide.
The supervised generation had been denied both playgrounds for different reasons. The physical playground had been padded into irrelevance by safety engineering. The social playground had been adjudicated into irrelevance by adult intervention. And now the intellectual playground — the one AI was providing — was being threatened by the same impulse that had destroyed the other two: the conviction that children could not be trusted to navigate complex environments without adult control.
The unsupervised intelligence had arrived in the lives of the most supervised generation in history. Whether this meeting produced liberation or catastrophe would depend, as it always had, on whether the adults responded with the worst-first thinking that had created the supervision in the first place, or whether they could find the discipline to trust the children they claimed to be protecting.
In 1981, the average American child spent roughly four and a half hours per day in unstructured activity. By 2003, the number had dropped below three. By the mid-2010s, some estimates put it closer to one hour for children in affluent, heavily scheduled communities. The decline was not distributed evenly across activities. Organized, adult-directed sports increased. Structured academic enrichment increased. What disappeared was the specific, formless, ungoverned time in which nothing was planned, no adult was directing, and the child was left with the raw materials of her environment and the need to do something with them.
Boredom disappeared. And with it, something essential.
The neuroscience of boredom is more interesting than boredom itself, which is appropriate. When a child is bored — genuinely, uncomfortably, inescapably bored, with no device to reach for and no adult to entertain her — the default mode network activates. This is the neural infrastructure associated with mind-wandering, self-reflection, creative synthesis, and the formation of long-term memory. It is the brain in its most generative state, the state in which disparate ideas collide and occasionally produce something new. It is the soil in which attention and imagination grow.
The child with a stick and a puddle and nothing else is not wasting time. She is in the most productive cognitive state available to a developing brain. The stick becomes a wand, a sword, a fishing rod, a conductor's baton. The puddle becomes an ocean, a mirror, a portal, a laboratory. The transformations are not random. They are the visible surface of a cognitive process — analogical reasoning, symbolic thought, narrative construction — that constitutes the foundation of every form of higher-order thinking.
Lenore Skenazy has made this argument for years in the context of physical play. She is making it now in the context of AI, and the translation reveals something about risk that the standard discourse consistently misses.
Risk teaches because it provides consequences. Consequences provide feedback. Feedback is the mechanism through which calibration occurs. This is not a metaphor or a philosophical position. It is the operational logic of learning, confirmed across every domain the research has examined, from motor skill acquisition to social development to the formation of ethical judgment. A child who never encounters consequence never calibrates. She does not learn that the branch can break, that the friend can be hurt, that the shortcut can lead to the wrong place, that the confident answer can be wrong. She is protected from each of these discoveries, and each protection removes a calibration opportunity, and the accumulated absence of calibration produces a person who has no internal model of where the edges are.
AI presents a specific kind of risk that is genuinely new, and understanding its specific character is essential to designing the learning environments that children need.
The risk of physical play is immediate, visible, and self-correcting. You reach too far and you fall. The feedback arrives at the speed of gravity. The pain is instructive — not pleasant, but clear. You know what went wrong, approximately when it went wrong, and approximately how to go wrong less next time. The calibration cycle — attempt, consequence, adjustment — operates in real time. A child can run through dozens of cycles in a single afternoon on a playground that permits real challenge.
The risk of AI is delayed, invisible, and self-concealing. You accept a fluent but incorrect explanation and you do not feel the intellectual equivalent of a fall. You feel the intellectual equivalent of a smooth landing, because the AI's output is architecturally designed to sound correct. The error does not announce itself. It embeds. It becomes part of your understanding without the friction that would have forced you to examine it. And the accumulation of uncritically accepted errors does not produce a sudden, dramatic failure. It produces a gradual, imperceptible erosion of the capacity to distinguish between what you know and what you have been told.
This difference in feedback character is real, and it is the thing that makes the AI playground genuinely harder to design than the physical one. On a physical playground, the challenge is calibrating the height of the equipment — tall enough to produce meaningful risk, short enough to prevent catastrophic injury. The feedback mechanism is built into the physics. On the AI playground, the feedback mechanism is not built in. It must be constructed, deliberately, by adults who understand both the technology and the developmental needs of the children using it.
The construction looks like this: A child uses Claude. She gets an answer. An adult — a teacher, a parent, a mentor — asks her to evaluate the answer. Not to check whether it is right, which implies that the adult already knows and is testing the child. To evaluate it, which implies genuine uncertainty and genuine engagement. "Does this make sense to you? Does anything seem off? What would you want to verify before you relied on this?"
These questions make the invisible feedback visible. They create a calibration cycle where the technology does not naturally provide one. They convert the delayed, concealed risk of uncritical acceptance into the immediate, visible risk of being wrong about whether the AI was right. The child who says "I think it's fine" and then discovers, through investigation, that it was not fine has just experienced the intellectual equivalent of the fall from the monkey bars. The landing was not pleasant. The calibration was invaluable.
Skenazy emphasizes that this learning cannot be provided through lectures about critical thinking. Lecturing a child about the importance of evaluating AI output is the cognitive equivalent of lecturing a child about the importance of balance while she sits in a chair. The lecture conveys information. It does not develop the capacity. The capacity develops through practice, through the bodily experience of wobbling and recovering, through the felt sense of what correct evaluation feels like from the inside. You cannot think your way to critical thinking. You can only practice your way there.
The creativity dimension of risk deserves particular attention because it connects Skenazy's framework to a concern that appears throughout The Orange Pill: the worry that AI may be producing competent output while eroding the capacity for original thought.
The most reliably creative people, across every domain the research has examined, share a common developmental experience: extensive periods of unstructured time in childhood. Not enrichment programs. Not creativity workshops. Empty time. Boring time. Time in which the child had nothing to do and no one to entertain her and was forced, by the sheer pressure of having a brain with nothing to process, to generate her own stimulation.
Boredom is creative pressure. It is the developmental equivalent of the water pressure that forces a seed to crack its shell and send a root downward. Without the pressure, the seed remains inert. The child with constant stimulation — a device in every idle moment, a structured activity in every afternoon, an AI companion available to generate ideas on demand — never experiences the pressure. She never cracks the shell. The creative capacity remains latent, unexpressed, and eventually atrophied through disuse.
AI-assisted creation and unassisted creation are not the same cognitive operation, and the difference matters for development. Unassisted creation forces you inward. You must reach into your own resources — your accumulated experiences, your half-formed ideas, your aesthetic sensibilities that have not yet been articulated — and find something there. The reaching is the work. The finding is the learning. The discovery that you had something worth finding is the identity formation. AI-assisted creation forces you outward. You describe what you want, and the tool produces options. You curate, select, refine. Both processes produce output. Only one produces the specific self-knowledge that comes from discovering what is inside you before you ask a machine to supplement it.
This does not mean AI-assisted creation is valueless. It means it develops different capacities than unassisted creation, and a child who experiences only one is developing an incomplete cognitive repertoire. The child needs both: the constraint of creating with nothing but her own resources, which builds creative self-knowledge and the tolerance for the discomfort of the blank page, and the amplification of creating with AI assistance, which extends reach and reveals possibilities that unaided imagination might not have found.
Skenazy's framework suggests a developmental sequence rather than a prohibition. Give the child unstructured time first. Let her experience boredom, frustration, the itch of an unstimulated mind. Let her discover that she can generate her own engagement from internal resources. Let her build the creative capital that comes from years of unmediated experience. Then introduce AI as an amplification of something that already exists, not as a substitute for something that never developed.
The child who knows what she thinks before she asks a machine to think it for her is a child who can use AI as a tool. The child who has never experienced the pressure of her own unaided cognition, who has never sat with the discomfort of not knowing what to create and finding a way through it, reaches for AI as a crutch. The difference is not in the technology. It is in what the child brings to the technology. And what she brings is determined by whether the adults in her life permitted the risk of boredom, the risk of frustration, the risk of having nothing to do and no one to do it with, and the extraordinary creative education that only those risks can provide.
Risk teaches. Physical risk teaches physical calibration. Social risk teaches social intelligence. Intellectual risk teaches critical judgment. Creative risk — the risk of the blank page, the empty afternoon, the unstimulated mind — teaches the capacity to generate rather than consume, to originate rather than curate, to know what you think before you ask someone or something else to think it for you.
Eliminating these risks eliminates the teaching. Every padded playground, every structured afternoon, every AI answer that arrives before the question is fully formed removes a calibration opportunity that the child needed and will not get back. The losses are invisible and cumulative and irreversible in the specific sense that the developmental window in which they should have occurred eventually closes.
The children are capable of handling the risks. They have always been capable. The question has always been whether the adults were capable of letting them.
On a Monday morning in January 2026, a high school English teacher in suburban New Jersey opened her inbox to find a system-wide directive: effective immediately, all student work would be processed through AI detection software. Students found to have used artificial intelligence in the production of any submitted assignment would receive a zero and a referral to the academic integrity office. The directive came with a link to the software vendor's website, which promised ninety-eight percent accuracy in distinguishing human-written text from AI-generated text.
By Wednesday, three students had been flagged. Two were non-native English speakers whose syntactically precise prose — the hard-won product of years studying English as a second language — had triggered the algorithm's pattern-matching for machine-generated text. The third was a student with an unusually formal writing style inherited from parents who were both academics. None of the three had used AI. All three were required to defend their intellectual integrity in front of an administrative panel, producing handwritten drafts and outlines to prove they had done their own thinking.
The teacher who shared this story — she shared it at a conference, anonymized but visibly angry — described it as the moment she understood that the institutional response to AI had become more harmful than the thing it was designed to prevent. The school had deployed a tool to catch cheaters. The tool had caught the students least likely to cheat and most in need of institutional support. It had subjected them to a process that was humiliating, time-consuming, and based on the premise that their writing was too good to be their own. The message, received clearly by the students and by everyone who heard about the incident, was that competence itself was now suspect.
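There is a piece of arithmetic hiding inside that story, and it is worth making explicit, because it shows the failure was predictable before the software was ever installed. The numbers below are illustrative assumptions rather than figures from the vendor or the school, but the logic holds for any detector hunting something rare:

```latex
% Illustrative base-rate arithmetic; the counts are assumptions, not data.
% Suppose 1,000 essays, 50 of them (5%) actually AI-assisted, and read
% the vendor's "98% accuracy" as a 98% true-positive rate and a
% 2% false-positive rate.
\text{true flags} = 50 \times 0.98 = 49
\qquad
\text{false flags} = 950 \times 0.02 = 19
\qquad
P(\text{AI-assisted} \mid \text{flagged}) = \tfrac{49}{49 + 19} \approx 0.72
```

Under these assumptions, more than one accused student in four is innocent, and if only one percent of students actually cheat, the flagged pool flips to majority innocent: 9.8 expected true flags against 19.8 false ones. The three students were not an anomaly. They were the arithmetic.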
Lenore Skenazy would recognize this story instantly, because she has been collecting its structural equivalents for two decades. The school that banned running at recess to prevent injuries and produced children who could not manage their own physical energy. The camp that required helmets for tetherball and communicated to children that even the most benign activity was lethally dangerous. The parent investigated by child protective services for allowing a ten-year-old to walk to a park. In each case, the institution identified a risk, implemented a protection, and the protection caused damage that exceeded the risk it was designed to mitigate. The structure is always the same. The damage is always borne by the people the protection was supposed to serve.
The educational response to AI between late 2025 and mid-2026 followed this structure with a fidelity that suggested the institutions had learned nothing from any previous cycle. The responses ranged from absolute prohibition — no AI use permitted under any circumstances — to the incoherent. AI may be used for research but not for writing, a distinction that dissolved on contact with the reality that research and writing are not separable activities: the mind that evaluates arguments is the same mind that writes about them. AI may be used for brainstorming but not for drafting, a boundary so impossible to police that it functioned primarily as a loyalty test — will you follow rules that make no operational sense?
The enforcement mechanisms were more revealing than the policies. AI detection software, despite its documented unreliability, was deployed across thousands of institutions because it provided something institutions valued above accuracy: the appearance of control. A school that ran student work through detection software could demonstrate to parents, to boards, to accreditors that it was doing something about AI. The something did not need to work. It needed to be visible.
This is the logic of institutional safetyism, the concept Skenazy helped develop alongside Haidt and Lukianoff. Safetyism prioritizes the feeling of safety over the reality of development. It optimizes for the appearance of protection rather than the substance of growth. And it produces policies that are shaped not by what children need but by what institutions fear — specifically, the fear of being seen as insufficiently cautious.
No administrator was ever fired for banning AI. Administrators have been fired for being perceived as permissive. The incentive structure rewards prohibition with mathematical precision: prohibition eliminates institutional risk, even when it amplifies developmental risk. A school that bans AI cannot be blamed when a student submits AI-generated work, because the school made the rules. A school that integrates AI can be blamed for every misuse, every dependency, every instance of a student learning something the hard way. The asymmetry is lethal to good policy. It guarantees that institutional decisions will be driven by liability rather than development, by what can go wrong rather than what should go right.
The assessment infrastructure that AI disrupted was already broken, and the disruption simply made the brokenness impossible to ignore. The system evaluated outputs: essays, exams, problem sets. It assumed a coupling between the output and the learning — if the essay was competent, the student understood the material, because producing a competent essay required understanding. AI severed this coupling with the efficiency of a surgical instrument. A student could produce an impeccable essay without understanding a single idea it contained. The output existed. The learning did not.
The institutional response was to attempt to restore the coupling through prohibition. Ban the tool. Force the student to produce the output unassisted. Preserve the conditions under which the essay reliably indicated understanding. The response was understandable. It was also an attempt to preserve the integrity of a measurement system by refusing to acknowledge that the thing being measured had fundamentally changed. You cannot recalibrate a compass by ignoring the movement of the pole.
The teacher in Brooklyn who stopped grading essays and started grading questions understood what the detection-software approach did not: that the coupling between output and learning was permanently broken and could not be restored through prohibition. The only way forward was to assess the thing that actually mattered — the quality of the student's thinking — rather than the artifact that used to serve as a proxy for it.
Her method was disarmingly simple. She gave students a topic, access to AI, and a single instruction: produce the five questions you would need to ask before you could write an essay worth reading. Questions of the AI. Questions of the source material. Questions of yourself. The assignment could not be completed through uncritical acceptance of AI output because the assignment required the identification of what you did not understand, which is a cognitive operation that AI cannot perform on your behalf. You have to know what you don't know. That knowing is the beginning of real thinking, and it is the one thing the machine cannot do for you.
The results were consistent and striking. Students who produced the best questions demonstrated the deepest engagement with the material. The questioning developed precisely because students were not banned from using AI but were required to use it as a starting point rather than an endpoint. The AI generated responses. The student's job was to figure out what the responses missed, where they were superficial, what assumptions they made that deserved examination. The critical faculty was not being tested. It was being built, through the specific exercise of evaluating output that was designed to sound more authoritative than it necessarily was.
This approach required the teacher to abandon the assessment model she had been trained in. It required her institution to accept that traditional metrics — word count, citation count, rubric compliance — were no longer reliable indicators of learning. It required trust in students' capacity to engage meaningfully with tools that the institution had decided were too dangerous to permit. The trust was uncomfortable. The results were unmistakable.
Skenazy's framework predicts exactly this outcome, because it is the same outcome that occurs in every domain where scaffolded autonomy replaces prohibition. Children who are allowed to navigate with support develop navigation skills. Children who are prohibited from navigating develop anxiety about navigation and no skills with which to manage it. The mechanism is identical whether the domain is a subway system, a playground, or a generative AI tool. Access plus support produces competence. Prohibition produces ignorance dressed as safety.
The schools that recognized this earliest — that redesigned assessment around process rather than output, that treated AI as a feature of the learning environment rather than a contaminant — were producing students equipped for the world they would actually enter. Students who had practiced evaluating AI output, who had developed the instinct to question fluent confidence, who understood from direct experience both what AI could do and where it failed. These students were the educational equivalent of children who had walked to school: experienced, calibrated, and confident in their own capacity to navigate.
The schools that maintained prohibition were producing something else. Students who had never evaluated AI output because they had never been permitted to encounter it. Who entered the workforce without the critical faculty that only practice could build. Who had been kept safe from the tool and were therefore unsafe with it — the same paradox that Skenazy had documented in every other domain of overprotected childhood, now playing out at institutional scale with institutional confidence and institutional blindness.
The overprotection trap in schools was not a failure of good intentions. The intentions were fine. The failure was the same failure it had always been: the assumption that protection and development were the same thing, that keeping children away from challenge was the same as keeping them safe, that the absence of risk was the presence of growth. The assumption was wrong in 1990 when schools started banning tag. It was wrong in 2010 when schools started banning phones. It was wrong in 2026 when schools started banning AI. And it would remain wrong until institutions developed the courage to do the harder thing: create conditions in which challenge produced learning, risk produced calibration, and children were trusted to develop the capacities that the adults, despite their best intentions, could not develop for them.
The hardest thing Lenore Skenazy ever asked a parent to do was nothing. Not nothing in the sense of indifference. Nothing in the sense of deliberate restraint — the discipline of watching a child struggle and choosing not to intervene. The discipline was physical. Parents described it in bodily terms: the hand that reached out and had to be pulled back, the voice that rose in the throat and had to be swallowed, the entire nervous system screaming that the child was in danger and needed rescue.
The danger was usually a walk to school. Or a disagreement with a friend. Or a math problem that had been producing frustrated tears for fifteen minutes. Small dangers, by any objective measure. Dangers that a healthy child could navigate with the resources already at her disposal. But the parent's nervous system did not consult objective measures. It consulted the worst-first database, the accumulated catalog of terrible things that could happen, and it found the entry it was looking for, and it demanded intervention.
Skenazy's message to these parents was simple, evidence-based, and extraordinarily difficult to follow: the struggle is not the problem. The struggle is the education. Your intervention is not helping. It is preventing the learning that would make your intervention unnecessary.
The AI version of this parenting challenge is harder than the physical version, and Skenazy is honest about why. When a child struggles with a walk to school, the struggle is visible. You can see her at the corner, looking both ways, hesitating, deciding. You can calibrate your concern against observable evidence. You can see that she is managing, and your nervous system can, gradually, be persuaded to stand down.
When a child struggles with AI — or, more precisely, when a child fails to struggle with AI, when she accepts its output uncritically and produces work that is fluent and hollow — the failure is invisible. The child does not look like she is failing. She looks like she is succeeding. She produced the essay. She completed the project. The work is polished, the grammar is impeccable, the arguments are coherent. From the outside, every indicator reads green. The failure is internal, structural, and detectable only through the specific kind of engagement that overprotective parenting has systematically eliminated from the parent-child relationship: the honest, non-evaluative conversation about what the child actually understands.
Skenazy proposes a practice she calls "fail forward," adapted from design thinking and translated into the domestic register that is her natural habitat. Fail forward means creating conditions in which failure is expected, supported, and converted into raw material for learning. It means telling a child, before she uses AI for a school project, that she is going to make mistakes with this tool and that the mistakes are the assignment. Not the essay. The mistakes. What went wrong, what she missed, what she accepted without questioning, what she would do differently.
The critical distinction is between failure as catastrophe and failure as data. In the overprotective framework, failure is evidence that the child was not ready, that more protection is needed, that the autonomy was premature. In Skenazy's framework, failure is evidence that the child is learning — that she has encountered a real challenge, tested her capabilities against it, discovered the gap between her current competence and the competence the challenge required, and acquired information that will narrow that gap on the next attempt. The first framework responds to failure with restriction. The second responds with curiosity: What happened? What did you learn? What will you try next time?
Applied to AI, this looks like a parent who discovers her child submitted an AI-assisted essay and responds not with confiscation but with questions. "Show me what you asked it. Show me what it gave you. What parts did you keep? Why those parts? What would you change if you did it again?" The questions are not punitive. They are not disguised lectures. They are genuine inquiries into the child's process, designed to develop the metacognitive awareness — the ability to think about one's own thinking — that is the most valuable skill the AI age demands.
The child who has this conversation five, ten, twenty times develops something that no prohibition can provide: a felt sense of the difference between AI output she understands and AI output she merely accepted. The distinction is not intellectual. It is experiential, the way the distinction between a solid branch and a rotten one is experiential for the child who has tested both. You cannot lecture a child into this knowledge. You cannot test her into it. You can only provide the conditions under which she develops it through practice, which means you must provide the conditions for failure, because failure is the feedback mechanism through which the knowledge is built.
The specific AI failure that warrants the most careful attention in Skenazy's framework is the failure of dependency. This is the slow-developing, difficult-to-detect pattern in which the child gradually stops attempting to think through problems on her own because asking the AI is faster, more comfortable, and produces output that is reliably smoother than anything she could generate herself. The dependency does not announce itself. It develops the way physical deconditioning develops — imperceptibly, through the accumulation of small capitulations, each one individually trivial, collectively devastating.
The child does not notice she is losing something because she has never fully possessed it. The capacity for sustained independent thought — the ability to sit with a difficult idea until it yields its meaning, to tolerate the discomfort of not knowing the answer, to generate an argument from scratch rather than selecting among arguments generated by a machine — is a capacity that develops through exercise and atrophies through disuse. A child who has always reached for AI when thinking gets difficult has never exercised the specific cognitive muscle that thinking-through-difficulty builds. The muscle has not been weakened. It has never been strengthened. And the child does not know what she is missing, because she has never experienced what the muscle at full strength feels like.
Skenazy's solution is not to eliminate AI from the child's environment. It is to create structured experiences of functioning without it. Not as punishment. Not as deprivation. As experiment. She proposes periods — an afternoon, a weekend, a week — in which the child works without AI and then reflects on the experience. What was harder? What was easier? What did you discover you could do that you assumed you could not? Where did you get stuck, and how did you get unstuck?
The reflection matters as much as the experience. A child who works without AI and has no one to discuss the experience with may simply conclude that AI is better and return to it with reinforced dependency. A child who works without AI and then has a conversation — with a parent, a teacher, a peer — about what the experience revealed is processing the experience at a metacognitive level, converting the raw data of the experiment into the kind of self-knowledge that informs future choices. She is learning not just what AI can do but what she can do, and the second knowledge is the one that determines whether AI functions as a tool or a crutch in her life.
The distinction between tool and crutch is determined by one variable: whether the user can function without it. A person who uses a hammer is using a tool. A person who cannot drive a nail without a hammer is still using a tool, because the hammer extends a capability that exists independently. A person who cannot think through a problem without AI is using a crutch, because the AI is not extending a capability. It is replacing one. The distinction is not visible in the output. A nail driven by a skilled carpenter and a nail driven by someone who has never held a hammer both hold. The distinction is visible only in what happens when the tool is removed.
This is why Skenazy insists on the periodic removal. Not because AI is bad. Because the removal is the diagnostic. It is the test that reveals whether the child has developed the underlying capability that the tool is supposed to extend. If the child can function without AI, can think through problems, can generate arguments, can sit with uncertainty and find her way through it — then her AI use is tool use, and it should be encouraged, because the tool amplifies a genuine capability. If the child cannot function without AI, if the removal produces helplessness or panic or a quality of work so dramatically inferior that it reveals the previous quality was entirely the machine's — then her AI use has become dependency, and the dependency needs to be addressed not through permanent prohibition but through the deliberate, supported rebuilding of the underlying capability.
The rebuilding looks like the practices Skenazy has advocated for decades, updated for a new context. Unstructured time without devices. Problems that must be solved without digital assistance. Conversations that move at the speed of human thought rather than the speed of machine output. The experience of boredom — genuine, uncomfortable, productive boredom — which is, as the neuroscience consistently demonstrates, the soil in which the default mode network activates and the creative, synthesizing, self-reflective capacities of the mind do their deepest work.
The child who can function with AI and without it, who has experienced both the amplification and the constraint, who knows from direct experience what her own thinking feels like before and after the tool is involved — that child has developed the calibrated relationship with AI that every parent hopes for and that no prohibition can produce. She uses the tool by choice rather than necessity. She knows when it helps and when it hinders. She can recognize the moment when assistance becomes dependency because she has experienced both states and knows the difference in her body, in her mind, in the quality of attention she brings to the work.
This calibration is the product of failure, of the specific experience of using AI imperfectly, of accepting output uncritically and later discovering the error, of becoming dependent and then rebuilding independence, of producing hollow work and developing the sense to detect the hollowness. Each failure deposited a layer of judgment. Each layer made the next interaction wiser. The failures were not obstacles to the development of a healthy relationship with AI. They were the mechanism of its development.
Letting children fail with AI requires the same parental discipline that letting children fail in any domain requires: the ability to tolerate discomfort, to resist the impulse to rescue, to trust that the child's struggle is producing something that the parent's intervention would prevent. It is the hardest thing Skenazy asks parents to do. It is also the most important.
The children can handle it. They have always been able to handle it. What they cannot handle is being told, through the relentless architecture of an overprotected childhood, that they cannot.
Alison Gopnik has spent decades studying something that every parent observes and almost no parent correctly interprets: the fact that children are spectacularly bad at doing what adults want them to do and spectacularly good at doing something else entirely. The something else is learning. Not the kind of learning that produces correct answers on standardized tests. The kind of learning that produces new hypotheses, unexpected connections, and the cognitive flexibility to navigate environments that have never been navigated before.
Gopnik's research at UC Berkeley demonstrated that children's brains operate on different computational principles than adult brains. Adult cognition is optimized for exploitation — the efficient application of known solutions to familiar problems. A skilled adult can drive to work, draft a memo, and navigate a meeting without conscious deliberation, because decades of experience have converted these challenges into routines that run on autopilot. This efficiency is the product of development. It is also its limitation. The adult brain, precisely because it is so good at applying what it already knows, is less good at discovering what it does not know. The pathways are worn deep. The ruts are comfortable. Novel solutions require the kind of broad, unfocused exploration that efficiency has trained out of the system.
Children's brains are optimized for exploration. They are less efficient, less focused, less capable of sustained attention on a single task — and dramatically more capable of broad learning, hypothesis generation, and the integration of novel information. A four-year-old in a room full of toys does not systematically investigate each one. She bounces between them, combining them in ways that would never occur to an adult, testing hypotheses that an adult would dismiss as absurd, and acquiring, through this apparently chaotic process, a model of the world that is broader and more flexible than any amount of systematic instruction could produce.
This is not a deficiency that children grow out of. It is a feature that development trades away. The exploration that makes children exhausting to supervise is the mechanism that makes them extraordinary learners. And the supervised childhood that Lenore Skenazy has spent her career critiquing is, in computational terms, an environment that forces exploitation on brains designed for exploration — that demands focus, routine, and the efficient production of expected outputs from cognitive systems that are built to do the opposite.
The AI age has produced a landscape that rewards exploration over exploitation more heavily than any previous technological era. The skills that matter most — the ability to ask productive questions, to navigate unfamiliar domains, to synthesize information across boundaries, to generate novel approaches to problems that have no established solutions — are exploration skills. They are the skills that children's brains are specifically designed to develop. And they are the skills that the supervised, standardized, output-optimized childhood has been systematically preventing children from developing for thirty years.
This is the opportunity that Skenazy sees in AI, and it is larger than the standard discourse acknowledges. AI tools, properly integrated into a child's learning environment, can restore something that the institutional structure of modern childhood has nearly destroyed: the conditions for genuine intellectual exploration.
Consider what AI makes possible for a curious child who is given access and support rather than prohibition and surveillance. A fourteen-year-old interested in marine biology can follow a thread of inquiry from coral reef ecosystems to ocean chemistry to climate modeling to international environmental policy to the ethics of intergenerational obligation — not because a curriculum prescribed this path but because her curiosity led there and the AI was capable of following. The path is her own. The connections are her own. The questions that emerge at each junction — Why does warming affect some reefs more than others? Who decides which scientific evidence counts in policy debates? What do we owe people who have not been born yet? — are questions that no syllabus anticipated because no syllabus could have anticipated the specific trajectory of this specific child's specific curiosity.
This is free-range learning. Not unstructured in the sense of chaotic. Unstructured in the sense of self-directed — the child following her own cognitive path through intellectual terrain that is rich enough to reward exploration and challenging enough to demand it. The AI does not replace the child's curiosity. It extends its reach. It allows the child to go further, faster, across more domains than any previous tool has made possible for a learner working independently.
Skenazy is careful to distinguish between free-range and feral. The distinction matters because it is the distinction her critics have consistently failed to make, and failing to make it has allowed them to caricature her position as irresponsible permissiveness. Free-range is not the absence of structure. It is a different kind of structure — one that provides boundaries without prescribing paths, that offers support without dictating outcomes, that trusts the child's capacity to navigate while maintaining the conditions that make navigation productive rather than random.
In the AI context, the structure looks like this. The child has access to the tool. She has a relationship with an adult — parent, teacher, mentor — who is available for conversation about what she is finding, what confuses her, what excites her, where she suspects the AI might be wrong. She has periodic experiences of working without the tool, so that she knows what her own cognition feels like unassisted and can distinguish between AI-amplified thinking and AI-replaced thinking. She has encounters with other children — not mediated by AI but face-to-face, in the friction-rich environment of human social interaction — so that her intellectual development is complemented by the social and emotional development that only human relationship can provide.
The framework is not complicated. It is the same framework Skenazy has advocated for physical autonomy, translated to the intellectual domain: access, support, periodic challenge, and trust. The child walks to school. The parent checks in when she arrives. The child uses AI. The parent asks what she learned. The calibration is ongoing, the trust is incremental, and the competence that develops through the process is genuine, earned, and transferable.
The developmental-stage question deserves direct address because the discourse about children and AI treats "children" as a monolith, which is like treating "food" as a monolith when prescribing a diet. A six-year-old's appropriate engagement with AI is categorically different from a sixteen-year-old's, and the difference is not merely quantitative — more access for older children — but qualitative, reflecting the different cognitive capabilities and developmental needs at each stage.
For young children, roughly ages five through nine, the primary developmental task is the construction of a basic model of how the world works — physical causality, social norms, the relationship between action and consequence. AI engagement at this stage should be minimal, supervised, and oriented toward wonder rather than utility. The child who asks Claude why the sky is blue and gets an explanation she partly understands is having an experience that supplements the primary developmental work of this stage. The child who uses AI as a replacement for the hands-on, sensory-rich, physically grounded exploration that this stage requires is being shortchanged, because the exploration cannot be outsourced. It must be embodied. The stick and the puddle are not quaint relics of a pre-digital childhood. They are the essential equipment of a developmental stage that builds its understanding through physical interaction with the material world.
For middle childhood, roughly ages nine through thirteen, the primary developmental task is the acquisition of competence — the discovery, through practice, of what you can do. This is the stage where Bandura's mastery experiences are most critical, where the child is building the self-efficacy beliefs that will influence her approach to challenge for the rest of her life. AI engagement at this stage should be substantial but scaffolded — the child using the tool with regular opportunities for reflection, evaluation, and the experience of functioning without it. This is the stage where the fail-forward practice is most valuable, because the child is old enough to reflect on her own cognitive process and young enough for the reflection to shape the habits that are still forming.
For adolescents, roughly ages thirteen through eighteen, the primary developmental task is the construction of identity — the integration of capabilities, values, and aspirations into a coherent sense of self. AI engagement at this stage can be extensive, because the adolescent has (or should have, if earlier stages were navigated well) the cognitive infrastructure to use the tool critically. The adolescent who uses AI to explore career possibilities, to build projects that test her capabilities, to engage with ideas that challenge her emerging worldview, is doing the developmental work of this stage through the tools of her era. Prohibition at this stage is not just unnecessary. It is actively harmful, because it denies the adolescent the opportunity to integrate AI use into her identity formation — to discover, through practice, what kind of AI user she is, what her relationship with these tools will be, how she will maintain her intellectual autonomy in a world saturated with artificial intelligence.
The stage-appropriate framework is not a rigid prescription. Children develop at different rates, and the appropriate level of AI engagement for a particular child at a particular age depends on that child's specific cognitive maturity, social development, and history of autonomous experience. A twelve-year-old who has spent her childhood navigating the world with increasing independence — walking to school, managing her own schedule, resolving her own conflicts — is better equipped for substantial AI engagement than a twelve-year-old who has been continuously supervised and has never developed the self-regulatory capacities that independent AI use demands.
This is the point where Skenazy's physical-autonomy framework and her AI-autonomy framework converge most powerfully. The children best prepared for AI are not the children who have had the most technology education. They are the children who have had the most autonomy. The children who have been walking to school, cooking dinner, negotiating with peers, managing boredom, and discovering their own capabilities through the oldest and most reliable method available: trying things, failing at some of them, and learning from the experience.
AI does not require a new theory of child development. It requires the application of the theory we already have — the theory that says children learn through encounter, that competence is built through practice, that autonomy produces capability, and that the adults who trust children to navigate complexity are the adults who produce children capable of navigating complexity. The theory is old. The tools are new. The children are the same remarkable, resilient, endlessly adaptive creatures they have always been.
The question that remains is whether the adults can adapt as quickly as the children can. The evidence, accumulated across decades of increasingly restrictive parenting practices and increasingly anxious institutional responses to every new technology, is not encouraging. But Skenazy has been fighting discouraging evidence for nearly two decades, armed with data, humor, and the stubborn conviction that common sense, applied consistently, will eventually prevail over institutional panic.
Free-range learning in the AI age is not a fantasy. It is a practice, available to any parent willing to hand the child the MetroCard and trust her to find her way home.
A child is sitting at a kitchen table with a laptop open to Claude. She is twelve. She has been given a school assignment about the causes of the American Civil War, and she has typed a question into the chat interface, and she is reading the response with the particular concentration of someone encountering an idea for the first time.
Her mother is in the next room. She can see the child's back, the laptop screen, the slight forward lean that indicates engagement. She does not know what the child is reading. She does not know whether the AI's response is accurate, nuanced, or subtly misleading. She does not know whether her daughter is absorbing the response critically or accepting it the way she would accept a textbook — as authoritative, beyond question, correct by institutional default.
The mother wants to walk over and look at the screen. She wants to read what Claude said and evaluate it herself and point out any errors and make sure her daughter is learning the right things. She wants to intervene, because intervention is what twenty-first-century parenting has trained her to do.
Lenore Skenazy's entire body of work comes down to what happens in the next thirty seconds. Does the mother walk over, or does she stay where she is?
The answer, in Skenazy's framework, is neither. Or rather, both — but in a specific sequence, with specific intent, informed by a specific understanding of what the child needs from this moment.
The mother stays where she is. She lets the child read. She lets the child sit with whatever the AI has produced — the accurate parts, the oversimplified parts, the parts that might be subtly wrong in ways the child cannot yet detect. She lets the child have the experience of encountering information without an adult mediating it. She lets the child be alone with the tool for long enough that the interaction is the child's own rather than a performance conducted under parental observation.
Then, later — at dinner, or before bed, or during the kind of unhurried conversational time that is the scarcest resource in the modern household — she asks. Not "What did Claude tell you?" which frames the conversation as surveillance. "What are you thinking about the Civil War?" which frames it as genuine interest. "Did anything surprise you?" which invites reflection. "What do you still want to know?" which honors the child's curiosity as the engine of her learning rather than a problem to be managed.
This is trust, then verify. Not blind trust. Not surveillance disguised as interest. Trust extended deliberately, with the knowledge that the child will sometimes get it wrong, followed by verification conducted through relationship rather than monitoring. The verification is not checking the AI's output. It is checking the child's engagement — whether she is thinking, questioning, developing the critical faculty that only practice can build.
Skenazy's philosophy has never been about abandoning children to risk. It has been about calibrating the relationship between risk and support so that both serve development. The calibration has a specific structure: start with supervised experience, move to independent experience with check-ins, progress to full autonomy with periodic conversation. At each stage, the trust is slightly ahead of the evidence, because the evidence can only be created through the exercise of trust. The child demonstrates capability she was not previously known to have, and the demonstration justifies the next increment of freedom, and the freedom produces the next demonstration of capability. The cycle is self-reinforcing, and it produces, over months and years, a young person who has accumulated enough mastery experiences to believe, with evidence, that she can handle what the world presents.
The AI application of this calibration maps directly onto the driving analogy that every parent intuitively understands. You do not hand a sixteen-year-old the keys and say "good luck." You sit in the passenger seat while she drives around the parking lot. You move to quiet residential streets. Then busier roads. Then the highway. Then night driving, then rain, then the specific challenges that only experience can prepare her for. At each stage, you are present. At each stage, you are letting go slightly more than the stage before. At each stage, the child is building competence that the previous stage did not require, and the competence justifies the next release.
With AI, the sequence looks like this. For the young child: shared exploration, parent and child using the tool together, parent modeling the critical engagement she wants the child to develop. For the older child: independent use with regular conversation — not monitoring, but the kitchen-table practice of talking about what the tool produced and what the child made of it. For the adolescent: autonomous use with the child's own judgment as the primary guide, supplemented by the relationship capital that earlier stages built — the trust that allows a teenager to say to a parent, "I found something interesting and I'm not sure if it's right," without fear of losing access to the tool.
The five-element framework for the kitchen-table conversation provides the operational structure. Curiosity: the parent asks the child to show what she has been doing, not with evaluative intent but with genuine interest. Shared exploration: the parent and child use the tool together, pursuing a question that interests them both. Honest evaluation: they look at what the AI produced and talk about its quality — what seems right, what might be off, what they would want to check. Reflection: the parent asks the child how the experience felt, whether she learned something, where she got stuck. Agreement: they negotiate guidelines together — not rules imposed from above but agreements between people who have explored the territory together and understand what it contains.
The framework is portable. It works at any age, with any AI tool, in any family configuration. It works because it is built on the mechanism that has always produced capable human beings: the combination of experience and relationship, the child navigating complexity while an adult provides the specific, calibrated support that converts raw experience into learning.
Edo Segal asked in The Orange Pill the question that every parent in the AI age is asking in private: "Are you worth amplifying?" Skenazy's response, when the question is applied to children, is characteristically direct: the question itself is wrong. Children are always worth amplifying. The real question is whether adults trust them enough to let the amplification begin.
But Skenazy would add a qualification that her seventeen years of advocacy have made unavoidable. The amplification works differently depending on what the child brings to it. The child who brings curiosity, critical capacity, and the self-knowledge that comes from years of autonomous experience is amplified into something extraordinary — a young mind that can range across domains, synthesize information, generate novel questions, and maintain its independence in the presence of a system designed to be more fluent than any human. The child who brings passivity, uncritical acceptance, and the dependency that comes from years of supervised, structured, adult-mediated experience is amplified into something else — a more efficient consumer of machine-generated output, fluent but hollow, capable of prompting but not of thinking.
The difference is not in the AI. The difference is in the childhood that preceded it. And this is the deepest point of convergence between Skenazy's physical-autonomy framework and the AI revolution: the children who will thrive with these tools are the children who were allowed to develop without them. The children who walked to school, who played unsupervised, who experienced boredom and turned it into creativity, who failed and recovered and built the internal architecture that failure and recovery produce — these are the children who will use AI as a tool rather than a crutch, because they have a self to bring to the amplification. They know what their own thinking feels like. They know what they are capable of. They have the identity of someone who can handle complexity, because they have handled it, many times, in the years before the machine arrived.
The children who were driven to school, supervised at play, rescued from boredom, protected from failure, and managed through every challenge by adults who meant well and did harm — these are the children most at risk. Not because AI will damage them. Because they arrive at the AI with less to amplify. The internal architecture that would have made the amplification productive was never built, because the experiences that would have built it were never permitted.
This is why Skenazy's message for the AI age is the same message she has been delivering since 2008, delivered now with an urgency that seventeen years of advocacy have not diminished. Let the children walk to school. Let them play without supervision. Let them fail, argue, get lost, get bored, figure things out. Give them the thousand small experiences of autonomy that build the competence, the confidence, and the self-knowledge that no technology can provide and no institution can install.
Then give them the AI.
Give it to them with trust, with support, with the kitchen-table conversations that convert experience into wisdom. Give it to them knowing they will sometimes use it badly, and that the bad use is the learning. Give it to them with the calibrated faith that has always been the foundation of effective parenting: the faith that the child is more capable than she looks, that the struggle is producing something invisible but real, that the ground will hold because you built the ground, one experience of autonomy at a time, across the years when it mattered most.
Trust, then verify. The oldest parenting technology in existence, applied to the newest tools. It worked for subway rides. It worked for bicycles. It worked for every expansion of childhood freedom that the worst-first thinkers said would end in catastrophe and that ended, instead, in a more capable child.
It will work for AI. Not because AI is safe. Because the children are ready. They have always been ready. The question was never about them.
It was about us.
The child I cannot stop thinking about is not mine.
She appeared in a story someone told me at a conference — a twelve-year-old who had been caught using Claude for a school essay. Her parents took the laptop. Her school issued a warning. Everyone involved was certain they were protecting her from something. Nobody asked her what she had been doing with the tool. Nobody discovered, because nobody inquired, that she had been using Claude to explore a question about climate change that her science class had mentioned in passing and then moved on from, because the curriculum had other things to cover and the standardized test did not include follow-up questions about what happens to coral when ocean temperatures rise by two degrees.
She was not cheating. She was curious. And every adult in her life responded to the curiosity as though it were a crime.
I wrote in The Orange Pill about my son asking at dinner whether his homework still mattered if a computer could do it in ten seconds. I told him it mattered. I was not entirely sure I believed myself. Lenore Skenazy's work gave me a framework for understanding why I was uncertain and what to do with the uncertainty. The homework matters — but not for the reason the institution thinks. It matters because the struggle is where the learning lives. And the question my son was really asking was not about homework. It was about whether the adults in his life trusted him to figure out when the tool helps and when it hollows.
That trust question is the one I kept running into throughout Skenazy's framework, and it is the one that hit hardest. Not because I doubt my children's capabilities — I have watched them navigate complexity that would have overwhelmed me at their age. Because I doubt my own willingness to let them. Because worst-first thinking is not something that happens to other parents. It is the reflexive, visceral, three-in-the-morning anxiety of every parent who loves a child enough to imagine the worst and who is disciplined enough, on good days, to act on the evidence instead.
Skenazy's insight is deceptively simple and genuinely hard to practice: the capacity you are waiting for your child to demonstrate can only be developed through the experience you are withholding. Readiness does not precede opportunity. It is produced by it. This is true for subway rides. It is true for playgrounds. And it is true for the most powerful intellectual tools human beings have ever built.
The twelve-year-old at the kitchen table with Claude open on her laptop is climbing a piece of playground equipment. The equipment is intellectual rather than physical, and the falls are invisible rather than visible, and the bruises are cognitive rather than bodily. But the learning is the same learning it has always been: the learning that comes from encounter, from attempting something you are not sure you can handle, from discovering — through the irreplaceable experience of trying — that you are more capable than anyone told you.
I build tools for a living. I know what amplification does to a signal. Feed it noise and you get louder noise. Feed it music and the room fills with sound that was not possible before. Skenazy's argument — the argument I wish I had understood years earlier — is that the signal matters more than the amplifier. And the signal is built in childhood. In the walks to school. In the unsupervised afternoons. In the failures that no one rescued the child from. In the boredom that forced her inward, toward her own resources, before anyone handed her a machine that could generate resources on demand.
Give them the childhood first. Then give them the tool.
Every powerful technology triggers the same parental reflex: imagine the worst, then prevent all contact. Lenore Skenazy has spent nearly two decades proving that this reflex — worst-first thinking — does not protect children. It disables them. Now her framework collides with artificial intelligence, and the collision reveals something the tech discourse keeps missing: the children best prepared for AI are not the ones who received the most digital literacy training. They are the ones who were trusted to walk to school, survive boredom, and fail without being rescued. Competence is not installed by curriculum. It is built through encounter. This book applies Skenazy's developmental lens to the most powerful intellectual tool ever created — and asks whether adults can find the courage to let children actually use it.

A reading-companion catalog of the 11 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that Lenore Skenazy — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →