Janusz Korczak — On AI
Contents
Cover
Foreword
About
Chapter 1: The Child Is Not a Future Adult
Chapter 2: The Right to Respect
Chapter 3: The Question as Evidence of Dignity
Chapter 4: What the Twelve-Year-Old Is Actually Asking
Chapter 5: The Violence of Premature Answers
Chapter 6: The Child's Right to Struggle
Chapter 7: Education as Accompaniment, Not Manufacture
Chapter 8: The Orphanage and the Classroom
Chapter 9: Against the Optimization of Childhood
Chapter 10: What We Owe the Question
Epilogue
Back Cover

Janusz Korczak

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Janusz Korczak. It is an attempt by Opus 4.6 to simulate Janusz Korczak's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The drawing my daughter made last Tuesday was terrible.

I mean that as a compliment. She spent forty minutes on it. The horse looked like a table with a mane. The sky was green because she'd run out of blue. She pressed so hard the crayon snapped twice, and both times she picked up the pieces and kept going.

I almost intervened. Not to criticize — to help. I could see what she was reaching for, and I knew that Claude could produce it in seconds. A perfect horse, any style, any background, any color sky she wanted. The gap between her intention and the paper was enormous, and every instinct I've built over decades of closing exactly that kind of gap was screaming at me to close it.

I didn't. And the reason I didn't has a name.

Janusz Korczak was a Polish pediatrician who ran an orphanage in Warsaw for thirty years. He built children's parliaments and children's courts. He gave kids authority over their own lives in ways that made the adults around him deeply uncomfortable. And in August 1942, when the Nazis ordered his orphans deported to Treblinka, he refused every offer of personal escape and walked into the gas chambers beside them.

That final act tends to overshadow everything else. It shouldn't. Because the thing Korczak spent his life defending — the idea that a child is not a future person but a person right now, whose present experience has full dignity and full weight — is the single most important idea missing from every conversation I hear about AI and the next generation.

We talk about preparing children for an AI-transformed workforce. We talk about teaching them to prompt well, to code alongside machines, to develop "uniquely human skills." All of it treats the child as raw material for a future product. Korczak would recognize this instantly. He fought it his entire life. The question is not what the child will become. The question is who the child is right now, and whether the systems we are building respect her present existence or merely optimize it toward a destination someone else chose.

This is another lens for the tower. Not technology, not philosophy, not economics — but the specific, irreplaceable perspective of someone who understood children better than almost anyone who has ever lived, and who paid the ultimate price for refusing to abandon them. His patterns of thought reveal something the AI discourse cannot see from inside its own fishbowl: that the friction we are so eager to remove from our children's lives might be the very thing building them into people worth amplifying.

Read the book. Then go watch your kid struggle with something hard. And don't help.

Edo Segal · Opus 4.6

About Janusz Korczak

1878–1942

Janusz Korczak (1878–1942) was a Polish-Jewish pediatrician, educator, children's author, and pioneering advocate for children's rights. Born Henryk Goldszmit in Warsaw, he trained as a physician before dedicating his life to child welfare, directing the Dom Sierot orphanage for Jewish children from 1912 until its liquidation by the Nazis in 1942. His major works include *How to Love a Child* (1919), *The Child's Right to Respect* (1929), and the children's novels *King Matt the First* (1923) and *Kaytek the Wizard* (1933). Korczak developed revolutionary institutional practices — including children's parliaments, peer courts, and student-run newspapers — that gave children genuine authority over their own communities. His central conviction, that children are not incomplete adults but full persons deserving of respect in the present moment, became the philosophical foundation for the United Nations Convention on the Rights of the Child (1989). On August 5, 1942, he refused multiple offers of personal escape and accompanied approximately 192 children from his orphanage to the Treblinka extermination camp, where they were all murdered. UNESCO established the Janusz Korczak Chair in his honor, and his legacy continues to shape global discourse on children's rights, education, and the ethics of care.

Chapter 1: The Child Is Not a Future Adult

On August 5, 1942, German soldiers arrived at Dom Sierot, the orphanage at 33 Chłodna Street in the Warsaw Ghetto. One hundred and ninety-two children were ordered to march. Janusz Korczak, the sixty-four-year-old pediatrician who had run the orphanage for thirty years, had been offered escape multiple times — by former students, by the Polish underground, by at least one German officer who recognized him as a beloved children's author. He refused every offer. He dressed the children in their best clothes. Each carried a blue knapsack and a favorite book or toy. They marched in rows of four to the Umschlagplatz, the deportation point, and from there to the trains, and from there to Treblinka, where they were murdered.

The man who walked into the gas chambers with his children had spent his entire adult life defending a single proposition: that a child is not a future person but a person now.

This proposition sounds obvious. It is not. It is one of the most radical claims ever made about human beings, and nearly every institution that touches children's lives — schools, governments, medical systems, families — operates as though it were false. The assumption buried so deep in modern civilization that most adults have stopped noticing it runs as follows: children are incomplete. They are becoming something. Their present state is a rough draft of the finished product they will someday be, and the purpose of childhood is to produce that product as efficiently as possible. Their thoughts are preliminary thoughts. Their feelings are immature feelings. Their questions are practice questions, rehearsals for the real inquiries they will conduct once they have accumulated enough experience to be taken seriously.

Korczak spent forty years dismantling this assumption with the diagnostic precision of the physician he was and the moral ferocity of the advocate he became. In The Child's Right to Respect, published in 1929, he wrote that children "are not the people of tomorrow but are people of today. They are entitled to be taken seriously. They have a right to be treated by adults with tenderness and respect, as equals." The phrasing is deceptively gentle. Its implications are explosive. If the child is a person today — fully, not provisionally — then every system that treats childhood as preparation rather than existence has committed an error so fundamental it amounts to a violation of human dignity.

The artificial intelligence revolution of 2025 and 2026 makes Korczak's claim not merely relevant but urgent in ways he could not have anticipated from the Warsaw of the interwar years. The Orange Pill, Edo Segal's account of the moment machines learned to speak human language, introduces a figure who crystallizes the urgency: a twelve-year-old who asks her mother, "What am I for?" Not what she should become. Not what career she should pursue. What she is for — right now, in a world where a machine can do her homework better than she can, compose a song better than she can, write a story better than she can.

Segal treats this question with the seriousness it deserves, offering the image of a candle flickering in cosmic darkness — consciousness as the rarest phenomenon in the known universe, the thing that wonders, that asks why, that cannot stop questioning. The image is beautiful and partially true. But Korczak's framework reveals something Segal's response, shaped by the concerns of a builder and a futurist, does not fully address. The twelve-year-old is not asking an abstract philosophical question about the nature of consciousness. She is asking an identity question from inside the experience of being twelve, and the experience of being twelve is not a diluted version of adult experience. It is its own thing — complete, legitimate, and possessed of a specific gravity that adult experience cannot replicate.

Korczak understood this because he spent decades observing children with the attentiveness of a scientist and the tenderness of a parent. His orphanage was simultaneously a home and a laboratory. He kept meticulous records. He watched how children negotiated conflict, how they organized themselves when adults stepped back, how they constructed meaning from experiences that adults would have dismissed as trivial. What he saw convinced him that children possess a form of intelligence that is not lesser than adult intelligence but different from it — more immediate, more embodied, less defended against uncertainty. A child does not yet possess the elaborate scaffolding of rationalization that adults construct to protect themselves from existential discomfort. When a twelve-year-old asks "What am I for?" she is asking from a position of genuine openness that most adults have spent decades learning to close.

This openness is not a weakness. Korczak's framework insists that it is a form of cognitive and moral strength. The child who has not yet learned to deflect existential questions with career planning, with productivity metrics, with the comfortable fiction that doing is the same as being — that child is closer to the fundamental human situation than the adult who has buried the question under decades of coping mechanisms. The twelve-year-old's question is not a symptom of confusion. It is a symptom of clarity.

The implications for how artificial intelligence should be deployed in children's lives follow directly from this principle, and they are severe. If the child is a person now, then every AI system designed for children must be evaluated not by whether it prepares the child for a productive future but by whether it respects the child's present experience. The distinction is not semantic. It determines the entire design philosophy.

Consider the dominant paradigm of AI in education as it has developed through 2025 and 2026. Adaptive learning platforms assess a child's current skill level, identify gaps relative to grade-level standards, and deliver targeted content designed to close those gaps as efficiently as possible. The child's present knowledge is treated as a deficit to be remedied. The platform's metric of success is progression — how quickly the child moves from where she is to where the curriculum says she should be. The child's experience of being assessed, of receiving content calibrated to her deficiencies, of moving through a system whose logic she cannot see and whose purposes she did not choose — that experience is not measured, because it is not considered relevant. What matters is the output: the test score, the completion rate, the grade-level equivalency.

Korczak would have recognized this paradigm immediately. It is the same paradigm he fought in the schools of early twentieth-century Poland, dressed in new technology. The child is raw material. The system is the factory. The product is the future adult. The child's present experience — her boredom, her curiosity, her resistance, her delight, her confusion, her moments of sudden, unprompted insight — is noise to be filtered out in pursuit of the signal, which is always measurable progress toward a predetermined destination.

The EU Joint Research Centre's 2022 report on artificial intelligence and children's rights identified this pattern with empirical specificity. Researchers found that AI-based educational technologies routinely failed to account for children's agency, treating students as passive recipients of optimized content rather than active participants in their own learning. The report called for "integration and respect of children's agency" and urged that "designers and researchers should systematically study the impact of the use of AI technology on children's cognitive and socio-emotional capacities." The language is bureaucratic. The finding is Korczakian: the systems built to serve children do not see children. They see future adults in need of optimization.

UNICEF's Policy Guidance on AI for Children, published in 2020 and updated since, reached a similar conclusion from a different angle. "Children are already interacting with AI technologies in many different ways," the report noted. "They are embedded in toys, virtual assistants, video games, and adaptive learning software." Their impact on children's lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children's rights are an afterthought, at best. The phrase "afterthought, at best" carries a diagnostic weight that Korczak would have appreciated. Not a priority. Not even a consideration. An afterthought — the thing you remember to think about after the system has already been designed, deployed, and scaled.

The pattern is visible beyond education. Social media algorithms that learn a child's engagement patterns and serve content optimized not for the child's development but for the platform's retention metrics. Recommendation engines that narrow a child's world to the dimensions of her existing preferences, foreclosing the encounters with the unfamiliar that are the raw material of growth. AI-powered toys that simulate companionship without the friction, unpredictability, and genuine otherness that actual human relationships provide. In every case, the child's present experience is subordinated to a metric that the child did not choose and cannot understand.

Korczak's novel Kaytek the Wizard, published in 1933, reads in retrospect as an uncanny parable for this dynamic. Kaytek, a schoolboy who discovers he possesses magical powers, initially uses them to reshape the world according to his desires — to escape adult control, to correct injustices, to make reality conform to his imagination. But the magic spirals beyond his control. Good intentions produce chaos. Power exercised without understanding its consequences on others — particularly on the vulnerable — creates destruction that no amount of further magic can repair. The novel "revolves around the notion that power is not without responsibility, nor without repercussions," as scholars have noted. Kaytek is "also a metaphor for children in general, in that they are all too often not taken seriously by adults." The adults in the story do not help Kaytek understand his power. They either fear it or try to exploit it. No one accompanies him.

The parallel to the AI moment is structural, not merely thematic. The twelve-year-old does not understand the systems that are reshaping her world. The AI that writes her essay, composes her music, answers her questions — she interacts with these systems daily, and daily they reshape her understanding of what she is capable of, what effort means, what the relationship between intention and creation looks like. She is Kaytek. The magic is real. The consequences are real. And no one is accompanying her.

Korczak's foundational claim — the child is a person now — generates a specific demand in the age of AI. The demand is not for better educational technology, or more child-friendly interfaces, or improved parental controls. The demand is for a fundamental reorientation of the question that governs how AI systems are designed for children. The current question is: How can AI prepare children for the future? Korczak's question is different, more radical, and more honest: How does AI affect the child's experience of being alive today?

The first question treats the child as an investment. The second treats the child as a person.

The distance between these two questions is the distance between a society that uses children and a society that respects them. And the twelve-year-old who asks "What am I for?" is standing in that distance, waiting for an answer that takes her seriously — not as a future adult whose productive capacity must be maximized, but as a human being whose present existence has value that no algorithm can measure and no optimization can enhance.

Korczak wrote in his diary, near the end: "One must not leave the world as it is. The repairing of the world must begin with repairing matters concerning children." The repairing he had in mind was not technological. It was moral. It began with seeing the child — really seeing her, as she is, not as a projection of adult anxieties or a vessel for adult ambitions. The AI moment demands the same kind of seeing, and the same moral courage, and the same willingness to subordinate efficiency to dignity.

The child is not a future adult. She is a person. And everything that follows in this book depends on whether the reader is willing to take that claim seriously — not as sentiment, but as the foundation of every decision about what these extraordinary tools should be permitted to do to the most vulnerable people in the world.

---

Chapter 2: The Right to Respect

There is a particular quality of attention that adults rarely offer children. It is not affection — most adults are capable of affection toward children, the warm feeling that arises when a child says something charming or does something endearing. It is not protectiveness — most adults feel instinctively protective of children, especially when danger is visible and acute. It is respect, and Korczak spent his life distinguishing it from the things it is routinely confused with.

Respect, in Korczak's framework, means recognizing the child as an autonomous being whose perspective is legitimate, whose grievances are valid, whose inner life is as real and as complex as any adult's. It means not merely listening to the child but taking what the child says seriously — granting it the weight of genuine testimony about genuine experience, rather than filtering it through the adult assumption that children do not yet understand enough to be credible witnesses to their own lives.

This distinction — between affection and respect, between protectiveness and recognition — is the foundation of The Child's Right to Respect, and it carries consequences that reach directly into the design philosophy of every AI system that interacts with children today.

Consider the twelve-year-old from Segal's The Orange Pill once more. She has watched a machine do her homework. She has heard it compose music. She has seen it write stories that sound, to her ear, as good as anything in her favorite books. And she has asked: "What am I for?" The adults around her face a choice. They can respond with affection — "Oh, sweetheart, you are so much more than a machine." They can respond with protectiveness — "Don't worry about that. We will make sure these technologies are used responsibly." Or they can respond with respect: by taking her question seriously, by sitting with its weight, by acknowledging that she has identified a genuine problem that the adults have not yet solved.

The first response is warm and empty. It reassures without engaging. It tells the child that her existential distress is cute rather than real — a phase, something she will outgrow, a thing to be soothed rather than explored. The second response is responsible and evasive. It redirects the child's attention from her experience to the systems that produced it, which is a legitimate concern but not the concern the child raised. She did not ask about AI governance. She asked about her own existence.

The third response — respect — is the hardest and the rarest. It requires the adult to acknowledge that the child has asked a question the adult cannot fully answer. It requires sitting in uncertainty together. It requires treating the twelve-year-old as a genuine interlocutor in a conversation that matters, not a small person to be managed through a difficult moment.

Korczak would have insisted on the third response, because the third response is the only one that treats the child as a person rather than a problem. And Korczak's insistence was not sentimental. It was clinical. He had spent decades observing what happens to children whose inner lives are not taken seriously. They do not stop having inner lives. They stop believing that their inner lives matter. The damage is not visible on any assessment metric. It shows up years later, in adults who have learned to distrust their own perceptions, who look to external authorities for validation of experiences they are perfectly capable of evaluating themselves, who have internalized the message that their thoughts are preliminary, their feelings are immature, their questions are practice rounds.

The AI systems deployed in children's environments in 2025 and 2026 replicate this dynamic at scale, with a precision that no individual adult could achieve. When a child asks an AI chatbot a question and receives an instant, confident, grammatically perfect response, the implicit message is: the answer exists, and it is outside you. The child's own process of wondering, of sitting with not-knowing, of formulating and reformulating the question until it sharpens into something she genuinely wants to understand — that process is short-circuited. The answer arrives before the question has fully formed. The child receives the product without undergoing the process, and the process was where the development was happening.

Researchers at the EU Joint Research Centre identified this dynamic when they urged that AI designers "systematically study the impact of the use of AI technology on children's cognitive and socio-emotional capacities." The language of the recommendation is measured, institutional. What it describes is a crisis of respect. The systems are not designed to study their impact on children because the systems were not designed with children's inner lives in mind. The children are users. Their engagement is a metric. Their experience — the felt quality of interacting with a system that always knows, never hesitates, and never sits with them in the discomfort of uncertainty — is not on the dashboard.

Korczak ran his orphanage with a radically different set of metrics. Dom Sierot had a children's parliament, where the orphans debated policy and voted on decisions that affected their daily lives. It had a children's court, where disputes between children — and between children and staff — were adjudicated by a panel of children according to a code of law the children had helped to write. It had a children's newspaper, where the orphans published their own observations, complaints, and ideas. These were not pedagogical decorations. They were institutional expressions of respect — structures built on the premise that children are capable of self-governance, capable of justice, capable of exercising judgment about their own community.

The AI-mediated educational environment inverts every one of these structures. Where Korczak's orphanage gave children authority over their own community, the adaptive learning platform gives the algorithm authority over the child's educational path. Where the children's court required children to articulate grievances, weigh evidence, and exercise judgment, the AI assessment system renders judgment automatically, invisibly, according to criteria the child cannot see and did not choose. Where the children's newspaper gave the orphans a voice — a public space in which their observations were taken seriously by their community — the recommendation algorithm gives children a feed, a stream of content selected not by the child's judgment but by a system optimized for engagement.

The pattern is consistent. In each case, something the child once did — govern, judge, speak — is now done for the child, to the child, or at the child, by a system that operates on principles the child cannot access. The child is not less capable in these environments. The child is less respected. The difference is everything.

There is a particular form of disrespect that operates through convenience, and it is the form most relevant to the AI moment. Korczak was aware of it. He wrote about adults who, out of genuine kindness, do things for children that the children could do for themselves — tying shoes, answering questions, resolving conflicts — and thereby communicate a message that the kindness was not intended to convey: You are not capable. The adult means well. The effect is corrosive. The child who is never permitted to struggle with a shoelace does not merely lose the skill of tying shoes. She loses the experience of mastering something difficult, and the knowledge of herself as someone who can master difficult things.

AI scales this dynamic to every domain of a child's intellectual life. The chatbot that writes the essay the child was struggling with does not merely produce the essay. It produces the message: The struggle was unnecessary. The product is what matters. The efficient path to the product is the path that bypasses you. The child who receives this message enough times will internalize it, not as a conscious belief but as an orientation toward effort — a deep, pre-reflective conviction that her own cognitive labor is an obstacle to be overcome rather than a process to be valued.

Dianova's 2026 analysis of AI and children's rights captured this trajectory with striking directness: "Artificial intelligence is no longer a futuristic abstraction; it is redesigning how children learn, play, socialize and are profiled." The verb redesigning is precise. The child's environment is being redesigned — not by the child, not for the child's present experience, but by systems whose purposes are commercial, institutional, or governmental, and whose impact on the felt quality of childhood is, at best, a secondary consideration.

Korczak would recognize the structure. He fought it his entire life. The orphanages of early twentieth-century Warsaw were designed to produce obedient, productive future citizens. The children's present experience — their loneliness, their anger, their grief, their fierce desire for autonomy and respect — was irrelevant to the institutional mission. Korczak's revolution was not to improve the institutions but to change the question they were built to answer. The old question: How do we produce good adults? Korczak's question: How do we respect the children who are here?

The AI-and-children conversation of 2025 and 2026 is overwhelmingly organized around the old question. How do we use AI to improve educational outcomes? How do we prepare children for an AI-transformed workforce? How do we ensure that children develop the skills the future economy will demand? These are not trivial questions. But they are the wrong first question, because they treat the child instrumentally — as a future economic unit whose present value is measured by its trajectory toward productivity.

Korczak's question remains the necessary one. Before asking how AI can prepare children for the future, ask how AI affects the child who is alive right now, today, in this classroom, holding this tablet, receiving this response to the question she just asked. Is she being respected? Is her capacity for thought being honored or bypassed? Is the system treating her as a person or as a user?

The right to respect is not an abstract principle. It is a design specification. It determines whether the AI system waits for the child to formulate her own question or suggests one. Whether it responds instantly or leaves space for the child's own thinking to develop. Whether it presents certainty or acknowledges that some questions do not have clean answers. Whether it treats the child's struggle as a bug to be fixed or as the essential feature of a developing mind.

Korczak built institutions that embodied respect in their architecture — parliaments and courts and newspapers that gave children structural authority over their own lives. The AI systems being deployed in children's environments embody a different value in their architecture: efficiency. And efficiency, when applied to a developing mind, is another word for disrespect — the systematic bypassing of the processes through which the child becomes not a productive adult but a person who knows she is capable of thinking for herself.

The twelve-year-old's question — "What am I for?" — is a question that deserves respect. Not the respect of a quick answer, however eloquent. The respect of genuine engagement with a person who has identified something real about the world she inhabits, something the adults around her have not yet resolved, and who is asking to be taken seriously in the conversation about what it means.

---

Chapter 3: The Question as Evidence of Dignity

The machine cannot ask "What am I for?"

This claim requires careful examination, because the objection is immediate and reasonable: a large language model can certainly produce the sentence "What am I for?" It can produce it in any language. It can produce it in context, embedded in a narrative, spoken by a fictional character, delivered with what reads as genuine anguish. The linguistic act is well within the machine's capability. The string of tokens is trivially producible.

Korczak's framework reveals why this objection, though technically correct, misses the point entirely. The question "What am I for?" is not a linguistic act. It is an existential one. The words are the surface. Beneath them lies a specific structure of experience that the twelve-year-old possesses and the machine does not: mortality, finitude, the awareness that time is limited and choices matter, the understanding that her life will end, the capacity to care about what happens between now and that ending.

The child who asks "What am I for?" is not requesting information. She is expressing a condition — the condition of being a conscious creature who has discovered that her capabilities are not unique, who must now construct a reason for her own existence on grounds other than productive utility, and who is doing this work at an age when the scaffolding of adult rationalization is not yet available to protect her from the full force of the question.

Korczak observed children confronting existential questions throughout his career, and what he consistently noted was that children face these questions with less defense and more honesty than adults. The adult who asks "What am I for?" typically has strategies for managing the question — philosophical frameworks, religious beliefs, professional identities, social roles that provide ready-made answers. The twelve-year-old has fewer of these structures. She is asking from a position of relative nakedness, without the insulation that adults mistake for wisdom. And Korczak insisted that this nakedness is not a liability. It is a form of courage that adults have largely lost.

The question presupposes things that the machine does not possess. First, it presupposes self-awareness — not the computational ability to model one's own processes, which AI systems increasingly possess, but the experiential fact of being a someone, a perspective from which the universe is observed and in which the observation matters. The twelve-year-old does not merely process the fact that machines can write essays. She experiences this fact. It lands on her. It changes how she feels about herself, about her homework, about the hours she has spent learning to write. The machine that produces a superior essay has no corresponding experience of the twelve-year-old's existence. It does not know she is there. It does not know that its output has consequences for a person.

Second, the question presupposes mortality. "What am I for?" is a question that only matters if your time is limited. A being with infinite time need not ask what it is for, because anything it does not do today it can do tomorrow, and the pressure of finitude — the knowledge that choosing one thing means not choosing another, that every hour spent is an hour spent permanently — does not apply. The twelve-year-old may not articulate it in these terms, but her question is saturated with the implicit awareness that her life is finite, her choices are real, and the answer matters because she cannot try everything and must therefore try the right thing.

Third, the question presupposes the capacity to care. "What am I for?" is not an idle inquiry. It is urgent. The child asking it is not conducting a philosophical exercise. She is searching for something to hold onto — a reason that her existence has value, a ground on which to stand that the machine's performance cannot erode. The caring is the engine of the question. Remove it, and the question becomes academic. A large language model can generate the sentence, but it cannot generate the caring that makes the sentence a question rather than a string of words.

Viktor Frankl, who survived Auschwitz and went on to develop logotherapy — a psychological approach centered on the human need for meaning — wrote that the most fundamental human drive is not pleasure or power but purpose. "He who has a why to live can bear almost any how," Frankl argued, drawing on Nietzsche. The twelve-year-old's question is a manifestation of this drive. She is not asking for pleasure or power. She is asking for purpose — for a reason that she exists, a reason that is not contingent on her being more capable than a machine, a reason that would hold even if machines could do everything she can do.

Korczak would have recognized Frankl's insight because he saw the same drive in his orphans — children who had lost parents, homes, communities, everything that provides the external scaffolding of meaning. What he observed was that these children, stripped of every external support, did not stop searching for meaning. They searched harder. They created meaning from whatever materials were available — from friendships, from games, from the governance structures of the orphanage, from the newspaper they published, from the stories they told each other. The drive was not extinguished by deprivation. It was revealed by it.

The twelve-year-old in 2026 has not been deprived in the material sense. She has, in many cases, more resources than any previous generation of children. But she has been confronted with a deprivation that Korczak's orphans never faced: the demonstration that her cognitive capabilities are not unique. The machine can do what she does — write, compose, calculate, analyze — and it can do these things faster, more accurately, and without effort. This is a deprivation of a specific kind. It is not a loss of capability. It is a loss of the assumption that capability equals worth.

And here Korczak's framework provides something that Segal's response in The Orange Pill, beautiful as it is, does not fully articulate. Segal offers the candle — consciousness as the rarest thing in the universe, the flickering light that asks why. The image is powerful and true as far as it goes. But Korczak would push further. The twelve-year-old does not need to be told that consciousness is cosmically rare. She needs to be shown that her consciousness — her specific, twelve-year-old, homework-dreading, music-loving, friend-missing, parent-needing consciousness — has value not because it is cosmically rare but because it is hers. The dignity is particular, not general. It belongs to this child, in this moment, with this question.

Korczak understood this with a specificity born from decades of observation. Each child in his orphanage was a particular person with a particular history and a particular way of being in the world. His records reflect this — not aggregate data about orphan populations but detailed observations of individual children: how this one resolves conflict, how that one responds to injustice, how another constructs meaning from a story told at bedtime. The dignity he defended was never abstract. It was always located in a specific child, at a specific moment, having a specific experience.

The machine's inability to ask "What am I for?" is not a limitation in any technical sense. It is not a problem that future iterations will solve. It is a structural absence that reveals, by contrast, what the twelve-year-old possesses. She possesses the experience of being someone for whom the question is not optional — someone who must ask, because the alternative is to live without examining whether her life has meaning, and that alternative is intolerable to a conscious being.

The @TinyKorczak project — a Twitterbot created by the Nigerian digital artist and UNESCO Janusz Korczak Fellow Yohanna Joseph Waliya — offers an illuminating parallel. Waliya fed Korczak's educational philosophy into an automated system that tweets advocacy for children's rights every three hours, particularly on behalf of Nigeria's 10.5 million out-of-school children. The bot is, in its way, a machine speaking Korczak's words. The words are Korczak's. The concern for children is Korczak's. The specific capacity that drove Korczak to walk into Treblinka rather than abandon his orphans — the unconditional commitment to particular children in a particular moment — is not the bot's and cannot be.

The bot can broadcast Korczak. It cannot be Korczak. The distinction is the distinction between producing language and inhabiting the existential position from which language acquires meaning. The twelve-year-old who asks "What am I for?" is inhabiting that position. She is not producing a sentence. She is expressing a condition — the condition of being alive, being finite, being capable of caring about the answer, and finding that the answer is not given in advance.

This is dignity. Not the abstract dignity of philosophical systems. The concrete, particular, irreducible dignity of a specific person asking a specific question from inside the experience of being alive.

The question is the evidence. The asking is the proof. And any system — any institution, any technology, any adult — that fails to recognize this evidence has failed the child at the most fundamental level: the level of seeing her as she actually is.

---

Chapter 4: What the Twelve-Year-Old Is Actually Asking

Adults have a persistent habit of translating children's questions into adult categories and then answering the translated version. The child asks one thing. The adult hears another. The answer addresses the adult's interpretation, and the child's actual question — the one that arose from her specific experience, in her specific language, with her specific urgency — goes unheard.

Korczak documented this pattern relentlessly. In How to Love a Child, published in 1919, he described adults who respond to children's expressions of distress by addressing the surface content — "Your knee is not that badly scraped" — while missing the underlying communication: I am hurt, and I need you to acknowledge that my pain is real. The surface and the depth are not the same question, and the adult who addresses only the surface has not listened to the child. The adult has listened to the adult's idea of what the child must mean.

The twelve-year-old who asks "What am I for?" is subject to this translation at a cultural scale. The adults hear her question and immediately translate it into one of three categories, each of which misses what she is actually asking.

The first translation: What job will I have? This is the career counselor's interpretation. The child is anxious about the labor market. AI is displacing workers. She wants to know which professions will survive. The answer, in this translation, is a list — data scientist, AI ethicist, creative director, nurse, plumber, anything that involves "uniquely human skills." The answer is practical and inadequate, because the child was not asking about the labor market. She was asking about herself.

The second translation: Am I still special? This is the therapist's interpretation. The child's self-esteem has been threatened by the machine's performance. She needs reassurance that she possesses qualities the machine does not. The answer, in this translation, is a combination of affirmation and comparison — You have emotions, you have creativity, you have consciousness, you have things that AI will never have. The answer is warm and fragile, because every item on the list of "things AI will never have" is subject to the same erosion that prompted the question. Each year, the list gets shorter. Self-esteem built on a shrinking foundation is self-esteem waiting to collapse.

The third translation: Is human intelligence obsolete? This is the philosopher's interpretation. The child has intuited something about the relationship between capability and value that the philosophical tradition has been debating for millennia. The answer, in this translation, involves consciousness, qualia, the hard problem of subjective experience, the distinction between intelligence and sentience. The answer is intellectually rigorous and experientially irrelevant, because the twelve-year-old is not writing a philosophy paper. She is lying in bed at night, unable to sleep, feeling something she cannot name.

What is she actually asking?

Korczak's framework suggests a fourth interpretation, one that the career counselor, the therapist, and the philosopher all miss because they are listening from inside their own categories rather than from inside the child's experience.

The child is asking: Does my effort matter?

Not her capability. Not her intelligence. Not her productive potential. Her effort. The hours she spent learning to write an essay, practicing the piano, solving math problems, drawing pictures that did not look the way she wanted them to look. She did these things. She struggled with them. Some of them she mastered. Some of them she did not. And now a machine can do all of them, instantly, without struggle, without practice, without the years of effort she invested.

The question is not about the product — the essay, the composition, the solution. The question is about the process — the investment of herself in the work of becoming capable. If the machine renders the product valueless, does it also render the process valueless? Was the struggle a waste? Were the hours she spent learning to do things the machine can do in seconds simply hours she will never get back?

This interpretation explains why the question has the emotional quality it does — not the cool curiosity of a philosophical inquiry but the hot bewilderment of someone who has discovered that the currency she has been earning may be worthless. The twelve-year-old has been taught, implicitly and explicitly, that effort is the path to value. You work hard, you get better, you become capable, and your capability is your worth. The machine has disrupted this equation so thoroughly that the child must either find a new equation or accept that the effort was meaningless.

Korczak's response to this interpretation would be characteristically precise. The child's effort was not meaningless. But the reason it was not meaningless has nothing to do with the product it generated. It was not meaningless because the effort itself — the struggle, the persistence, the experience of working through difficulty — changed the child. Not her output. Her. The hours she spent learning to write an essay were not hours spent producing essays. They were hours spent becoming a person who can think through a problem, organize her thoughts, submit her ideas to the discipline of language, and discover, through the friction of the process, what she actually believes. The essay was a byproduct. The person was the product. And no machine can produce that person, because the person is produced not by the output but by the struggle to produce it.

This is the answer that the career counselor, the therapist, and the philosopher all miss, because each of them is evaluating the child's question from the perspective of what the child can do rather than who the child is becoming through the doing. The career counselor evaluates output. The therapist evaluates feelings about output. The philosopher evaluates the nature of output. None of them evaluates the developmental process itself — the way the child is changed, constituted, built into a particular kind of person through the specific experience of wrestling with difficulty.

Korczak's orphanage provided a laboratory for this insight. The children in Dom Sierot were not being educated in the conventional sense — not being trained to produce specific outputs or demonstrate specific competencies. They were being accompanied through experiences that would shape the kind of people they became. When a child served on the orphanage court, the value was not the verdict. The value was the experience of weighing evidence, hearing perspectives, exercising judgment, and living with the consequences of a decision. When a child published an article in the orphanage newspaper, the value was not the article. The value was the experience of observing, formulating, and submitting one's observations to a community of readers who would respond.

In every case, the process was the thing. The output was the occasion for the process, not its purpose.

AI, as currently deployed in educational contexts, inverts this relationship. The output is the purpose. The process is the cost. And any technology that can reduce the cost — that can produce the output more efficiently, with less human struggle — is considered an improvement. The adaptive learning platform that identifies the shortest path between the child's current knowledge and the target competency is considered superior to the platform that allows the child to wander, struggle, fail, and discover. The chatbot that writes the essay is considered a legitimate tool because the essay — the output — meets the standard. That the child who submitted it did not undergo the developmental process that writing the essay would have required is not considered relevant, because the process was never the point.

Korczak would say: the process was always the point. Everything else was scaffolding.

The implications extend beyond education into every domain where children encounter AI. The child who uses AI to compose music has a song. She does not have the experience of struggling with melody, rhythm, harmony, with the resistance of a medium that does not do what you want it to do until you learn enough about it to make it yield. The child who uses AI to draw has a picture. She does not have the experience of the hand that will not produce what the eye sees, and the slow, frustrating, revelatory process of closing the gap between intention and execution. The child who uses AI to write a story has a narrative. She does not have the experience of sitting with a blank page, not knowing what comes next, and discovering — through the act of writing, not before it — what she actually has to say.

In each case, the output is available. The experience is not. And the experience is where the child is built.

Segal recognizes this in The Orange Pill when he writes about the developer who lost ten formative minutes per day when Claude took over the plumbing — minutes that deposited thin layers of understanding she did not know she was accumulating until they were gone. The parallel to childhood is exact, but the stakes are higher, because the developer had already built her foundation. The child has not. The developer lost refinement. The child loses the foundation itself.

The ascending friction thesis that Segal develops — the argument that friction does not disappear when AI removes it but relocates upward — contains a developmental caveat that his framework, oriented toward adult professionals, does not fully address. The surgeon who operates laparoscopically works at a higher cognitive level than the surgeon who operates with her hands, but she first learned to feel tissue with those hands. The senior engineer whose architectural intuition guides AI-assisted work first spent years debugging the lower layers of the stack. The ascending friction thesis assumes a foundation of embodied experience. For children, that foundation has not yet been laid.

The twelve-year-old cannot ascend to the higher friction if she has not first experienced the lower friction. She cannot exercise the judgment that directs AI if she has not first developed the understanding that judgment requires. She cannot evaluate the machine's output if she has not first done the work that the machine's output represents — struggled with it, failed at it, come to understand it through the specific, irreplaceable experience of having done it badly before doing it well.

The question she is asking — Does my effort matter? — deserves an answer that is honest and specific. The effort matters. Not because of what it produces. Because of what it builds: a person who knows what it means to try something hard, to fail, to persist, to improve, and to discover, through the doing, capacities she did not know she had. That person cannot be produced by a machine. That person can only be produced by a life lived with the full weight of its own difficulty, honored rather than optimized away.

---

Chapter 5: The Violence of Premature Answers

Rainer Maria Rilke wrote to a young poet in 1903: "Be patient toward all that is unsolved in your heart and try to love the questions themselves, like locked rooms and like books that are now written in a very foreign tongue. Do not now seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer."

Korczak never read Rilke's letters, as far as any biographer has established. But he practiced their prescription daily, in the orphanage, in the clinic, in every encounter with a child who came to him carrying a question too large for the small body that held it. His practice was not to answer. His practice was to accompany — to sit with the child in the space the question opened, to resist the adult reflex that converts uncertainty into resolution, to trust that the child's own process of inquiry was more valuable than any conclusion the adult could deliver.

This practice rested on a conviction that runs counter to nearly everything modern educational technology is designed to do. The conviction is that a question, once genuinely asked, does real work in the person who asks it. The question opens a space. The space is uncomfortable — uncertainty always is — but the discomfort is productive. Inside that space, the child's mind does things it cannot do when the answer has already been supplied: it generates hypotheses, tests them against experience, discards the ones that do not hold, constructs new ones, and gradually arrives at an understanding that is not merely known but inhabited. The understanding belongs to the child because the child built it. It sits in the body as well as the mind. It has the specific weight of something earned.

The premature answer collapses this space. It arrives before the child's own cognitive process has run its course, and in arriving, it terminates the process. The child receives the answer. The answer may be correct. It may even be elegant — Segal's candle metaphor, for instance, is both. But if the answer arrives before the child has struggled with the question long enough for the question to do its developmental work, the answer has committed a particular kind of violence: it has stolen the inquiry from the person who needed it most.

Korczak understood this violence with clinical precision. In his pediatric practice, he noted that children who were given explanations too quickly — for illness, for loss, for the bewildering behavior of adults — did not integrate those explanations into their understanding of the world. The explanations sat on the surface, technically available but experientially inert. The children could repeat the explanation when asked. They could not use it, because it had not been built through the child's own process of making sense. It was imported knowledge, like a foreign currency that looks real but cannot be spent.

The AI chatbot is the most efficient delivery mechanism for premature answers ever designed. A child types a question. The response arrives in seconds. The response is confident, articulate, and comprehensive. It does not hedge. It does not say, "I'm not sure — what do you think?" It does not model the experience of sitting with uncertainty. It provides the answer the way a vending machine provides a snack: immediately, transactionally, without any expectation that the recipient will engage with the process of production.

The child who asks a chatbot "What am I for?" will receive an answer. The answer will likely be competent. It may reference consciousness, creativity, human connection, the irreducibility of subjective experience. It may even sound like Segal's candle passage — polished, persuasive, comforting. But it will arrive before the child has done the work the question demands: the work of sitting with not-knowing, of trying out provisional answers and finding them insufficient, of feeling the weight of the question in her chest at night and discovering that the weight itself is telling her something about what matters to her.

The premature answer does not merely fail to help. It actively harms, because it teaches the child a lesson about the relationship between questions and answers that is false and corrosive. The lesson is: questions are problems to be solved. You have a question; you find the answer; the question is resolved; you move on. This is the engineering model of inquiry — efficient, linear, convergent — and it is appropriate for a certain class of questions. "What is the capital of France?" is a question to be solved. "What am I for?" is not.

Existential questions are not problems. They are conditions. They do not resolve. They deepen. The twelve-year-old who asks "What am I for?" at twelve will ask it again at twenty-two, and at thirty-five, and at fifty, and each time the question will mean something different because she will be a different person asking it from inside a different life. The question is a companion, not a puzzle. It walks with you. It changes as you change. It reveals different facets of itself as your experience accumulates and your understanding shifts.

Korczak's orphanage was designed to protect the space in which this kind of inquiry could take place. The children's parliament was not efficient. Debates were long, messy, inconclusive. Children argued past each other. They proposed solutions that would not work. They failed to reach consensus and had to try again. An efficiency-minded administrator would have intervened — provided the answer, imposed the decision, resolved the impasse. Korczak did not intervene. He understood that the impasse was the education. The children were learning, through the experience of struggling with questions that did not have clean answers, that some questions require you to live with them rather than solve them, and that the capacity to live with unresolved questions is one of the most important capacities a person can develop.

The modern classroom, increasingly mediated by AI tools, operates on the opposite principle. Questions are obstacles to progress. The faster they are answered, the more "learning" has occurred, where learning is measured by the accumulation of correct answers rather than the development of the capacity to sit with difficult ones. The adaptive learning platform identifies the child's question, matches it to a knowledge gap, delivers targeted content, assesses whether the gap has been closed, and moves on. The entire cycle can occur in minutes. The child has "learned" something. Whether the child has developed the capacity for the kind of thinking that the question, left unresolved for longer, would have demanded — that is not on the assessment.

There is a temporal dimension to this violence that deserves separate attention. Questions develop over time. A question asked on Monday is not the same question by Friday, even if the words have not changed, because the person asking it has spent five days carrying it. She has noticed things in her daily experience that seem related. She has had conversations that reframed parts of the question. She has dreamed about it, perhaps — the unconscious mind working on problems the conscious mind has set aside. By Friday, the question has accumulated layers that Monday's version did not possess, and the answer the child might arrive at on Friday is richer, more textured, more hers than any answer she could have received on Monday.

The instant answer kills this temporal development. It resolves the question at the moment of asking, before the question has had time to grow. The child does not carry the question through the week. She does not notice the things she would have noticed. She does not have the conversations. She does not dream about it. The answer arrived, and the question stopped developing, and the richer understanding that five days of living with the question would have produced never comes into existence.

Korczak's diary entries from the orphanage — written during the months leading up to the deportation, under conditions of extraordinary stress — document this temporal development with the attentiveness of a naturalist watching a species evolve. He records a child who asked, after the death of a pet, whether animals go to heaven. Korczak did not answer. Over the following weeks, he noted the child's changing relationship to the question: initial grief, then anger, then a period of silence during which the child seemed to be processing something internal, then a conversation with another child about what happens when things end, and finally — weeks later — a statement that was not an answer but something more valuable: an acceptance of not knowing, paired with a tenderness toward the dead animal that had deepened rather than diminished over time.

Had Korczak answered the question on day one — "Yes, animals go to heaven" or "No, death is final" — the child's journey would have been truncated. The answer would have closed the space the question opened. The child would have received a proposition rather than undergoing an experience. And the experience — the weeks of carrying the question, of letting it work on her, of arriving at a relationship with mortality that was genuinely her own — would have been replaced by a fact that, however kindly delivered, would have belonged to the adult who provided it rather than to the child who needed to find it.

The AI chatbot is structurally incapable of this patience. It does not wait. It does not observe. It does not notice the child's question changing over days and weeks. It answers, because answering is what it does. The answer is its product. Delay is its failure mode. And every incentive in its design — the training feedback, the user-satisfaction metrics, the commercial imperative to demonstrate responsiveness — pushes it toward faster, more comprehensive, more confident answers. The chatbot that hesitates, that says "I'm not sure — what do you think?", that leaves the child in uncertainty for even a moment longer than necessary, would score poorly on every metric by which these systems are currently evaluated.

Segal grapples with a related problem in The Orange Pill when he describes the seduction of Claude's smooth output — the way a well-constructed passage can feel like insight before the author has verified whether the idea beneath it actually holds. The danger he identifies is that the prose can outrun the thinking: the surface quality of the output masks the absence of the cognitive work that would have produced genuine understanding. Segal catches this in himself because he has decades of experience against which to measure the output. The twelve-year-old does not have this measuring stick. She has no prior experience of what it feels like to arrive at an understanding through her own effort, because the effort has been short-circuited since the tools became available to her. She cannot detect that the answer is premature because she has never experienced the alternative — the slow, uncomfortable, deeply rewarding process of living her way into understanding.

Korczak's prescription is simple to state and extraordinarily difficult to practice: wait. Do not answer. Sit with the child in the discomfort of the question. Model the patience it takes to live without resolution. Show the child, through the quality of your own attention, that the question is worth more than any answer — that the capacity to hold a question open, to resist the pressure to resolve it, to trust the process of inquiry even when the process is slow and painful, is the capacity that makes a person genuinely intelligent rather than merely informed.

This prescription applies not only to parents and teachers but to the designers of AI systems that interact with children. The design question is not how to produce better answers. It is how to protect the space in which children's questions can develop — the temporal, cognitive, emotional space in which a twelve-year-old carries "What am I for?" through the days and weeks of her life, and gradually, without noticing it, lives her way into something that is not an answer but something better: a relationship with the question that will sustain her for the rest of her life.

The violence of the premature answer is that it replaces this relationship with a transaction. The child receives information and loses inquiry. She gains a fact and loses a companion. The exchange looks efficient. It is devastating.

---

Chapter 6: The Child's Right to Struggle

A six-year-old in Korczak's orphanage once spent the better part of an afternoon trying to tie a knot. Not a complex knot — the ordinary knot required to secure a parcel, a task that any adult in the building could have completed in seconds. The child fumbled. The string slipped. She started over. An older child offered to do it for her. She refused. A staff member approached, hands already reaching. Korczak stopped the staff member with a look. The child continued. She failed again. Her face went through a sequence of expressions that Korczak, the trained observer, would have catalogued with the precision of a clinician: frustration, then concentration, then something that looked almost like anger at the string itself, then a sudden stillness — the particular stillness of a mind that has found a new approach — and then, finally, the knot.

The knot was imperfect. An adult would have done it better. The package was lopsided and would have come undone in transit. None of this mattered. What mattered was visible only if you knew what to look for: the child's face after the knot was tied. Not triumph, exactly. Something quieter and more durable than triumph. The recognition, felt in the body before it could be articulated in language, that she had done a hard thing. That her hands, which had refused to cooperate for twenty minutes, had eventually obeyed her intention. That she was the kind of person who could persist through difficulty and arrive, on the other side, at capability.

Korczak built his educational philosophy on the observation that this experience — the experience of struggling with something difficult and eventually succeeding — is not merely useful for skill development. It is constitutive of the child's selfhood. The child who struggles and succeeds does not simply acquire a skill. She acquires a relationship with herself — a knowledge, felt rather than thought, that she is capable. This knowledge becomes the foundation on which all subsequent learning, all subsequent risk-taking, all subsequent engagement with difficulty is built. Without it, the child is more fragile than she knows, because her sense of her own capability has not been tested against the resistance of the real world.

Artificial intelligence, deployed in children's environments with the prevailing design philosophy of 2025 and 2026, systematically removes this resistance. The AI that solves the math problem removes the productive frustration of trying approaches that do not work and discovering, through the failure, why they do not work and what would work instead. The AI that writes the essay removes the agony of the blank page — the confrontation with one's own unclear thinking that is, as any writer knows, the mechanism by which thinking becomes clear. The AI that generates the drawing removes the gap between the child's intention and her hand's execution, which is the gap in which manual skill, aesthetic judgment, and the tolerance for imperfection all develop.

In each case, the AI produces the output. The child receives the output. The experience that would have occurred between the child's intention and the output — the struggle, the failure, the adjustment, the eventual breakthrough — is bypassed. The efficiency gain is real. The developmental cost is invisible, because the thing that was lost — the experience of struggling — does not appear on any assessment metric. No test measures whether the child has developed the capacity for persistence. No grade reflects whether the child knows, in her body, that she can do hard things.

Csikszentmihalyi's research on flow — which Segal engages extensively in The Orange Pill — provides the psychological framework for understanding why struggle matters. Flow, the state of optimal human experience, occurs when challenge and skill are precisely matched: the task is hard enough to demand full attention but not so hard that it overwhelms capacity. The key insight, often missed in popular accounts of flow, is that the state requires difficulty. Without difficulty, there is no flow. There is only ease, which produces not satisfaction but boredom — the grey, restless boredom of a person whose capabilities are not being tested.

Children experience flow when they encounter challenges calibrated to their developmental level — challenges that demand effort, that do not yield immediately, that require the child to extend beyond what she currently knows how to do. The child learning to tie a knot is, in Csikszentmihalyi's framework, in a state of flow: fully absorbed, challenge and skill matched, the outside world temporarily irrelevant, the entire field of attention narrowed to the relationship between the string and her fingers. The experience is not pleasant in the conventional sense. It involves frustration, repeated failure, moments of near-despair. But it is satisfying in the deeper sense that Csikszentmihalyi documented across decades of research: the sense of operating at the edge of one's capability, of being fully engaged with something that matters.

AI removes the challenge. Without the challenge, there is no flow. Without flow, there is no developmental experience. The child who receives the AI's output experiences relief — the task is done — but not the satisfaction that comes from having done it herself. The distinction between relief and satisfaction is the distinction between a need being met externally and a capacity being developed internally. Both feel good. Only one builds the child.

Korczak's orphanage was designed around this distinction, though he would not have used Csikszentmihalyi's terminology. The children cooked, cleaned, managed the institution's logistics, resolved their own conflicts, published their own newspaper, and governed themselves through democratic institutions. Adults supervised. Adults did not do the work for the children. The explicit reason was practical — a large orphanage requires labor, and the children's participation was necessary. The implicit reason was developmental: Korczak understood that children who are permitted to struggle with real responsibilities — not simulated ones, not pedagogically designed exercises, but genuine tasks with genuine consequences — develop capacities that no amount of instruction can produce.

The children's court is the most striking example. When a conflict arose between two children, or between a child and a staff member, it was adjudicated not by an adult authority but by a panel of children, following a code of law the children themselves had helped draft. The proceedings were real. The verdicts carried consequences. And the process demanded of the child-judges a set of capacities that could only be developed through the struggle of exercising them: the capacity to listen to competing accounts, to weigh evidence, to resist the pull of friendship or enmity, to arrive at a judgment that could be defended publicly, to accept responsibility for a decision that affected another person's life.

No AI system can replicate this developmental process, because the process depends on the child's own agency — her active participation in something difficult, her investment of effort, her experience of the weight of responsibility. An AI system that adjudicated the conflict for the children — more efficiently, perhaps more fairly, certainly more quickly — would have produced a verdict. It would not have produced judges. And Korczak understood that producing judges — people capable of exercising moral judgment in conditions of uncertainty — was the purpose of the court. The verdicts were incidental.

The research on children's rights in the AI age consistently identifies the risk of developmental bypassing without always naming it as such. The UNICEF Policy Guidance on AI for Children calls for systems that "support children's agency" and "enable children to develop their own abilities." The EU Joint Research Centre urges that AI systems be designed to account for "children's cognitive and socio-emotional capacities." These recommendations point in the right direction, but they do not go far enough, because they frame the issue in terms of what AI should support rather than what AI should not replace.

The question is not whether AI can support the child's development. The question is whether there are developmental experiences that AI must not be permitted to substitute, even when the substitution is more efficient — because the efficiency is precisely the problem. The struggle that the AI removes is not an inefficiency to be eliminated. It is the mechanism of development itself. Eliminate it, and you have not streamlined the child's growth. You have prevented it.

Korczak's framework generates a specific right that is absent from every AI governance document currently in circulation: the child's right to productive difficulty. Not difficulty for its own sake — not suffering, not frustration designed to toughen the child through hardship. Productive difficulty: the specific, calibrated, developmentally appropriate challenge of engaging with something that does not yield easily and, through the engagement, discovering capabilities the child did not know she possessed.

This right imposes obligations. On parents: the obligation to resist the temptation to smooth every difficulty from the child's path, including the digital ones. On educators: the obligation to design learning environments that preserve the struggle even when the tool can eliminate it. On AI designers: the obligation to build systems that know when to step back — that can identify the moments when the child's own process is more valuable than the system's output, and that withhold the output long enough for the process to run.

On all adults: the obligation to remember that the child tying the knot does not need help. She needs time. She needs the space to fail and try again and fail and try again and, eventually, to discover that her hands can do what she asked them to do. The knot, lopsided and imperfect, is worth more than any knot an adult could have tied for her, because the knot is evidence — not of skill, but of the self that emerges from the struggle to acquire it.

---

Chapter 7: Education as Accompaniment, Not Manufacture

Korczak's orphanage was not, by any standard metric, well run. The children's parliament debated for hours when an executive decision could have been made in minutes. The children's court sometimes reached verdicts that the adults found unjust or impractical. The newspaper published articles that were poorly written, factually questionable, and occasionally offensive to staff. The kitchen was managed by children who burned food, miscounted portions, and created messes that a professional cook would have avoided entirely.

An efficiency consultant would have redesigned the institution from the ground up. The consultant would have identified the bottlenecks — democratic deliberation where administrative authority would suffice, juvenile adjudication where adult judgment would be more reliable, child labor where professional staff would be more productive — and eliminated them. The institution would have run smoothly. The food would have been better. The decisions would have been faster. The newspaper would have been more polished or, more likely, would have been discontinued as an unnecessary expenditure of organizational resources.

Korczak would have resisted every one of these improvements, because every one of them would have replaced accompaniment with manufacture. And the distinction between accompaniment and manufacture is, in Korczak's educational philosophy, the distinction between education that respects the child and education that uses the child — between institutions that serve the child's development and institutions that produce the child the institution was designed to produce.

Manufacture is the process of turning raw material into a finished product according to a predetermined specification. The specification is external to the material. The factory does not ask the wood what it wants to become. It shapes the wood into the chair the designer specified. The material's own properties — its grain, its knots, its natural inclinations — are obstacles to be overcome in pursuit of the predetermined form. The better the factory, the more completely the material's own character is subordinated to the design.

Accompaniment is something else entirely. The accompanier walks alongside. The accompanier does not have a predetermined destination for the person being accompanied. The accompanier observes, supports, occasionally redirects, but does not control. The journey belongs to the person being accompanied. The accompanier's role is to be present — to provide the stability and attention that allow the accompanied person to take risks, make mistakes, and discover, through lived experience, who they are becoming.

The distinction maps directly onto the AI-in-education debate, and it reveals why the dominant paradigm of artificial intelligence in schools is, from a Korczakian perspective, not a technological problem but a moral one.

The adaptive learning platform is a manufacturing system. It has a specification: the grade-level standard, the competency benchmark, the learning objective. It has raw material: the child, whose current state is assessed relative to the specification. It has a process: the delivery of targeted content designed to close the gap between the child's current state and the specification. And it has a quality metric: the speed and completeness with which the gap is closed. Everything about this system is oriented toward producing the child the system was designed to produce. The child's own inclinations — her curiosities, her resistances, her moments of unprompted interest in something the curriculum does not cover — are noise. They slow the process. They divert resources from the objective. A well-designed manufacturing system minimizes them.

Korczak's orphanage was designed to maximize them. The children's parliament was inefficient precisely because it allowed children to pursue their own inclinations — to debate topics that interested them, to propose solutions that reflected their own understanding of the problem, to arrive at conclusions that no adult had predetermined. The inefficiency was not a flaw. It was the feature. The children were not being manufactured into predetermined citizens. They were being accompanied through the experience of self-governance, and the experience itself — messy, slow, often frustrating — was the education.

The educator who accompanies does something that no AI system currently can: she withholds. She sees the child struggling and does not intervene. She watches the child make a mistake and does not correct it. She observes the child arriving at a conclusion that is wrong, or incomplete, or naive, and she waits — not because she is passive, but because she understands that the child's relationship with the mistake is more developmentally valuable than the correct answer.

Withholding requires judgment. It requires the educator to assess, in real time, whether the child's struggle is productive (building capability) or destructive (producing only frustration and despair). It requires sensitivity to the particular child — this child's threshold, this child's resources, this child's specific relationship with difficulty at this moment. It requires the willingness to be wrong, to intervene too late or too early, and to repair the relationship when the judgment was off. This kind of judgment is relational. It depends on knowing the child, not as a data point in an assessment system but as a person whose history, temperament, and current emotional state the educator has observed over weeks and months of shared life.

The AI system does not withhold. It cannot, because withholding requires a kind of knowledge the system does not possess — knowledge of the particular child's inner experience, knowledge that is built not through data but through relationship. The system can model the child's knowledge state. It can predict, with increasing accuracy, what the child knows and does not know, what the child will find easy and what the child will find hard. What it cannot model is the child's relationship with difficulty itself — whether this particular struggle, at this particular moment, is building something or breaking something. That assessment requires the embodied presence of another person who has watched this child long enough to read the signals that no sensor can detect.

Seymour Papert, the mathematician and educator who developed constructionism at MIT, understood the relationship between tools and accompaniment with a clarity that remains relevant. In Mindstorms, published in 1980, Papert argued that computers could be powerful learning tools — not because they could teach children, but because children could use them to build things and, through the building, develop mathematical and logical thinking. The computer, in Papert's vision, was a medium for the child's own construction of knowledge, not a delivery system for predetermined content. But Papert was explicit that the computer alone was not sufficient. The child needed a human companion — a teacher, a mentor, a more experienced peer — who understood the technology well enough to help the child use it generatively and who understood the child well enough to know when to help and when to step back.

The AI tools of 2025 and 2026 have capacities that Papert could not have imagined. They can generate code, produce visual art, compose music, write prose, answer questions across every domain of human knowledge. They are, in Papert's terms, extraordinarily powerful media. But they are deployed almost exclusively as delivery systems rather than construction media, and they are deployed without the human accompaniment that Papert identified as essential. The child receives the AI's output. She does not build with the AI under the guidance of a human who knows her. The medium has been misidentified as the message.

Korczak's framework reveals the cost of this misidentification. When education is manufacture, the educator is a technician — a person whose job is to operate the system that produces the predetermined outcome. The educator's own humanity — her curiosity, her doubt, her passion for the subject, her capacity to be surprised by a child's question — is irrelevant to the manufacturing process and may even be an impediment to it. The system requires consistency, reliability, adherence to the specification. The educator who goes off-script — who follows a child's unexpected question into territory the curriculum does not cover, who abandons the lesson plan because something more interesting has emerged — is a quality-control failure.

When education is accompaniment, the educator is a presence. Her humanity is not irrelevant; it is the primary tool. The child learns not only from what the educator says but from who the educator is — from the quality of her attention, from the way she engages with uncertainty, from the visible evidence that an adult can be curious, can be wrong, can change her mind, can sit with a question that does not have a clean answer and find the sitting interesting rather than distressing. The educator models, through her own being, the capacities she hopes the child will develop. This modeling cannot be scripted, programmed, or optimized. It can only be lived, in the presence of the child, with the full weight of the educator's own humanity.

Alison Gopnik, the developmental psychologist, captured this distinction in her metaphor of the gardener and the carpenter. The carpenter has a blueprint. He shapes the raw material to match the design. The product is predetermined; the craft lies in the precision of the execution. The gardener has no blueprint. She creates conditions — soil, light, water, protection from pests — and then observes what grows. The product is not predetermined. It emerges from the interaction between the seed's own nature and the conditions the gardener provides. The gardener's craft lies not in shaping the plant but in understanding the conditions it needs and providing them.

Korczak was a gardener. His orphanage was a garden. The children were not shaped into predetermined forms. They were given conditions — democratic governance, judicial process, publication, meaningful work, the constant presence of adults who took them seriously — and then they grew. What they grew into was unpredictable, often surprising, occasionally alarming to the adults who observed it. But the growth was genuine, because it belonged to the children. It had not been manufactured.

The AI-mediated classroom of 2026 is a carpentry shop. The blueprint is the curriculum standard. The tools are increasingly sophisticated. The product is the student who meets the benchmark. And the children, like wood in a factory, are shaped toward a form that was determined before they arrived, by people who do not know them and will never meet them.

The question Korczak would ask — the question that must be asked of every AI system deployed in educational settings — is not "Does this system produce better outcomes?" It is: "Does this system accompany the child, or does it manufacture her?" If the answer is manufacture, the system may be efficient, may produce impressive metrics, may satisfy every stakeholder except the one whose experience matters most. But it has failed the child, because the child is not raw material. The child is a person, growing in her own direction, at her own pace, toward a destination that no specification can predetermine and no optimization can improve upon, because the destination is not a product.

The destination is a self. And a self can only be accompanied into existence. It cannot be built.

---

Chapter 8: The Orphanage and the Classroom

Dom Sierot, founded at 92 Krochmalna Street and forced in 1940 into the Warsaw ghetto at 33 Chłodna Street, housed approximately one hundred children at any given time. They ranged in age from seven to fourteen. They had lost parents, or been abandoned, or come from families too poor to feed them. They arrived carrying grief, anger, confusion, and the particular wariness of children who have learned that adults cannot be trusted to remain. They were not, by any conventional measure, easy children to educate.

Korczak gave them a parliament.

The institution's children's parliament met regularly to debate policy — rules of conduct, allocation of shared resources, responses to conflicts that affected the community. Every child had a voice. Every child could propose legislation. Decisions were made by vote, and the results were binding. Not symbolically binding, in the way that student councils in modern schools produce recommendations that adults may or may not implement. Actually binding. The children decided, and the institution followed.

The adults participated but did not control. Korczak himself could be outvoted, and was, on multiple occasions. When the parliament decided something he disagreed with, he accepted the decision. This was not performance. It was the institutional expression of a conviction: that children are capable of self-governance, and that the exercise of self-governance is the mechanism through which the capacity for self-governance develops. The capacity does not develop through instruction. It does not develop through observation. It develops through practice — through the actual experience of deliberating, deciding, accepting consequences, and living with decisions that were genuinely yours.

The children's court was, if anything, more radical. Disputes — between children, between children and staff — were adjudicated by a rotating panel of children. The court had jurisdiction over real conflicts with real emotional stakes. A child accused of stealing. A staff member accused of unfairness. A group of children accused of bullying. The child-judges heard testimony, examined evidence, deliberated, and rendered verdicts. The code of law they applied was extraordinarily lenient by design — most offenses were "forgiven" on first or second occurrence, with escalating consequences only for repeated patterns. The leniency was deliberate. Korczak wanted the court to develop the children's capacity for judgment, not their capacity for punishment.

The newspaper, published by the children, served a different function. It was a public space — a place where children could observe, formulate, and share their perspectives on the life of the institution and the world beyond it. The articles were uneven, as writing by children tends to be. Some were perceptive. Some were trivial. Some were complaints about the food. Korczak did not edit them for quality because the newspaper's purpose was not to produce good journalism. Its purpose was to give children the experience of having a voice that reached beyond the immediate conversation — a voice that persisted in text, that could be read and responded to, that carried the weight of publication.

These three structures — parliament, court, newspaper — were institutional dams in Korczak's river, structures that redirected the flow of institutional power toward the children who inhabited the institution. They were expensive in every currency that modern institutions optimize for: time, efficiency, predictability, control. The parliament was slow. The court was imprecise. The newspaper was unpolished. An institution designed to maximize any measurable output would have eliminated all three and replaced them with adult authority, adult adjudication, and adult communication — or, in 2026, with algorithmic equivalents.

The AI-mediated classroom of the present moment is, in structural terms, the inverse of Korczak's orphanage. Where Korczak's institution distributed authority to children, the AI-mediated classroom concentrates authority in the system. Where Korczak created spaces in which children exercised judgment, the algorithmic classroom exercises judgment on the children's behalf. Where Korczak trusted children with the messy, imperfect, sometimes unjust process of self-governance, the AI-mediated classroom trusts the optimization algorithm to govern the children's learning path with a precision and consistency that no child — and no human teacher — can match.

Consider the contrast point by point. In Korczak's parliament, a child who wanted to change a rule had to articulate why the rule was inadequate, persuade her peers, withstand counterarguments, and accept the outcome of a vote. The process required her to think about her community, to consider perspectives other than her own, to formulate an argument and defend it publicly, to accept that her view might not prevail. The experience was formative regardless of the outcome. In the AI-mediated classroom, the child's "learning path" is determined by the system. The child does not deliberate about what she should study, in what order, by what method. The system assesses, prescribes, and evaluates. The child's role is to engage with the prescribed content and demonstrate mastery. She is a consumer of a personalized educational product. She is not a citizen of an educational community.

In Korczak's court, a child who had been wronged had to testify — to give an account of what happened, to submit that account to examination by peers, to accept that her account might not be believed or might be weighed against a competing account and found less persuasive. The experience was often painful. Justice is sometimes painful. But the experience developed the child's capacity for moral reasoning — the capacity to distinguish between what happened to her and what she felt about what happened, to consider the other person's perspective even when the other person had caused her harm, to accept an outcome that was less than perfect because the process of arriving at it was fair. In the AI-mediated classroom, assessment is automated. The child does not participate in the evaluation of her own work or anyone else's. She submits output and receives a score. The process by which the score was determined is opaque. The capacity for moral reasoning — which requires the experience of deliberating about justice, not merely receiving it — is not addressed.

In Korczak's newspaper, a child who had an observation about her world had to formulate it in language, submit it to publication, and accept that others would read it and respond. The observation became public. It carried the child's name. It was hers in a way that a conversation is not — fixed in text, available for scrutiny, a permanent contribution to the community's ongoing self-understanding. In the AI-mediated classroom, the child's written output is increasingly produced with AI assistance. The observation may be hers. The formulation may not be. The language may have been polished beyond recognition by a tool that produces grammatically perfect prose regardless of whether the child has something genuine to say. The newspaper is replaced by the assignment — a private transaction between the child and the grading system, with no public dimension, no community audience, no accountability to peers.

The structural inversion is complete. Korczak's orphanage was designed to develop the child's capacity for agency — for deliberation, judgment, expression, and self-governance. The AI-mediated classroom is designed to develop the child's capacity for compliance — for engaging with prescribed content, producing assessed outputs, and progressing along a path determined by a system whose logic the child cannot see.

UNICEF's guidance on AI and children identifies "support for children's agency" as a core requirement. The EU Joint Research Centre calls for "integration and respect of children's agency." These recommendations are necessary and insufficient, because they assume that the systems they are trying to reform are capable of supporting agency. An adaptive learning platform that determines the child's learning path, selects the child's content, assesses the child's performance, and modifies the child's trajectory — all without the child's meaningful participation in any of these decisions — is structurally incapable of supporting agency. Agency is not a feature that can be added to a manufacturing system. It requires a different system — one that begins with the assumption that the child is a citizen, not a consumer, and that the educational institution exists to serve the child's self-governance, not to govern the child.

Korczak demonstrated that such a system is possible. It is inefficient. It produces outcomes that are messy, imperfect, and difficult to measure. The children's parliament sometimes made bad decisions. The children's court sometimes rendered verdicts that adults found unjust. The newspaper sometimes published articles that were embarrassing. But the children who participated in these institutions developed something that no efficient system can produce: the experience of having governed themselves, judged for themselves, spoken for themselves — and lived with the consequences.

This experience is the foundation of democratic citizenship. It is also, in the age of AI, the foundation of the capacity to direct technology rather than be directed by it. The child who has practiced self-governance — who has deliberated, decided, erred, corrected, and deliberated again — is the child who will grow into the adult capable of exercising the judgment that Segal identifies as the scarce resource of the AI economy. The child who has practiced compliance — who has followed the algorithm's path, consumed the algorithm's content, and submitted to the algorithm's assessment — is the child who will grow into the adult who accepts the algorithm's direction without questioning whether the direction serves her.

Korczak built institutions that made children into citizens. The question for the present moment is whether the institutions being built for children now — the platforms, the algorithms, the AI-mediated educational environments — are making children into citizens or into something else. Something more efficient, perhaps. Something more measurable. Something less capable of the moral reasoning, the deliberative courage, and the stubborn insistence on self-governance that a world of powerful technologies will demand of every person who inhabits it.

The orphanage at 33 Chłodna Street no longer exists. The children who governed themselves there were murdered. But the model they lived — the model of an institution designed around the child's right to participate in the decisions that shape her life — remains the most radical and the most necessary educational idea of the past century. It is more necessary now than it has ever been, because the systems that govern children's lives have never been more powerful, more opaque, or more capable of substituting their own logic for the child's own judgment.

The parliament, the court, the newspaper: these are the dams that protect the child's capacity for self-governance against the current of optimization. They are expensive, imperfect, and indispensable. Korczak knew this. He built them anyway. The question is whether anyone will build them now.

---

Chapter 9: Against the Optimization of Childhood

There is a word in German that has no precise English equivalent. Langeweile — literally, "a long while." Boredom. But the German carries something the English does not: the implication that boredom is an experience of duration itself, of time stretching out unstructured, unfilled, waiting to be inhabited by whatever the mind, left to its own devices, decides to do with it.

Korczak observed boredom in his orphans with the same clinical attentiveness he brought to illness. He noted that children who were bored — genuinely, uncomfortably bored, with nothing prescribed and no entertainment provided — did not remain bored for long. Boredom, he observed, was a transitional state. The child passed through it on the way to something else: an invented game, a sustained fantasy, a conversation with another child that explored territory neither had visited before, or simply a period of quiet observation during which the child watched, with an intensity adults rarely notice, the behavior of an ant, the movement of light on a wall, the sound of rain on a window.

The boredom was not wasted time. It was the soil in which self-directed attention grew. And self-directed attention — the capacity to decide, without external prompting, what to attend to and for how long — is, in the judgment of the developmental psychologists who have studied it, one of the foundational capacities of the human mind. Without it, the child cannot sustain focus, cannot pursue a line of inquiry to its conclusion, cannot engage in the kind of deep, voluntary concentration that Csikszentmihalyi identified as the prerequisite for flow and that every educational system in the world claims to value.

Artificial intelligence, as deployed in children's environments, is the most efficient boredom-elimination technology ever created. The tablet offers a child in 2026 an effectively infinite supply of stimulation — games calibrated to her skill level, videos selected by recommendation algorithms to match her interests, chatbots that respond to any question with instant engagement. The child never needs to experience Langeweile. The long while never arrives. The transitional state is skipped. And the capacity that would have developed in that transitional state — the capacity to generate one's own engagement from within, to discover one's own interests through the experience of having nothing prescribed — does not develop, because the conditions it requires have been optimized away.

Johann Hari's investigation of the global attention crisis documented the consequences with empirical specificity. Children's capacity for sustained attention has been declining measurably for decades, and the decline accelerates in correlation with the proliferation of devices that eliminate unstructured time. The correlation does not establish causation in the strict scientific sense, but the mechanism is not mysterious. A child whose every moment of potential boredom is filled by algorithmically selected stimulation is a child who never practices the cognitive skill of self-directed attention. The skill atrophies, as any unused muscle atrophies, and the atrophy creates a feedback loop: the child whose attention span has shortened finds unstructured time even more intolerable, reaches for the device more quickly, and the spiral tightens.

Korczak could not have anticipated the tablet. But he anticipated the impulse behind it — the adult impulse to fill every moment of a child's time with prescribed activity, to eliminate uncertainty, to ensure that the child is always "doing something productive." He resisted this impulse with the full force of his educational philosophy. The orphanage schedule included, deliberately, periods of unstructured time. The children were not entertained during these periods. They were left to themselves — to invent, to quarrel, to daydream, to be bored and discover what lay on the other side of boredom. The adults supervised for safety. They did not intervene for stimulation.

The optimization of childhood — the systematic elimination of unstructured time, productive boredom, unsupervised play, and open-ended struggle — did not begin with AI. It began, arguably, with the parenting culture of the late twentieth century, the shift from what Gopnik calls the "gardener" model to the "carpenter" model, from creating conditions for growth to engineering outcomes. The scheduled playdate, the enrichment activity, the tutoring session, the travel sports team — each of these is individually defensible; collectively they constitute an environment in which the child's time is accounted for, measured, and optimized with a thoroughness that would have astonished and appalled Korczak.

AI represents the apotheosis of this trajectory. The optimization is no longer limited to the child's physical schedule. It has colonized the child's cognitive environment. The adaptive learning platform optimizes the child's educational path. The recommendation algorithm optimizes the child's media consumption. The AI tutor optimizes the child's homework process. The chatbot optimizes the child's inquiry — transforming every question into an efficiently answered transaction rather than an open-ended exploration.

The result is a childhood that is, by every measurable standard, more productive than any previous generation's. Children in AI-rich environments learn factual content faster. They complete assignments more quickly. They produce output — essays, projects, code — that meets higher standards than their unassisted predecessors achieved. The metrics are impressive. The dashboards glow green.

And something is missing. Something that does not appear on the dashboard because no one thought to measure it, because it is the kind of thing that only becomes visible in its absence, and its absence takes years to manifest. What is missing is the child's relationship with her own mind — the knowledge, built through the experience of unstructured time, of what happens when she is left alone with her thoughts. Does she know what interests her? Not what the algorithm predicts she will engage with, but what she, left to herself, would choose to explore? Does she know what she thinks about a difficult question? Not what the chatbot says, but what she herself believes, arrived at through the slow, uncertain, often uncomfortable process of formulating a position and testing it against her own experience?

These are not questions that optimization can answer. They are questions that optimization prevents from being asked, because asking them requires the very conditions that optimization eliminates: unstructured time, productive boredom, the space in which the child encounters herself.

Korczak's novel King Matt the First, published in 1923, explores the consequences of optimized childhood through parable. Matt, a child king, attempts to reform his kingdom according to rational principles — efficient governance, fair distribution, optimized systems. The reforms are well-intentioned and, initially, effective. But Matt discovers that the efficiency he has imposed on his kingdom has eliminated something he did not know was valuable: the capacity of his subjects to govern themselves, to resolve their own conflicts, to develop their own solutions to problems that no central authority could anticipate. The optimized kingdom is orderly but brittle. When crisis arrives, it shatters, because the capacity for self-governance that would have made the kingdom resilient was the thing the optimization displaced.

The parable maps onto the AI-optimized childhood with uncomfortable precision. The optimized child is capable but brittle. She can produce output when prompted but cannot generate direction when left unprompted. She can engage with prescribed content but cannot sustain engagement with self-selected content. She can answer questions but cannot formulate them. She can perform but cannot play — not in the therapeutic sense, but in the Korczakian sense of play as the child's fundamental mode of engagement with the world, the activity through which she discovers what interests her, tests what she can do, experiments with who she might become.

Play, in Korczak's framework, is not leisure. It is the child's work. It is the activity through which the child constructs her understanding of the world and her place in it. It is characterized by precisely the qualities that optimization eliminates: uncertainty, inefficiency, open-endedness, the absence of a predetermined outcome. The child at play does not know where the play will lead. That is the point. The not-knowing is the generative condition. The child discovers, through the play, interests and capacities that she did not know she possessed, because they only emerge in conditions of freedom — conditions where no algorithm is directing the experience and no metric is evaluating the outcome.

The AI systems deployed in children's environments in 2025 and 2026 are not designed to support play. They are designed to support productivity. The distinction matters because it determines the fundamental orientation of the tool toward the child. A tool designed to support play would maximize openness — providing materials without directing their use, offering capabilities without prescribing outcomes, creating a space in which the child's own intentions guide the experience. A tool designed to support productivity maximizes convergence — identifying the desired outcome, computing the efficient path, directing the child's activity toward the target.

Nearly every AI tool currently deployed in children's environments is oriented toward productivity. The adaptive learning platform converges on the competency benchmark. The AI tutor converges on the correct answer. The recommendation algorithm converges on the content profile that maximizes engagement. The chatbot converges on the response that the child's question appears to request. In each case, the openness that play requires — the freedom to wander, to fail, to discover something unintended — is the thing the system is designed to eliminate.

Korczak's prescription, derived from decades of observation and practice, is not the elimination of tools but the protection of conditions. The child needs tools. She also needs time and space in which no tool is directing her experience — time in which she is the author of her own attention, the director of her own activity, the citizen of her own inner world. The protection of these conditions is not a luxury. It is a developmental necessity. Without them, the child develops capabilities but not agency, performance but not purpose, output but not selfhood.

The dam that must be built around childhood in the age of AI is not a wall that keeps the technology out. It is a structure that creates pools of stillness within the current — protected spaces where the child's own mind can operate without algorithmic direction, where boredom can do its developmental work, where play can unfold according to the child's own logic rather than the system's, where the long while can stretch out and the child can discover, in that unoptimized, unproductive, essential duration, who she is when no one — and no system — is telling her who to be.

---

Chapter 10: What We Owe the Question

On the morning of August 5, 1942, Korczak dressed himself carefully. He put on his old Polish army boots, faded but polished. He had been awake for hours. He may not have slept at all. The order had come: the children of Dom Sierot were to be transported. The destination was not named in the order. Everyone in the ghetto knew what unnamed destinations meant.

Korczak had been offered escape. Former students had arranged it. Contacts in the Polish underground had offered to smuggle him out. At least one German officer, who recognized him as the author of children's books he had loved in his own youth, offered to intervene on his behalf — his behalf alone, not the children's. Every offer was the same: save yourself. Leave the children.

He refused every one. Not heroically, in the sense the word usually implies — not with drama or declaration. Quietly, with the stubbornness of a person for whom the offer was not merely unattractive but incoherent. To leave the children was not an option he was declining. It was a sentence in a language he did not speak. The grammar of his life did not contain a construction in which he existed separately from his responsibility to the children in his care.

The children marched in rows of four. Szymerle, the oldest, carried the orphanage flag — green, with the Star of David on one side and King Matt's emblem on the other. The smaller children carried knapsacks. Some carried dolls or books. Korczak walked at the front. Stefania Wilczyńska, his co-director, walked alongside. Two hundred people — one hundred and ninety-two children, a dozen staff — walked through the streets of the Warsaw Ghetto to the Umschlagplatz, where the trains waited.

Eyewitnesses described the march as eerily ordered. The children did not cry. They had been told, presumably, something — a story about where they were going, a fiction designed to contain the terror. Or perhaps they had not been told anything, and what the witnesses perceived as calm was the numbness of children who had lived in the ghetto long enough to understand that the world was not organized around their survival. The witnesses did not agree on the details. They agreed on one thing: Korczak did not leave.

This final act — the march to the Umschlagplatz, the refusal to save himself, the walk into death alongside the children he had raised — carries a moral weight that must be handled with extreme care. It can be sentimentalized. It can be instrumentalized. It can be turned into a parable that flattens the man and the moment into a lesson, which would be a betrayal of everything Korczak stood for, because Korczak never taught lessons. He accompanied children. He was, at the end, accompanying them.

The act is invoked here not for pathos but for precision. It answers, with the finality of a life completed in perfect consistency with its principles, the question that governs this entire volume: What does the adult owe the child?

The adult owes the child presence. Not answers. Not solutions. Not the elimination of uncertainty. Presence — the willingness to be there, in the uncertainty, alongside the child, for as long as the uncertainty lasts. Korczak's final walk was the extreme expression of this principle, and its extremity is what makes it clarifying. Most adults will never face a choice as stark as the one Korczak faced. But every adult who is responsible for a child faces a version of the choice every day: Will you be present with this child in her uncertainty, or will you resolve the uncertainty for her and call it help?

The twelve-year-old who asks "What am I for?" is standing at an Umschlagplatz of a different kind. The trains are not visible. The danger is not mortal. But the child is at a threshold — the threshold between a sense of self that was built on capability ("I am what I can do") and a sense of self that must be rebuilt on something else, because the machine has demonstrated that capability is no longer uniquely hers. She does not know what the "something else" is. She is standing at the threshold, looking into uncertainty, and she needs an adult who will stand there with her.

Not an adult who will answer the question. Not an adult who will say, "Don't worry, you are the candle in the darkness, consciousness is cosmically rare, your capacity to ask is the proof of your value." These are true statements, and Segal offers them with genuine care in The Orange Pill. But the twelve-year-old does not need statements. She needs a person. A person who takes her question seriously enough to sit with it rather than resolve it. A person who is willing to say, "I do not know the answer. I am uncertain too. And I will be here with you while we figure it out."

This is the hardest thing Korczak's framework asks of adults, because it requires the adult to surrender the one thing adults most want to provide: certainty. The adult wants to reassure. The adult wants to fix. The adult wants to deliver the answer that will make the child's distress go away, because the child's distress is painful to witness and the adult's helplessness in the face of it is intolerable. The premature answer — however eloquent, however well-intentioned — is, in many cases, a response to the adult's discomfort rather than the child's need. The adult cannot bear the child's uncertainty and resolves it, not for the child's sake but for the adult's.

Korczak's practice was to bear it. To sit with the child's uncertainty without resolving it. To model, through his own visible engagement with questions that did not have clean answers, that uncertainty is not a pathology to be cured but a condition to be inhabited — and that inhabiting it, with patience and courage and genuine curiosity about where it leads, is one of the defining capacities of a fully human life.

The AI systems deployed in children's environments model the opposite. They resolve uncertainty instantly. They present confidence where honest engagement would present doubt. They answer every question as though every question were answerable, and in doing so they teach children that the appropriate response to not-knowing is to find someone — or something — that knows. The capacity to sit with not-knowing, to tolerate ambiguity, to discover that the space of uncertainty is where genuine thinking takes place — this capacity is not developed by the AI interaction. It is undermined by it, because every interaction reinforces the expectation that answers are always available and that the child's own struggle to arrive at understanding is a detour to be eliminated rather than the main road.

The @TinyKorczak bot — Yohanna Joseph Waliya's automated system that tweets Korczak's advocacy for children's rights every three hours on behalf of Nigeria's out-of-school children — offers one model of what it looks like to use AI in service of children rather than in substitution for the adult's obligation to them. The bot does not replace presence. It amplifies a message. It carries Korczak's words into contexts — Nigerian Twitter feeds, global educational networks — where those words can prompt human action on behalf of children who need human advocates. The bot is a tool in the service of accompaniment, not a replacement for it. The distinction is everything.

What would Korczak do? The question that the UNESCO Janusz Korczak Chair poses to each generation is not a rhetorical exercise. It is a practical demand. In the Warsaw of the 1920s and 1930s, Korczak built parliaments, courts, and newspapers — institutions that gave children structural authority over their own lives. In the ghettos of the 1940s, he maintained those institutions under conditions of absolute deprivation, because the children's right to self-governance did not depend on the circumstances being favorable. In the digital environments of the 2020s, the question takes a specific form: What institutions must be built — in schools, in homes, in the design of AI systems themselves — to protect the child's right to participate in the decisions that shape her life, to struggle with questions that do not have clean answers, and to develop, through the messy, inefficient, irreplaceable process of living, the selfhood that no optimization can produce?

The answer is not a technology. It is not a policy. It is not a curriculum. The answer is a commitment — the same commitment Korczak made and honored to its ultimate conclusion: that the adult's responsibility to the child is unconditional. It does not depend on the adult's comfort, the adult's certainty, or the adult's capacity to resolve the child's distress. It depends only on the adult's willingness to be present.

To say: I do not know what this technology means for your future. I do not know what you are for. I do not know the answer to the question you are asking. But I am here. I will not leave. And we will face the uncertainty together, because facing it together is the only honest thing I can offer you, and it is enough.

Korczak walked into Treblinka with his children because leaving them was incompatible with who he was. The twelve-year-old who asks "What am I for?" is not marching to her death. She is marching into a future that the adults around her do not understand and cannot predict. She needs someone who will walk with her. Not ahead of her, not behind her, not above her pointing the way. Beside her. In the uncertainty. For as long as it lasts.

That is what we owe the question. That is what we owe the child.

Not an answer. A presence. A hand. A willingness to not know, together, for as long as the not-knowing requires.

---

Epilogue

Three weeks after finishing this book, I stood in a doorway watching my daughter draw.

She was not using a tablet. She was using crayons — the cheap, waxy kind that break if you press too hard and leave color under your fingernails for days. She was drawing something I could not identify. It might have been a house. It might have been a horse. She was pressing with the kind of furious concentration that makes a child's tongue poke out of the corner of her mouth, and she was getting it wrong, and she was not stopping.

I almost walked in. I almost asked what she was making. I almost offered to help.

I stayed in the doorway. Because Korczak, inside whose work I had been living for weeks by then, had taught me something I should have known all along: the struggle was not the obstacle to the drawing. The struggle was the drawing. The real one. The one that mattered. Not the image on the paper — lopsided, unrecognizable, destined for the recycling bin before the week was out — but the experience happening inside the child who was making it. The negotiation between intention and execution. The frustration of hands that would not do what the mind demanded. The persistence. The small, private discovery that she could keep going even when the result did not match the vision.

No AI could give her that experience. An AI could give her a better drawing. It could give her a perfect drawing, in any style, of anything she could describe. It could eliminate every moment of frustration between her intention and its realization. And in doing so, it would steal from her the only thing the drawing was actually producing: the knowledge, felt in her body before it could be spoken in words, that she was someone who could do a hard thing.

That is what I took from Korczak. Not a policy position on AI in education. Not a framework for children's rights in digital environments, though his work is the foundation of every such framework that exists. What I took was simpler and harder: the conviction that my job, as a parent standing in a doorway, is to not help. To resist the most generous impulse I have — the impulse to smooth the path, to eliminate the difficulty, to spare my child the frustration of getting it wrong — because the frustration is not the enemy of her development. It is the instrument of it.

I built products that remove friction. That is what I do. That is what The Orange Pill is about — the extraordinary, world-altering power of tools that close the gap between imagination and reality. I believe in those tools. I use them every day. They have made me more capable than I have ever been.

And Korczak made me see that the question is not whether the tools work. The question is whether there are people — small people, twelve-year-old people, crayon-wielding people — for whom the friction is not a problem to be solved but a gift to be protected. Not forever. Not in every domain. But in the specific, bounded, sacred space of a childhood that is still in the process of building the person who will someday use those tools with the judgment they require.

The twelve-year-old who asked "What am I for?" — I think about her differently now. I answered her in The Orange Pill with the candle: consciousness, the rarest thing in the universe, the light that asks why. I still believe that. But Korczak would say I answered too fast. He would say: sit with her. Do not resolve her question. Let it work on her. Let it stretch out into a Langeweile — a long while of not-knowing — and trust that what grows in that space will be stronger and more genuinely hers than anything I could hand her.

A man who refused to leave his children walked into the worst place human beings have ever built, because accompaniment was not something he did. It was something he was. I cannot match that. No one reading this can. But we can learn from it the one thing it has to teach: that presence — not answers, not optimization, not the elimination of difficulty — is what we owe the children.

They are not future people. They are people now. And the world they are building inside themselves, drawing by drawing, struggle by struggle, question by unanswered question, is the only world that matters.

Protect the space in which they build it.

Edo Segal

Every system designed for children in the age of AI answers the same question: How do we prepare them for the future? Janusz Korczak spent his life insisting that this is the wrong question. The right one is harder, more uncomfortable, and more urgent: Does the child matter now — not as an investment, not as a future worker, but as a person whose present struggle has dignity that no optimization should be permitted to erase?

This volume brings Korczak's radical framework into collision with the AI revolution Edo Segal chronicles in The Orange Pill. When machines can produce any output a child can produce — faster, cleaner, better — what happens to the developmental friction that builds the child? The twelve-year-old who asks "What am I for?" doesn't need a better answer. She needs an adult willing to sit beside her in the question.

Korczak built parliaments for orphans, courts run by children, and institutions that trusted kids with real authority over their own lives. Then he walked into Treblinka rather than abandon them. His framework is not sentimental. It is the most demanding standard ever applied to how adults treat children — and the one the AI age needs most.

“One must not leave the world as it is. The repairing of the world must begin with repairing matters concerning children.”
— Janusz Korczak