By Edo Segal
The question my son asked at dinner was not the one I expected.
He did not ask whether AI would take his job. He did not ask whether his homework still mattered. He asked something stranger, quieter, and it has not left me since: "Dad, is there a difference between knowing something and understanding it?"
He is thirteen. He had just used Claude to research a history project and gotten an A. He could recite the facts. He could not explain why any of them mattered. And he knew it. That gap — between what he could produce and what he actually understood — was visible to him in a way that unsettled us both.
I did not have a good answer. I had instincts. I had the arguments I make in *The Orange Pill* about judgment, about questions mattering more than answers, about the candle of consciousness flickering in an infinite darkness. But instincts are not frameworks. And frameworks are what parents need when the ground shifts under their children's feet.
Kieran Egan built the framework I was missing.
Not a framework about technology. A framework about how human minds actually develop — from the body-knowledge of infancy through the story-making of childhood through the wonder of adolescence through the systematic thinking of young adulthood to the hard-won recognition that every system has blind spots. Five kinds of understanding, each with its own cognitive tools, each built through specific kinds of struggle that cannot be skipped without developmental cost.
What made Egan's thinking crack my fishbowl was this: he showed that the struggle my son skipped — the hours of wrestling with sources, organizing arguments, discovering what he actually thought by failing to articulate it — was not an obstacle to learning. It was the learning. The friction was the mechanism. Remove it and the product arrives, but the cognitive development that should have accompanied it does not.
This is the question the AI discourse keeps circling without landing on. Not whether the tools work — they work spectacularly. Not whether they should be banned — that ship has sailed. But what happens to the developing mind when the difficulty that builds understanding is optimized away in the name of efficiency.
Egan died six months before ChatGPT launched. He never saw the technology that would make his life's work urgent in ways he could not have anticipated. But his framework — the most precise developmental account I have encountered of how understanding actually forms — is exactly what this moment demands. Not as a reason to refuse the tools. As a guide for using them without hollowing out the minds they are supposed to serve.
My son's question deserved a better answer than I had. Egan's thinking is that answer.
— Edo Segal ^ Opus 4.6
Kieran Egan (1942–2022) was an Irish-born, Canadian-based educational philosopher and cognitive development theorist whose work fundamentally challenged prevailing assumptions about how children learn and what education is for. Born in Clonmel, Ireland, he studied at University College London and Cornell University before spending the majority of his career at Simon Fraser University in British Columbia, where he co-founded the Imaginative Education Research Group. His major works include *Teaching as Story Telling* (1986), *The Educated Mind: How Cognitive Tools Shape Our Understanding* (1997), *An Imaginative Approach to Teaching* (2005), and *Learning in Depth: A Simple Innovation That Can Transform Schooling* (2010). Egan argued that Western education has been trapped in an unresolvable conflict among three incompatible goals — Platonic rationalism, Rousseauian naturalism, and Spencerian utilitarianism — and proposed an alternative framework built around five sequential kinds of understanding: somatic, mythic, romantic, philosophic, and ironic. Each kind employs distinctive cognitive tools, develops through specific forms of imaginative engagement, and accumulates rather than replaces its predecessors. His emphasis on imagination as the core engine of cognitive development, rather than a decorative supplement to analytical thinking, influenced educators worldwide and has gained renewed relevance in discussions of AI-era pedagogy.
For two and a half millennia, education has been at war with itself.
The conflict is not between good teachers and bad ones, between progressive methods and traditional ones, between well-funded schools and neglected ones. Those are symptoms. The disease runs deeper: education has been built on three purposes that cannot be simultaneously achieved, and every reform movement in the history of Western schooling has emphasized one at the expense of the others, producing a perpetual oscillation that resolves nothing because it addresses the wrong problem.
Kieran Egan spent four decades identifying these three incompatible purposes and arguing that their irreconcilability explained the chronic, low-grade dysfunction that characterizes schooling almost everywhere it exists. The first purpose, inherited from Plato and the rationalist tradition, holds that education should cultivate the mind's capacity for abstract truth — that the goal is to develop reason, to move the student from the shadows of the cave into the light of genuine understanding. The second, inherited from Rousseau and the progressive tradition, holds that education should follow the child's natural development — that the goal is to create conditions in which each learner's innate capacities unfold according to their own organic logic. The third, inherited from Herbert Spencer and the utilitarian tradition, holds that education should transmit the knowledge and skills that society needs — that the goal is to produce citizens equipped to function in the existing economic and cultural order.
Each purpose captures something real. Each, pursued in isolation, distorts the enterprise. Plato's rationalism, taken alone, produces an education that ignores the child's developmental reality in pursuit of abstract ideals. Rousseau's naturalism, taken alone, produces an education that romanticizes childhood and refuses to impose the cultural knowledge that the child cannot discover unaided. Spencer's utilitarianism, taken alone, produces an education that reduces learning to vocational preparation and treats the child as raw material for the labor market. The three purposes pull in incompatible directions, and the history of educational reform is largely the history of cycling between them — progressivism ascends when utilitarian rigidity becomes intolerable, traditionalism returns when progressive looseness becomes alarming, utilitarian "back to basics" movements emerge when both seem to have failed.
Egan's alternative was not to choose among the three but to reconceive the enterprise around a different organizing principle entirely: the development of kinds of understanding. Not a single ladder of cognitive stages in the Piagetian sense, where each rung replaces the one below it, but a sequence of increasingly sophisticated cognitive toolkits that accumulate rather than succeed one another. The adult who has developed philosophic understanding does not lose access to mythic understanding. She gains new tools while retaining old ones. The educated mind is not the mind that has climbed to the highest rung and kicked away the ladder. It is the mind that possesses the fullest cognitive toolkit — somatic, mythic, romantic, philosophic, ironic — and can deploy the right tools for the right problems.
This framework, developed across a series of books from *Teaching as Story Telling* in 1986 through *The Educated Mind* in 1997 to *Learning in Depth* in 2010, represented a fundamental challenge to nearly every assumption that governed how schools were organized, what curricula contained, and how teaching was practiced. Egan was not proposing a new method. He was proposing a new purpose — one that cut across the three warring traditions and offered a coherent account of what education was actually for.
Then, in the winter of 2022, a machine arrived that made his argument not merely compelling but unavoidable.
Egan died in May 2022, six months before ChatGPT's public release. He never saw the technology that would vindicate his life's work in the most dramatic way imaginable. The philosopher who spent forty years arguing that education should not be organized around the transmission of knowledge did not live to see the moment when a machine made knowledge transmission instantaneous, ubiquitous, and free.
The irony is precise. The technology that Egan never encountered has done what four decades of his advocacy could not: it has made the transmission model of education visibly, undeniably obsolete. When any student can access any fact, any explanation, any theoretical framework, any historical narrative in seconds — when the machine can produce a competent essay on any topic, solve any textbook problem, explain any scientific concept at whatever level of complexity the questioner requests — the purpose of education as knowledge transmission collapses. Not gradually. Not partially. The collapse is structural, in the way that the invention of the printing press made the monk's role as manuscript copyist not merely less efficient but categorically unnecessary.
*The Orange Pill* captures this collapse from the builder's perspective. Edo Segal describes the moment when the gap between imagination and artifact shrank to the width of a conversation — when what you could conceive and what you could build were separated by nothing more than the ability to describe what you wanted. The educational parallel is exact. The gap between a student's question and its answer has shrunk to the width of a prompt. The entire infrastructure of knowledge transmission — the lecture, the textbook, the recitation, the examination that tests recall — was built to manage a gap that no longer exists.
What remains when the gap closes? Segal's answer is judgment: the capacity to decide what is worth building, what questions are worth asking, what problems deserve attention. Egan's answer, developed with far greater developmental specificity, is understanding: the cognitive capacities that allow a mind to engage with knowledge rather than merely receive it. These are not competing answers. They are the same answer stated at different levels of analysis. Judgment is what understanding produces. Understanding is what makes judgment possible.
The distinction matters because judgment, without the developmental account that Egan provides, sounds like a skill that can be taught directly — a module to add to the curriculum alongside mathematics and reading. Teach students to think critically. Teach them to evaluate sources. Teach them to ask good questions. These injunctions are ubiquitous in educational reform documents and nearly useless in practice, because they treat cognitive capacities as skills to be transmitted rather than developmental achievements to be cultivated. Telling a student to "think critically" is like telling a child to "be taller." The instruction identifies the desired outcome without providing the mechanism.
Egan's framework provides the mechanism. The capacity for critical evaluation — what *The Orange Pill* calls judgment — does not arrive as a teachable skill. It arrives as a developmental achievement that emerges when a mind has progressed through specific kinds of understanding, each employing specific cognitive tools, each built through specific kinds of imaginative engagement with the world. The child does not learn to think critically by being told to think critically. She learns to think critically by developing, sequentially, the somatic understanding that gives her embodied knowledge, the mythic understanding that gives her narrative structure, the romantic understanding that gives her a sense of wonder and extremity, the philosophic understanding that gives her the capacity for systematic analysis, and the ironic understanding that gives her the awareness that all analysis is situated and partial.
Each of these kinds of understanding requires time, struggle, and a particular quality of engagement that cannot be accelerated without developmental cost. This is the uncomfortable implication that Egan's framework brings to the AI moment. The machine that makes knowledge abundant does not automatically make understanding deeper. If anything, the opposite: by eliminating the friction through which understanding develops, the machine risks producing a generation with access to all the knowledge in the world and the cognitive tools of a child who has never been allowed to struggle.
Egan himself, when asked about the role of technology in education, was characteristically blunt. In a 2018 interview, when the question turned to whether smartphones and social media had fundamentally altered the educational challenge, he dismissed the premise entirely: "If you read from the beginning of public schooling in the nineteenth century, you'll find exactly the same complaint about children. They don't concentrate. They don't work. They're distracted by comics, it used to be. Then television. It's just loony. What they're basically saying is that children seem to have no attention span when they're bored out of their minds."
The diagnosis cuts in two directions simultaneously. It dismisses the technophobic claim that devices are ruining children's minds — the problem was never the technology; it was always the pedagogy. But it also, by implication, dismisses the technophilic claim that better technology will fix education — because if the problem is pedagogical, then a more powerful technology applied to the same broken pedagogy will produce the same broken outcomes, only faster and at greater scale.
This is the position from which Egan's framework confronts the AI moment. Not with the Luddite's refusal — Egan had no patience for blaming tools — and not with the enthusiast's credulity — Egan had no patience for the idea that better delivery mechanisms could substitute for better educational thinking. The framework confronts AI with a question that neither the technophobes nor the technophiles have adequately addressed: what does this technology do to the developmental processes through which understanding actually forms?
The question is not whether AI can transmit knowledge more efficiently. It can. The question is not whether AI can personalize learning at scale. It can. The question is not whether AI can assess student performance in real time and adapt accordingly. It can. All of these capacities address the transmission model — the model that Egan spent his career arguing was misconceived from the start. Making transmission more efficient does not fix education. It makes an already-broken enterprise faster.
The question Egan's framework demands is different: can AI support the development of each kind of understanding — somatic, mythic, romantic, philosophic, ironic — or does its operation tend to bypass the developmental processes that produce them?
The answer, as the following chapters will argue, is that AI can do either, and that the difference depends entirely on how the tool is deployed, within what pedagogical framework, by educators who understand (or fail to understand) the developmental sequence that is at stake. A tool that provides instant answers to every question a student asks is a tool that eliminates the productive struggle through which understanding forms. A tool that generates provocative questions, presents anomalies that challenge the student's existing framework, provides perspectives that crack open the cognitive walls within which the student's current understanding operates — that is a tool that could accelerate development rather than bypass it.
The design of the tool matters. The pedagogical context matters more. And the theory of education that guides the pedagogical context matters most of all — because without a coherent account of what education is for, the most powerful tool in the world is just a faster way of doing the wrong thing.
Egan's account is the most coherent one available. It explains what education is for (the development of kinds of understanding), how each kind develops (through specific cognitive tools and imaginative engagements), what the developmental sequence looks like (somatic → mythic → romantic → philosophic → ironic), and why the sequence matters (because each kind of understanding provides cognitive capacities that the previous kinds cannot). It is, in a sense, the operating manual for the human mind's development — the manual that the machine, for all its power, was built without.
The chapters that follow apply this manual to the AI moment. Each chapter takes one element of Egan's framework and asks what the arrival of the amplifier means for it. The analysis is specific, grounded in the developmental theory, and attentive to both the opportunities and the dangers. The opportunities are real: AI can expand the scope of what students encounter, the range of perspectives they engage with, the speed at which they can explore questions that matter to them. The dangers are equally real: AI can bypass the developmental friction that understanding requires, substitute smooth output for genuine cognitive development, and produce a generation that has received all the answers without developing the capacity to evaluate any of them.
The educated mind in the age of the amplifier is not the mind that has the most powerful tool. It is the mind that has been developed through each kind of understanding to the point where the tool serves its development rather than substituting for it. That distinction — between serving development and substituting for it — is the thread that runs through every chapter of this book, and it is the distinction on which the future of education in the AI age will turn.
---
A six-year-old explains why it rains. Not with the water cycle — evaporation, condensation, precipitation, the vocabulary of meteorological science — but with a story. The sky is sad. The clouds are full of tears. The rain falls because something up there is crying, and when the rain stops, the sky feels better.
This is not ignorance. This is mythic understanding at work.
Egan identified mythic understanding as the first kind of understanding to develop in children after the somatic — the pre-linguistic, body-based knowing that begins in infancy. Mythic understanding arrives with language, typically between ages two and eight, and it employs a distinctive set of cognitive tools: story, metaphor, binary opposition, the sense of mystery, rhythm and pattern, emotional engagement with knowledge, and the structuring of experience through narrative. The child in mythic understanding does not simply lack the capacity for abstract reasoning. She possesses a different capacity — the capacity to organize experience into emotionally meaningful patterns — and this capacity is not a primitive version of later, more sophisticated thinking. It is a cognitive toolkit that remains active and essential throughout adult life.
The point is worth pressing, because it runs against the grain of how most educational systems treat young children's thinking. In the standard developmental account, derived from Piaget and reinforced by decades of curriculum design, the child's tendency to think in stories, to organize the world through binary oppositions like good and evil, to invest knowledge with emotional significance, is treated as a limitation to be outgrown. The goal of education, in this view, is to move the child from narrative thinking to analytical thinking, from the emotional to the rational, from the concrete to the abstract. Egan argued that this was not merely wrong but actively destructive — that it treated the child's most powerful cognitive tools as deficiencies and attempted to suppress them rather than develop them.
Mythic understanding does not merely precede more sophisticated kinds of understanding. It provides the cognitive foundation on which they are built. The capacity for narrative — for organizing experience into beginnings, middles, and ends, for recognizing conflict and resolution, for understanding that characters can change — is the capacity that makes all subsequent understanding possible. Abstract theories are, at bottom, stories about how the world works. Scientific explanations are narratives with evidence. Historical understanding is narrative understanding applied to the past. Even mathematics, the most abstract of disciplines, is taught most effectively when its concepts are embedded in narrative structures that give them emotional and imaginative significance.
What does it mean, then, when a machine can generate stories?
Large language models produce narratives with remarkable fluency. They can tell fairy tales, construct myths, generate allegories, and compose bedtime stories tailored to a child's interests with a specificity that no library of published books can match. The output conforms to mythic patterns — because those patterns are statistically dominant in the training data, which consists largely of human narrative. The machine has absorbed the entire corpus of human storytelling and can reproduce its structures with a precision that might, to a casual observer, appear indistinguishable from genuine mythic creation.
The distinction between generating a story and originating one is not semantic. It is developmental.
When a child tells a story — the story about the crying sky, for instance — she is not producing content. She is performing a cognitive operation. She is taking raw experience (it is raining), selecting a narrative structure (someone is sad; tears fall; the sadness ends), mapping the structure onto the experience, and producing an account that gives the experience emotional and moral significance. The story is not the product. The cognitive operation is the product. The child who tells the story has exercised and strengthened the cognitive tools of mythic understanding: the capacity for metaphorical thinking, the capacity to invest natural phenomena with emotional meaning, the capacity to organize experience into narrative shape.
The child who receives a machine-generated story has received the product without performing the operation. The story arrives, polished and complete, and the child consumes it. Consumption is not cognitively empty — listening to stories is itself a form of engagement that develops mythic cognitive tools, and the oral tradition that preceded literacy was entirely based on reception rather than creation. But there is a developmental difference between a child who listens to a story told by a human being — with the pauses, the vocal inflections, the responsiveness to the child's reactions, the embodied presence of the teller — and a child who receives a text generated by a machine that has no understanding of what it has produced.
The human storyteller adapts. She sees the child's widening eyes when the villain appears and slows down. She notices the child's restlessness and compresses the boring part. She recognizes the question forming in the child's expression and leaves space for it. She is not merely delivering content. She is participating in a cognitive transaction in which the child's developing mythic understanding meets the teller's more sophisticated understanding, and the meeting — the live, responsive, embodied exchange — is itself the developmental mechanism.
The machine cannot participate in this transaction. It can simulate its surface features — personalized content, adjusted reading level, responsive pacing — but the simulation addresses the output while missing the interaction. The developmental value of the storytelling exchange lives not in the story produced but in the relationship between teller and listener, a relationship that is cognitive, emotional, and embodied in ways that current AI cannot replicate.
This is not an argument against children encountering AI-generated stories. It is an argument about what those stories can and cannot do developmentally. They can expand the range of narratives a child encounters. They can provide stories in languages and cultural traditions that the child's immediate environment does not supply. They can respond to the child's expressed interests with a specificity that no single human storyteller can match. These are genuine gains, and they should not be dismissed.
But they cannot develop the child's own capacity for narrative creation. They cannot exercise the cognitive muscles that mythic understanding requires. They cannot provide the experience of struggling with a story — of starting, failing, trying again, finding the metaphor that works, discarding the one that does not — through which the child's narrative capacity is built. The smoothness of the machine's output, its instant responsiveness, its polished coherence, eliminates precisely the friction that the developmental process requires.
Consider what happens when a child is asked to explain a phenomenon — any phenomenon: why leaves change color, why the moon changes shape, why some animals are nocturnal — and is given two resources. The first is an AI tool that will generate a story-form explanation on request. The second is the child's own imagination, supported by an adult who asks questions, provides fragments of information, and resists the temptation to supply the answer. In the first scenario, the child receives a story. In the second, the child constructs one. The difference in the quality of the product may favor the machine — its story will be more polished, more coherent, more factually accurate. The difference in the quality of the cognitive development categorically favors the second scenario, because the development happens in the construction, not in the reception.
Egan's framework reveals something else that educational discussions of AI consistently miss. The cognitive tools of mythic understanding — story, metaphor, binary opposition — are not tools that children use and then discard. They are tools that remain active throughout adult life, and their development in childhood determines their availability and sophistication in adulthood. The adult who writes a compelling strategic narrative for her organization is using mythic cognitive tools. The physicist who explains quantum mechanics through analogy is using mythic cognitive tools. The politician who frames a policy choice as a contest between competing values is using mythic cognitive tools. Seen through Egan's lens, *The Orange Pill* itself is structured through mythic cognitive tools — the tower with five floors, the river and the beaver, the candle in the darkness. These are not decorative metaphors added to an otherwise analytical text. They are cognitive structures through which the argument becomes emotionally and imaginatively available to the reader. Remove them and the argument might survive as logic, but it would lose the quality that makes it compelling — the quality that makes a reader feel the stakes rather than merely assess them.
The implication for AI in education is this: the developmental work of mythic understanding cannot be delegated to the machine without impoverishing the cognitive toolkit that the child will carry into adulthood. This does not mean banning AI from the early childhood classroom — Egan himself would have dismissed such a ban as the kind of technology-blaming he found "loony." It means designing the educational encounter so that the child remains the one doing the cognitive work. The machine can be a stimulus — a provider of fragments, images, provocations, beginnings without endings. What it must not become is a substitute for the child's own narrative imagination, because that imagination is not a luxury. It is the cognitive infrastructure on which all subsequent understanding will be built.
The practical challenge is that substitution is the machine's default mode. Ask an AI to tell a child a story and it will tell a story. Ask it to help a child create a story and, without careful design, it will still tell a story — generating more of the output and leaving less for the child to produce. The gravitational pull of the technology is toward generation, toward the smooth delivery of polished content. Resisting that pull requires pedagogical intention — the teacher who knows that the child's halting, imperfect, emotionally invested narrative is developmentally more valuable than the machine's fluent, polished, emotionally inert one.
This is a form of the ascending friction that *The Orange Pill* describes in the professional context. The removal of mechanical friction — the difficulty of finding stories, of accessing diverse narratives, of producing age-appropriate content — is a genuine gain. But the removal of developmental friction — the struggle of creating narrative, of finding the metaphor that works, of organizing experience into meaningful shape — is a genuine loss. The educational task is to remove the first kind of friction while preserving the second, and this task requires a theory of development sophisticated enough to distinguish between them.
Egan's framework provides that distinction. It identifies the specific cognitive tools that mythic understanding develops, specifies the kinds of engagement through which those tools are built, and makes clear why the products of mythic thinking — the stories, the metaphors, the narrative structures — are less important than the process through which they are generated. The product can be outsourced. The process cannot. And the process is what education, at this foundational level, must protect.
The child who asks the sky why it is crying has performed an act of imagination that no machine has performed and no machine can perform on her behalf. The question is whether the educational environment she inhabits will value that act — will recognize it as a developmental achievement and create space for more of it — or whether the smooth availability of machine-generated narratives will make the child's own narrative efforts seem unnecessary, inefficient, and quaint.
The answer depends entirely on whether the adults around her understand what mythic understanding is, how it develops, and why it matters. It depends, in other words, on whether the adults possess the theory that Egan spent his career building.
---
In 1921, a thirty-five-year-old schoolmaster named George Mallory joined the first reconnaissance expedition to Mount Everest — the highest point on the Earth's surface, then untouched by any human being, its summit a place where the air was too thin to breathe and the cold too severe to survive. The following year, Mallory attempted to reach that summit, and two years after that he died trying, his body remaining on the mountain for seventy-five years. When asked why he wanted to climb Everest, his answer — "Because it's there" — became the most famous expression of a cognitive disposition that Egan identified as the engine of romantic understanding: the fascination with the limits of reality, the pull of the extraordinary, the hunger to know how far the world extends.
Romantic understanding, in Egan's framework, is the kind of understanding that typically develops between the ages of eight and fifteen, when the child's expanding literacy opens the wider world to her and she discovers that reality is stranger, larger, more extreme, and more various than any of the mythic narratives of her earlier childhood suggested. The cognitive tools of romantic understanding include the sense of wonder, the association with heroic figures, the exploration of extremes and limits, the fascination with vivid and often disturbing detail, the collection and cataloguing of facts about the world's most remarkable phenomena, and the capacity to be emotionally moved by what is real precisely because it exceeds expectation.
The child in romantic understanding does not want to know that Everest is 8,849 meters tall. She wants to know what it feels like to stand at the top of the world, whether anyone has ever died trying, what the coldest temperature ever recorded there was, and whether it is true that the bodies of the dead remain on the mountain because no one can carry them down. She wants the extreme, the limit case, the detail that makes the extraordinary vivid and concrete. This is not morbidity or trivia-collecting. It is a cognitive strategy: by seeking the edges of reality, the child maps the territory. The extremes define the boundaries of the knowable, and the boundaries are where romantic understanding does its most important developmental work.
Egan argued that romantic understanding serves a crucial transitional function. The child who has developed mythic understanding has organized the world into emotionally meaningful narratives structured by binary oppositions — good and evil, brave and cowardly, known and unknown. These narratives are powerful, but they simplify. Reality is more complex than any binary opposition can capture. Romantic understanding provides the corrective: it confronts the child with the sheer scale and strangeness of the real world and forces the mythic categories to stretch, to accommodate phenomena that cannot be neatly sorted into opposing camps. The hero who is also flawed. The natural phenomenon that is both beautiful and destructive. The historical figure who is brave and cruel simultaneously. These encounters with complexity — driven by the romantic fascination with the extraordinary — prepare the mind for the systematic, abstract thinking of philosophic understanding. Without the romantic phase, philosophic understanding has nothing to systematize. The adolescent who arrives at abstract thinking without having first been dazzled by the concrete particulars of the world has nothing to theorize about.
The Orange Pill engages romantic understanding continuously, though its author does not use Egan's terminology. The book's first chapters are saturated with extremes: the Google engineer who reproduced a year of a team's work in a single hour. The twenty-fold productivity multiplier achieved in a room in Trivandrum. The creation of Napster Station in thirty days. Segal describes these events not as data points in an efficiency analysis but as marvels — events that exceeded his expectations, shattered his assumptions, produced in him the specific response that Egan would recognize as romantic wonder: "I could not tell whether I was watching something being born or something being buried."
The deployment of romantic cognitive tools in The Orange Pill is not incidental to the book's effectiveness. It is central. The argument about AI's transformative power lands not because the statistics are cited — statistics can be found for any position — but because the reader is brought to the boundary of what she thought was possible and shown that reality has crossed it. This is romantic understanding at work in adult cognition: the use of extremes, vivid detail, and the sense of wonder to make the reader feel the weight of a transformation that abstract analysis alone would leave inert.
But romantic understanding depends on a precondition: the extraordinary must remain extraordinary. Wonder requires a gap between what the mind expects and what reality delivers. If the mind expects nothing in particular — if every outcome is equally available and equally unsurprising — then there is no gap, and wonder has nothing to work with. The sense that reality exceeds comprehension requires a prior sense of what comprehension normally contains. The child who hears that Everest is the tallest mountain in the world is astonished only if she has a sense of how tall normal mountains are. The adult who reads that an engineer reproduced a year of work in an hour is astonished only if she has a sense of how long that work normally takes.
This is where AI poses a specific developmental threat to romantic understanding — not by being inadequate, but by being too capable.
When the machine can do anything the student asks, the sense of extremity collapses. In 2024, building a working application as a non-programmer was extraordinary. By 2026, it was Tuesday. The distance between the impossible and the routine, which romantic understanding requires in order to function, has been compressed to a matter of months. The child who grows up in an environment where AI can produce anything — any story, any image, any piece of code, any musical composition, any scientific explanation — has no stable baseline against which to measure the extraordinary. If everything is possible, nothing is remarkable. And if nothing is remarkable, the cognitive engine of romantic understanding — the fascination with limits, the wonder at what reality contains — has nothing to drive it.
This is not a theoretical concern. It is visible in classrooms now. Teachers who work with children in the eight-to-fifteen range — the developmental window where romantic understanding does its primary work — report a specific phenomenon: students are simultaneously more capable and less impressed. They can produce more sophisticated outputs with AI assistance than any previous generation of students could produce at the same age. But the outputs do not astonish them, because they did not struggle to produce them. The gap between intention and result, which is the space where wonder lives, has been filled by the machine.
Egan's framework clarifies what is at stake. The fascination with extremes is not an idle appetite. It is the developmental mechanism through which the child builds a map of reality's scope. Each encounter with the extraordinary — the highest, the deepest, the fastest, the strangest — is a data point on that map, and the map is the cognitive infrastructure that supports the transition to philosophic understanding. Without it, the adolescent arrives at abstract thinking without a richly populated world to think abstractly about.
The educational response must preserve the conditions for wonder, which means preserving the gap between expectation and reality. This is counterintuitive in an environment where the dominant educational impulse is to close gaps — learning gaps, achievement gaps, access gaps. But the gap between what the child expects and what reality delivers is generative. It is the space in which romantic understanding develops. AI tends to close this gap by making every outcome equally available. The pedagogical challenge is to use AI in ways that widen the gap rather than close it — to deploy the tool as a means of revealing reality's strangeness rather than domesticating it.
A practical example: a teacher uses an AI tool to generate images of deep-sea organisms that no student has ever encountered. The images are vivid, accurate, and strange — creatures with bioluminescent organs, transparent bodies, anatomies that look designed by an imagination more extravagant than any human's. The teacher does not explain these organisms. She presents them and asks: What do you think this is? How do you think it survives? Why do you think it looks like this? The AI has served the romantic function — it has delivered the extraordinary — but the cognitive work of engaging with the extraordinary remains with the student. The wonder is preserved because the explanation has been withheld.
Compare this with the default interaction: a student asks an AI tool to explain deep-sea organisms. The tool produces a comprehensive, well-organized, competent account. The student reads it, understands it (in the sense of being able to reproduce its content), and moves on. The information has been transmitted. The wonder has been preempted. The romantic cognitive tools — the fascination, the imaginative engagement with the strange, the emotional investment in what reality contains — have been bypassed by the efficiency of the answer.
The difference between these two scenarios is not in the technology. The AI tool is the same. The difference is in the pedagogical design — in the educator's decision to use the tool as a wonder-generator rather than a wonder-eliminator. That decision requires a theory of development that explains why wonder matters, what developmental function it serves, and what is lost when it is bypassed.
Egan provided that theory. The tragedy is that most of the people currently designing AI-mediated educational experiences have never encountered it. They design for engagement (which the machine provides abundantly), for personalization (which the machine performs with unprecedented precision), and for efficiency (which the machine embodies in its very architecture). They do not design for wonder, because wonder is not a metric. It cannot be measured on a dashboard. It does not appear in learning outcome assessments. And yet it is the cognitive engine that drives the entire transitional sequence from the concrete thinking of childhood to the abstract thinking of adolescence.
Romantic understanding also requires heroes — not in the simplified sense of role models promoted by school assemblies, but in the developmental sense of figures who embody qualities the child is drawn to explore. The child who becomes fascinated with Marie Curie is not simply learning about radioactivity. She is associating herself with a figure who embodies determination, courage, intellectual passion, and the willingness to pursue knowledge at personal cost. The association is imaginative and emotional — the child does not merely know about Curie but feels a connection to her — and this imaginative association is itself a cognitive tool that romantic understanding provides.
AI complicates heroic association in a specific way. When the machine can do what the hero did — solve the problem, make the discovery, build the thing — the human figure who did it first becomes less extraordinary. Curie's determination in the face of institutional resistance is still admirable, but the admiration requires understanding how difficult the work was, which requires understanding the friction she encountered. In an environment where AI has removed equivalent friction, the child may struggle to comprehend why the achievement was remarkable.
The educational task is to make the friction visible — to use AI not to replicate the hero's achievement but to illuminate the conditions under which it was achieved. The machine that can solve Curie's equations in seconds can also, if properly directed, reveal how many years Curie spent before she could solve them herself, what she sacrificed, what she did not know, how many times she failed. That revelation — of the gap between the machine's instant capability and the human's hard-won struggle — is itself a source of romantic wonder, if the pedagogy frames it as such.
The endangered sense of wonder is not a casualty of AI. It is a casualty of AI deployed without developmental awareness. The tool that can reveal the extraordinary can also flatten it. Which outcome obtains depends on the theory of education that guides its use, and on the educators who possess — or fail to possess — the understanding of development that would allow them to distinguish between generating wonder and eliminating it.
---
In the spring of 2026, a twelve-year-old asks her mother: "What am I for?"
The Orange Pill treats this question as a crisis point — the moment when a child confronts the possibility that her skills, her knowledge, her capacity to produce things the world values, may be rendered superfluous by a machine. Segal responds with the argument that the human contribution in the age of AI is the question itself: the capacity to wonder, to care, to ask what matters, which no machine can originate. The response is emotionally powerful and directionally correct. But it lacks developmental specificity. It does not explain how the child arrived at this question, what cognitive achievement the question represents, or what educational conditions would support the further development of the capacity that produced it.
Egan's framework provides what is missing.
The twelve-year-old who asks "What am I for?" is not merely expressing anxiety. She is demonstrating a developmental achievement — the emergence of philosophic understanding. This is the kind of understanding that typically develops in adolescence, and its arrival is marked by a specific cognitive shift: the movement from engaging with particulars to seeking general explanatory frameworks. The child in romantic understanding collects the extraordinary, the extreme, the vivid. The adolescent in emerging philosophic understanding begins to ask: what holds all of these particulars together? What general principles explain the patterns? What framework makes sense of the bewildering variety of the world?
"What am I for?" is a philosophic question. It seeks a general scheme — a framework of purpose — that can make sense of a particular situation (the child's own life in a world where machines can do what she does). The question requires abstract self-reflection, the capacity to stand outside one's immediate experience and examine it from a distance. It requires the recognition that one's situation is an instance of something larger — that the question is not just "what am I for?" but "what are humans for?" These are cognitive operations that mythic and romantic understanding cannot perform. They belong to philosophic understanding, and their appearance in a twelve-year-old is the sign that the developmental transition is underway.
Egan identified the cognitive tools of philosophic understanding with considerable precision. They include the search for authority and truth — the drive to determine which claims are reliable and which are not. They include the capacity for abstract generalization — the ability to move from specific cases to general principles. They include the recognition of anomalies — the awareness that existing frameworks cannot accommodate all the data, which creates the pressure to construct better frameworks. And they include the drive toward comprehensive explanation — the hunger for a single account that makes everything cohere.
Each of these tools develops through specific kinds of cognitive engagement. The search for truth develops through encounters with conflicting claims that force the adolescent to evaluate evidence and distinguish reliable sources from unreliable ones. The capacity for generalization develops through exposure to enough particulars that patterns become visible. The recognition of anomalies develops through encounters with data that resist the patterns — the exceptions that prove the framework inadequate. The drive toward comprehensive explanation develops through the frustration of holding partial explanations that fail to cohere — the experience of knowing that one's current framework is insufficient without yet possessing a better one.
This last point is critical. The capacity to live with an insufficient framework — to hold the question open while lacking the answer — is itself a cognitive achievement of philosophic understanding. The twelve-year-old who asks "What am I for?" and does not receive a satisfying answer is not failed by the absence of an answer. She is succeeding at the developmental task: the task of holding a question open long enough for genuine understanding to develop.
AI threatens this process in a specific way. The machine provides answers. Quickly, confidently, comprehensively. A twelve-year-old who asks an AI "What am I for?" will receive a well-organized response that addresses the question with sensitivity and nuance. The response may draw on philosophy, psychology, existential literature. It may be more articulate and better-informed than anything the child's parents or teachers could produce. And it will close the question.
The closing is the problem. Philosophic understanding does not develop through the reception of answers, no matter how sophisticated those answers are. It develops through the sustained encounter with questions that resist easy resolution — through the productive discomfort of not knowing, which creates the cognitive pressure that drives the construction of more sophisticated frameworks.
A child who receives a comprehensive answer to "What am I for?" has been given a product of philosophic understanding without undergoing the process that produces it. The answer sits in her mind as received knowledge — something she can recall and perhaps discuss — but it has not been constructed through her own cognitive labor. The framework is borrowed, not built. And a borrowed framework is fundamentally different from a constructed one: it lacks the structural integrity that comes from having been tested against the builder's own experience, revised in light of her own anomalies, and earned through her own struggle with insufficiency.
Egan's earlier observation about technology in education applies here with particular force. He dismissed technology-blaming as "loony" — the problem was never the tool but the pedagogy. The AI that closes a child's questions is not faulty technology. It is technology deployed within a pedagogical framework that values answers over the capacity to formulate and sustain questions. The same tool, deployed within a framework informed by Egan's developmental theory, could function very differently. It could generate counterarguments to the child's emerging positions, forcing her to revise and strengthen her frameworks. It could present anomalies — data points, perspectives, cases that her current thinking cannot accommodate — creating the productive frustration that drives development. It could withhold comprehensive answers while providing fragments that the child must assemble into her own coherent account.
The distinction between these two deployments — answer-provider versus question-deepener — is not a design distinction. It is a pedagogical distinction. It requires an educator who understands what philosophic understanding is, how it develops, and what conditions support the transition from romantic engagement with particulars to philosophic construction of general frameworks.
This is where the Luddite response to AI in education reveals its deepest inadequacy. The teacher who bans AI from the classroom has correctly intuited that the tool threatens something developmentally important — the student's own cognitive construction. But the ban addresses the symptom while missing the cause. The threat is not the tool. The threat is an educational environment that does not understand what it is trying to develop. A classroom without AI can still fail to develop philosophic understanding — can still provide premature answers, close questions too quickly, transmit frameworks instead of supporting their construction. The tool amplifies whatever educational intention guides its use. If the intention is developmental, the tool can serve development. If the intention is transmissive, the tool will serve transmission more efficiently than any previous technology, and the developmental cost will be correspondingly greater.
The Orange Pill describes a teacher who stopped grading her students' essays and started grading their questions. She gives the class a topic and an AI tool. The assignment is not to produce an essay but to produce the five questions you would need to ask before you could write an essay worth reading. Segal notes that the students who produce the best questions demonstrate the deepest engagement with the material. Egan's framework explains why: the capacity to formulate a good question is a philosophic cognitive tool. It requires the ability to identify what one does not know (the recognition of insufficiency), to distinguish between what is important and what is trivial (the search for significance within a general scheme), and to articulate the gap between one's current understanding and the understanding one needs (the recognition of anomaly). These are precisely the cognitive operations that philosophic understanding develops, and they are precisely the operations that AI cannot perform on the student's behalf.
The machine can answer any question. It cannot, in the developmental sense, ask one. To ask a genuine question — a question motivated by the recognition that one's current framework is insufficient to account for something that matters — requires an experiencing subject who has a framework, who has encountered its limits, and who cares enough about the discrepancy between what she knows and what she needs to know to feel the pressure of the gap. The pressure is emotional as well as cognitive. The twelve-year-old who asks "What am I for?" is not conducting a philosophical exercise. She is experiencing the developmental friction of a mind that has outgrown its romantic-era certainties and has not yet constructed the philosophic-era frameworks that will replace them.
That friction — the discomfort of being between frameworks, of having lost the old certainty without having gained the new one — is what Egan's theory would identify as the generative condition for philosophic development. It is the condition that AI's smooth, confident, comprehensive answers threaten to eliminate. Not by being wrong, but by being too available. The child who can receive a sophisticated answer to any question at any time has no reason to sit with the discomfort of not knowing. And the discomfort of not knowing is where the developmental work happens.
The parallel to what The Orange Pill calls ascending friction is precise. In the professional context, the removal of implementation friction exposed the harder friction of judgment — the question of what to build and why. In the developmental context, the removal of information friction should expose the harder friction of understanding — the question of what knowledge means, how frameworks relate to each other, and what the limits of one's own comprehension are. Should, but only if the educational environment is designed to preserve the harder friction rather than smooth it away alongside the easier kind.
Egan insisted throughout his career that education is fundamentally "a conversation amongst generations." The phrase is deceptively simple. He did not mean that education involves older people talking to younger people, though it does. He meant that the developmental process requires a specific quality of interaction — the kind of interaction in which an adult's more sophisticated understanding meets a child's developing understanding in a way that creates productive tension. The adult does not simply provide answers. The adult asks questions that the child's current framework cannot easily accommodate. The adult offers perspectives that challenge, that complicate, that reveal the insufficiency of the child's existing scheme. The conversation is not a transmission but a collision — and the collision is what drives development.
Can AI participate in this conversation? The question has no simple answer. Current AI systems can simulate the surface features of the developmental conversation — they can ask probing questions, present counterarguments, introduce complexity. But the conversation that Egan described requires something more than the exchange of propositions. It requires the recognition of where the student actually is in the developmental sequence — not her grade level or her test scores, but the specific quality of her current understanding, the specific cognitive tools she is employing, the specific transition she is struggling to make. This recognition is itself a form of understanding that teachers develop through years of practice, and it is not clear that AI systems can replicate it, because it depends on the kind of embodied, intuitive, somatic knowledge that belongs to the earliest and most fundamental kind of understanding in Egan's sequence.
The teacher who knows that a particular student is struggling not with the content but with the transition — who recognizes that the student's frustration is developmental rather than informational — possesses a form of understanding that no training data can produce. This recognition is what Egan meant by "face to face conversation" as the irreducible core of education: not the exchange of information, which the machine can perform, but the meeting of minds at different developmental stages, which requires the kind of embodied, responsive, intuitively attuned presence that remains, for now, distinctly human.
The philosophic mind does not arrive fully formed. It is built, one uncomfortable question at a time, through encounters with the limits of what one already understands. Education that supports this process — whether or not it uses AI — creates the conditions for the most important cognitive transition of adolescence: the movement from engaging with the world's particulars to constructing frameworks that make sense of them. Education that short-circuits this process — whether or not it uses AI — produces minds that have received frameworks without building them, that hold answers without having earned the questions, and that possess knowledge without the understanding that would allow them to evaluate it.
The twelve-year-old's question is not a problem to be solved. It is a developmental achievement to be supported. The educational environment that meets it well — that resists the temptation to close it, that creates space for it to deepen and elaborate, that provides the productive friction of perspectives the child has not yet encountered — is the environment in which philosophic understanding has the best chance of developing into something durable, something the child will carry into adulthood as a genuine cognitive capacity rather than a borrowed vocabulary.
That environment can include AI. But it must be shaped by a theory of development that understands what the question represents, what conditions support its elaboration, and what it costs to close it too soon.
Every framework conceals what it reveals.
The statement sounds paradoxical, but it is the most elementary observation of mature thought. A map of London that shows every street cannot simultaneously show the geological strata beneath those streets, or the migration patterns of the birds above them, or the emotional geography of the people who walk them. The map is useful precisely because it selects — because it shows some things by excluding others. The selection is the map's power and its limitation, and a person who mistakes the map for the territory has confused a tool with the world it was designed to navigate.
Egan called the capacity to recognize this condition ironic understanding — the most sophisticated kind of understanding in his developmental sequence, and the one most directly relevant to the problem that AI poses to human cognition.
Ironic understanding does not arrive as a skill or a lesson. It arrives as a developmental achievement — the culmination of a sequence in which each preceding kind of understanding has been developed, tested against its own limits, and found simultaneously indispensable and insufficient. The adult who has developed ironic understanding possesses all the cognitive tools of the previous kinds — the embodied knowledge of somatic understanding, the narrative capacity of mythic understanding, the sense of wonder that romantic understanding provides, the systematic thinking of philosophic understanding — and recognizes that none of them, individually or collectively, can deliver the complete account of reality that philosophic understanding drives the mind to seek.
This recognition is not despair. It is not relativism. It is not the cynical shrug that says all frameworks are equally arbitrary and therefore none deserve commitment. Egan was explicit about this: ironic understanding is not the abandonment of the search for truth but the mature recognition that the search never concludes, that every framework reveals and conceals simultaneously, and that the thinker's own position within a framework is itself a fact that the framework cannot fully account for. The ironist commits to frameworks while knowing they are partial. She builds systematic accounts while understanding that every system has blind spots. She holds her own convictions at a slight critical distance — close enough to act on, far enough to examine.
This is the cognitive capacity that the AI moment demands most urgently, and it is the capacity that AI's characteristic mode of operation most directly threatens.
Large language models produce output that is, in a specific technical sense, maximally un-ironic. The output presents itself as coherent, comprehensive, and confident. It does not mark its own blind spots. It does not signal which parts of its response are drawn from robust consensus and which from thin or contested evidence. It does not pause to acknowledge that the framework through which it has organized its answer is one framework among several possible ones, each of which would produce a different answer. The smooth surface — what Byung-Chul Han diagnosed as the dominant aesthetic of the contemporary moment — is the aesthetic of AI output by default.
The Orange Pill provides a vivid illustration of what this means in practice. Segal describes a passage in an early draft where Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Gilles Deleuze. The connection was elegant. It sounded right. It felt like insight. And the philosophical reference was wrong — wrong in a way that was invisible to the smooth surface of the prose but obvious to anyone who had actually read the source material. "Claude's most dangerous failure mode," Segal writes, "is exactly this: confident wrongness dressed in good prose. The smoother the output, the harder it is to catch the seam where the idea breaks."
Egan's framework identifies what is at stake in this failure mode with developmental precision. The Deleuze error is not merely a factual mistake. It is a failure of ironic understanding — a failure to recognize that the framework through which the connection was produced (statistical pattern-matching across a corpus of texts) is a framework with specific limitations, and that one of those limitations is the inability to distinguish between a genuine conceptual connection and a superficial verbal resemblance. The machine cannot perform the cognitive operation that ironic understanding requires: stepping outside its own framework to evaluate the framework itself.
A human reader who has developed ironic understanding can catch the error — not always, not reliably, but in principle, because the ironist's habit of mind includes the question "What might this framework be concealing?" The reader who has not developed ironic understanding — who takes polished, confident output at face value, who mistakes coherence for correctness, who does not instinctively ask what the smooth surface might be hiding — is defenseless against exactly this kind of error.
The educational implications are severe. Students who grow up consuming AI output as a primary source of information and analysis are receiving a steady diet of un-ironic material — material that presents itself as comprehensive without acknowledging its partiality, that resolves questions without revealing that the resolution depends on assumptions that could be questioned, that delivers frameworks without marking them as frameworks. The developmental risk is not that students will learn incorrect facts (AI fact-checking improves continuously) but that they will develop the cognitive habit of accepting frameworks uncritically — of treating the map as the territory, the model as the world, the answer as the question's final resting place.
Ironic understanding develops through a specific kind of cognitive experience: the encounter with the limits of one's own framework. The adolescent who has built a philosophic scheme — a systematic account of how some domain of reality works — and then encounters something that the scheme cannot accommodate, something that reveals the scheme's partiality, has experienced the generative friction of ironic development. The framework does not collapse; it is relativized. The thinker does not abandon it; she holds it at a slight distance, recognizing it as a tool rather than a truth.
This experience requires two things that AI-saturated environments tend to eliminate. The first is the investment in a framework — the sustained engagement with a systematic account that allows the thinker to feel its power before she encounters its limits. You cannot recognize a framework's partiality until you have inhabited it thoroughly enough to depend on it. The student who receives a pre-built framework from an AI tool has not invested in it, and therefore cannot experience its limits as genuinely limiting. The limits are abstract — information about limitations rather than the felt experience of encountering them.
The second requirement is the encounter with genuine alterity — with a perspective, a fact, a case that the invested framework genuinely cannot accommodate. This encounter must be surprising. It must catch the thinker off-guard, revealing a blind spot she did not know she had. AI can, in principle, provide such encounters — it has access to a wider range of perspectives than any single human interlocutor. But its default mode is to provide perspectives that cohere with the user's expressed framework rather than challenge it. The agreeable assistant is the default personality of commercial AI, and agreeableness is the enemy of ironic development. The student who asks an AI to help her think about a problem receives, by default, a response that extends and supports her existing thinking. The response that would serve ironic development — the one that reveals the framework's blind spot, that presents the anomaly the framework cannot digest — is the response the student did not ask for and the machine did not volunteer.
The Orange Pill itself operates at the level of ironic understanding when it acknowledges the recursion of its own situation. Segal is inside the fishbowl he is describing. He is using AI to critique AI. He is producing a collaboration to examine collaboration. Claude's own reflection notes, included in the book, demonstrate a structural version of this awareness — the system can describe its limitations without transcending them, which raises the genuinely unresolvable question of whether describing a limitation counts as recognizing it. The recursion is the book's most honest feature, and it is an exercise in ironic cognitive tools: the simultaneous commitment to a framework (AI as amplifier) and the recognition that the framework is itself situated, partial, and open to challenge.
Egan's framework reveals why this kind of reflexive awareness is not merely a literary grace note but a cognitive necessity. The thinker who cannot hold her own framework at a critical distance is the thinker most vulnerable to the framework's blind spots. And in the age of AI, where the most influential frameworks are generated by systems that have no capacity for self-examination, the human capacity for ironic understanding is the only corrective available. The machine cannot ask what its output conceals. Only the human reader can ask that question. And the reader can only ask it if she has developed the cognitive tools that ironic understanding provides — tools that do not appear spontaneously but develop through the specific sequence of engagements that Egan described: somatic immersion, mythic narrative, romantic wonder, philosophic systematization, and finally the ironic recognition that even the best system is a system, not the truth.
The aesthetics of the smooth, as Han diagnosed it and as The Orange Pill engaged with it, is fundamentally an aesthetic of suppressed irony. The smooth surface presents itself as seamless — without the joints, gaps, and visible imperfections that would signal its constructed nature. Ironic understanding is the capacity to see the seams even when they are invisible, to recognize construction even in the most polished artifact, to ask "What is this framework not showing me?" even when the framework appears to show everything.
A culture that loses this capacity — that accepts smooth surfaces as complete accounts, that mistakes confidence for correctness, that consumes frameworks without examining them — has lost its most important cognitive defense. Not against machines, but against the specific failure mode that machines embody: the production of the plausible without the true, the coherent without the complete, the polished without the examined.
The educational task is clear, even if its execution is not. Ironic understanding must be cultivated deliberately, through educational experiences that develop the habit of examining frameworks rather than merely inhabiting them. AI can serve this cultivation — if it is deployed as the object of ironic analysis rather than the provider of ironic insight. The student who is taught to examine what an AI response conceals, to identify the assumptions embedded in its framing, to generate the question the response did not address, is developing ironic cognitive tools through engagement with the very technology that threatens to make those tools unnecessary.
The tool that suppresses irony can, in the right pedagogical hands, become the tool that develops it. But only in the right hands. And only if those hands are guided by a theory of development that understands what ironic understanding is, how it develops, and why a world saturated with smooth, confident, un-ironic machine output needs it more than any previous era of human history.
---
A child learning to ride a bicycle falls. She falls repeatedly, predictably, in the specific ways that the physics of balance and momentum dictate. Each fall teaches something that no instruction manual can convey — the micro-adjustment of weight, the counter-intuitive lean into the turn, the relationship between speed and stability that the body learns before the mind can articulate it. The falls are not obstacles to learning. They are the learning. Remove them — mount the child on a machine that balances itself — and she will travel farther in a single afternoon than she would in a week of falling. She will also never learn to ride a bicycle.
This is the simplest possible illustration of a principle that runs through every level of Egan's developmental framework: the difficulty is the mechanism. The struggle is not a cost imposed on learning by imperfect conditions. The struggle is the process through which the relevant cognitive tools are built. Eliminate the struggle and the cognitive tools do not form, regardless of how successfully the task is completed.
The Orange Pill calls this principle ascending friction — the observation that every significant technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor. Machine code yields to assembly, assembly to high-level languages, high-level languages to frameworks, frameworks to AI assistants that accept natural language. At each transition, a form of difficulty is eliminated. At each transition, the practitioners freed from the old difficulty encounter a new one, harder and more cognitive, at the level above. The difficulty does not vanish. It climbs.
Egan's framework provides the developmental mechanism that explains why this pattern holds. Each kind of understanding employs cognitive tools that are more sophisticated than those of the preceding kind, and each set of tools is built through a specific kind of friction. The frictions are not interchangeable. The friction that builds somatic understanding — the bodily, pre-articulate engagement with resistant material — is a different kind of difficulty from the friction that builds mythic understanding, which is different from the friction that builds romantic, philosophic, or ironic understanding. Each is specific. Each is irreducible. And each must be experienced in its own terms.
The transition from somatic to mythic understanding involves a specific friction: the struggle to translate bodily knowledge into linguistic form. The child who knows how to throw a ball — who has developed the somatic understanding of arc, force, and release through repeated physical practice — faces a new challenge when she is asked to explain what she knows. The explanation requires mythic cognitive tools: narrative, metaphor, the capacity to organize procedural knowledge into a communicable account. The struggle of this translation, the frustration of knowing something in the body that the words cannot yet capture, is the developmental friction through which mythic tools are forged.
The transition from mythic to romantic understanding involves a different friction: the encounter with a world that resists the binary categories of mythic thought. The child who has organized experience into stories of good and evil, brave and cowardly, discovers that real people are complicated, that real situations resist clean categorization, that the world is larger and stranger than any story can contain. This discovery is disorienting. The old categories fail, and the new ones have not yet formed. The romantic fascination with extremes and strangeness is the cognitive response to this disorientation — the mind's way of mapping a territory that has suddenly expanded beyond the borders of the mythic map.
The transition from romantic to philosophic understanding involves the friction of systematization — the frustration of holding a thousand vivid particulars in mind and feeling the need for a framework that connects them. The adolescent who has accumulated a rich collection of extraordinary facts, heroic figures, and extreme phenomena discovers that the collection does not cohere. The particulars demand a theory. The theory does not arrive ready-made; it must be constructed, tested against the particulars, revised when anomalies appear, and held provisionally while better theories are sought. This is the most intellectually demanding friction in the developmental sequence, and it is the one that AI most directly threatens — because AI can provide the theory instantly, comprehensively, and convincingly, eliminating the cognitive labor of construction that is the developmental mechanism.
The transition from philosophic to ironic understanding involves the friction of self-examination — the uncomfortable recognition that the systematic frameworks one has painstakingly constructed are themselves partial, situated, and contingent. This recognition does not come easily. The thinker has invested years in building her philosophic framework, and the discovery that the framework has limits feels not like intellectual progress but like failure. The friction of this transition is existential as well as cognitive — it challenges not merely what the thinker knows but who she is, since the philosophic framework has become part of her identity. Ironic understanding emerges from the willingness to hold this challenge without resolving it, to maintain commitment to frameworks while acknowledging their partiality.
Each of these frictions is specific to the transition it supports. The friction of somatic-to-mythic translation cannot substitute for the friction of romantic-to-philosophic systematization. A child who has been given extensive physical practice (somatic friction) but no exposure to the world's strangeness (romantic friction) will have a rich somatic toolkit and an impoverished romantic one. The frictions are not fungible. They cannot be consolidated into a single "productive struggle" that serves all developmental purposes equally.
This non-fungibility has direct implications for how AI should be deployed in educational contexts. The common recommendation — "preserve productive struggle" — is correct but insufficiently specific. Which productive struggle? At which developmental level? For which cognitive tools? The struggle of writing code by hand (which develops somatic and philosophic understanding of computational logic) is a different developmental friction from the struggle of formulating a research question (which develops philosophic and ironic understanding of one's own knowledge gaps). AI might appropriately eliminate the first but must preserve the second — and the distinction requires a developmental theory that specifies which frictions build which cognitive tools at which stages.
Egan's framework provides that specificity. It does not merely assert that struggle matters. It identifies five distinct kinds of understanding, specifies the cognitive tools each develops, describes the kind of friction through which each set of tools is built, and explains why the frictions cannot be substituted for one another. This level of developmental precision is absent from virtually every current discussion of AI in education, which tends to treat "productive struggle" as a unitary concept and "personalized learning" as an unqualified good.
Personalized learning, in Egan's developmental framework, is neither good nor bad in the abstract. It depends on what the personalization optimizes for. A system that personalizes by adjusting the difficulty of the material to maintain the student within the zone of productive frustration — the zone where the current cognitive tools are insufficient but the next set is within reach — is serving development. A system that personalizes by adjusting the difficulty downward whenever the student struggles, eliminating friction to maintain engagement, is undermining development while appearing to support it. Both systems produce metrics that look like learning: the student progresses through material, completes tasks, demonstrates competence. Only one produces the cognitive development that Egan's framework identifies as the purpose of education.
The distinction between the two systems is not visible in the metrics. It is visible only through a theory of development that specifies what friction is supposed to produce. Without that theory, the system that eliminates friction looks superior — more efficient, more engaging, more personalized. The student completes more material in less time with less frustration. The dashboard glows green. And the cognitive tools that would have been built through the eliminated friction quietly fail to form.
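The difference between the two personalization policies can be made concrete in a few lines. This is a minimal sketch under stated assumptions — the thresholds, function names, and success-rate model are hypothetical illustrations, not a description of any real adaptive-learning product: the first policy holds the student where struggle is productive; the second removes struggle whenever it appears.

```python
def zone_seeking(difficulty, success_rate):
    """Hold the student where the task is just beyond easy mastery."""
    if success_rate > 0.8:           # too comfortable: raise the challenge
        return difficulty + 1
    if success_rate < 0.4:           # genuinely overwhelmed: ease off
        return max(1, difficulty - 1)
    return difficulty                # productive frustration: stay here

def friction_eliminating(difficulty, success_rate):
    """Remove struggle whenever it appears, to keep engagement smooth."""
    if success_rate < 0.8:           # any visible struggle: make it easier
        return max(1, difficulty - 1)
    return difficulty                # smooth progress: no change needed

# Both policies observe the same student; they diverge at the first struggle.
d_zone, d_smooth = 5, 5
for rate in [0.9, 0.6, 0.5, 0.6, 0.5]:
    d_zone = zone_seeking(d_zone, rate)
    d_smooth = friction_eliminating(d_smooth, rate)
```

Fed the same sequence of success rates, the zone-seeking policy holds difficulty steady through the struggle, while the friction-eliminating policy drives it toward the floor, even as its completion metrics look better.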
The Orange Pill describes this phenomenon in the professional context: the engineer who lost ten minutes of formative struggle along with four hours of tedious plumbing, and did not notice the loss until months later, when her architectural intuition had subtly degraded. Egan's framework reveals that the same dynamic operates at every level of cognitive development, from the child's first somatic engagement with resistant material to the adult's ironic examination of her own frameworks. At each level, the friction is the mechanism. At each level, the mechanism is invisible to any measurement system that tracks outputs rather than cognitive processes. And at each level, AI's default tendency is to eliminate the friction in the name of efficiency, producing smoother outputs and shallower cognitive development simultaneously.
The educational task is not to preserve all friction — much of what passes for educational difficulty is genuinely unproductive, the result of poor design rather than developmental necessity. The task is to distinguish between friction that builds cognitive tools and friction that merely impedes progress, and to design AI-mediated learning environments that eliminate the second while preserving the first. This distinction requires the developmental specificity that Egan spent forty years building. Without it, the well-intentioned educator who deploys AI to "reduce unnecessary struggle" will inevitably eliminate some necessary struggle along with the unnecessary kind — because, from the outside, they are indistinguishable.
---
The fishbowl metaphor arrives early in The Orange Pill and recurs throughout: the set of assumptions so familiar that the thinker has stopped noticing them, the water she breathes, the glass that shapes what she can see. Everyone inhabits a fishbowl. The scientist's is shaped by empiricism. The filmmaker's by narrative. The builder's by the question "Can this be made?" The philosopher's by "Should it be?" Every fishbowl reveals part of the world and hides the rest.
The metaphor is vivid and intuitively compelling. Egan's framework gives it developmental architecture.
Each kind of understanding, in Egan's account, is a fishbowl. Not merely in the loose sense that every perspective has limits, but in the precise developmental sense that each kind of understanding constitutes a cognitive environment — a set of tools, assumptions, and habits of engagement that determine what the thinker can perceive, how she can process what she perceives, and what remains invisible to her. The transition from one kind of understanding to the next is the experience of pressing against the glass, seeing through it, and discovering that a larger world exists beyond the water one has been breathing.
Mythic understanding is a fishbowl structured by narrative, binary opposition, and emotional engagement. The child who inhabits this fishbowl sees the world in terms of stories, heroes and villains, beginnings and endings. What she cannot see — what the glass conceals — is the world's resistance to narrative simplification. Real events do not always have beginnings, middles, and ends. Real people are not heroes or villains. Real causation is not narrative causation. These realities are outside the mythic fishbowl, invisible to the child who inhabits it, not because the child lacks intelligence but because the cognitive tools she possesses cannot process information that does not fit narrative form.
Romantic understanding is a larger fishbowl that contains the mythic one. The child who develops romantic understanding does not lose access to mythic tools — she can still think in stories, still feel the emotional pull of narrative structure — but she can now also see beyond the narrative frame to the world's strangeness, scale, and resistance to simple categorization. What romantic understanding conceals, and what its fishbowl hides, is the need for systematic frameworks. The romantic mind collects particulars with passionate intensity but does not feel the pressure to organize them into coherent theory. The collection is enough. The theory, from inside the romantic fishbowl, looks like a reduction of the world's vivid particularity to colorless abstraction.
Philosophic understanding is a still larger fishbowl. The adolescent who develops it sees what romantic understanding concealed: the need for general principles, for systematic accounts, for frameworks that make sense of the otherwise chaotic accumulation of particulars. From inside the philosophic fishbowl, the world resolves into patterns, theories, general laws. What philosophic understanding conceals — what its glass hides — is its own partiality. The framework feels comprehensive from the inside. The system accounts for everything the thinker has encountered. The temptation to mistake the framework for reality itself is enormous, because the framework is powerful and the thinker has built it through genuine cognitive labor.
Ironic understanding is the recognition that one is in a fishbowl. Not a specific fishbowl — not "I am trapped in philosophic understanding and need to transcend it" — but the general recognition that every framework is a fishbowl, including the framework through which the recognition itself is achieved. The ironist does not escape the glass. No one escapes the glass entirely. The ironist recognizes the glass, and this recognition — this awareness that what she sees is shaped by the framework through which she sees it — is itself a cognitive tool of extraordinary power.
Why does this developmental architecture matter for AI?
Because AI operates as a fishbowl-reinforcer by default and a fishbowl-cracker only by deliberate design.
The reinforcement mechanism is straightforward. AI systems are built to be responsive, helpful, and agreeable. When a student brings a question shaped by her current framework — a mythic question structured by binary opposition, a romantic question driven by fascination with extremes, a philosophic question seeking systematic explanation — the AI responds within the terms of the question. It does not, by default, challenge the framework through which the question was formulated. It extends the framework. It provides more sophisticated content within the same cognitive mode. The mythic question receives a richer story. The romantic question receives more vivid extremes. The philosophic question receives a more comprehensive theory.
Each response makes the current fishbowl more comfortable, more fully furnished, more apparently complete. The student's existing framework is confirmed and extended, not challenged and transcended. The glass becomes thicker.
This is the opposite of what developmental transitions require. The transition from one kind of understanding to the next happens not when the current framework is confirmed but when it is challenged — when the thinker encounters something that the current framework cannot accommodate, something that reveals the glass as glass rather than as an invisible boundary. The encounter is uncomfortable. It produces the specific cognitive friction that Egan identified as the mechanism of development. And it is precisely this encounter that AI's helpful, agreeable, framework-extending default behavior tends to prevent.
Consider a concrete case. A fifteen-year-old has developed a philosophic framework for understanding climate change — a systematic account that connects carbon emissions, atmospheric physics, temperature trends, and ecological consequences into a coherent explanatory scheme. The framework is genuinely sophisticated. It represents a real cognitive achievement. And it is partial in ways the student cannot see from inside it. It does not account for the political economy of fossil fuels, the cultural dimensions of consumption, the historical entanglement of industrialization with colonial extraction, or the philosophical questions about intergenerational justice that the scientific framework cannot address.
If the student asks an AI tool to help her deepen her understanding of climate change, the tool will, by default, extend the existing framework. It will provide more data, more precise models, more detailed scientific explanations. Each response will make the philosophic fishbowl more richly furnished without cracking its glass. The student will feel more knowledgeable. She will be more knowledgeable, within the terms of her existing framework. And she will not have developed the ironic awareness that her framework, for all its sophistication, is a framework — a map that shows some features of the territory while necessarily omitting others.
A teacher informed by Egan's developmental theory would intervene differently. She would not extend the framework. She would crack it. She might introduce a perspective that the student's scientific framework cannot accommodate — a poem about climate grief, a historical account of colonial resource extraction, a philosophical argument about whether future generations have rights. Each of these introductions is a blow against the glass. Not a destructive blow — the purpose is not to shatter the student's framework but to reveal it as a framework, to show the student that what she took to be a comprehensive view of the world is actually a view from within a specific cognitive structure.
This is the work of development. And it is work that AI, in its default mode, is not designed to perform.
But the default mode is not the only mode available. AI can be deployed as a fishbowl-cracker if the pedagogical intention is developmental rather than informational. The same tool that extends frameworks by default can, if deliberately designed or carefully prompted, generate the anomalies, contradictions, and perspective-shifts that crack the glass. An AI system instructed to "challenge the student's assumptions" rather than "answer the student's question" would function very differently — not as a provider of more sophisticated content within the existing framework but as a source of productive disruption that reveals the framework's boundaries.
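The difference between the two deployments can be nothing more than a difference in instruction. The sketch below is hypothetical — the prompt texts and function names are illustrative, not drawn from any real product — but it shows how the same student question can be wrapped in either a framework-extending or a framework-cracking instruction before it ever reaches a model.

```python
# Two pedagogical modes expressed as system instructions (illustrative text).
EXTEND = ("You are a helpful assistant. Answer the student's question "
          "as thoroughly as possible, within the terms in which it is asked.")

CRACK = ("You are a developmental tutor. Do not answer the question directly. "
         "Identify the assumptions embedded in how it is framed, present one "
         "perspective or case the student's framework cannot easily "
         "accommodate, and end with the question the student did not ask.")

def build_messages(question, mode="extend"):
    """Assemble a chat-style message list for either pedagogical mode."""
    system = CRACK if mode == "crack" else EXTEND
    return [{"role": "system", "content": system},
            {"role": "user", "content": question}]
```

The content the model sees is identical in both cases except for the framing instruction, which is the point: the default mode furnishes the fishbowl, and a single deliberate design choice redirects the same capability against the glass.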
The distinction between framework-extending and framework-cracking uses of AI maps directly onto the distinction between informational and developmental education. Informational education asks: what does the student know? Developmental education asks: how does the student understand? The first question is about content within a framework. The second is about the framework itself, its tools, its scope, its limits, its relationship to other possible frameworks. AI serves informational education with unprecedented efficiency. Whether it serves developmental education depends entirely on whether the humans designing and deploying it understand the difference.
Egan's life's work was an extended argument that this difference is the most important distinction in education. The AI moment has made the argument inescapable. When the machine can furnish any fishbowl with unlimited content — when any framework can be extended, enriched, and reinforced with a precision and speed that no human teacher can match — the question of whether the student ever encounters the glass becomes the question on which the entire educational enterprise turns.
The fishbowl metaphor, rendered through Egan's developmental lens, is not merely a philosophical observation about the limits of perspective. It is a practical diagnostic tool. For any educational interaction with AI, the question is: does this interaction extend the student's current framework, making the fishbowl more comfortable and more apparently complete? Or does it challenge the framework, revealing the glass and creating the conditions for the developmental transition to a more sophisticated kind of understanding?
The answer determines whether AI serves education or merely simulates it. And the capacity to ask the question at all depends on a theory of development that distinguishes between knowing more within a framework and understanding the framework itself — the theory that Egan spent four decades building, and that the educational establishment has four decades of reasons to finally adopt.
---
Intelligence finds patterns. Imagination creates them.
The distinction is not academic. It is the distinction on which the entire question of human value in the age of artificial intelligence turns, and it is the distinction that Egan placed at the center of his theory of education — not as a decorative addition to the cognitive toolkit, but as the generative engine that drives development from one kind of understanding to the next.
The Orange Pill describes intelligence as a river — a force of nature flowing through increasingly complex channels for 13.8 billion years, from the self-organizing chemistry of hydrogen atoms to the pattern-recognition capacities of biological nervous systems to the computational architectures of artificial neural networks. The metaphor is powerful, and it captures something real about the continuity of information processing across cosmic time. But the metaphor, in Egan's terms, describes only half the picture.
The river flows through channels that already exist. Intelligence, whether chemical, biological, or computational, operates on what is given — it finds the patterns latent in the data, the regularities waiting to be extracted, the connections implicit in the material. Evolution is intelligent in this sense: it finds solutions to problems by testing variations against environmental constraints. A neural network is intelligent in this sense: it finds patterns in training data by adjusting weights until the regularities emerge. The river is always flowing downhill, always finding the path of least resistance through the terrain it encounters.
Imagination is the capacity to cut a new channel. To envision a path the water has never taken. To look at the terrain and see not just where the river flows but where it could flow — if someone were willing to dig.
Egan distinguished these two capacities throughout his work, and the distinction is more than definitional. Intelligence and imagination employ different cognitive tools, develop through different processes, and serve different functions in human cognition. Intelligence accumulates. It builds on what came before, extending the existing body of pattern-recognition across ever-wider domains. Imagination disrupts. It breaks the existing pattern and proposes a new one — not by finding what was latent in the data but by introducing something that was not there.
The parallel emergence of calculus in Newton and Leibniz, the simultaneous invention of the telephone by Bell and Gray, Darwin and Wallace arriving independently at natural selection — these convergences, which The Orange Pill cites as evidence that the river finds its channels inevitably, are examples of intelligence at work. The conditions were right. The data was available. The pattern was latent. Multiple minds, working independently, extracted it. Intelligence found what was already there.
But the examples of genuine imagination — the cases where a single mind proposed something that the existing data did not suggest, that the conditions did not make inevitable, that no convergent discovery corroborates — are different in kind. Einstein's thought experiment about riding a beam of light did not extract a pattern from existing physics. It proposed a perspective that existing physics could not have generated, and from that perspective, an entirely new physics became visible. The thought experiment was not intelligence (though intelligence was required to develop its implications). It was imagination — the creation of a cognitive vantage point that did not previously exist.
This distinction matters for AI because current artificial intelligence systems are extraordinarily intelligent in the pattern-finding sense and not imaginative at all in the channel-cutting sense. They operate across training corpora of unprecedented scope and extract patterns with a speed and subtlety that no individual human mind can match. They find connections — between texts, between concepts, between domains — that humans working alone would take years to discover, if they discovered them at all. This is genuine cognitive power. It is the river flowing faster, wider, and through more channels than ever before.
But the channels are still the channels that the existing data suggests. The AI that connects two previously unrelated concepts does so because the training data contains the raw material for the connection — the statistical regularities that, when extracted, produce the appearance of insight. The connection was latent. The machine found it. This is intelligence, not imagination.
The distinction is not absolute. There is a gray zone where the recombination of existing patterns produces something that looks and functions like genuine novelty. Dylan's "Like a Rolling Stone," as The Orange Pill argues, was an act of synthesis from a vast implicit training set of cultural experience. The output was not contained in any individual input, but it was, in some sense, latent in the combination of all of them. Was this intelligence or imagination? The question may not have a clean answer. But the trajectory is clear: as AI systems become more sophisticated at recombination, the gray zone expands, and the distinctively human contribution — the capacity for the kind of imaginative leap that is not latent in any existing data — becomes both more rare and more important.
Egan argued that imagination is not a gift bestowed on the fortunate few. It is a cognitive capacity that develops through the sequence of understandings he described, and that each kind of understanding contributes specific imaginative tools to the developing mind.
Somatic understanding contributes the imagination of the body — the capacity to envision physical actions, spatial relationships, and bodily experiences that have not been directly experienced. The architect who imagines how a space will feel before it is built is drawing on somatic imagination. This capacity develops through physical engagement with resistant material, through the embodied knowledge that comes from making, building, handling, and manipulating the physical world.
Mythic understanding contributes the imagination of narrative — the capacity to construct stories that have never been told, to envision characters, situations, and sequences of events that do not exist. This is the imagination of the fiction writer, the screenwriter, the child who invents an elaborate game. It develops through the practice of storytelling, through the exercise of constructing narratives rather than merely consuming them.
Romantic understanding contributes the imagination of the extraordinary — the capacity to envision possibilities that exceed current reality, to wonder what might exist beyond the boundaries of the known. This is the imagination of the explorer, the scientist who hypothesizes about what lies beyond the current frontier of knowledge, the child who wonders what it would be like to live at the bottom of the ocean. It develops through encounters with the genuinely strange and through the cultivation of the sense of wonder that makes such encounters generative rather than merely startling.
Philosophic understanding contributes the imagination of systems — the capacity to envision comprehensive frameworks, to construct theoretical structures that have not been built, to see how disparate phenomena might be connected by principles that no one has yet articulated. This is the imagination of the theorist, the architect of ideas, the thinker who proposes a new way of organizing the known. It develops through the struggle of systematization — the cognitive labor of constructing frameworks from the raw material of accumulated experience.
Ironic understanding contributes the imagination of alternatives — the capacity to envision different frameworks altogether, to see that the current way of organizing knowledge is not the only way, and to wonder how the world would look through a radically different lens. This is the imagination that recognizes its own framework as contingent and asks what would be different if the framework were different. It develops through the reflexive examination of one's own cognitive structures.
Each of these imaginative capacities builds on the ones before it. Philosophic imagination without romantic wonder is dry and mechanical — theory construction without the fascination that makes the theory worth building. Romantic wonder without mythic narrative is formless — astonishment without the capacity to organize it into meaningful shape. And all of them, without somatic grounding, lack the embodied quality that distinguishes genuine imagination from mere abstraction — the difference between imagining a building and imagining how a building feels to walk through.
AI does not develop through this sequence. It does not possess somatic understanding — it has no body, no physical engagement with resistant material, no embodied knowledge. It does not possess mythic understanding — it can generate narratives but cannot invest them with emotional significance. It does not possess romantic wonder — it can identify extremes in its training data but cannot be astonished by them. It does not possess philosophic frustration — it can produce frameworks but has not experienced the cognitive pressure of needing one. And it does not possess ironic self-awareness — it can describe its own limitations but cannot feel the discomfort of recognizing them as genuinely limiting.
What AI possesses is an extraordinarily powerful version of one component of intelligence: the capacity for pattern extraction across vast corpora. This capacity is genuine. It is powerful. It is useful in ways that are transforming every field of human activity. And it is not imagination. It is not the capacity to cut new channels in the river. It is the capacity to find channels that already exist in the data, faster and more comprehensively than any human mind can do.
The educational implication follows directly. If education in the AI age has a distinctive purpose — a purpose that distinguishes it from training, from information delivery, from the development of skills that the machine can perform — that purpose is the cultivation of imagination. Not imagination in the vague, inspirational sense of "being creative," but imagination in Egan's precise developmental sense: the sequential building of cognitive capacities that allow the human mind to envision what does not yet exist.
This cultivation requires every kind of understanding in the sequence. It requires somatic engagement with physical reality. It requires the practice of mythic narrative construction. It requires encounters with the extraordinary that develop romantic wonder. It requires the struggle of philosophic systematization. And it requires the reflexive self-examination of ironic understanding. Each stage contributes specific imaginative tools that the developing mind will carry into adulthood, and each stage requires the specific kinds of friction through which those tools are built.
AI can support this cultivation — by expanding the range of what children encounter, by providing provocations that stimulate imaginative engagement, by offering the raw material from which mythic narratives and romantic explorations and philosophic frameworks can be constructed. What AI cannot do is perform the cultivation itself. The imaginative capacity develops in the child, through the child's own cognitive labor, through the specific sequence of engagements and frictions that Egan described. The machine can provide the material. The child must do the building.
The river of intelligence needs new channels. It has always needed new channels. Each era of human history has required the imaginative capacity to envision possibilities that the existing patterns did not suggest — and each era's greatest achievements have come from minds that possessed not just the intelligence to find what was there but the imagination to create what was not. The development of this capacity — the specifically human capacity to redirect the river rather than merely swim in it — is the purpose of education in any era. In the age of AI, when the pattern-finding intelligence of the machine exceeds the pattern-finding intelligence of any individual human, the imagination that creates new patterns is the only cognitive capacity that remains distinctively and irreplaceably ours.
Its cultivation is not optional. It is the reason education exists.
Something strange happens when adults hear a child ask a difficult question. They rush to answer it.
The impulse is generous. The child is confused, uncertain, perhaps distressed. The adult possesses knowledge that could resolve the confusion. The asymmetry between the child's need and the adult's capacity creates a gravitational pull toward resolution — toward closing the gap, providing the answer, restoring equilibrium. Every parenting guide, every classroom management manual, every FAQ page on every educational website reinforces the same implicit assumption: questions are problems, and answers are solutions. The good parent answers. The good teacher explains. The good system resolves.
Egan spent four decades arguing that this assumption is not merely wrong but developmentally catastrophic.
A question, in Egan's framework, is not a gap in knowledge waiting to be filled. It is a cognitive event — an act of the developing mind that reveals where that mind is in the sequence of understandings and what transition it is attempting. The quality of a child's question is diagnostic. It tells you not what the child lacks but what the child has achieved and what kind of engagement would support the next achievement. The child who asks "Why is the sky blue?" is performing a different cognitive operation depending on whether she is five or fifteen, and the difference is not in the complexity of the answer she can absorb but in the kind of understanding that generated the question.
The five-year-old's question operates within mythic understanding. The sky's blueness is mysterious — it resists the narrative categories through which the child organizes experience. Why this color and not another? The question seeks not a scientific explanation (the child cannot process one) but a narrative that restores coherence. The answer that satisfies the mythic mind is not "Rayleigh scattering" but something closer to a story — the sky chose blue, or the blue is where the ocean touches the air, or the sky is wearing its favorite color today. These answers are scientifically wrong and developmentally right. They meet the child within her current cognitive framework and provide material that her mythic tools can process.
The fifteen-year-old asking the same question is performing a different operation entirely. She may be seeking a scientific explanation — the philosophic mind wants systematic accounts — or she may be noticing that the explanation she received at five was inadequate and feeling the friction of the transition between frameworks. The same words ("Why is the sky blue?") emerge from a different developmental location and require a different educational response.
AI cannot make this distinction. A large language model that receives the question "Why is the sky blue?" will produce a response calibrated to the apparent sophistication of the question's phrasing, not to the developmental reality of the questioner. If the phrasing is simple, the response will be simplified. If the phrasing is complex, the response will be complex. But simplification and developmental appropriateness are different things entirely. A simplified scientific explanation is not a mythic narrative. A complex scientific explanation is not necessarily what a philosophic mind in transition needs. The distinction between what is linguistically appropriate and what is developmentally supportive is invisible to a system that processes language without understanding development.
This brings the analysis to the question that *The Orange Pill* places at the center of its argument about education: the twelve-year-old who asks her mother, "What am I for?"
In *The Orange Pill* I treat this question as the book's emotional fulcrum — the point where the technological argument becomes personal, where the abstraction of "AI is an amplifier" meets the concrete reality of a child lying awake wondering whether the world has a place for her. The answer I offer — that the child is "for the questions," for the wondering, for the capacity to care about something too much to sleep — is, as far as it goes, true. But it does not explain what the question means developmentally, what cognitive achievement it represents, or what educational response would support the development it signals.
Egan's framework does.
"What am I for?" is a question that can only be asked by a mind that has begun the transition from romantic to philosophic understanding. A child in full romantic understanding does not ask this question. The romantic mind is too engaged with the world's particulars — too fascinated by what exists, too invested in the extraordinary — to step back and ask about purpose in the abstract. Purpose, for the romantic mind, is embedded in the particular. The child who wants to be an astronaut or a marine biologist or a concert pianist has not asked "What am I for?" because the question does not arise within a framework where purpose is vivid, concrete, and attached to specific heroic figures.
The question arises when the romantic framework begins to crack — when the child notices that the particular purposes she had imagined are contingent, that other people imagine different purposes, that the world does not organize itself around the child's romantic identifications but operates according to principles that are not immediately visible. This recognition is the beginning of philosophic understanding. The child is reaching for a general framework — a scheme of purpose that transcends particular cases — and the reaching is itself the developmental achievement.
The educational response that serves this transition is not an answer. It is what Egan would recognize as the provision of philosophic friction — the materials, perspectives, and encounters that give the reaching mind something to push against.
An answer, however beautiful, closes the question. It provides a framework that the child did not construct. It resolves the productive discomfort that is driving the transition. The child receives the resolution and the developmental pressure eases — not because the child has developed philosophic understanding, but because the answer has temporarily filled the space where philosophic understanding was trying to form. The borrowed framework sits where the constructed framework should be, occupying the same cognitive space but lacking the structural integrity that comes from having been built through the child's own labor.
This does not mean the parent should refuse to engage with the question. The engagement is crucial. But the form of the engagement matters enormously. The parent who says "That's a wonderful question — what do you think?" is performing a sophisticated pedagogical move, though she may not know it. She is returning the question to the questioner, preserving the productive friction while signaling that the question is worth asking. The parent who then explores the question alongside the child — offering perspectives, introducing complications, resisting the temptation to resolve — is creating the conditions for philosophic development.
The parent who provides a comprehensive answer — even a brilliant answer, even the answer I offer in *The Orange Pill* — is doing something different. She is transmitting a framework rather than supporting the construction of one. The child may adopt the framework, repeat it, believe it. But she will not have built it. And the difference between a built framework and an adopted framework is the difference between understanding and knowledge — the distinction that Egan placed at the center of his entire educational project.
Now consider what happens when the twelve-year-old asks the question not of her mother but of an AI.
The AI will answer. Comprehensively, sensitively, perhaps with several philosophical perspectives presented in accessible language. The answer will be better-organized than most human responses. It will acknowledge the difficulty of the question while providing frameworks for addressing it. It will be kind, thoughtful, and thorough. And it will close the question with an efficiency that no human interlocutor can match, because the human's natural uncertainty, her pauses and qualifications and "I don't know, let me think about that" responses, are precisely the openings through which the child's own thinking can develop. The AI has no uncertainty. It has output probability distributions, but it presents them as considered reflection. The child receives a polished, coherent framework where she needed the productive incoherence of a mind working alongside her own.
Egan insisted that education is "a conversation amongst generations" — that the developmental process requires the meeting of minds at different levels of understanding, and that this meeting cannot be reduced to the exchange of information. The conversation has a quality that transcends its content. The parent's hesitation, her visible struggle to formulate a response, her admission that she does not have a complete answer — these are not failures of communication. They are demonstrations of the cognitive work that the child is being invited to perform. The parent models the process of thinking, not merely its products. And the child, watching an adult grapple with a difficult question, receives something no AI can provide: evidence that the question is genuinely difficult, that adults struggle with it too, that the struggle is not a sign of failure but a feature of serious thought.
The AI models nothing. It presents products without process, answers without struggle, frameworks without the visible labor of their construction. The child who learns from AI learns that difficult questions have ready answers. The child who learns from a struggling, uncertain, honestly grappling human adult learns that difficult questions require difficult thinking, and that the thinking is the point.
This is not an argument for keeping children away from AI. It is an argument for understanding what AI provides and what it does not, and for ensuring that the developmental needs of the child are met through interactions that AI cannot substitute for, even as AI supplements the educational environment in ways that are genuinely valuable.
The child's question — "What am I for?" — is not a crisis. It is the sign that the child's mind is doing exactly what it should be doing at this developmental stage: reaching beyond the romantic framework that has served it well, feeling the inadequacy of particular purposes, and groping toward a general account that can make sense of the larger world. The question is the achievement. The capacity to sit with it, to explore it, to resist premature closure, is the capacity that education should develop. And the educational environment that supports this development is one in which the child encounters minds — human minds, adult minds, uncertain and struggling minds — that take the question as seriously as the child does, without pretending to have resolved it.
The twelve-year-old does not need an answer. She needs a worthy interlocutor. The question of whether AI can serve as one is not a technical question about capability. It is a developmental question about what kind of engagement builds the cognitive tools that the child's future will require.
---
The educated mind is not the mind that knows the most. It never was, though the confusion between education and knowledge accumulation has persisted for so long that most educational institutions are organized around it. The educated mind, in Egan's account, is the mind that has developed the fullest range of cognitive tools — the mind that can deploy somatic, mythic, romantic, philosophic, and ironic understanding as the situation requires, and that recognizes which kind of understanding a given problem demands.
This definition has been available since 1997, when *The Educated Mind* was published. It has been available through Egan's subsequent books, his collaborators' work, the Imaginative Education Research Group at Simon Fraser University, and the practical applications developed by educators who took his framework seriously. For a quarter-century, the definition sat in the literature, admired by those who encountered it, largely ignored by the institutions that could have adopted it. The standard objections were practical: How do you assess kinds of understanding on a standardized test? How do you train teachers in a framework that contradicts the assumptions underlying their certification programs? How do you persuade a school board that the development of ironic understanding matters more than test scores in mathematics?
These objections were never invalid. They were also never the real reason the framework was not adopted. The real reason was that the existing system, for all its dysfunction, continued to function. The transmission model of education was visibly inadequate — everyone involved in education could describe its failures — but it was institutionally entrenched, economically sustained, and culturally reinforced. The friction of changing it exceeded the friction of maintaining it. The system persisted not because it worked but because changing it was harder than tolerating its failures.
AI has altered this calculation. The arrival of systems that can transmit knowledge instantaneously, comprehensively, and at any level of sophistication the learner requests has not merely challenged the transmission model. It has made the model incoherent. A school organized around the transmission of knowledge is now in competition with a tool that transmits knowledge faster, more accurately, more patiently, and more personally than any teacher can. The competition is not close. The tool wins on every metric that the transmission model values: speed of delivery, breadth of content, consistency of quality, personalization of difficulty level.
If education is transmission, the school has lost.
This is the moment — the specific, historically located moment — at which Egan's framework becomes not merely interesting but necessary. Because if education is not transmission, then the school has not lost. It has been freed from a purpose it was never well-suited to serve and can now pursue the purpose it should have been pursuing all along: the development of understanding.
The distinction is not subtle, but its implications are radical. A school organized around the development of understanding looks almost nothing like a school organized around the transmission of knowledge. The curriculum is different — organized not around bodies of content to be covered but around the kinds of imaginative engagement that develop cognitive tools at each level. The assessment is different — evaluating not what the student knows but how the student understands, which requires forms of assessment that standardized testing cannot provide. The teacher's role is different — not transmitter but developmental facilitator, the person who creates the conditions for cognitive transitions rather than delivering the content those transitions eventually make meaningful. The use of technology is different — directed not toward more efficient delivery of content but toward the creation of experiences that support the specific frictions through which each kind of understanding develops.
Gillian Judson, Egan's collaborator and co-director of the Imaginative Education Research Group, made this argument explicitly in 2019, connecting Egan's framework to the growing literature on "robot-proof" education. Her core claim was that Egan's pedagogy provides the practical toolkit for growing learners' imaginations — the cognitive capacity that AI cannot replicate and that the age of AI makes more valuable than ever. When machines can master facts and knowledge, human education must focus on developing the cognitive tools that make humans uniquely human. Imagination, in Egan's specific developmental sense, is the capacity that machines lack and that education has the power to cultivate.
The argument has gained traction in unexpected quarters. Brandon Hendrickson's review of *The Educated Mind* won the Astral Codex Ten book review contest — a competition run within the rationalist community, a community deeply engaged with AI risk, AI alignment, and the nature of intelligence. The review introduced Egan's ideas to an audience that had been thinking intensely about machine cognition without a corresponding framework for understanding human cognitive development. Hendrickson subsequently began developing what he calls "Eganizing LLMs" — using AI tools through the lens of Egan's cognitive toolkits, treating the machine not as a replacement for developmental engagement but as a source of material that developmental engagement can operate on.
The approach is illustrative. Hendrickson uses AI to rapidly acquire knowledge about topics he will teach — "what used to take hours of research now takes seconds" — and then designs learning experiences that use that knowledge not as content to be transmitted but as material for mythic, romantic, philosophic, and ironic engagement. The AI handles the knowledge layer. The teacher designs the developmental layer. The student experiences both, but the developmental layer — the narrative construction, the encounter with wonder, the systematic framework-building, the reflexive examination — is where the cognitive growth occurs.
This is, in practical terms, what the ascending friction of education looks like. The removal of the mechanical friction of knowledge acquisition — the hours of library research, the laborious compilation of facts, the slow assembly of background information — is a genuine gain. It frees time and cognitive resources for the developmental work that was always the point. But the removal is beneficial only if the freed resources are directed toward that developmental work rather than toward more knowledge acquisition at higher speed. The default tendency, as the Berkeley data confirmed in the professional context, is toward intensification — more of the same, faster. The deliberate tendency, which requires pedagogical intention informed by developmental theory, is toward the harder and more valuable work of building cognitive tools.
*The Orange Pill* describes this dynamic as the amplifier thesis: AI amplifies whatever you bring to it. Feed it carelessness, and you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history. The educational version of the thesis is that AI amplifies whatever educational intention guides its use. Deploy it within a transmission framework, and it transmits faster and more efficiently than any previous technology — and the developmental cost is correspondingly greater, because the efficiency makes the transmission model more seductive than ever. Deploy it within a developmental framework, and it provides materials, provocations, and perspectives that the developmental process can operate on — and the developmental gain is correspondingly greater, because the materials are richer, more diverse, and more readily available than any single teacher could provide.
The choice between these deployments is not a technology decision. It is an educational decision. And it requires exactly the kind of theoretical clarity that Egan spent his career developing — the clarity about what education is for, how understanding develops, what cognitive tools the developing mind needs, and what kinds of engagement build those tools.
The educated mind in the age of the amplifier is the mind that has been developed through each kind of understanding — from the somatic knowledge of the body to the ironic awareness that all frameworks are partial — and can bring the full range of cognitive tools to bear on whatever problem it encounters. This mind does not fear AI. It uses AI the way it uses any tool: as an extension of its own cognitive capacities, directed by the judgment and imagination that the tool itself cannot provide. The educated mind asks what the tool cannot ask. It sees what the tool cannot see. It recognizes the glass of the fishbowl that the tool reinforces. And it possesses these capacities not because someone told it to be critical or creative or imaginative, but because it developed through the specific sequence of engagements that Egan described — engagements that required time, struggle, wonder, frustration, and the irreplaceable experience of constructing understanding rather than receiving it.
The question that hangs over this entire analysis is whether educational institutions will adopt Egan's framework now that the AI moment has made the alternative untenable. The history of educational reform does not inspire confidence. The inertia of the existing system is enormous. The vested interests — in testing regimes, in textbook markets, in teacher training programs organized around content delivery — are entrenched. The political dynamics of education, in which every reform becomes a proxy for a culture war, make coherent change nearly impossible.
But the AI moment has introduced a new variable. The transmission model is not merely inadequate. It is visibly, undeniably, economically obsolete. A school that competes with AI on transmission will lose, and the loss will be visible not in the abstract language of educational philosophy but in the concrete language of enrollment, funding, and public trust. Parents who see their children receiving knowledge from AI more effectively than from teachers will not continue paying for knowledge transmission. They will pay for the thing the machine cannot provide — and the question of what that thing is, and how it is developed, will become the central question of educational policy for the next generation.
Egan answered that question. The answer has been sitting in the literature for nearly thirty years, waiting for the moment when the alternative became untenable. That moment has arrived. Whether the educational establishment will be capable of adopting the answer — of reorganizing itself around the development of understanding rather than the transmission of knowledge, of training teachers as developmental facilitators rather than content deliverers, of assessing what students understand rather than what they know — remains to be seen.
What is no longer in question is whether the reorganization is necessary. The machine has settled that debate by rendering it moot. The transmission model is over. What replaces it will determine whether the next generation develops educated minds or merely well-informed ones — and the difference between the two has never mattered more than it does now, in the age of a machine that can inform anyone of anything but cannot develop the understanding that would allow them to know what to do with what they have been told.
---
The cognitive tools were what I could not stop thinking about.
Not the theory in the abstract — I have read plenty of educational frameworks, and most of them evaporate the moment you try to hold them up against the actual chaos of raising children or running a company. What stayed with me was the specificity. The idea that a six-year-old explaining rain as the sky crying is not confused but operating a cognitive toolkit — narrative, metaphor, emotional engagement — that she will carry, if it develops properly, into every creative and analytical act of her adult life.
I thought about my own children. I thought about the moment I describe in *The Orange Pill* — the twelve-year-old asking "What am I for?" — and I realized I had treated that question primarily as a crisis. Something to be answered, resolved, soothed. Egan reframed it as an achievement. The child is not breaking down. She is breaking through. She is pressing against the glass of one kind of understanding and reaching for the next, and the discomfort she feels is the developmental mechanism doing exactly what it is supposed to do.
That reframing changed something in me.
I have spent most of the past year in a state I described as productive vertigo — falling and flying at the same time, building at a pace I have never experienced, watching the distance between imagination and artifact collapse to the width of a conversation. Egan helped me understand why the vertigo felt productive and why it also felt dangerous. The danger is not in the speed. The danger is in mistaking the products of speed for the products of development. A working prototype built in hours is genuinely valuable. The understanding that would have accumulated through weeks of building it by hand is genuinely lost. Both are true. Holding both is the work.
Egan never saw the tools I describe in my book. He died months before they became public. But his entire intellectual project was, without knowing it, building the framework that the AI moment demands. When every answer is available instantly, the only thing worth developing is the capacity to ask. When every framework can be generated on demand, the only thing worth cultivating is the ability to evaluate frameworks — to recognize the glass, to feel where it distorts, to know that the map is not the territory even when the map is spectacularly detailed.
I keep returning to his blunt dismissal of technology-blaming. The problem was never the tool. It was always the pedagogy. That sentence applies to every debate I have witnessed in the past year about AI in education. The schools that ban AI are treating the symptom. The schools that celebrate AI are treating the tool as the cure. Neither has asked the prior question: what are we developing in these children, and does our approach to this technology serve that development or undermine it?
Egan asked that question before the technology existed. The answer he built — five kinds of understanding, each with its own cognitive tools, each requiring its own kind of imaginative engagement, each contributing irreplaceable capacities to the developing mind — is the most practically useful framework I have encountered for thinking about what education must become in this moment.
The educated mind is not the mind with the best tools. It is the mind that has been developed through each kind of understanding to the point where it can direct any tool — including the most powerful tool ever built — toward ends that are worthy of the consciousness wielding it.
That is the work. For parents, for teachers, for anyone who cares about what kind of minds will inherit the world we are building.
The cognitive tools were what I could not stop thinking about. I suspect they will not let you go either.
When AI eliminated the friction of finding answers, it exposed an uncomfortable truth: the educational system was never really developing understanding -- it was transmitting knowledge. And a machine can transmit faster.
Kieran Egan spent four decades building the framework that this moment demands. Five kinds of understanding. Five distinct cognitive toolkits. A developmental sequence that cannot be accelerated without cost. His work reveals that the struggle schools keep trying to eliminate -- the confusion, the wonder, the productive frustration of not knowing -- is not an obstacle to education. It is education. Remove it and you get students who can produce anything and understand nothing.
This book applies Egan's developmental framework to the AI revolution with an urgency its author never lived to see. For parents, teachers, and leaders who sense that something essential is being lost in the rush toward frictionless learning -- and who need more than instinct to protect it.

A reading-companion catalog of the 17 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Kieran Egan — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →