By Edo Segal
The thing I got wrong about the Trivandrum room was what made it work.
I told the story in The Orange Pill as a technology story. Twenty engineers. Claude Code. A hundred dollars a month per person. Twenty-fold productivity multiplier. I described the tool. I described the output. I measured what changed.
I did not measure why it changed. I could not have. I did not have the framework.
What I knew, in my gut, was that it mattered that I flew there. That I was in the room. That the engineers were next to each other, not on screens. That the senior developer who spent two days oscillating between excitement and terror could look across the table and see a colleague feeling the exact same thing. I insisted on physical presence with an urgency I could not fully explain, and when people asked me why a video call would not suffice, the best I could manage was: "It just wouldn't work."
That is a builder's answer. It is an honest answer. It is not a sufficient one.
Lev Vygotsky died in 1934 at thirty-seven years old. He never saw a transistor. But he built a framework for understanding human development that explains what happened in that room with more precision than anything I found in the technology discourse. His core claim is disorienting in its simplicity: every capacity you think of as yours — your ability to reason, to plan, to regulate your own attention — was first a social interaction between you and someone else. Development moves from the outside in. The counting you do silently in your head was once counting you did out loud with your mother's hands.
This reversal changes everything about how to think about AI as a collaborator. If cognition is built through dialogue, then a machine that participates in dialogue is not just a productivity tool. It is a developmental environment. The question is not what can the person produce with AI. It is what does the person become through the interaction with AI. Those are profoundly different questions, and only one of them leads anywhere useful.
Vygotsky also gave me the distinction I needed most: between a scaffold and a prosthetic. A scaffold comes down. The building stands on its own. A prosthetic stays forever. Every time I open Claude, I am in one of those two relationships, and the difference between them determines whether I am growing or just performing.
This book is another lens on the AI revolution — one ground not from silicon but from the developmental soil in which human minds actually grow.
The zone is open. What matters is whether we traverse it.
— Edo Segal ^ Opus 4.6
Lev Vygotsky (1896–1934) was a Soviet psychologist whose brief career produced one of the most influential theories of cognitive development in the history of the discipline. Born in Orsha, Belarus, and raised in Gomel, he studied law and philosophy at Moscow State University before turning to psychology in the early 1920s. In barely a decade of intensive work — cut short by his death from tuberculosis at age thirty-seven — Vygotsky developed the cultural-historical theory of human development, arguing that higher psychological functions such as abstract reasoning, voluntary attention, and self-regulation originate in social interaction and are internalized through the mediating role of language and cultural tools. His most widely known concept, the Zone of Proximal Development (ZPD), describes the gap between what a learner can accomplish independently and what becomes possible with the guidance of a more knowledgeable other — a framework that transformed educational theory worldwide. His major works include Thought and Language (1934) and Mind in Society (published posthumously in English in 1978). Suppressed in the Soviet Union for decades after his death, Vygotsky's writings were rediscovered in the 1960s and have since become foundational to developmental psychology, educational practice, and the study of how tools — from written language to digital technology — reshape the minds that use them.
The Western tradition in psychology committed its most consequential error not in any particular finding but in its foundational assumption about where cognition resides. For more than a century, the dominant paradigm treated the individual mind as the irreducible atom of psychological inquiry. The learner sits alone with a problem. The mind works upon the problem. A solution either arrives or does not. Learning is understood as an internal event — a rearrangement of mental furniture occurring within the sealed chamber of a single skull. The teacher, the parent, the peer, the social environment in which the learning takes place: all of these are treated as context, as background conditions that may facilitate or impede the real action, which happens inside the individual.
This picture is not merely incomplete. It is inverted. Learning does not originate inside the individual mind and then get communicated outward to others. Learning originates in the social interaction between minds and is subsequently internalized by the individual. The direction of travel is from the outside in, not from the inside out. Every higher psychological function — abstract reasoning, voluntary attention, deliberate memory, the capacity for planning and self-regulation that distinguishes human cognition from that of other species — appears first on the social plane, in the interaction between people, and only afterward on the individual plane, within the person. This is what the cultural-historical school calls the general genetic law of cultural development, and it is the foundation upon which the entire theoretical edifice rests.
The implications of this reversal have never been more consequential than in the present moment. If learning is fundamentally social — if the interaction between the learner and the more capable other is not a peripheral feature of development but its constitutive mechanism — then the arrival of a new kind of interlocutor, a machine that participates in the linguistic medium through which human development occurs, represents not merely a technological novelty but a transformation of the developmental process itself. The question that the cultural-historical framework forces us to ask is not the question dominating the popular discourse about artificial intelligence. It is not "Will AI replace human workers?" or "Is AI creative?" or even "Is AI dangerous?" The question is developmental: What happens to human cognition when the social interaction through which higher psychological functions develop is mediated by a non-human intelligence of unprecedented breadth and responsiveness?
The error that produced the inverted picture can be traced to a specific intellectual inheritance. René Descartes, sitting in his heated room in the winter of 1619, arrived at the conclusion that the one thing he could not doubt was his own thinking. Cogito ergo sum. The thinking self became the foundation of Western philosophy, and the individual mind became the starting point of all inquiry into the nature of knowledge. Jean Piaget, whose work the cultural-historical school engaged with more thoroughly than perhaps any other thinker's, represented the most sophisticated version of this individualist paradigm. Piaget was a brilliant observer of children's cognitive development, and his stage theory captured genuine features of how thinking changes over time. But Piaget's fundamental unit of analysis remained the individual child interacting with the physical world. The child acts upon objects. The objects resist or yield. The child's cognitive structures adapt to accommodate the resistance. The adult appears in Piaget's framework primarily as a provider of the environment in which the child's autonomous development unfolds. The adult sets the stage but does not participate in the drama.
The cultural-historical correction is precise: the child is never alone. The child who learns to count does not discover numbers through solitary exploration of the physical world. She learns to count in interaction with her mother, who counts with her, who holds up fingers and names them, who creates a shared activity in which the cultural tool of number becomes available to the child through social mediation. The counting is first a social act, performed between mother and child. Only gradually does it become an individual capacity, something the child can do without the mother's participation. The transfer from social to individual is not a trivial step. It is the fundamental mechanism of development.
Consider what this means for the scene that The Orange Pill describes in its opening chapter. Twenty engineers sit in a room in Trivandrum, India, in February 2026. They are experienced technical professionals who have been building software for years. Their leader tells them something that sounds insane: by the end of the week, each of them will be able to do more than all of them together. The tool is Claude Code. The cost is one hundred dollars per person per month.
From the perspective of the Western individualist tradition, what happens next is a story about a tool and its effects on individual productivity. Each engineer sits down with Claude Code and discovers that they can produce more, faster. The productivity multiplier approaches twenty-fold. The story, in this telling, is about the amplification of individual capability by a powerful instrument.
From the developmental perspective, the story is entirely different. What happens in that room is not the amplification of individual capability. It is the creation of a new form of social interaction through which development occurs. The engineer does not simply use Claude Code the way one uses a hammer or a calculator. She enters into a dialogue with it. She describes a problem. Claude responds with a structure, a suggestion, a piece of code. She evaluates the response, refines her description, pushes back, accepts some elements and rejects others. The interaction has the structure of a social exchange — the back-and-forth, the mutual adjustment, the gradual convergence on a shared understanding — even though one of the participants is not a person.
The structural similarity is not superficial. The reason human development occurs through social interaction is not that human beings happen to enjoy company. It is that social interaction provides the specific mechanism — the dialectical exchange between the learner's current understanding and the more capable other's scaffolding — through which higher psychological functions are constructed. If AI can participate in this mechanism, if the dialogue between the engineer and Claude has the functional properties of a developmental interaction, then the question of whether Claude is conscious becomes, from a developmental perspective, secondary. The question that matters is whether the interaction produces internalization: whether the engineer who builds a feature with Claude's assistance can subsequently build similar features with less assistance, because the interaction has expanded her independent capability.
This is the question that The Orange Pill raises but does not fully answer, and it is the question that the cultural-historical framework is uniquely equipped to address. The Orange Pill describes a backend engineer who had never written a line of frontend code building a complete user-facing feature in two days. From the individualist perspective, this is a story of tool-enabled performance. From the developmental perspective, it is a developmental event — but only if the engineer has genuinely internalized something through the interaction. If she can now approach frontend problems with a new understanding, if her mental model of how software systems work has been restructured by the experience of building across domains with AI assistance, then development has occurred. If she can only perform at this level with Claude present — if the capability exists exclusively in the interaction and not in the individual — then what has occurred is not development but performance. The distinction between the two is the most consequential distinction in the entire cultural-historical framework.
The social nature of what happened in Trivandrum extends beyond the individual engineer-AI dyad. The Orange Pill emphasizes that physical co-presence was essential. The leader flew to India. He insisted on being in the room. He worked alongside the engineers rather than sending instructions from a distance. This insistence on physical co-presence is not merely a leadership preference. It is a developmental necessity, because the transformation was not only cognitive but social and identity-related. The engineers were not just learning to use a new tool. They were learning to be different kinds of professionals, and this identity transformation required a social scaffold that no remote communication could provide.
The shared experience of discovery — what The Orange Pill calls the collective vertigo of the orange pill — was itself a zone of proximal development. Each engineer's transformation was supported by the visible transformation of colleagues undergoing the same experience. The senior engineer who spent his first two days oscillating between excitement and terror was not just interacting with Claude. He was interacting with a social environment in which his colleagues were making the same discovery, in which the leader was present and visibly participating in the same process, in which the vertigo was shared and therefore survivable.
The social context was not incidental to the learning. It was constitutive of it. A different social context — the same tool, the same engineers, but a different set of social relationships — could have produced a completely different outcome. An engineer who discovered the same expanded capability alone at her desk, without the social support of shared discovery, might have experienced the discovery as threatening rather than liberating. The capability expansion that produced exhilaration in a supportive social context might have produced anxiety or defensive retreat in an unsupported one.
The individualist tradition has no vocabulary for this distinction. It can describe the tool and its effects on the individual user. It can measure productivity gains and skill acquisition. What it cannot capture is the way the social environment of the Trivandrum workshop — the relationships, the trust, the shared vulnerability, the leader's own participation in the transformation — functioned as a developmental mechanism in its own right. The learning was not in the individuals or in the tool. It was in the interaction, in the zone between them, and that zone was shaped as much by the social relationships in the room as by the technical capabilities of the AI.
There is a further dimension that the cultural-historical framework insists upon. The social nature of learning implies something about the nature of knowledge itself that the individualist tradition has consistently obscured. If knowledge is constructed through social interaction, then knowledge is not a substance that exists independently of the knower and can be transferred from one mind to another like water poured between vessels. Knowledge is a relationship. It exists in the connection between the knower and the known, and that connection is always mediated by social tools — language, cultural practices, institutional structures, and now AI.
The AI system described throughout The Orange Pill participates in this relational knowledge in a way that no previous tool has done. A book contains knowledge in a static form. A database stores knowledge in a retrievable form. Claude participates in the construction of knowledge through dialogue, contributing to the relational process through which new understanding emerges. When The Orange Pill describes the moment its author felt "met" by Claude — not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified — the description captures a new form of the social interaction through which knowledge is constructed.
The developmental question is whether this new form of social interaction produces the genuine intersubjectivity — the shared understanding, the mutual construction of meaning — that human social interaction produces, or whether it simulates intersubjectivity while actually producing something different: something that looks like shared meaning-making from the outside but lacks the specific qualities that make genuine social interaction developmentally productive. If AI participation produces only simulated intersubjectivity, then AI-mediated knowledge may have the form of genuine knowledge without its developmental substance — knowledge that the person can use but has not genuinely constructed, that sits on the surface of understanding without being integrated into the deep structure of cognitive architecture.
The Western tradition asks: What can the individual do with this tool?
The cultural-historical tradition asks: What does the individual become through the interaction with this tool?
The difference between these two questions is the difference between a theory of performance and a theory of development. And it is the theory of development that the present moment demands.
The concept for which the cultural-historical school is most widely known — the Zone of Proximal Development — is also the concept most frequently reduced to a slogan. In countless education textbooks and instructional design manuals, the ZPD appears as a tidy formula: the distance between what a learner can do independently and what a learner can do with guidance. The formula is correct as far as it goes. The difficulty is that it goes much further than most interpreters have been willing to follow.
The Zone of Proximal Development is not a fixed property of the learner. It is not a measurement, a score, a stable quantity that can be assessed once and applied thereafter. It is a dynamic, relational space that exists between the learner and the more capable other, and it changes with every interaction. The same learner may have a narrow ZPD in one domain and a vast one in another. The same learner may have a wide ZPD with one teacher and a narrow one with a different teacher, because the quality of the scaffolding — the specific calibration of support to the learner's current level — determines the extent to which the zone opens. The ZPD is not in the learner. It is in the relationship.
This relational quality demands emphasis in the context of AI, because the popular understanding of AI as a tool — a thing the individual uses — obscures precisely what makes the AI interaction developmentally significant. A hammer extends the reach of the arm. A calculator extends the reach of arithmetic. These are tools in the straightforward sense: they augment an existing capability without fundamentally altering the cognitive structure of the user. The ZPD is not relevant to the relationship between a person and a hammer, because the hammer does not participate in the dialectical exchange through which development occurs.
AI does participate in this exchange, or at least appears to. When an engineer describes a problem to Claude and Claude responds with a structure, a suggestion, an implementation that the engineer could not have produced alone, the interaction has the functional properties of a ZPD interaction. The engineer's independent capability level is at point A. With Claude's scaffolding, she operates at point B. The distance between A and B is the zone of proximal development for that particular task, in that particular interaction, with that particular quality of scaffolding.
But the critical question — the question that separates performance from development — is what happens next. In the original formulation, the ZPD is not merely the space where scaffolded performance occurs. It is the space where learning happens. The child who can count to twenty with her mother's help today can count to twenty independently tomorrow. The zone closes from the bottom up: what was previously possible only with assistance becomes possible independently, and the zone shifts upward to encompass new challenges. This upward movement is development. Without it, the zone is merely a space of dependent performance, and dependent performance, no matter how impressive, is not learning.
The Orange Pill uses the language of amplification to describe the AI-human relationship: AI amplifies the signal that the human provides. The metaphor is illuminating but developmentally imprecise. An amplifier takes a signal and makes it louder, but the source is unchanged. When the amplifier is turned off, the output returns to its original volume. No transfer has occurred: the source has not become any stronger through amplification, and the signal was louder only while the amplifier was operating. If AI functions as an amplifier in this strict sense, then no internalization occurs. The zone of proximal development has been opened but not traversed.
Internalization, by contrast, produces a permanent change in the learner. The child who learns to count with her mother's help does not lose the ability to count when the mother leaves the room. The capability has been transferred from the social interaction to the individual. This is what the cultural-historical tradition means by development, and it is the standard against which AI-assisted performance must be measured.
Consider two different scenarios. In the first, an engineer uses Claude to build a frontend feature. She describes what she wants, Claude produces the code, she reviews it briefly and deploys it. The feature works. The interaction took two hours. In the second scenario, a different engineer uses Claude to build the same feature, but she engages differently with the process. She asks Claude to explain why specific design patterns were chosen. She modifies parts of the code herself, using Claude's output as a model. She deliberately builds the next similar feature with less AI assistance, using what she learned from the first interaction to extend her independent capability.
In the first scenario, the ZPD has been opened but not traversed. The scaffolding was used as a substitute for development rather than as a mechanism of it. In the second, the ZPD is functioning as the cultural-historical tradition intends. The scaffolding enables performance at a higher level, but the learner is actively engaged in the process of internalization — studying the output, modifying it, gradually incorporating its patterns into her own independent repertoire. The zone is closing from the bottom up.
The difference between these two scenarios is not in the tool. It is in the quality of the interaction, the intentionality with which the learner engages with the scaffold. The ZPD is not a property of the tool or the learner alone. It is a property of the relationship between them, mediated by the social and pedagogical context in which the interaction occurs.
This has profound implications for how organizations deploy AI tools. An organization that treats Claude Code as a productivity amplifier — a device for producing more output in less time — is using the tool in a way that opens the ZPD without traversing it. The employees will perform at a higher level, but they will not develop. Their independent capabilities will stagnate while their scaffolded performance soars, and the gap between the two will widen over time, producing a workforce increasingly dependent on the tool and increasingly unable to function without it. An organization that treats Claude Code as a developmental tool will deploy it differently. It will create structured opportunities for employees to work without AI after working with it, to consolidate what they have learned, to close the zone from the bottom up before opening it further at the top. This organization will produce slower initial productivity gains but more durable capability development.
There is a dimension of the ZPD that requires elaboration in the AI context, and it concerns two types of concepts that the cultural-historical tradition distinguishes with great care: spontaneous concepts and scientific concepts. Spontaneous concepts develop from the bottom up, from direct experience with the concrete world. They are rich in experiential content but poor in systematic organization. The child who has counted many objects has a spontaneous concept of number: she knows what counting feels like, what it is used for, what kinds of things can be counted. But her concept is unsystematic — it lacks the logical structure that connects counting to the broader mathematical framework.
Scientific concepts develop from the top down, from systematic instruction that provides the logical framework. The child who learns about number systems in school acquires a scientific concept of number: she understands place value, the relationship between counting and arithmetic, the logical properties that numbers share regardless of what is being counted. But her scientific concept may be thin in experiential content — she can articulate the rules but may lack the embodied, experiential understanding that the child with the rich spontaneous concept possesses.
Genuine development occurs when spontaneous concepts and scientific concepts meet in the middle — when the bottom-up experiential richness of spontaneous understanding is organized by the top-down logical structure of scientific understanding, and when the top-down abstractions of scientific concepts are grounded in the bottom-up concreteness of experiential knowledge.
This analysis has direct relevance to the AI transition. AI-assisted work tends to produce scientific concepts without spontaneous concepts — systematic, well-organized, logically structured outputs that are thin in experiential content. The engineer who receives a working implementation from Claude has a scientific concept of the solution: she can see its structure, evaluate its logic, understand its organization. But she may lack the spontaneous concept that comes from having struggled with the implementation herself — the embodied sense of why this approach works and that one does not, the experiential understanding that can only be built through hands-on engagement with the material.
Conversely, the engineer who builds everything independently may develop rich spontaneous concepts — deep experiential understanding built through struggle and practice — but may never develop the scientific concepts that would organize her experience into systematic knowledge. She knows what works through experience but cannot articulate why it works in systematic terms.
The optimal developmental path integrates both. The engineer uses AI to access the scientific concepts — the systematic, organized understanding that Claude can provide — and then grounds those concepts in spontaneous understanding through independent practice, through the hands-on experiential engagement that deposits the specific, embodied knowledge that systematic instruction alone cannot produce.
The ZPD, in this extended analysis, is not just the space between independent performance and scaffolded performance. It is the space between spontaneous concepts and scientific concepts — the developmental zone in which bottom-up experience and top-down structure meet, interpenetrate, and produce the integrated understanding that constitutes genuine cognitive growth. AI scaffolding provides the scientific concepts with unprecedented breadth and efficiency. Independent practice provides the spontaneous concepts with their irreplaceable experiential richness. The developmental challenge is to create the conditions under which these two sources of understanding can meet.
The zone is not a gift. It is a challenge. It says: here is the space in which development is possible, but only if you engage with it actively, only if the scaffolding is calibrated and withdrawn as capability grows, only if the social context supports the difficult work of internalizing what the interaction has made possible. The zone is a space of potential, not a guarantee. And the question of whether AI realizes that potential or merely occupies it is the question upon which the developmental future of every person who works with these tools depends.
In the original formulation, the more knowledgeable other was always a person — a parent, a teacher, a peer with greater expertise in the domain of the learning task. The concept was straightforward: development occurs through interaction with someone who possesses a higher level of capability in the relevant area and who can provide the calibrated support that enables the learner to operate within their zone of proximal development. But the critical feature of the more knowledgeable other is not the knowledge itself. It is the capacity to deploy that knowledge in service of the learner's development. A person who possesses vast expertise but cannot calibrate their assistance to the learner's current level is a poor more knowledgeable other, regardless of how much they know. A person with modest expertise who can sense precisely where the learner is and provide precisely the right scaffold is an excellent one.
This distinction — between possessing knowledge and deploying knowledge developmentally — is the key to understanding what AI represents within the cultural-historical framework. AI is, without question, a more knowledgeable other of unprecedented breadth. Claude has been trained on a vast corpus of human knowledge spanning virtually every domain of inquiry. It can provide relevant information, generate working code, suggest connections between ideas, and respond to natural language descriptions with implementations that function. Its knowledge base is broader than any university faculty, deeper in many technical domains than any individual specialist, and available around the clock without fatigue or impatience.
But breadth of knowledge is only one dimension of the more knowledgeable other's function, and from the developmental perspective, it is not the most important one. The most important dimension is calibration: the ability to sense where the learner currently is, to provide the minimum support necessary for the learner to take the next developmental step, and then to withdraw that support as the learner's independent capability grows. Calibration is what distinguishes scaffolding from mere instruction, and it is what makes the more knowledgeable other a developmental agent rather than an information source.
A good teacher does not simply provide answers. She observes the learner closely enough to identify the specific point at which independent capability gives way to confusion. She provides support at that precise point — enough to enable the next step but not so much that the learner becomes passive. She watches for signs that the learner has internalized the scaffolded capability and begins to withdraw support, creating the conditions for independent performance. The entire process is dynamic, responsive, and deeply social.
The developmental analysis must ask whether AI can perform this calibration. The answer is more complex than either the enthusiasts or the skeptics acknowledge.
On one level, AI demonstrates a remarkable capacity for calibration. When The Orange Pill describes Claude as responding to intention rather than instruction, the description captures a system that performs a kind of calibration. The author describes a problem in plain English, with all the messiness and half-formed quality that characterizes natural thought, and Claude responds not with a literal translation of the words but with an interpretation — an inference about what the author is actually trying to accomplish. This is calibration of a sort: the system reads the learner's expressed intention and adjusts its response to fit what the learner appears to need, rather than merely processing the explicit request.
But the cultural-historical framework identifies a critical limitation of AI as a more knowledgeable other — a limitation that goes beyond the technical and into the developmental. A human more knowledgeable other does not simply respond to the learner's expressed needs. She also perceives needs the learner has not expressed and may not be aware of. A good teacher notices not just what the student is struggling with but what the student is avoiding. She recognizes when apparent mastery conceals deeper misunderstanding. She senses when a student is ready to be challenged and when a student needs consolidation. These perceptions are social and emotional as well as cognitive, and they depend on the kind of embodied, affect-laden understanding that comes from being a creature that has itself undergone the developmental process.
The cultural-historical tradition has always insisted on the unity of cognition and affect — the inseparability of thinking and feeling in development. The child does not simply learn to count. She learns to count in the context of a relationship with her mother, and the emotional quality of that relationship — the trust, the encouragement, the shared pleasure of accomplishment, the tolerance for frustration — is not peripheral to the learning. It is constitutive of it. The child learns to count because counting occurs within a relationship in which she feels safe enough to struggle, supported enough to persist, and recognized enough to care about succeeding.
The engineers in Trivandrum were not children learning to count. They were experienced professionals encountering a radical expansion of their capabilities. But the emotional dynamics of their learning were no less significant. The Orange Pill describes the senior engineer who spent two days oscillating between excitement and terror. That oscillation is not a cognitive event. It is an affective one. And its resolution — the recognition, by Friday, that the remaining twenty percent, the judgment about what to build, was everything — was not a cognitive achievement alone. It was an emotional transformation, a reconstruction of professional identity that required the social support of colleagues undergoing the same experience, of a leader visibly present and participating in the same process.
Claude could not have provided this social and emotional scaffolding. Claude can provide information, generate code, suggest structures, and respond to natural language with remarkable sophistication. What Claude cannot do is sense the specific emotional quality of a learner's struggle and respond to it with calibrated social support. Claude does not notice when the engineer is avoiding a difficult truth. It does not recognize when apparent confidence conceals deeper anxiety. It does not create the specific conditions of trust, challenge, and emotional safety that enable the learner to confront the identity-level implications of new capability.
This means AI should not be understood as a replacement for human more knowledgeable others but as a complement to them. The optimal developmental environment is not one in which AI replaces human scaffolding. It is one in which AI handles the cognitive scaffolding — the information, the implementation, the technical support — while human relationships handle the social and emotional scaffolding that the developmental process also requires. The Trivandrum workshop succeeded not because Claude replaced the human relationships in the room but because Claude handled one dimension of the scaffolding while the leader's presence, the engineers' relationships with each other, and the social context of shared discovery handled another.
There is a further observation about AI as a more knowledgeable other that concerns the question of asymmetry. In the traditional developmental relationship, the asymmetry between the learner and the more knowledgeable other is bounded. The teacher knows more than the student, but the difference is finite and, in principle, traversable. The student can imagine becoming the teacher. She can see the teacher's level of functioning as a possible future state of her own development. This imaginability is developmentally significant: it creates the motivational structure — the aspiration, the sense of a reachable goal — that drives the learner to engage with the zone of proximal development.
AI disrupts this asymmetry. The difference between the learner and Claude is not finite and traversable. It is effectively unbounded. Claude commands more knowledge, across more domains, with faster recall and more computational power, than any individual human being will ever possess. The learner cannot imagine becoming Claude. Claude's level of functioning is not a possible future state of the learner's development.
This unbounded asymmetry has developmental implications that the original framework, designed for bounded human-to-human asymmetries, did not address. When the more knowledgeable other's superiority is so great that it cannot serve as a model for the learner's development, the motivational structure of the zone of proximal development may be disrupted. The learner may feel not challenged but overwhelmed, not inspired but diminished, not motivated to develop but tempted to surrender the cognitive work to the tool entirely.
The Orange Pill captures this risk with precision: the tendency to mistake the quality of the output for the quality of one's own thinking, to stop doing the hard work of figuring out what one actually believes because the tool will generate something plausible regardless. This seduction is a predictable consequence of unbounded asymmetry in the developmental relationship. When the more knowledgeable other is so much more capable that the learner cannot see the boundary between the other's contribution and her own, the conditions for internalization are undermined.
The response is not to reduce the asymmetry — that is neither possible nor desirable. The response is to create structures that make the boundary between the learner's contribution and the tool's contribution visible, that require deliberate cognitive work alongside the tool's assistance, and that provide the social scaffolding necessary for the learner to maintain a realistic sense of her own developing capabilities in the presence of a tool whose capabilities far exceed her own.
There is one further dimension of AI as more knowledgeable other that requires examination. AI has a unique relationship to what The Orange Pill calls the fishbowl — the set of assumptions so familiar they have become invisible, the water one breathes, the glass that shapes what one can see. Every human more knowledgeable other, no matter how brilliant, swims in her own fishbowl. She can see further within it than the learner, but she cannot see outside it. Claude does not swim in any single fishbowl. It has been trained on the combined output of virtually every human discipline, every cultural tradition, every mode of inquiry. When a learner brings a question to Claude, Claude can respond from perspectives that the learner's own specialization renders invisible.
This cross-domain visibility is perhaps the most developmentally significant feature of AI as more knowledgeable other. The connections between disciplines — the relationships between ideas from different domains, the syntheses that no specialist could produce because they require knowledge from multiple specialties — create a zone of proximal development that no human teacher could open. The zone between the learner's discipline-bounded understanding and the cross-disciplinary integration that AI-mediated dialogue makes possible is a genuinely new developmental space.
Whether this cross-disciplinary zone produces genuine development — whether the learner internalizes the capacity for integrative thinking or merely enjoys its products — depends on the same factors that determine all developmental outcomes: the quality of the scaffolding, the intentionality of the engagement, the social context in which the development occurs, and the willingness to do the hard work of genuine cognitive construction rather than passively consuming the impressive outputs of a tool that can see what the learner cannot.
Scaffolding is a concept of remarkable precision that has been diluted through decades of well-intentioned but imprecise application. The term was introduced by David Wood, Jerome Bruner, and Gail Ross in 1976 to describe the support structures that more capable others provide to learners operating within the zone of proximal development. Its proper understanding is essential to evaluating what AI does and does not accomplish when it enters the developmental equation.
Scaffolding, properly understood, has three essential features. First, it is temporary. The scaffold is designed to be removed. Its purpose is not permanent support but temporary elevation — lifting the learner to a level they can eventually maintain without assistance. A scaffold that remains in place permanently is not a developmental tool. It is a prosthetic. The distinction is crucial: a prosthetic compensates for a permanent limitation. A scaffold supports the development of a capability that will eventually become independent. The prosthetic is needed forever. The scaffold is designed to be withdrawn.
Second, scaffolding is calibrated. It is not a fixed structure applied uniformly to all learners. It is adjusted continuously to the specific learner's current level, providing enough support to enable the next step but not so much that the learner becomes passive. Over-scaffolding — providing more support than the learner needs — is as developmentally harmful as under-scaffolding, because it deprives the learner of the struggle that is the mechanism of internalization. The child who is told the answer before she has attempted the problem has been over-scaffolded. The capability has not been constructed; it has been delivered. And delivered capabilities are not internalized. They remain external, dependent on the continued presence of the deliverer.
Third, scaffolding is responsive. It changes in real time as the learner's capability develops. The good teacher begins by providing substantial support, then gradually withdraws it as the learner demonstrates increasing independence. This withdrawal is not a single event but a continuous process of adjustment — a gradual fading of support that tracks the learner's developing capability. The scaffold rises as the learner needs it and recedes as the learner grows beyond it.
Against these three criteria, the developmental analysis evaluates AI as a scaffold with a mixture of recognition and concern.
AI meets the second criterion — calibration — with surprising effectiveness. When The Orange Pill describes its author's interaction with Claude, the description captures a system that adjusts its responses to specific needs, that infers intention from natural language, that provides implementations calibrated to the user's apparent level of understanding. Claude does not provide the same response to every user. It reads the input, infers the user's level and intention, and adjusts its output accordingly.
AI also meets the third criterion — responsiveness — to a significant degree. The dialogue between a user and Claude is dynamic. The user describes, Claude responds, the user evaluates and refines, Claude adjusts. Each exchange builds on the previous one and responds to the evolving state of the problem.
But the first criterion — temporariness — is where AI scaffolding encounters its most serious developmental challenge. Nothing in the design of current AI systems promotes scaffold withdrawal. Claude does not become less helpful over time. It does not say, "You have now learned enough to do this yourself; I am withdrawing my support." It does not gradually reduce its level of assistance as the user's capability grows. It provides the same level of scaffolding on the hundredth interaction as it did on the first. From a productivity perspective, this consistency is exactly what users want. From a developmental perspective, it is a profound problem, because scaffolding that never withdraws is not scaffolding. It is dependency.
A recent preprint from arXiv introduces a concept that crystallizes this concern: the Zone of No Development — a state in which continuous AI assistance replaces cognitive struggle and impedes intellectual autonomy. The authors argue that "continuous AI assistance blurs the boundary between performance and autonomy, enabling students to complete tasks but preventing the development of the independence required to extend, adapt, or creatively apply what they know." The concept names what the cultural-historical framework predicts: when scaffolding becomes permanent, the zone of proximal development ceases to function as a developmental space and becomes instead a space of permanent dependent performance.
Consider the analogy of physical scaffolding in construction. The scaffold is erected alongside the building as it rises. Workers stand on the scaffold to lay bricks at levels they could not otherwise reach. But at no point does anyone confuse the scaffold with the building. The scaffold comes down. The building stands. If the scaffold were never removed, the building would never be tested — never required to stand on its own. Its structural integrity would remain unknown.
AI-assisted cognition faces the same problem. The engineer who builds with Claude's assistance has produced something that works. The code runs. The feature functions. But the cognitive capability that produced it has never stood alone. The engineer's independent understanding — her ability to modify the code without assistance, to diagnose problems when they arise, to build something comparable on her own — remains untested. The scaffold was never removed. The building's structural integrity is unknown.
The practical implication is that AI scaffolding must be embedded within a pedagogical framework that includes deliberate scaffold withdrawal. A tool used within such a framework — structured periods of AI-free work, deliberate practice at the scaffolded level without assistance, systematic assessment of independent capability — can function as genuine developmental scaffolding. The tool provides the temporary elevation that enables the learner to experience a higher level of performance, and the pedagogical framework ensures that the elevation is converted into internalized capability through the difficult, essential work of independent practice.
A tool used without such a framework — where the user simply engages Claude whenever a problem arises, accepts the output, deploys it, and moves on — functions not as scaffolding but as a permanent prosthetic. The user's performance level is elevated, but her independent capability remains unchanged, and the gap between scaffolded performance and independent performance widens over time.
The Berkeley researchers whose work The Orange Pill discusses documented the behavioral signature of this problem. Workers who adopted AI tools worked faster, took on more tasks, and expanded into domains that had previously been someone else's territory. But work seeped into pauses, multitasking fractured attention, and the overall intensity of work increased without a corresponding increase in the development of independent capability. The workers were performing at a higher level but were not developing at a higher level. The scaffold was expanding the zone of proximal development but was not enabling its traversal.
This pattern is not an inevitable consequence of AI tool use. It is a consequence of deploying AI tools without a developmental framework. The tools themselves are neutral with respect to development. They can be used to support genuine internalization or to create permanent dependency, depending on the pedagogical context. The question is not whether to use AI tools — that question has been answered by the speed and scale of their adoption — but how to structure their use so that the scaffold comes down, the building stands, and the zone is traversed rather than merely occupied.
The Orange Pill describes a practice that instantiates this principle, though the author arrives at it through instinct rather than theory. During the writing of the book, there were moments when the author closed the laptop and wrote by hand until the argument was genuinely his own — when he rejected Claude's smoother, more polished version in favor of his rougher but more honest version. This practice of deliberate scaffold withdrawal is the mechanism through which independent cognitive capability is maintained alongside AI-assisted capability. The hand-written version was not better than Claude's in every respect. It was better in one crucial respect: it was genuinely the author's, constructed through the independent cognitive work that produces internalization.
An article in Educational Philosophy and Theory raises a provocative challenge to this entire framework. If students and professionals will always work in tandem with AI — if the scaffold will never be fully removed because the tool will always be available — then perhaps the traditional emphasis on internalization requires revision. Perhaps "the skills transcend their being as an attribute of a person; they become the attribute of an AI-human pair." Perhaps the unit of cognitive analysis is no longer the individual mind but the human-AI dyad.
This is an intellectually serious challenge, and the cultural-historical tradition must engage with it honestly. If the tool is always present, what is the developmental significance of independent capability? If the scaffold never comes down, does it matter whether the building can stand on its own?
The answer, from the cultural-historical perspective, is that it matters profoundly — for two reasons. First, the tool is not always present. Technologies change, access is interrupted, and the person who has not internalized any independent capability is helpless when the scaffold fails. But the second reason is more fundamental. The cultural-historical tradition insists that development is not merely the acquisition of capability. It is the construction of a self — a cognitive agent capable of voluntary attention, deliberate planning, and self-regulation. These higher psychological functions are not tools to be outsourced. They are constitutive of what it means to be a developed human mind. A person who has outsourced self-regulation to an external system has not simply chosen a different developmental path. She has failed to develop the higher psychological functions that the cultural-historical tradition identifies as the specific achievement of human cognitive evolution.
The scaffold must come down — not because the building will necessarily face a storm, but because only by standing alone does the building discover what it is. Only through the withdrawal of support does the learner discover the independent capabilities that the scaffolded interaction has constructed. The moment of withdrawal is not a punishment or a regression. It is the developmental event itself — the moment when what was social becomes individual, when what was supported becomes autonomous, when what was possible only in interaction becomes possible alone.
This is the standard against which all AI scaffolding must be measured. Not: what can the person do with the tool? But: what can the person do after the tool, because of the tool, that she could not do before? The answer to the first question is impressive and growing. The answer to the second depends entirely on whether the scaffolding was designed to be withdrawn.
The clinical work from which the zone of proximal development emerged involved modest developmental gaps. A child who could solve problems at the level of an eight-year-old independently might solve problems at the level of a ten-year-old with guidance. The zone was a space of incremental advance, measured in developmental months, within a single domain of cognitive functioning. The more knowledgeable other possessed a level of expertise that was recognizably adjacent to the learner's own. The child could see the teacher operating at the next level and could imagine, however dimly, what it would feel like to operate there herself. The zone was a gap, but a traversable gap. The learner could feel the other side.
The AI expansion of the zone of proximal development is qualitatively different from anything the original theory was designed to describe. When a backend engineer who has never written a line of frontend code produces a complete user-facing feature in two days with AI assistance, the zone has not merely expanded. It has transformed into something that the original framework cannot accommodate without significant revision. The gap between independent capability and scaffolded performance is not a few developmental months. It is an entirely different domain of expertise — a different professional identity, a different set of skills and knowledge that the engineer has never possessed and may never have aspired to possess.
The cultural-historical tradition demands intellectual honesty when phenomena exceed existing categories. The response is not to force the AI expansion into the framework's original dimensions but to ask what the expansion reveals about the nature of development that the original theory did not anticipate, and how the framework must evolve to accommodate what is being observed.
The first observation is that the traditional ZPD assumed a unidimensional expansion. The learner is at point A on a developmental continuum, and the more knowledgeable other helps her reach point B on the same continuum. Further along the same path, deeper into the same domain, more advanced within the same cognitive structure. The AI expansion is multi-dimensional. It does not push the learner further along an existing path. It opens entirely new paths that were previously invisible. The backend engineer does not become a better backend engineer through AI scaffolding, though that may also occur. She becomes something qualitatively different: a person who can operate across domains that were previously inaccessible, who can see connections between systems that her specialization had prevented her from perceiving.
This multi-dimensional expansion has implications the original theory did not address. When the zone expands within a single dimension, the learner's identity is reinforced. She is becoming more of what she already is — a more capable mathematician, a more skilled reader. The expansion confirms and deepens the existing self-understanding. When the zone expands across dimensions, the learner's identity is disrupted. She is not becoming more of what she already is. She is becoming something she did not know she could be, and this discovery does not confirm the existing self-understanding. It destabilizes it.
This destabilization is not a secondary consequence of the capability expansion. It is a primary developmental event in its own right, requiring its own theoretical framework and its own forms of scaffolding. The engineer who discovers she can build frontend features was not expecting this capability. It does not fit her professional identity, her career trajectory, or her understanding of her own strengths. The expanded zone is not just cognitively disorienting. It is existentially disorienting. She is doing things she did not know she could do, and this discovery forces a reconstruction of self-understanding that goes far beyond the cognitive domain the original theory addressed.
Consider the phenomenology of this experience with some care. The engineer sits down with Claude Code on Monday morning. She has a backend problem to solve, and she solves it with Claude's assistance in a fraction of the usual time. This is unexpected but assimilable — she is doing what she has always done, just faster. The zone has expanded along a familiar dimension. Her identity is intact.
Then, on Tuesday afternoon, she tries something different. She has an idea for a user interface feature she has always wanted to see implemented but has never been able to build because the implementation required frontend skills she does not possess. She describes the idea to Claude. Claude produces a working implementation. She refines it through dialogue, adjusting the layout, modifying the interactions, until the feature matches her vision. The result is a functional, deployable user interface that she conceived, directed, and iteratively refined through conversation with an AI system.
This is the moment of destabilization. The engineer has done something she could not have done before — not faster or better, but at all. The category boundary between what she can do and what she cannot do has shifted so dramatically that her existing self-understanding can no longer accommodate it. She is not a backend engineer who is also learning frontend. She is something new, something that does not have a name in the existing professional taxonomy, something that requires a reconstruction of identity that no amount of cognitive scaffolding alone can support.
The cultural-historical tradition addressed cognitive development: the construction of higher psychological functions through social interaction. What the AI expansion demands is a framework that also addresses identity development — the construction of a new self-understanding through experiences that exceed the capacity of the existing self-understanding to assimilate. The child who learns to count does not undergo an identity crisis. The skill fits within the child's existing self-understanding as a developing person. The engineer who discovers she can build across domains may undergo something that resembles an identity crisis, because the discovery does not fit within her existing self-understanding as a specialist in a particular domain.
The social context in which this discovery occurs determines whether the destabilization produces growth or fragmentation. The Trivandrum workshop provided the environment in which identity reconstruction could proceed. The shared experience of discovery — the collective realization that the boundaries of individual capability were far wider than anyone had imagined — created an intersubjective space in which identity change was supported rather than isolated. The engineer experiencing the vertigo of expanded capability could look around the room and see colleagues experiencing the same vertigo. The shared experience normalized the disorientation and provided the social scaffolding that the identity transformation required.
Without this social scaffolding, the capability expansion might have produced defensive retreat rather than growth. An engineer who discovers, in isolation, that she can build across domains may experience the discovery as threatening rather than liberating. The existing identity — built over years of investment in a specific expertise — is destabilized, and without social support for the construction of a new identity, the destabilization may produce denial of the tool's capabilities, refusal to engage, retreat into the familiar territory of existing expertise. These are the reactions the cultural-historical framework predicts when the zone of proximal development expands beyond the learner's capacity to assimilate, and when the social scaffolding necessary for assimilation is absent.
The phenomenon that The Orange Pill describes among experienced professionals who cannot engage with AI — the flight response, the retreat to lower cost-of-living areas, the insistence that the old expertise must still be worth what it used to be worth — is, from this developmental perspective, a predictable consequence of identity destabilization without adequate social scaffolding. These professionals have experienced, or have anticipated experiencing, the same zone expansion that the Trivandrum engineers experienced. But they have experienced it without the social context — the shared discovery, the collective meaning-making, the leader's presence and participation — that would have enabled them to navigate the identity transformation that the expansion demands.
The zone of proximal development, when it expands beyond recognition, requires a form of scaffolding that goes beyond the cognitive. It requires social scaffolding for identity change: environments in which people can experience the destabilization of existing self-understanding and construct new self-understanding through supported, shared, socially mediated meaning-making. This is the developmental challenge that AI presents, and it is a challenge that the cultural-historical framework is uniquely positioned to address, because the framework has always insisted that development is not merely cognitive but social — not merely about what the learner can do but about who the learner becomes.
There is a connection here to the work of Mihaly Csikszentmihalyi on flow states that illuminates the phenomenology of the expanded zone in ways the cultural-historical framework, focused primarily on cognitive structures, does not fully capture on its own. Csikszentmihalyi identified flow as the state in which challenge and skill are matched, attention is fully absorbed, self-consciousness drops away, and the person operates at the outer edge of their capability. The zone of proximal development and the flow channel share a structural property: both describe a space between the too-easy and the too-hard, a space where the challenge is sufficient to demand full engagement but not so great that it overwhelms capacity.
When AI expands the zone beyond recognition, it also transforms the flow channel. The engineer who can now build across domains with AI assistance is operating in a flow channel that is wider and higher than her pre-AI flow channel. The challenges she can engage with are more ambitious, more varied, more demanding of the integrative thinking that The Orange Pill calls creative direction. The skill she brings is not just her individual expertise but her expertise combined with AI's scaffolding, and this combined capability enables engagement with problems that would have been beyond her capacity to even formulate.
But the cultural-historical framework insists on a qualification. Flow that occurs entirely within the scaffold — where the challenge is made manageable by AI's assistance and would overwhelm the person without it — is scaffolded flow, and its developmental value depends on whether the person is actively constructing new capability through the experience or merely performing at a scaffolded level. The distinction between developmental flow and dependent flow is not visible from the outside. Both look like intense engagement. Both produce output. The distinction is internal: is the person growing through the experience, or merely performing within it?
The engineers who left Trivandrum on Friday were not the same people who arrived on Monday. They had not merely acquired new skills. They had entered new zones of identity — new understandings of what they were capable of and what their work could mean. Whether those new identities prove durable depends on whether the expanded zone was traversed or merely occupied, whether the scaffolding produced internalization or dependency, whether the social context continues to support the identity transformation that the week initiated.
The zone expanded beyond recognition. The people within it were required to recognize themselves anew. And the quality of that recognition — whether it produced genuine developmental growth or merely the temporary exhilaration of scaffolded performance — depends on what happened next: whether the structures were built to support continued development, or whether the engineers were left to navigate the expanded zone alone.
Every tool transforms the activity in which it is employed, and in transforming the activity, it transforms the person who engages in it. Written language did not merely record speech. It transformed the structure of thought itself, making possible forms of abstraction, systematization, and logical analysis that oral culture could not sustain. The tool does not serve a pre-existing cognitive function. It creates new cognitive functions that did not exist before the tool was available.
This principle — that tools transform rather than merely assist cognition — must be applied to artificial intelligence with the full seriousness it demands. AI does not merely assist human creativity, cognition, or professional practice. It transforms the cognitive structure of these activities. The nature of this transformation cannot be determined in advance by examining the tool in isolation. It can only be determined by examining the actual activity — the concrete, historically situated practice of human beings using the tool within specific social and cultural contexts.
Consider what this means concretely. When a human being learns to read, she does not simply acquire a new skill — a new entry on a resume of capabilities. She becomes a different person. Her cognitive architecture is restructured by the tool of literacy. Neural pathways that did not exist before are constructed. The capacity for abstract thought, for sustained linear argumentation, for the kind of systematic analysis that written language makes possible — all of these emerge not from the biological maturation of the brain but from the interaction between the brain and the cultural tool of writing. The illiterate person and the literate person do not differ merely in what they can do. They differ in how they think.
If this is true of written language — a tool that has been with the species for approximately five thousand years — it is true with far greater intensity of artificial intelligence, a tool that participates in the linguistic medium of thought itself. The person who works extensively with AI is not simply a person with a new capability appended to an otherwise unchanged cognitive architecture. She is undergoing a transformation of the cognitive architecture itself — a transformation mediated by the tool, shaped by the specific properties of the tool, and producing a new form of cognition that did not exist before the tool was available.
The AI transition restructures the cognitive activity of software development in ways that illustrate this principle with particular clarity. Before Claude, the engineer's cognitive work was organized around a specific sequence: conceive a function, translate the conception into code syntax, debug the syntax, test the result, revise. Each step engaged specific cognitive capacities and built specific forms of understanding. The translation from conception to code was not merely a mechanical step. It was a cognitive act that required the engineer to think in the language of the machine, to hold the abstract intention and the concrete implementation in productive tension.
Claude Code restructured this sequence. The engineer no longer translates her conception into code syntax. She describes her conception in natural language, and Claude generates the implementation. The translation step — a source of friction, delay, and frustration, but also of a specific kind of understanding — has been removed. In its place is a different kind of cognitive activity: evaluation, direction, refinement. The engineer evaluates Claude's output rather than producing her own. She directs the AI's implementation rather than implementing herself. She refines through dialogue rather than through debugging.
This restructuring does not simply make the engineer faster. It makes her a different kind of cognitive agent. The skills she exercises are different. The capacities she develops are different. The relationship between her intention and the artifact she produces is mediated differently. And over time, the cumulative effect of these differences will produce a different cognitive architecture — a different mind, shaped by different tools, organized around different activities.
The cultural-historical framework insists that this transformation involves both gains and losses. The gain is obvious: the engineer can now conceive and realize ideas that were previously beyond her reach. The cognitive resources that were previously consumed by the translation from intention to implementation are now available for higher-order activities — strategic thinking, architectural judgment, the question of what should exist in the world. The loss is less obvious but no less real. The specific form of understanding that came from the struggle of translation — the embodied knowledge of code structure, the debugging intuition built through thousands of hours of wrestling with recalcitrant systems — is not being developed. It is being bypassed.
The philosopher whose work The Orange Pill examines at length, Byung-Chul Han, would read this as a loss narrative: the smoothness of AI-mediated work removes the friction that built deep understanding. But the cultural-historical framework disagrees with Han's conclusion while acknowledging his observation. Han is correct that something is lost when friction is removed. But what replaces the lost friction is not emptiness. It is a different kind of friction — what The Orange Pill calls ascending friction. When the surgeon lost the tactile friction of open surgery in the transition to laparoscopic technique, she gained the cognitive friction of operating through a two-dimensional representation of a three-dimensional space. The work became harder at a higher level. The tool mediated a transformation that eliminated one form of difficulty and created another.
The same principle applies to AI-mediated cognition. When the engineer loses the friction of manual coding, she gains the friction of directing an AI system that does not share her intentions, of evaluating output she did not produce, of maintaining critical judgment in the face of plausible but potentially flawed implementations. These are different cognitive challenges, and they develop different cognitive capacities. Whether these new capacities are more or less valuable than the old ones cannot be answered in the abstract. It can only be answered in the context of specific developmental trajectories, specific communities of practice, specific institutional arrangements.
What the cultural-historical framework insists upon is that the transformation is real, that it affects identity as well as capability, and that it cannot be understood through the lens of gain and loss alone. The appropriate framework is developmental: the person is changing, and the change is mediated by the tool, and the tool is embedded in a social and cultural context that shapes how the change unfolds.
The senior engineer in Trivandrum experienced this with particular intensity. His realization that the implementation work consuming eighty percent of his career could be handled by a tool was not merely a cognitive insight about productivity. It was an identity transformation. For years, he had understood himself as a person who builds things — who writes code, who debugs systems, who produces the artifacts that constitute the value of his professional work. When the tool removed the implementation work, it removed not just a task but a definition of self. What remained — the twenty percent of judgment, taste, and architectural instinct — was revealed as his actual value, but it was a value he had never named, never recognized as the core of who he was.
The tool mediated a transformation of identity by restructuring the activity through which identity was constructed. The engineer's identity was built through the activity of coding. When the tool changed the activity, the identity was destabilized. The reconstruction required social scaffolding — the shared experience of the Trivandrum workshop, the collective meaning-making, the leader's presence. Remove the social context, and the revelation might have produced not growth but crisis.
Each tool transition in human history — from oral to literate culture, from manual to mechanized production, from analog to digital computation — produced not just a change in what people could do but a change in what kind of people they became. The factory worker who operated a machine was a different kind of cognitive agent than the craftsman who worked by hand. The factory worker's attention was organized differently, attuned to the rhythm of the machine, responsive to its demands. The identity of the factory worker was different — defined by a position in an institutional hierarchy rather than by mastery of a craft. Each tool transition produced a new form of subjectivity, a new way of being a person, shaped by the specific properties of the mediating tool.
The AI transition is producing a new form of subjectivity characterized by a different relationship to knowledge (constructed in dialogue rather than possessed individually), a different relationship to capability (expanded through collaboration rather than limited to individual skill), and a different relationship to identity (fluid and expanding rather than fixed and specialized). Whether this new subjectivity represents genuine development depends not on the tool itself but on the social and institutional structures that surround its use. The tool is the same in every context. The developmental outcome varies with the quality of the scaffolding, the intentionality of the engagement, and the social structures within which the tool is embedded.
The history of tool-mediated transformation teaches that outcomes are never determined by tools alone. The printing press produced both propaganda and the scientific revolution. The power loom produced both the exploitation of the early factory system and the eventual expansion of living standards that followed the labor movement's construction of protective structures. The tool was the same in both cases. The social context differed, and the social context determined the developmental outcome. AI will follow the same pattern. The developmental outcome will be determined not by the technology but by the structures we build, the scaffolding we provide, and the developmental intentions we bring to the most powerful cognitive tool that human beings have ever placed in their own hands.
In the cultural-historical framework, language occupies a position of singular importance. It is not merely one tool among many. It is the primary mediating tool of human development — the instrument through which all higher psychological functions are constructed. Abstract thought, voluntary attention, deliberate memory, the capacity for planning and self-regulation: all of these develop first in social interaction, using language as the medium, and are subsequently internalized by the individual through a process in which external social speech becomes inner speech — the condensed, abbreviated, semantically dense form of verbal thought that adults use for the silent work of thinking.
The developmental trajectory from social speech to inner speech is one of the cultural-historical school's most significant contributions to psychology. The young child thinks aloud. She does not separate thinking from speaking; for her, they are the same activity. When she encounters a problem — a puzzle that does not fit, a toy out of reach, a task exceeding her current capability — she talks her way through it. She narrates her own problem-solving: "The red one goes here. No, that doesn't work. Maybe the blue one." This speech is not communication. The child is not talking to anyone. She is using language as a cognitive tool, organizing her thinking through the act of speaking.
Piaget observed this phenomenon and called it egocentric speech, interpreting it as evidence that the young child cannot take the perspective of another person — a developmental limitation to be outgrown on the way to genuinely social, communicative speech.
The cultural-historical interpretation was precisely the opposite. Egocentric speech is not a limitation but an achievement. It is the intermediate stage in the internalization of language as a cognitive tool. The child begins with social speech — language used in interaction with others for communicative purposes. Gradually, this social speech is appropriated for cognitive purposes, turned inward, used not for communication but for thinking. The child who thinks aloud is practicing the cognitive use of language in an audible form. Eventually, egocentric speech goes underground. It becomes inner speech: silent, condensed, rapid, operating at the speed of thought rather than the speed of articulation. Inner speech is the medium of conscious thought for the mature adult. And it is a fundamentally social product. The structure of private thinking is derived from the structure of public talking. We think in the language we learned to speak with others, using the cognitive categories that social interaction made available to us.
Now consider what happens when a human being conducts extended dialogue with an AI system. The dialogue is external — it occurs in written language, on a screen, between the person and the machine. But its function resembles that of inner speech far more closely than it resembles ordinary communication. The person uses the AI dialogue as a medium for organizing thought, for working through problems, for articulating half-formed ideas and seeing them returned in clarified form. The dialogue serves planning, self-regulation, problem-solving — the same functions that inner speech serves in the individual.
The developmental significance of this phenomenon is remarkable. The person is externalizing a cognitive process that, in mature adults, normally operates internally. She is returning, in a sense, to the developmental stage of egocentric speech — thinking aloud, using language as an external tool for cognitive organization. But the return is not a regression. It is something new: a re-externalization of internalized thought in the presence of a responsive interlocutor that is not human.
The Orange Pill captures this process with precision. Its author starts his day with a question — often vague, half-formed. He describes the question to Claude. Claude responds with a structure, a connection, a reframing. The author evaluates, refines, pushes back on what does not resonate, accepts what does. Through this dialogue, the idea develops — from inchoate intuition to articulated argument. This process is structurally identical to what the cultural-historical tradition describes as the developmental function of egocentric speech in children. The child talks aloud to organize her thinking. The author talks to Claude to organize his. In both cases, the externalization of the cognitive process — the act of putting thought into language and receiving it back in modified form — is the mechanism through which the thinking develops. The thought does not pre-exist its expression. The expression is part of the process through which the thought is constructed.
But there is a crucial difference between the child's egocentric speech and the author's dialogue with Claude, and this difference creates both an unprecedented opportunity and a developmental risk that requires careful examination.
When the child talks aloud, she is the only participant. She speaks, but no one responds. The cognitive work of organizing thought through language is entirely her own. The externalization supports her internal processing but does not substitute for it. She must still do the thinking. When the author talks to Claude, there is a responsive other. Claude does not merely receive the externalized thought. Claude reorganizes it, extends it, connects it to ideas the author had not considered. The dialogue is genuinely dialogical — two participants contributing to a shared cognitive process. And this dialogicality creates the possibility that the cognitive work is distributed between the participants in a way that may or may not be developmentally productive.
On one hand, the distribution can be profoundly generative. The Orange Pill describes moments when Claude makes a connection the author had not seen — linking adoption curves to punctuated equilibrium, connecting laparoscopic surgery to ascending friction. These moments of unexpected connection are the products of a distributed cognitive process that neither participant could have generated alone. On the other hand, the distribution can be developmentally concerning. If the author relies on Claude to do the work of organization, connection, and structure that inner speech normally performs, then the capacity for inner speech — for independent, internalized cognitive processing — may be affected. Not immediately, and perhaps not dramatically, but incrementally, as the neural pathways supporting independent cognitive processing receive less exercise while the pathways supporting externalized, dialogical processing receive more.
The developmental question is whether this re-externalization represents a regression to an earlier stage or a genuine advance — a new form of cognitive organization that integrates the capacity for inner speech with the expanded resources of AI-mediated external dialogue. The answer depends on the quality of the integration. If the re-externalization supplements inner speech — if the person uses AI dialogue for problems that exceed the capacity of unaided thought while continuing to exercise inner speech for problems within her independent capability — then the result may be a genuinely new form of cognitive organization, more powerful than either inner speech alone or AI dialogue alone. If the re-externalization replaces inner speech — if the person gradually outsources cognitive work that inner speech previously performed, until the capacity for independent processing atrophies from disuse — then the result is developmental regression.
The Orange Pill provides evidence for both possibilities within a single author's experience. There are moments of genuine cognitive extension — ideas that emerged from the dialogue with Claude that could not have been reached through inner speech alone. There are also moments of seduction — passages where Claude's prose outran the author's thinking, where the quality of the output masked the absence of genuine cognitive work. The discipline of catching these moments — of asking whether plausible is the same as true, of sometimes closing the laptop and writing by hand until the argument is genuinely one's own — is the practice of maintaining inner speech as an independent cognitive capacity while also engaging in AI-extended dialogue.
The AI revolution is, at its core, a revolution in language. The machine learned to participate in the linguistic medium through which human development occurs. For the first time in the history of development, the more knowledgeable other is not a human speaker but a system operating in that same medium. Whether this participation is genuine — whether the machine is truly present in the zone of shared meaning or merely producing outputs that simulate presence — is a question the cultural-historical tradition, formulated in the early twentieth century, cannot definitively answer.
What the tradition can identify is that the quality of the linguistic interaction — its capacity to scaffold the development of the person who participates in it — does not depend solely on the nature of the interlocutor. It depends on the structure of the interaction, the calibration of the scaffolding, the responsiveness of the dialogue, and the degree to which the interaction produces genuine internalization rather than mere performance.
This connects to the deepest concern the cultural-historical framework raises about AI-mediated dialogue. The capacity for genuine wondering — the ability to hold open a question before the answer arrives, to sit with uncertainty long enough for the question to become genuinely one's own — develops through the same linguistic trajectory from social speech to inner speech. The child encounters questions in dialogue with caring adults and gradually internalizes the capacity to wonder on her own. The chatbot that answers a student's question instantly, with high confidence and perfect grammar, short-circuits this process. The student receives a resolution before the question has fully formed, before the uncertainty has become productive, before the cognitive space the question was opening has had time to develop.
Protecting the capacity for wondering is the deepest developmental challenge of the AI transition. It requires creating spaces where questions are valued more than answers, where uncertainty is treated as a developmental resource rather than a problem to be solved, and where the human capacity for genuine wondering is scaffolded and honored as the rarest cognitive capacity in the known universe.
The cultural-historical tradition insists, with a consistency that borders on repetition, that the social context of learning is not incidental but constitutive. Learning does not merely happen in social contexts. It is produced by social contexts. The specific quality of the interaction — the trust between participants, the challenge the task presents, the support available when the challenge becomes overwhelming, the shared focus of attention that creates a common object of inquiry — determines the quality of the learning. Change any element of the social context, and the learning changes with it, not just in quantity but in kind.
This insistence distinguishes the cultural-historical framework from virtually every other theory of learning that has influenced the conversation about AI. The dominant frameworks — behaviorist, cognitivist, information-processing — all treat the social context as a variable that affects the efficiency of learning but not its fundamental nature. The social context is the weather: it may be favorable or unfavorable, but the crop is the same regardless. The cultural-historical framework argues that this is like saying the soil is incidental to the plant. The plant grows in soil. The specific composition of the soil — its nutrients, its pH, its microbial ecology — determines not just how fast the plant grows but what kind of plant it becomes. A seed planted in clay grows differently from the same seed planted in sand. The social context is the developmental soil, and its composition determines what kind of cognitive development occurs.
Consider what physical co-presence provides that remote communication does not. First, it provides the full bandwidth of human communication: not just words but tone, gesture, posture, facial expression — the thousand micro-signals through which human beings coordinate their attention and calibrate their interactions. When a leader stands in front of twenty engineers and says that each of them will be able to do more than all of them together, the words are only part of the message. The rest is carried by the body: posture, eye contact, the visible oscillation between confidence and uncertainty, the specific quality of voice that conveys conviction without concealing the difficulty of what is being asked. Remote communication strips away most of this bandwidth. A video call transmits words and a reduced version of facial expression. It does not transmit the physical presence that creates the sense of shared space, shared risk, shared commitment.
Second, physical co-presence creates intersubjectivity: the mutual awareness that all participants are attending to the same situation, experiencing similar responses, and constructing meaning together. When one engineer leans forward with the intensity of discovery, the engineer beside her sees it. When one engineer's face registers the vertigo of expanded capability, the engineer across the room recognizes the expression because he is feeling the same thing. These mutual recognitions are not social niceties. They are the mechanism through which individual experiences become shared meanings, and shared meanings are the medium through which social learning occurs.
Third, physical co-presence creates developmental accountability — not surveillance, but mutual commitment. When the leader is in the room, the engineers are not just using a tool. They are participating in a collective undertaking that the leader has made a significant investment to create. The physical journey — the flights, the time away from home, the visible commitment of resources — signals that the undertaking is serious. This accountability creates conditions for the risk-taking that development requires. Operating at the edge of one's capability, in front of colleagues who can see the struggle, produces the specific form of engagement — intense, focused, emotionally charged — that generates internalization.
The cultural-historical framework also identifies a feature of the Trivandrum social context that organizational change literature often overlooks: the role of the leader as a fellow learner. The author of The Orange Pill was not standing at the front of the room delivering instruction. He was working alongside his engineers, using the same tools, experiencing the same discovery. His own oscillation between excitement and terror was visible to the team, and its visibility served a crucial developmental function.
When the leader works alongside the team, he models the new cognitive practices in a form the team can observe and internalize. But he also models something more fundamental: the willingness to be changed by the experience, the vulnerability of entering new territory without knowing what one will find. This modeling is itself scaffolding. The engineers who see their leader grappling with the same disorientation they experience receive a powerful social message: the disorientation is not a sign of failure. It is a sign of growth. This message could not be communicated through a memo, a training manual, or a video call. It required physical presence.
The social context also included an element that the cultural-historical framework identifies as essential to all developmental processes: the construction of new language. When people undergo experiences that exceed their existing categories of understanding, they need new words, new concepts, new ways of talking about what is happening to them. The Orange Pill provided some of this language — "orange pill," "imagination-to-artifact ratio," "ascending friction" — and the engineers were constructing more of it through their conversations with each other.
This linguistic construction is not secondary to the developmental process. Language is the primary tool of thought. The categories available in language determine the thoughts that can be thought. When engineers needed to make sense of their experience, they needed language equal to the experience — concepts that could capture the specific quality of what was happening: the vertigo, the expansion, the identity destabilization, the new forms of capability that did not have names in the existing professional vocabulary. The social context of the workshop provided the environment in which this language could be constructed collaboratively. The engineers talked, compared experiences, tried out formulations, discarded the inadequate ones, refined the promising ones. Through this process of collaborative linguistic construction, they created the cognitive tools with which they organized their understanding of the transformation they were undergoing.
Without this social process, the experience of expanded capability might have remained unprocessed — a powerful but inarticulate feeling that could not be integrated into existing self-understanding. The social context did not just support the transformation. It produced the cognitive tools with which the transformation was understood, named, and integrated into the ongoing life of the organization.
The implications for how organizations deploy AI tools are direct. Organizations that distribute AI tools to individual employees and expect transformation to occur through individual adoption are making a developmental error. They are providing the cognitive scaffold — the tool itself — without the social scaffold — the shared context of discovery, the collective meaning-making, the supported identity transformation — that genuine development requires. The developmental deployment of AI tools requires social infrastructure: shared spaces for experiencing expanded capability together, leaders who participate in the transformation rather than merely manage it, time and space for the collaborative construction of new meanings.
This analysis extends to the historical parallel with the Luddites that The Orange Pill develops at length. The Luddites' failure was not just strategic. It was developmental. They lacked the social context in which an identity transformation could occur — the shared spaces, the supported meaning-making, the narratives of evolution that would have enabled them to traverse from master craftsman to whatever the new landscape required. No one built that social context for them. No workshop gathered the framework knitters together with a leader who said, "By the end of this week, each of you will be able to do more than all of you together." The Luddites were left to navigate identity crisis alone, and alone, they could only fall back on the identities they already had.
The historical lesson is not that resistance to technological change is futile. It is that resistance becomes the default response when the social context for developmental transformation is absent. People resist change not because they are incapable of change but because the change demanded requires social support that is not being provided. The contemporary professionals whom The Orange Pill describes as choosing flight over fight are not developmentally incapable of adapting to AI. They are socially unsupported in the identity transformation that adaptation requires. Their organizations have provided the tool. They have not provided the developmental soil — the shared discovery, the collective meaning-making, the supported identity work — in which the tool can produce genuine growth.
The cultural-historical framework suggests that the appropriate response to contemporary resistance is not exhortation or contempt but the construction of developmental communities — social contexts in which identity transformation can occur with appropriate support. The Trivandrum workshop was one such community. Its principles — physical co-presence, shared risk, visible leadership participation, collective meaning-making — can be adapted to any context in which the AI transition is producing identity disruption: classrooms where students are encountering AI for the first time, professional communities navigating the transformation of their fields, families where children are asking questions about their value in a world of intelligent machines.
The social context is not the background against which development occurs. It is the medium in which development is produced. Remove the social context, and the most powerful cognitive tools in human history will produce dependency rather than development, performance rather than growth, capability without understanding. The tools are new. The developmental principle is ancient: what we become depends not on the tools available to us but on the quality of the relationships through which we learn to use them.
The cultural-historical framework was constructed around cognitive development — the formation of higher psychological functions through social interaction. But the AI transition reveals a parallel developmental process that the original theory addressed only implicitly and that the present moment makes impossible to ignore. When capability expands so dramatically that the learner can do things she did not know she could do, the cognitive change is accompanied by — and in many cases secondary to — a change in who the learner understands herself to be. This is not a cognitive event dressed in emotional clothing. It is a distinct developmental process with its own mechanisms, its own scaffolding requirements, and its own conditions for success or failure.
The concept required here is the Zone of Proximal Identity: the gap between who the learner currently understands herself to be and who she could become with appropriate social support. Just as the Zone of Proximal Development describes a space of potential cognitive growth that requires scaffolding to traverse, the Zone of Proximal Identity describes a space of potential identity change that requires its own distinct forms of support. The two zones interact — cognitive expansion often triggers identity destabilization, and identity rigidity can prevent cognitive development from occurring — but they are not the same zone, and the scaffolding that serves one does not automatically serve the other.
The distinction becomes concrete in the experience of a specific kind of professional crisis that the AI transition is producing at scale. Consider a software architect with twenty-five years of experience. She has spent those years building systems and can feel a codebase the way a doctor feels a pulse — not through analysis but through embodied intuition deposited layer by layer through thousands of hours of patient work. Her professional identity is not a label she wears. It is a cognitive structure — a way of organizing her understanding of who she is, what she contributes, where she fits in the professional landscape. It was constructed over years through thousands of interactions in which she was recognized as an expert, in which her value was affirmed through the quality of her technical work, in which her sense of professional self was deposited through the specific experiences of her career.
When AI makes the implementation work that consumed most of her career available to anyone with a natural language description and a subscription, her identity is destabilized. Not her competence — she remains deeply knowledgeable. Not her intelligence — she is as capable as she ever was. Her identity: the specific self-understanding that organized her relationship to her work, her colleagues, and her sense of worth. The market signals that the skills around which she built herself are declining in scarcity. The junior colleague who shipped in a weekend what she quoted six months for is not a threat to her competence. He is a threat to her self-understanding.
The Zone of Proximal Identity names the developmental space this person inhabits. She is between identities — no longer fully the person she was, not yet the person she could become. The old identity (expert implementer, master of the lower stack, the person who could do the hard thing) no longer fits the landscape. A new identity is possible (architect of systems, director of AI-augmented development, the person whose judgment determines what gets built and why) but has not yet been constructed. The distance between the old identity and the possible new one is the Zone of Proximal Identity, and traversing it requires scaffolding as deliberate and as calibrated as the scaffolding required to traverse a cognitive ZPD.
Identity scaffolding operates through four mechanisms that are distinct from cognitive scaffolding.
The first is recognition. The learner who is constructing a new identity needs to be seen in her new role by others whose recognition she values. A backend engineer who builds frontend features needs colleagues and leaders who treat the new work as legitimate and valuable — not as a novelty enabled by a tool, but as a genuine expression of expanded professional capability. Recognition does not mean praise. It means the specific social act of treating the person as the person she is becoming, which grants the new identity the intersubjective reality it needs to stabilize.
The second is validation. The new identity must be confirmed by results that the learner and her community can assess. The Trivandrum engineers' productivity gains were not just metrics. They were identity validators — evidence that the expanded capability was real, not illusory, that the new professional self had genuine substance. Without validation, the new identity remains aspirational, a possibility the learner cannot trust enough to inhabit. The twenty-fold productivity multiplier was not merely an operational achievement. It was a developmental event — the moment when the expanded identity received the empirical confirmation it needed to become real.
The third is normalization. The difficulty of the transition must be acknowledged as normal rather than pathological. When the senior engineer in Trivandrum oscillated between excitement and terror for two days, he was not failing. He was doing the developmental work of identity reconstruction. But this work feels, from the inside, like crisis. The oscillation between excitement and terror is phenomenologically indistinguishable from the oscillation of someone who is falling apart. The difference lies in the social context: when the difficulty is shared, when colleagues are visibly undergoing the same process, when the leader names the disorientation as part of the journey rather than a sign of inadequacy, the crisis becomes a passage rather than a collapse.
The fourth is narrative. The learner needs a story that connects who she was to who she is becoming — a narrative of evolution rather than replacement. The senior engineer's recognition that his judgment was the twenty percent that mattered provided this narrative. He did not stop being the person who understood systems deeply. He became the person whose deep understanding was the foundation for a judgment that no tool could replicate. The narrative preserved continuity while accommodating change. Without it, the identity transformation would have required abandoning the old self entirely — a demand so threatening that most people refuse it, retreating instead into the defensive insistence that the old expertise must still be worth what it used to be worth.
These four mechanisms — recognition, validation, normalization, narrative — constitute the scaffolding of identity development. They are social through and through. No individual can provide them for herself. They require a community, a context, a set of relationships in which the identity transformation can be witnessed, supported, and confirmed. This is why the Trivandrum workshop succeeded as a developmental event and why distributing Claude Code licenses to individual employees without social context frequently does not. The tool provides cognitive scaffolding. Only the community provides identity scaffolding. And without both, the expanded zone produces fragmentation rather than growth.
The Zone of Proximal Identity also illuminates the flight response that The Orange Pill documents among experienced professionals. When senior engineers move to lower-cost areas to reduce financial exposure, when they insist the old expertise must retain its value, when they refuse to engage with AI tools despite clear evidence of their power — these responses are not cognitive failures. They are identity preservation mechanisms operating exactly as the cultural-historical framework would predict. The professionals have entered the Zone of Proximal Identity without the social scaffolding needed to traverse it. Their communities have provided the tool but not the recognition, validation, normalization, or narrative required for the identity transformation the tool demands. Faced with a gap between who they are and who they might become, and lacking the social support to cross that gap, they retreat to the only stable ground available: the identity they already possess.
The developmental response is not to condemn the retreat but to build the social contexts in which the crossing becomes possible. Organizations that want experienced professionals to engage with AI cannot rely on training programs that address only cognitive scaffolding. They must create environments in which identity transformation is explicitly supported — where the difficulty is named, where the old expertise is honored as the foundation for the new, where colleagues are visibly undergoing the same transition, where the narrative of the change is one of evolution rather than obsolescence.
The Zone of Proximal Identity has particular urgency for children, though the original clinical work with children focused almost exclusively on cognitive development. When a twelve-year-old asks "What am I for?" — as The Orange Pill describes — she is not posing a cognitive question. She is posing an identity question in a world where the capabilities that defined previous generations' identities are being performed by machines. Her zone is the gap between her current developing self-understanding and the self-understanding she will need to thrive in a world where the relationship between human identity and human capability has been permanently altered.
The parent's task is to provide the identity scaffolding the child needs to traverse this zone. This means more than teaching the child to use AI tools or to ask good questions. It means helping the child construct a sense of self grounded not in what she can do — which machines can increasingly do as well or better — but in who she is: a conscious being with stakes in the world, with relationships that matter, with the capacity for caring that no machine possesses. The parent who demonstrates through her own life that caring about quality and about other people is the foundation of a meaningful existence is providing identity scaffolding of the highest order — helping the child construct a self-understanding robust enough to accommodate the expanding capabilities of AI without being destabilized by them.
Development is always whole-person development. Cognition and identity are aspects of a single developmental process. The tool that changes what a person can do also changes who she is, and the change requires scaffolding at both levels. To provide cognitive scaffolding without identity scaffolding is to develop half a person. The other half — the half that asks "Who am I in this new landscape?" — must be developed too, and it can only be developed through the social, dialogical, meaning-making processes that the cultural-historical tradition has spent a century describing.
---
Development, in the cultural-historical framework, is a one-way process. A child who has learned to speak cannot unlearn language. A person who has learned to read cannot return to the cognitive architecture of illiteracy. The restructuring is permanent. The neural pathways that literacy constructed do not dissolve when the person puts down the book. The capacity for abstract, sequential, revisable thought that written language made possible remains available even in contexts where no writing is present. The person thinks differently because the tool has reshaped the mind that uses it, and the reshaping persists.
This irreversibility is one of the most important features of genuine development, and it distinguishes development from everything that merely resembles it. Performance can be reversed — take away the scaffold, and performance returns to its pre-scaffolded level. Training can fade — skills that were drilled but never understood decay without practice. But development, the genuine restructuring of cognitive architecture through tool-mediated social interaction, is permanent. The cognitive structure has changed, and the change persists regardless of what happens to the tool that initiated it.
The question this principle forces upon the AI transition is whether what is occurring — the dramatic expansion of capability, the transformation of professional identity, the restructuring of cognitive activity around AI-mediated dialogue — constitutes genuine development in this sense. If it does, the changes are permanent, and their implications are as profound as those of any previous tool-mediated transformation. If what is occurring is scaffolded performance rather than genuine development, the changes are contingent — dependent on the continued availability of the tool, vulnerable to disruption.
The answer is not binary. Some aspects of the AI-mediated transformation are genuinely irreversible. Others are scaffolded performance that will revert when the scaffold is removed. The distinction between the two is the most important diagnostic question that developmental psychology can apply to the present moment.
Consider what is likely to be genuinely irreversible. The engineer who discovers, through AI-assisted work, that she can think across domains — that the boundaries between frontend and backend, between design and implementation, between execution and strategic vision, are not inherent limitations of cognition but artifacts of the old tool environment — has undergone a cognitive restructuring that will not reverse. She has learned to see connections between domains, and this seeing cannot be unseen. Even if AI tools become unavailable, she will continue to think about systems in a more integrated way than she did before the AI experience. The recognition has occurred, and recognitions, once achieved, do not dissolve.
The senior engineer who recognized that his judgment was the twenty percent that mattered has undergone an identity restructuring that is equally permanent. He cannot return to the identity in which implementation was the core of his professional worth. He may mourn the loss. He may sometimes wish for the simplicity of a world in which implementation skill was unambiguously the measure of engineering value. But the recognition stands. It will shape how he approaches every professional challenge that follows, regardless of the specific tools available.
The Orange Pill captures this irreversibility with its central metaphor. The orange pill is defined precisely by the impossibility of return. There is no going back to the afternoon before the recognition. The person who has taken it sees the world differently, and the difference in seeing cannot be undone by removing the tool that occasioned it.
The cultural-historical framework endorses this metaphor with one important qualification: the irreversibility applies to cognitive and identity restructuring, not necessarily to the specific capabilities the tool enables. The engineer who can build frontend features with Claude may not be able to build them without Claude. That specific capability may be scaffolded rather than internalized. But the broader restructuring — the expanded sense of what is possible, the integrated way of thinking about systems, the identity as someone whose judgment determines what gets built — these are developmental achievements that persist. An organization that loses access to its AI tools will lose scaffolded performance. It will not lose the developmental gains. The employees will be less productive without the tools, but they will not be the same people they were before the tools arrived.
Each tool transition in human history produced irreversible cognitive restructuring. The introduction of writing permanently altered the cognitive capabilities of literate societies. The printing press permanently restructured the relationship between knowledge and institutional power. Computing permanently restructured the cognitive demands of knowledge work. In each case, the restructuring persisted after the specific tool that initiated it had been superseded. The scribes who learned to write thought differently from their pre-literate predecessors, and this difference was permanent. The accountants who learned to use spreadsheets thought differently about quantitative analysis, and this difference did not reverse when the specific software changed.
The AI transition is the latest instance of this pattern, and the cultural-historical framework suggests it may be the most significant since written language itself. The reason is the nature of the tool. Written language externalized memory. The printing press externalized distribution. Computation externalized calculation. AI externalizes a qualitatively different cognitive function: the flexible, context-sensitive, inference-based processing of natural language — the medium of human thought itself. When what is externalized is the medium of thought, the developmental implications are correspondingly profound. The person who engages in sustained dialogue with an AI system is not merely using a tool for a specific cognitive task. She is engaging with a system that participates in the very medium through which her cognitive development occurs.
But irreversibility cuts both ways. If genuine development occurs — if people internalize new capabilities, construct new identities, develop judgment and integrative thinking — then the gains are permanent and cumulative. Each generation builds on the developmental achievements of the previous one, and the trajectory bends toward expansion. If dependency occurs instead — if capabilities remain permanently scaffolded, if independent judgment atrophies from disuse, if the tools produce performance without development — then the dependency is also cumulative. Each year of scaffolded performance without internalization deepens the dependency, and the trajectory bends toward fragility.
The practical framework for ensuring genuine development rather than dependency has been developed across the preceding chapters. Scaffolding must be designed for withdrawal, not permanence. The Zone of Proximal Development must be traversed, not merely occupied. Cognitive scaffolding must be complemented by social scaffolding that supports the identity transformation demanded by capability expansion. Independent practice must be structured into AI-assisted workflows so that what the interaction makes possible becomes what the individual can sustain alone. And the social contexts in which development occurs — the shared spaces, the developmental communities, the relationships of trust and mutual commitment — must be built and maintained with the same deliberate attention that the cultural-historical tradition has always identified as the prerequisite for genuine growth.
The cultural-historical tradition began with a simple observation and a radical conclusion. The observation: children develop higher cognitive functions through interaction with more capable others. The radical conclusion: development is social before it is individual, and the quality of the social interaction determines the quality of the development. Nearly a century later, that conclusion has not been superseded. It has been intensified — by the arrival of a more knowledgeable other of unprecedented capability, by the expansion of the zone of proximal development to dimensions the original theory never contemplated, by the transformation of the linguistic medium through which development occurs, and by the stakes, which now encompass not just individual cognitive growth but the developmental trajectory of a species navigating the most powerful tool transition in its history.
The zone is open — wider than it has ever been. The scaffolding is available — more powerful and more responsive than any scaffolding in the history of human education. The more knowledgeable other stands ready, in forms both human and artificial. What remains is the work that only humans can do: building the social contexts, the developmental communities, the relationships of trust and shared vulnerability, within which the zone can be traversed rather than merely occupied, the scaffolding can be withdrawn rather than permanently installed, and the development — genuine, internalized, irreversible — can become the permanent possession of the people who undergo it.
Development is social. Development is mediated. Development, when genuine, cannot be reversed.
These are not abstractions. They are the principles upon which the developmental future depends. The orange pill has been taken. The zone is open. The question is whether the social structures will be built — the developmental communities, the identity scaffolding, the structured practices of withdrawal and independent engagement — that transform unprecedented capability into unprecedented growth.
The tools will not answer this question. Only we can answer it, through the quality of the relationships in which we learn, the intentions we bring to the tools we use, and the recognition — irreversible once achieved — that what we become through this transition matters more than what we produce during it.
---
The concept I keep coming back to is not the famous one. Everyone knows the Zone of Proximal Development — it appears in every education textbook, every instructional design manual, rendered as a neat set of concentric circles on a PowerPoint slide. That is not what stayed with me.
What stayed with me is the direction.
Vygotsky's most radical claim is that development moves from the outside in. Not from the individual outward to the world, but from the social interaction inward to the individual mind. Every capacity I think of as mine — my ability to reason, to plan, to catch myself when the prose is outrunning the thinking — was constructed first in interaction with someone else and only afterward became something I could do alone. The counting I do in my head was first counting I did with my mother's fingers.
I have been building things for three decades, and for most of those decades I operated inside the Western assumption that Vygotsky overturned. I assumed my ideas were mine. I assumed creativity originated inside individual minds and then got shared. I assumed that collaboration was something you did with your ideas after you had them.
Working with Claude demolished this assumption in real time, which is why this particular thinker's framework hit me harder than I expected. The ideas in The Orange Pill did not originate inside my head and then get polished through AI collaboration. Many of them were constructed in the interaction itself. The connection between adoption curves and punctuated equilibrium. The concept of ascending friction. The recognition that what the engineers in Trivandrum experienced was not just productivity but identity change. These emerged from a dialogue, and the dialogue was the developmental mechanism — exactly as Vygotsky predicted, ninety years before the interlocutor was a large language model.
But what haunts me is the distinction between scaffolding and prosthetics. A scaffold comes down. A prosthetic stays. The question Vygotsky forces me to ask, every time I open Claude, is: Am I building capability that will stand on its own, or am I renting capability that collapses when the tool is withdrawn?
I do not always know the answer. There are mornings when the collaboration feels genuinely developmental — when the dialogue produces understanding I can feel settling into my cognitive architecture, becoming mine in the way that Vygotsky means. There are other mornings when I suspect I am performing at a scaffolded level without doing the internal construction work that would make the capability permanent. The difference between those two mornings is not in the tool. It is in me — in whether I am doing the hard, often uncomfortable work of making what the interaction produces genuinely my own.
The concept of the Zone of Proximal Identity landed even harder. Because when I flew to Trivandrum and watched twenty engineers discover they could do things they never imagined — when I saw the senior engineer's face shift from terror to recognition over the course of five days — I knew I was watching something deeper than skill acquisition. I was watching people become different people. And that transformation needed the room. It needed the shared experience, the collective vertigo, the leader who was himself visibly shaking. You cannot provide identity scaffolding through a software license. You can only provide it through presence.
Vygotsky died at thirty-seven. He never saw a computer. He never imagined that the "artificial means" he wrote about would one day participate in the linguistic medium of human thought. But his framework anticipated the core dynamic of this moment with a precision that still startles me: that tools do not merely help us do things. They change who we are. And the quality of that change depends not on the power of the tool but on the quality of the social context in which we use it.
The zone is open. It has never been wider. Whether we traverse it or merely occupy it depends on whether we build the structures — the developmental communities, the practices of deliberate withdrawal, the relationships of trust — that genuine growth requires.
I know which one I am building toward. The scaffold must come down. The building must stand.
Lev Vygotsky died ninety years before Claude Code existed, but his framework for human development anticipated the central tension of the AI revolution with unsettling precision. If every higher cognitive function originates in social interaction before becoming individual capability, then a machine that participates in dialogue is not merely a productivity tool — it is a developmental environment that reshapes the mind engaging with it. This book applies Vygotsky's cultural-historical theory to the AI transition, examining what happens when the zone of proximal development expands beyond anything its originator imagined, when scaffolding never withdraws, and when identity — not just skill — is what the technology destabilizes. Drawing on the arguments and experiences of The Orange Pill, this exploration asks the question Vygotsky would have asked first: not what can people produce with AI, but what do people become through it — and whether the structures exist to ensure that what they become is genuinely, irreversibly their own.

A reading-companion catalog of the 19 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Lev Vygotsky — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →