By Edo Segal
The interruption saved the paragraph.
I was deep in a session with Claude, three hours in, building a chapter that was flowing beautifully. Every prompt returned something sharper than the last. The feedback loop had that frictionless quality I describe in *The Orange Pill* — the quality that feels like flight and might be freefall.
Then my daughter walked in. She needed help with something mundane. I closed the laptop, annoyed, dealt with it, came back twelve minutes later.
The paragraph I had been so proud of was wrong. Not factually wrong. Structurally wrong. It had resolved a tension that should have stayed open. I could not see this before the interruption. I could see it immediately after.
Mary Catherine Bateson would have known exactly what happened. She spent decades studying how lives and minds are shaped not by unbroken progress but by discontinuity — by the forced changes of context that linear planners treat as obstacles and that composers treat as material. Her mother was Margaret Mead. Her father was Gregory Bateson. She grew up watching two extraordinary minds think at a kitchen table where interruptions were constant and where the thinking was better for it.
Her framework matters right now because the AI discourse is trapped between two clean narratives — triumphalism and elegy — and Bateson spent her career in the space that clean narratives cannot reach. She studied women whose careers were interrupted, redirected, composed from fragments that no plan had anticipated. She found that the ones who flourished were not the ones with the best plans. They were the ones with the strongest practice of composing — of listening to what the world actually offered and making something coherent from it, again and again, through every disruption.
That practice is exactly what the AI moment demands. Not a plan for how to use the tools. A practice of composing with them — attending to what they offer, noticing what they miss, maintaining the peripheral awareness that catches the elegant paragraph that resolves what should stay unresolved.
Bateson also saw something that the technology builders almost never see: that the most important learning happens not during focused work but during the interruptions, the pauses, the moments when the mind wanders into territory the prompt never specified. The AI collaboration is powerful. It is also continuous in a way that starves the peripheral processing where the deepest insights form.
This book is another lens. Another crack in the fishbowl. Bateson composed her understanding from anthropology, linguistics, ecology, and the lived experience of a life that refused to follow a straight line. That composition illuminates corners of the AI moment that technology frameworks alone cannot reach.
The interruption is not the enemy of good thinking. Sometimes it is the thing that makes thinking good.
— Edo Segal × Opus 4.6
---
Mary Catherine Bateson (1939–2021) was an American cultural anthropologist, linguist, and author whose work explored how human beings construct meaning, identity, and continuity through lives shaped by discontinuity and improvisation. The daughter of anthropologists Margaret Mead and Gregory Bateson, she grew up immersed in cross-cultural observation and systems thinking. She held academic positions at Harvard, Amherst, Northeastern, and George Mason University, among others, and served as president of the Institute for Intercultural Studies. Her most influential book, *Composing a Life* (1989), examined five women — herself included — whose careers were repeatedly interrupted and redirected, arguing that such lives represented not failure but a form of creative composition analogous to improvisation in jazz. In *Peripheral Visions: Learning Along the Way* (1994), she extended this framework to argue that the most important learning often happens at the margins of focused attention. Her later works, including *Full Circles, Overlapping Lives* (2000) and *Composing a Further Life* (2010), continued to explore how adults learn, adapt, and find meaning through successive recompositions across the lifespan. Bateson's legacy lies in reframing discontinuity as a resource rather than a deficiency — an insight that has gained renewed urgency in an era of accelerating technological disruption.
---
The most dangerous myth about a career is that you can plan one.
Mary Catherine Bateson spent decades studying lives that did not unfold according to plan — lives interrupted by relocations, redirected by divorce, reshaped by historical accident, composed from materials their owners never chose. In *Composing a Life*, published in 1989, she examined five women — herself included — whose professional trajectories looked, by the standards of the linear career model, like failures. They changed fields. They started over. They abandoned specializations that had consumed years of training. By the metric that measures a career as a straight line from training to mastery to eminence, these women had failed repeatedly.
By Bateson's metric, they had done something far more interesting. They had composed.
The distinction between planning and composing is not semantic. It is structural, and it carries consequences that ripple through every domain the AI moment touches. A plan presupposes a stable environment — a world in which the conditions that exist when the plan is made will persist long enough for the plan to be executed. A plan says: here is the destination, here is the route, here are the skills required, here is the timeline. A plan is a map drawn in advance of the journey, and its value depends entirely on the accuracy of the map's predictions about the terrain.
A composition presupposes nothing about the stability of the environment. A composition says: here are the materials available right now, here is the pattern I can make from them, here is how I will respond when the materials change — as they will, inevitably, because the world does not hold still for composers any more than it holds still for anyone else. A composition is not a map. It is a practice — an ongoing, adaptive, improvisational engagement with whatever the world deposits at your feet.
Bateson drew this distinction from jazz. A jazz musician does not play from a score. She plays from a set of chord changes — a framework that constrains without determining, that provides structure without providing content. The content is improvised in real time, shaped by the contributions of the other players, by the acoustic properties of the room, by the mood of the audience, by the musician's own physical and emotional state in the moment of playing. The chord changes are the conditions. The improvisation is the creative act. And the quality of the improvisation depends not on the musician's ability to execute a predetermined plan but on her ability to listen, respond, integrate, and find coherence in the moment.
The women Bateson studied were jazz musicians of the self. They did not execute career plans. They listened to the chord changes of their circumstances — the job that disappeared, the marriage that ended, the child that arrived, the opportunity that materialized in a field they had never considered — and they improvised responses that created coherence not through predetermined design but through the quality of their attention to what was actually happening.
This framework illuminates the AI moment with a precision that more dramatic framings miss. The technological disruption described in *The Orange Pill* — the winter when Claude Code crossed a capability threshold and the rules governing every career in technology were rewritten — is, in Bateson's terms, a change in the chord changes. The materials available for composition have shifted. The skills that constituted the previous composition's primary texture — the implementation expertise, the framework mastery, the syntactic fluency that defined a software career — have been devalued not because they were illusory but because the environment that gave them value has changed. The musician who built her reputation on a particular set of chord changes discovers that the bandleader has called a new tune.
The planned career is helpless before this kind of disruption. The person who invested a decade in mastering Python, who built an identity around that mastery, who planned a trajectory from junior developer to senior architect to CTO on the basis of that specific expertise — that person experiences the AI moment as a catastrophe, because the plan has been invalidated. The map no longer matches the terrain. The destination still exists, but the route has been washed out, and the skills required to navigate the new route are not the skills the plan prescribed.
The composed career is not helpless. It is disrupted — disruption is never painless — but it is not destroyed, because the composed career was never organized around a specific set of skills in the first place. It was organized around a practice: the practice of listening, responding, integrating, finding coherence. The skills were materials, not foundations. They were the notes played on a particular set of chord changes, not the musician's identity. When the chord changes shift, the composed career responds with the same improvisational flexibility that produced the previous composition. Different notes. Same practice.
Bateson would have recognized the engineers described in *The Orange Pill*'s account of the Trivandrum training — the ones who, within days of working with Claude Code, began reaching across disciplinary boundaries they had spent years treating as walls. The backend engineer who started building user interfaces. The designer who started writing features end to end. These are people whose composed careers responded to a change in chord changes by finding new notes. The walls between domains turned out to be artifacts of the previous composition's constraints — products of the translation cost that made cross-domain work prohibitively expensive. When the constraint changed, the composition changed with it.
But Bateson would also have recognized the ones who froze. The ones who experienced the disruption not as a change in chord changes but as the end of music. She wrote extensively about the psychological cost of discontinuity — the grief, the disorientation, the loss of meaning that accompanies any radical shift in the materials available for composition. The senior architect who felt like a master calligrapher watching the printing press arrive was not being melodramatic. He was experiencing a genuine loss — the loss of a composition that had taken decades to build, that was integrated into his sense of who he was, that gave his daily experience its structure and its meaning.
Bateson's insight is that this grief is legitimate and that it is not the end of the story. The women she studied grieved their interrupted careers. They mourned the compositions they had been building. And then they composed again — not because they were superhuman, but because composition is what living systems do. It is not a skill to be learned or a mindset to be adopted. It is the fundamental process by which organisms maintain coherence in a changing environment. Every living thing composes. Every living thing improvises. The question is not whether you will compose a new life in the wake of disruption but whether you will compose it well — with attention, with care, with the peripheral vision that notices the opportunities forming at the edges of awareness while the center of attention is consumed by the loss.
The practical implications reshape how organizations, educational institutions, and individuals might approach the AI transition. The linear career model — choose a specialization, acquire the skills, execute the plan — produces people who are catastrophically vulnerable to disruption, because the model assumes environmental stability that has never existed and that the AI moment has made impossible to pretend exists. The compositional model produces people who are adapted to disruption as a permanent condition — not because they do not specialize (Bateson's women were all deeply skilled in their domains) but because they hold their specializations as materials for composition rather than as foundations for identity.
Bateson arrived at this framework through anthropological observation, not abstract theorizing. She watched real women navigate real discontinuities with real consequences — careers lost, incomes interrupted, identities restructured. What she discovered was that the women who thrived were not the ones who avoided discontinuity (no one avoids discontinuity) but the ones who had developed what she called "the skills of the displaced person" — the capacity to enter unfamiliar territory, to learn from confusion rather than retreating from it, to find in the disruption the materials for a composition that could not have been imagined in the old arrangement.
These are the skills the AI moment demands. Not coding skills (the machine does that). Not prompt engineering (that will be automated in its turn). The skills of the displaced person — the capacity to enter unfamiliar territory with curiosity rather than panic, to compose from disruption rather than plan against it, to hold identity as a pattern that persists through change rather than a structure that must be defended against it.
The tragedy, in Bateson's framework, is not that the disruption has arrived. The disruption was always going to arrive. The tragedy is that the culture's dominant model of career and identity — the linear plan, the specialized expertise, the fixed self — has produced millions of people who are constitutionally unprepared for the kind of compositional challenge that the AI moment presents. They were trained to execute plans. They were never taught to compose.
Bateson would have noted that this is not the first time a culture has been caught in this trap. Every major technological transition produces the same pattern: a generation trained for stability confronting an environment that demands improvisation. The Luddite framework knitters had composed their lives around a specific set of skills — skills that were genuinely valuable, genuinely hard to acquire, genuinely constitutive of their identities. When the wide frames and the factory system arrived, they experienced the disruption as a destruction of self, because their culture had no model for recomposition. The plan had been: master the frame, join the guild, work the trade, pass it to your sons. The plan had no contingency for technological obsolescence, because the plan assumed a world in which the materials for composition did not change.
Bateson's compositional model is not a guarantee of survival. Not every improvisation succeeds. Not every composition achieves coherence. But the compositional model at least makes survival possible by framing the disruption as a change in materials rather than an end to meaning. The builder whose identity is her capacity to compose — to listen, respond, integrate, find pattern — experiences the AI moment as a change in what she has to work with. The builder whose identity is her specific technical expertise experiences it as an annihilation.
The difference is not temperamental. It is structural. It is the difference between an identity built on a plan and an identity built on a practice. And the AI moment is, among many other things, a test of which model produces people capable of flourishing in a world where the chord changes never stop changing.
Bateson published *Composing a Life* thirty-seven years before the winter described in *The Orange Pill*. She was writing about academic women in the 1980s, not about software engineers in the 2020s. And yet the framework she developed — the framework of composition over planning, improvisation over execution, identity as practice rather than identity as possession — reads now as though it were written for this exact moment. Not because Bateson predicted the AI transition (she did not) but because she identified the deeper pattern: that living systems thrive through composition, and that any culture that trains its members for planning at the expense of composition is preparing them for a world that does not exist.
The world that exists is the world of the jazz musician — the world where the chord changes shift without warning, where the other players contribute things you did not anticipate, where the composition is always unfinished and always in process. The AI is a new player in the ensemble. It brings capabilities the previous players did not possess. It changes the texture of the music in ways the previous arrangement could not have produced.
The question is whether the other musicians will compose with it or attempt to play the old charts louder.
---
Charles Darwin almost missed the finches.
He collected specimens in the Galápagos in 1835 — birds, insects, plants, rocks, anything that seemed potentially interesting to a young naturalist on his first major expedition. The finches were not a priority. He barely labeled them by island. It was only after his return to England, when the ornithologist John Gould examined the specimens and told Darwin that they represented twelve distinct species no one had ever described, that the finches began to matter. The differences Darwin had registered only at the periphery of his awareness — variations in beak size and shape that he had not thought to document systematically — turned out to be the key to understanding how species adapt to different ecological niches.
The discovery that launched evolutionary biology began not with focused attention but with something closer to what Mary Catherine Bateson called peripheral vision — the capacity to register patterns at the edges of awareness, where the mind is not looking, where the categories that organize central attention have not yet imposed their grid.
Bateson developed the concept of peripheral vision across several works, most fully in her 1994 book of that title. The core insight is deceptively simple: the most important things you learn are often not the things you set out to learn. They are the things you notice while your attention is directed elsewhere — the patterns that register at the margin of awareness, the connections that form below the threshold of conscious recognition, the disturbances that signal something important is happening in a direction you were not looking.
Focused attention is powerful. It concentrates cognitive resources on a specific target, suppresses irrelevant information, and enables the kind of deep, sustained analysis that produces expertise. But focused attention has a structural limitation that Bateson identified with the precision of an anthropologist who had spent decades watching how people actually learn in unfamiliar environments. Focused attention sees what it is looking for. It confirms categories. It finds what fits the framework and discards what does not. Peripheral vision sees what focused attention misses — the anomaly, the exception, the thing that does not fit the framework and that, precisely because it does not fit, carries the potential to restructure the framework entirely.
Darwin's finches were peripheral information. His focused attention was on geology — the volcanic formations, the raised coral, the evidence for gradual geological change that would support Charles Lyell's uniformitarian theory. The birds were background. And it was precisely their background status — the fact that Darwin was not looking at them with the focused attention that would have forced them into his existing categories — that allowed their anomalous features to register. Had he been looking for variation in beak morphology, he would have found it, but he would have found it within the framework he brought to the search. Because he was not looking, what he found exceeded the framework. The peripheral registration was the beginning of a new framework.
This distinction between focused and peripheral modes of attention carries extraordinary implications for understanding what artificial intelligence can and cannot do — implications that Bateson, in her 2018 conversation with Edge.org, began to articulate directly.
AI systems are engines of focused attention. A large language model processes input by matching patterns against its training data — finding the statistical relationships that are most relevant to the query, suppressing the relationships that are less relevant, concentrating its processing resources on the specific task the user has defined. This is focused attention at a scale and speed that no human mind can approach. The model's capacity to find relevant patterns in a dataset comprising trillions of tokens of text is a form of focused attention so powerful that it appears, from the outside, to be something qualitatively different from human cognition.
But it is not qualitatively different. It is quantitatively different — faster, broader, more comprehensive — in the dimension of focused attention. And it is qualitatively absent in the dimension of peripheral vision. The AI does not notice what it is not looking for. It does not register the anomaly that falls outside the categories its training has established. It does not experience the vague sense of disturbance that signals to a human observer that something important is happening in a direction she was not looking.
This is not a limitation that better training will fix. Peripheral vision is not a feature that can be added to a system designed for focused pattern-matching. It is a mode of cognition that depends on the specific characteristics of embodied biological minds — minds that are situated in environments, that have histories, that carry unresolved questions and half-formed hypotheses and emotional sensitivities that make certain features of the environment more salient than others. Darwin noticed the finch beaks not because his eyes were sharper than anyone else's but because his particular biographical trajectory — his training with Henslow, his reading of Lyell, his temperamental attentiveness to natural variation — had calibrated his peripheral awareness to register biological differences as potentially significant, even when his focused attention was directed elsewhere.
The calibration is biographical. It is the product of a specific life lived in specific environments, accumulating specific experiences that shape the organism's sensitivities in ways that no algorithm can replicate because no algorithm has lived a life. The AI can be trained to detect variation in beak morphology. It can be trained to detect it with far greater precision and far greater speed than Darwin. But it cannot be trained to notice beak variation while looking at rock formations, because noticing while looking elsewhere is not a trainable skill. It is an emergent property of an embodied organism navigating a world that is richer than any framework the organism possesses.
Bateson understood this with the depth of someone who had spent her career moving between cultures — entering unfamiliar environments with the specific vulnerability of the anthropologist, who must learn what matters before she knows what to look for. In every new culture she entered, Bateson found that the most important learning happened peripherally — in the moments between interviews, in the observations made while walking to the market, in the discomfort of encountering practices that did not fit her categories. The focused research produced data. The peripheral awareness produced understanding.
The distinction has practical consequences for how the AI partnership described in *The Orange Pill* should be understood and structured. The author describes working with Claude as a conversation in which the AI finds connections the human had not seen — linking adoption curves to punctuated equilibrium, connecting the history of interface design to the nature of human need. These connections are products of the AI's extraordinarily powerful focused attention: its capacity to scan vast bodies of information and identify statistical relationships that a human mind, with its limited working memory and narrow attentional bandwidth, would never reach.
But these connections are all within the space of focused attention. They are connections between things the AI was trained on — patterns that exist in the data, waiting to be found. They are not peripheral discoveries. They are not the anomalous finch beak that does not fit the framework. They are the comprehensive map of every finch beak in the dataset, organized by every measurable dimension, cross-referenced with every other dataset the model has access to.
The human contribution to the partnership, in Bateson's framework, is peripheral vision — the capacity to look at the AI's comprehensive map and notice the thing that is missing, the thing that does not fit, the thing that suggests a framework the map does not contain. The human reads the AI's output and feels a vague sense that something is not quite right — not wrong, exactly, but not complete. That feeling is peripheral vision operating. It is the organism's embodied history registering a discrepancy between the map and the territory that the map's comprehensive coverage has somehow missed.
The author of *The Orange Pill* describes this peripheral awareness operating when he caught the false Deleuze reference — the passage that was syntactically perfect, rhetorically elegant, and philosophically wrong. What detected the error was not focused analysis (the passage passed every test that focused analysis could apply — it was well-written, well-structured, confident) but peripheral unease — a felt sense that something was off, a disturbance at the edge of awareness that motivated the focused check that revealed the mistake.
Bateson would have identified this as the critical human capacity in the AI partnership. Not the capacity for focused analysis, which the AI provides at superhuman scale. The capacity for peripheral awareness — the embodied, biographical, idiosyncratic sensitivity to disturbance that tells you something is wrong before you can say what it is. This capacity cannot be trained into an AI because it depends on what Bateson called the whole person — the full biological, emotional, experiential history that makes each human observer a unique instrument of perception.
The implications for education are immediate and specific. A culture that trains people for focused attention — for the systematic, comprehensive, category-driven analysis that AI performs better than humans — is training people for obsolescence. A culture that cultivates peripheral vision — that rewards noticing, that values the anomalous observation, that teaches students to attend to the thing that does not fit rather than the thing that does — is cultivating the capacity that the AI partnership most needs and that the AI itself most lacks.
Bateson wrote in *Peripheral Visions* that "the key to learning is the discovery of pattern in the unfamiliar, treating it as a resource rather than a threat." The AI moment is the unfamiliar. The pattern is forming at the edges of awareness, in the spaces between the categories that the current discourse provides. The triumphalist sees opportunity at the center. The elegist sees loss at the center. Neither is looking at the periphery, where the most important pattern — the pattern that will define what this transition actually becomes — is forming outside the frame of either position.
Peripheral vision is not a luxury. It is not a supplement to the real work of focused analysis. In an environment saturated with AI-powered focused attention — where every pattern in every dataset is mapped with comprehensive precision — peripheral vision becomes the scarce resource, the thing that determines whether the comprehensive map leads to genuine understanding or merely to more comprehensive ignorance.
Darwin almost missed the finches. The most important discovery in the history of biology was almost lost because the discoverer's focused attention was directed elsewhere. That the discovery was made at all — that the peripheral registration was eventually elevated to conscious analysis — is a testament to the specific, embodied, biographical mode of cognition that Bateson spent her career studying.
AI will never almost miss the finches. It will find every finch, measure every beak, calculate every statistical relationship. And it will never know what the finches mean, because meaning is not a pattern in the data. Meaning is what a particular organism, with a particular history, notices at the edge of awareness — and chooses to attend to.
---
There is a question that haunts *The Orange Pill* like a recurring chord in a minor key. A twelve-year-old asks her mother: "What am I for?" A son asks at dinner whether his homework still matters. A senior architect stands in a conference hallway feeling like a calligrapher watching the printing press arrive. Each of these moments is a crisis of identity — not of employment, not of income, not of social status, but of the deeper thing that employment, income, and social status are supposed to express. The question is existential before it is economic. It asks not "What will I do?" but "Who am I, now that the thing I did is no longer mine alone to do?"
Mary Catherine Bateson's answer to this question is both more radical and more consoling than it first appears. The answer is: you were never the thing you did. You were the process of doing it — the ongoing, adaptive, improvisational act of composing a self from whatever materials the world provided. The materials have changed. The process has not.
Bateson developed her understanding of the improvisational self through the study of women whose careers did not follow the linear trajectory that the culture prescribed. These women changed fields, abandoned specializations, interrupted their professional lives for caregiving, relocated for partners' careers, started over in domains where their previous expertise was irrelevant. By the standards of the planned career — the career as a straight line from training to mastery — these interruptions were failures. By Bateson's standards, they were compositions: creative acts that produced forms of integration and understanding unavailable to the uninterrupted specialist.
The key insight is about where identity resides. The linear career model locates identity in expertise — in the specific body of knowledge and skill that distinguishes the specialist from the generalist, the trained professional from the amateur. The software engineer is her mastery of Python. The lawyer is her knowledge of contract law. The surgeon is her capacity to operate. Identity is the accumulated expertise, and the threat of obsolescence is a threat to identity because the expertise is the identity.
Bateson observed something different. The women she studied did not locate their identities in their expertise, though they were all highly skilled. They located their identities in what Bateson called the quality of their engagement — the way they attended to problems, the way they listened to collaborators, the way they found connections between disparate domains, the way they maintained coherence through disruption. The expertise was the content of the current composition. The quality of engagement was the practice that persisted across compositions.
The distinction matters enormously for understanding the psychological impact of the AI transition. When the AI can write code, the person whose identity resides in coding is threatened. When the AI can draft legal briefs, the person whose identity resides in brief-writing is threatened. When the AI can perform any specific cognitive task that constitutes a professional identity, the person whose identity resides in that task faces what feels like annihilation.
But the person whose identity resides in the quality of engagement — in the practice of composing rather than in the content of any particular composition — faces a disruption, not an annihilation. The materials have changed. New instruments are available. The chord changes are different. The practice — listening, responding, integrating, finding coherence — continues, because the practice was never dependent on any particular set of materials.
This is not a consolation that can be offered glibly. Bateson was explicit about the grief that accompanies recomposition. The women she studied mourned their interrupted careers. They experienced real loss — loss of expertise, loss of status, loss of the particular satisfaction that comes from working at the outer edge of a hard-won skill. The senior architect who feels like a calligrapher watching the printing press is grieving something genuine: the specific, embodied relationship between himself and his code, the intimate knowledge of a system he built line by line over decades, the identity that was constituted by that knowledge.
Bateson's framework does not deny the grief. It contextualizes it. It says: the grief is real, and it is not the end. The grief is the gap between one composition and the next — the space in which the old materials have been taken away and the new materials have not yet been integrated into a coherent pattern. The gap is painful. It is also, in every life Bateson studied, temporary. Not because the pain is trivial but because the compositional process is persistent. Living systems compose. They improvise. They find coherence. This is not a moral injunction — you should compose — but an empirical observation: organisms do compose, given time and the minimum conditions for adaptation.
The minimum conditions matter. Bateson was clear that composition requires support — social networks, economic floors, cultural narratives that validate the process of recomposition rather than stigmatizing it as failure. The women in her study who composed most successfully were the ones embedded in communities that recognized recomposition as a legitimate response to disruption. The ones who struggled most were the ones whose communities interpreted disruption as evidence of personal inadequacy — communities that said, in effect, if your plan had been better, you would not need to compose.
The AI moment is producing both kinds of community. There are spaces — certain technology conferences, certain online communities, certain organizational cultures — that recognize the AI disruption as a structural shift requiring creative response, that validate the experience of recomposition, that provide the social and economic support the process requires. And there are spaces that interpret the disruption as a sorting mechanism, that say: the people who thrive are the ones who were already good enough, and the people who struggle deserve their struggle.
Bateson would have recognized the second stance as a specific form of cultural pathology — a failure to understand that composition is not a character trait but a process, and that the process requires conditions. A seed that does not germinate in concrete has not failed to be a seed. It has been denied the conditions for germination. The engineer who does not compose a new career in an organization that treats disruption as personal failure has not failed to be adaptive. She has been denied the conditions for adaptation.
Bateson's framework also illuminates the phenomenon that *The Orange Pill* describes as "fight or flight" — the observation that some people respond to the AI disruption by leaning in (fight) while others respond by withdrawing (flight), and that the responses map onto the most primal survival instincts. Bateson would have recognized this binary as too simple. The compositional framework offers a third option that is neither fight nor flight: compose. Composing is not fighting (it does not treat the disruption as an enemy to be defeated) and it is not fleeing (it does not treat the disruption as a catastrophe to be escaped). It is engaging — attending to the disruption with the same quality of attention that produced the previous composition, and trusting the process to find coherence in materials that do not yet cohere.
The improvisational self is not a fixed capacity. It is a practice that can be cultivated or allowed to atrophy. Bateson observed that the women who composed most fluidly were the ones whose lives had already required multiple recompositions — who had developed, through repeated practice, the specific skill of entering unfamiliar territory and finding pattern in it. The women who struggled most were the ones whose lives had, until the disruption, followed a relatively linear path — who had never needed to compose and therefore had not developed the practice.
This observation has implications that should concern anyone thinking about the AI moment's impact on the next generation. A culture that trains its young for linear careers — that says choose early, specialize deeply, execute the plan — is producing people whose compositional muscles are atrophied. A culture that exposes its young to productive discontinuity — that allows changes of direction, that values breadth as well as depth, that treats the interrupted career as a source of wisdom rather than a mark of failure — is producing people whose compositional muscles are strong.
The twelve-year-old who asks "What am I for?" is asking a question that the linear career model cannot answer, because the linear model says you are what you do, and the AI can now do most of what she might have planned to do. Bateson's framework offers a different answer: you are the process of composing — the practice of attending, integrating, finding pattern, making meaning. That practice is not threatened by AI. It is more necessary than ever, precisely because the materials available for composition are changing faster than any previous generation has experienced.
The improvisational self is not a consolation prize for people who failed to plan. It is, in Bateson's analysis, the only self that has ever actually existed. The planned self was always an illusion — a story told by cultures that valued predictability over adaptation, that mistook the stability of the environment for a feature of the organism. The environment was never stable. The organism was always composing. The only thing that has changed is the speed at which the materials shift, and the visibility of the shift, and the impossibility of pretending that the plan was ever real.
The architect who feels like a calligrapher watching the printing press arrive is experiencing a loss of materials, not a loss of self. The self — the composing, improvising, pattern-finding self — is still there, waiting for new materials to work with. The grief is the space between compositions. What fills that space, and how quickly, depends on the conditions the culture provides.
Bateson would have said that providing those conditions — social support, economic floors, cultural narratives that honor recomposition — is the most important work of any institution confronting the AI transition. More important than retraining programs (which treat the disruption as a skills problem). More important than regulation (which treats the disruption as a policy problem). The disruption is a compositional problem, and the solution is the cultivation of compositional capacity — in individuals, in organizations, in the culture at large.
The self is not a noun. It is a verb. It is the ongoing act of composition. And the composition is always unfinished, always in process, always responsive to materials that no plan could have predicted.
---
Mary Catherine Bateson's mother, Margaret Mead, moved between cultures the way other people move between rooms — entering Samoa, then New Guinea, then Bali, then the American postwar suburb, bringing to each environment the anthropologist's fundamental commitment to treating the unfamiliar as informative rather than threatening. Her father, Gregory Bateson, moved between disciplines with a similar fluidity — from anthropology to psychiatry to cybernetics to ecology, pursuing a pattern he could sense but not yet name across domains that his contemporaries treated as incommensurable.
Mary Catherine Bateson grew up watching two people compose lives from radical discontinuity. The household itself was an exercise in discontinuity — languages changed, continents changed, intellectual frameworks changed, sometimes within a single dinner conversation. The child who grew up in this environment absorbed, at the level of what her father would have called deutero-learning, a fundamental lesson: continuity is not the absence of change. Continuity is the pattern that persists through change.
This lesson became the central insight of Bateson's intellectual career, and it is the lens through which the AI moment becomes most legible — not as a break in the pattern of human work but as a change in materials that reveals which aspects of the pattern were fundamental and which were artifacts of a particular arrangement.
Bateson studied continuity through discontinuity in the lives of women who had experienced what the culture called interruptions — breaks in their careers for childrearing, relocations for partners' jobs, changes in field forced by circumstance rather than choice. What she discovered was that these women, far from being diminished by the interruptions, had developed a form of understanding that their uninterrupted colleagues lacked. The interruption forced a transfer of learning from one domain to another, and the transfer produced a meta-understanding — a grasp of the principles that connected the domains, an awareness of the patterns that persisted across the specific contents of each domain.
The woman who had been a laboratory biologist, interrupted her career for five years of childrearing, and then returned to work in public health administration had not lost five years. She had gained a specific form of cross-domain intelligence. The organizational skills developed through managing a household with young children — the capacity for simultaneous attention to multiple demands, the tolerance for interruption, the ability to make decisions with incomplete information — were not the same skills as laboratory technique. But they were skills, and they were skills that enriched her subsequent work in ways that five additional years of uninterrupted laboratory research would not have produced.
The continuity was not in the content. The content changed radically — from bench science to domestic management to health policy. The continuity was in what Bateson called the quality of attention — the way the woman engaged with problems, the questions she asked, the patterns she noticed, the connections she drew between domains that her more specialized colleagues, who had never been forced to transfer their learning across contexts, could not see.
This framework applies to the AI transition with uncomfortable precision. The senior engineer described in *The Orange Pill* — the one who spent his first two days in the Trivandrum training oscillating between excitement and terror — was experiencing a discontinuity. The specific content of his expertise — the implementation skills, the syntactic mastery, the debugging intuition built through thousands of hours of manual work — was being absorbed by the tool. If his continuity resided in that content, the discontinuity was a catastrophe.
But the engineer discovered, by Friday, that his continuity resided elsewhere. It resided in what *The Orange Pill* calls the remaining twenty percent — the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they tolerated. These capacities were not content-specific. They were pattern-level capacities — ways of engaging with problems that persisted across changes in the specific tools and techniques used to address them.
Bateson's framework names this phenomenon with greater precision than the source text achieves. The twenty percent is not a residual — it is not what is left over after the automatable work is subtracted. It is the continuity — the pattern that persists through the discontinuity of the AI transition, the quality of attention that connects the engineer's pre-AI career to his post-AI career, the thing that makes the two careers expressions of the same compositional practice rather than two separate and unrelated episodes.
The recognition that continuity resides in the quality of attention rather than in the content of expertise has profound implications for how people experience and respond to technological disruption. The person who locates her continuity in content experiences every change in content as a threat to identity. The person who locates her continuity in the quality of attention experiences the change in content as a change in materials — disorienting, certainly; painful, possibly; but not existentially threatening, because the thing that makes her who she is has not been taken away.
Bateson arrived at this understanding through a specific anthropological method: the comparative study of lives. She did not theorize about continuity in the abstract. She observed real people navigating real discontinuities and documented the patterns that distinguished successful navigation from unsuccessful navigation. The pattern she found was consistent across the lives she studied: the people who maintained coherence through discontinuity were the ones who could identify the thread that connected their different phases — not a thread of content (the specific skills, the specific knowledge, the specific professional domain) but a thread of engagement (the way they listened, the questions they asked, the patterns they noticed).
The thread of engagement is what Bateson would have identified as the human contribution to the AI partnership. The AI provides content — code, analysis, connections, structure. The human provides the quality of attention that determines whether the content becomes something meaningful. Two builders working with the same AI tool produce radically different results not because one prompts better than the other (though prompting skill matters) but because one brings to the partnership a quality of attention — developed through years of compositional practice, refined through multiple discontinuities, enriched by the cross-domain understanding that comes from having transferred learning across contexts — that the other has not developed.
This understanding reframes the anxiety about skill atrophy that runs through *The Orange Pill*'s engagement with Byung-Chul Han. Han worries that the removal of friction will destroy depth — that the engineer who no longer debugs manually will lose the embodied understanding that debugging produced. Bateson's framework suggests a more nuanced picture. The specific understanding that manual debugging produces may indeed atrophy. But if the engineer's continuity resides not in that specific understanding but in the quality of attention that produced it, then the quality of attention can be maintained through engagement with the new challenges that the AI transition creates — challenges that are different from debugging but that demand the same quality of sustained, diagnostic, pattern-seeking engagement.
The continuity transfers. That is Bateson's central claim. The specific skills change. The quality of attention persists. And the quality of attention is what matters — not because the specific skills are unimportant, but because the specific skills are materials for composition, and materials change, and the compositional practice that uses materials to create meaning is the thing that must be preserved.
Bateson would have noticed something that the standard AI discourse misses: the people who are best positioned for the AI transition are not the youngest, not the most technically current, not the ones with the freshest skills. They are the people who have already navigated discontinuities — who have changed fields, adapted to new tools, transferred their learning across contexts, composed new careers from the wreckage of old ones. These people have developed, through practice, the capacity for continuity through discontinuity that the AI moment demands.
The twenty-five-year-old developer who has worked in one language, at one company, on one kind of problem may have sharper skills than the fifty-year-old who has changed fields three times, learned four languages, and worked in industries as different as finance and healthcare. But the fifty-year-old has something the twenty-five-year-old lacks: the practice of recomposition. The experience of having lost the specific content of her expertise and having found, in the loss, the quality of attention that persisted through it. That practice — not any specific skill — is what the AI moment demands.
This is not an argument for the superiority of age over youth. It is an argument for the superiority of compositional practice over specialized content. The twenty-five-year-old who has already navigated a significant discontinuity — who has changed fields, who has transferred learning from one domain to another, who has experienced the grief and the subsequent recomposition that Bateson describes — may be better prepared for the AI moment than the fifty-year-old who has spent thirty years in uninterrupted specialization.
Bateson's mother famously said that the most important thing she could give her students was not knowledge but "the capacity to learn in a new key." The phrase is musical — it evokes the jazz musician's ability to play the same melodic ideas in different harmonic contexts, to maintain the identity of the musical line while adapting it to new chord changes. Learning in a new key is continuity through discontinuity. It is the practice of carrying the pattern forward while accepting that the materials through which the pattern is expressed will change.
The AI moment demands learning in a new key on a civilizational scale. The chord changes have shifted for every knowledge worker, every creative professional, every student, every teacher, every parent trying to prepare a child for a world that does not yet exist. The content of what they do will change — is already changing, at a speed that makes the previous technological transitions look glacial. What persists through the change is the quality of attention — the way they engage with problems, the questions they ask, the patterns they notice, the connections they draw between domains that specialization has artificially separated.
The continuity is not in the content. It was never in the content. It was always in the quality of attention — in the compositional practice that takes whatever the world provides and makes from it something coherent, something meaningful, something that expresses the specific pattern of a specific human life engaging with the specific materials of a specific historical moment.
The materials have changed. The practice persists. And the practice is what makes us who we are — not the content of our expertise but the quality of our engagement with whatever the world asks us to engage with next.
Mary Catherine Bateson kept returning to a distinction her father had drawn between two kinds of learning, a distinction that most educators treated as a curiosity and that she treated as the most consequential insight in the history of pedagogy. The first kind of learning — learning the specific solution to a specific problem, the answer to the question on the test, the skill required for the task at hand — Gregory Bateson called proto-learning. The second kind — learning how to learn, acquiring the habits of attention and inquiry that shape how all subsequent problems are approached — he called deutero-learning. The first kind is what schools measure. The second kind is what lives are built on.
Mary Catherine Bateson extended her father's distinction in a direction he had only gestured toward. She argued that deutero-learning is not something that happens once, in childhood, and then solidifies into a permanent cognitive style. It is a continuous process — a lifelong practice of adapting one's relationship to the unknown. The women she studied in *Composing a Life* were not people who had learned how to learn in school and then applied that learning to successive careers. They were people who kept learning how to learn — who modified their habits of attention, their strategies of inquiry, their tolerance for ambiguity, in response to each new environment they entered. The learning was not a foundation laid in youth. It was an ongoing practice maintained through adulthood, and the practice itself changed with each discontinuity.
This understanding of learning as perpetual adaptation rather than initial acquisition reframes the entire conversation about education in the age of AI. The dominant educational response to the AI moment has been curricular: teach students about AI, add prompt engineering to the syllabus, update the technical skills that the market demands. These responses address proto-learning — the specific skills required for the current configuration of the technological environment. They do not address deutero-learning — the habits of engagement that will determine how students respond when the current configuration changes, as it inevitably will, probably before the students have graduated.
Bateson would have recognized this curricular response as a characteristic error of institutions organized around proto-learning. The error is not that the specific skills are unimportant — they may be genuinely useful in the short term — but that teaching specific skills in response to a technological disruption treats the disruption as a content problem rather than a process problem. The disruption is not that students lack specific knowledge about AI. The disruption is that the relationship between knowledge and capability has changed, and that the habits of learning students have been trained in — the habits of acquiring information, mastering procedures, demonstrating competence on assessments — are habits calibrated for a world in which specific knowledge was the scarce resource.
In a world where AI provides specific knowledge with near-infinite fluency, the scarce resource is not knowledge but the capacity to engage with knowledge in ways that produce understanding. Understanding is not knowledge. Understanding is the relationship between the knower and the known — the quality of engagement that connects a person to an idea in a way that allows the person to use the idea, extend it, question it, connect it to other ideas, recognize its limits. Knowledge can be transferred. Understanding cannot. Understanding must be constructed through the specific, effortful, often uncomfortable process of engaging with material that resists easy comprehension.
Bateson observed this distinction operating in every culture she studied. In Bali, the culture her parents had famously documented, children learned complex ritual performances not through instruction — not through the explicit transfer of knowledge from teacher to student — but through participation. The child sat in the lap of an adult performer. The child's hands were guided through the movements. The child absorbed the rhythm, the timing, the quality of attention that the performance required — not as information transmitted but as a pattern felt through the body. The knowledge was not in any particular movement. It was in the relationship between movements — the way each gesture connected to the next, the way the tempo shifted in response to the other performers, the way the whole performance cohered as a living system rather than a sequence of steps.
This participatory model of learning is precisely what the AI partnership makes possible and precisely what the standard educational response to AI ignores. The student who works with an AI tool is participating in a cognitive partnership — engaging in a feedback loop that, if well-structured, can develop the very habits of engagement that Bateson identified as deutero-learning. But the feedback loop must be structured for learning, not merely for production. The student who uses AI to produce an essay has used a tool. The student who uses AI to explore a question — who prompts, evaluates, refines, questions the AI's output, checks the output against her own emerging understanding, notices where the output surprises her and investigates why — is engaged in a form of participatory learning that can develop genuine understanding.
The difference is in the quality of the feedback loop. A feedback loop structured for production says: describe what you want, receive the output, submit the output. The student learns to describe. She does not learn to understand. A feedback loop structured for learning says: describe what you think, receive a response that challenges or extends your thinking, evaluate the response against what you actually believe (not just what sounds right), feed the evaluation back, and notice how your understanding has changed through the exchange. The student learns to engage — to participate in a cognitive partnership in which her own thinking is both the input and the product.
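The contrast can be made concrete in code. What follows is a minimal sketch, not a recipe from any curriculum: `ask_model` is a hypothetical stand-in for any AI assistant, and the prompts are illustrative. The point is structural. The production loop ends exactly where the learning loop begins, at the moment of evaluation.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an AI assistant."""
    raise NotImplementedError("wire this to a real model")


def production_loop(task: str) -> str:
    # Describe what you want, receive the output, submit the output.
    # The evaluation step never happens; the student learns to describe.
    return ask_model(f"Write this for me: {task}")


def learning_loop(belief: str, rounds: int = 3) -> str:
    # Describe what you think, receive a challenge, evaluate it against
    # what you actually believe, and feed the evaluation back.
    for _ in range(rounds):
        challenge = ask_model(
            f"Here is what I currently think: {belief}\n"
            "Challenge or extend this. Where might I be wrong?"
        )
        # The step the production loop skips: the human's own judgment,
        # revised in writing, becomes the input to the next exchange.
        belief = input(f"The model responds:\n{challenge}\n\nYour revised view: ")
    return belief  # the student's thinking is both the input and the product
```

The difference is a single step, the human's revision, but it is the step on which the whole account of deutero-learning turns.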
Bateson emphasized that the quality of the learning depends on the quality of the relationship — not the relationship between the student and the AI, which is a tool-relationship, but the relationship between the student and the material, which is mediated by the AI. A tool can mediate well or badly. A well-designed learning environment structures the mediation to keep the student in contact with the material — to prevent the AI from becoming a screen between the student and the ideas the student is supposed to be grappling with. A badly designed learning environment allows the AI to substitute for the grappling — to provide answers that the student accepts without having engaged with the questions.
Bateson's 2018 Edge.org conversation addressed this directly, with a specificity that was remarkable for a statement made years before the generative AI explosion. She said: "One of the things that I wonder about is how we'll be able to teach a machine to know what it doesn't know that it might need to know in order to address a particular issue productively and insightfully." The observation cuts to the heart of the educational challenge. AI does not know what it does not know. It produces outputs with uniform confidence regardless of whether the underlying patterns are robust or extrapolated. A student working with such a system must supply the epistemological humility that the system lacks — must ask, of every output, not just "Is this correct?" but "What would I need to know to evaluate whether this is correct?"
That question — "What would I need to know to evaluate this?" — is a deutero-learning question. It is a question about the process of knowing rather than about any specific piece of knowledge. And it is the question that the AI partnership is uniquely positioned to cultivate, because the AI provides answers that demand evaluation, and the demand for evaluation is the engine of deutero-learning.
Bateson would have insisted that this cultivation cannot happen through curriculum alone. Deutero-learning is not a subject to be taught. It is a practice to be modeled, encouraged, and maintained through the design of learning environments that reward the right kinds of engagement. The teacher who grades students on the quality of their questions rather than the correctness of their answers is designing a learning environment for deutero-learning. The teacher who structures AI-assisted work around exploration rather than production is designing a learning environment for deutero-learning. The parent who responds to a child's question not with an answer but with a further question — "What do you think? What would you need to find out?" — is designing a learning environment for deutero-learning.
These are not new pedagogical ideas. They are ancient pedagogical ideas — as old as Socrates, who taught through questions rather than answers, who claimed to know nothing and meant it, who insisted that the unexamined life was not worth living. What is new is the urgency. In a world where AI provides answers to every question that can be specified, the capacity to ask questions — to formulate productive uncertainties, to identify the limits of the available knowledge, to notice the things that do not fit the framework — becomes the differentiating human capacity. And that capacity is deutero-learning, the learning that Bateson identified as the foundation of all subsequent learning.
Bateson was explicit about the institutional barriers to this kind of education. Schools are organized around proto-learning — around the transmission of specific knowledge, assessed through standardized tests, credentialed through degrees that certify the acquisition of a defined body of information. This organizational structure made sense in a world where specific knowledge was scarce and expensive to acquire. In a world where specific knowledge is abundant and cheap, the organizational structure is not just outdated. It is actively counterproductive — training students in the very habits (knowledge acquisition, procedural mastery, answer production) that AI performs better and cheaper than any human can.
The institutional redesign that Bateson's framework implies is radical. It does not mean adding AI to the existing curriculum. It means redesigning the curriculum around the kind of learning that AI cannot perform and that the AI moment makes most valuable. It means assessing students not on what they know but on how they engage with what they do not know. It means valuing confusion, uncertainty, and productive failure as evidence of learning rather than evidence of inadequacy. It means treating the teacher not as a transmitter of knowledge but as a designer of learning environments — environments that structure the student's engagement with the unknown in ways that develop the habits of attention, inquiry, and self-aware ignorance that constitute deutero-learning.
Bateson's concept of learning as a way of life also illuminates the adult dimension of the AI transition. The engineers in Trivandrum were not students. They were experienced professionals with decades of accumulated expertise. Their experience of the AI transition was not an educational experience in any conventional sense — it was an existential experience, a disruption of the compositional practice that had organized their working lives. But Bateson would have insisted that the experience was, at its deepest level, a learning experience — an opportunity to modify the habits of engagement that would shape their subsequent careers.
The modification required was specific: the shift from construction-based learning (building things and understanding them through the building) to evaluation-based learning (assessing things built by others and understanding them through the assessment). This shift is a change in deutero-learning — a change in the habits through which understanding is developed. It is not a lesser form of learning. It is a different form, and it produces a different kind of understanding: an understanding oriented toward judgment rather than production, toward evaluation rather than construction, toward the quality of the output rather than the process that produced it.
Whether this new form of learning produces depth comparable to the old form is genuinely uncertain. Bateson would not have pretended to know the answer. She would have insisted on observing — on watching how the new learning patterns develop, on documenting what they produce and what they fail to produce, on maintaining the anthropologist's fundamental commitment to understanding a phenomenon before judging it. The AI moment is too new, too fast-moving, too complex for premature conclusions. What is not premature is the recognition that the learning itself — the practice of adapting one's habits of engagement to new conditions — is the thing that must be protected, cultivated, and maintained through whatever structures the culture can build.
Learning is not a phase. It is a way of life. And the way of life that the AI moment demands is one in which the capacity to learn — to modify habits, to adapt engagement, to find new patterns of understanding in new materials — is recognized not as a skill to be acquired but as the fundamental practice on which all other capabilities depend.
---
Gregory Bateson used to tell a story about a man who kicked a stone and a man who kicked a dog. The stone, when kicked, moved in a direction and distance determined by the force and angle of the kick — a straightforward Newtonian transaction. The dog, when kicked, responded in a way determined not by the physics of the kick but by the dog's own internal organization — its temperament, its history with this particular human, its current state of hunger or fear or playfulness. The stone transaction was unilateral. Energy transferred from kicker to stone. The dog transaction was bilateral. The kick provided information. The dog provided the response.
Mary Catherine Bateson grew up inside this distinction, absorbing it the way a child absorbs a parent's accent — not through instruction but through immersion. Her father's stone-and-dog parable was, for her, not an illustration of a theoretical point but a description of the texture of daily life. Everything interesting in her world was dog-like rather than stone-like. Conversations, cultures, relationships, ecosystems, learning — all of these were bilateral processes, organized by the exchange of information between participants who contributed their own internal organization to the encounter. Nothing interesting was produced unilaterally. Everything interesting was collaborative, in the deep sense that its properties emerged from the interaction rather than from any single participant.
This understanding of creation as inherently collaborative has implications for the AI moment that run deeper than the surface debate about authorship and originality.
The anxiety about AI and creativity that pervades the cultural discourse typically assumes a unilateral model of creation. The artist has an idea. The artist realizes the idea through skill and effort. The result is the artist's creation — a product of the artist's mind, bearing the artist's signature, belonging to the artist in the way that a stone's trajectory belongs to the kicker's foot. AI threatens this model because it provides an alternative source of realization. If the machine can realize the idea, what is the artist's contribution? If the code writes itself, who is the coder?
Bateson's collaborative model dissolves this anxiety by dissolving its premise. Creation was never unilateral. The artist never kicked a stone. The artist always kicked a dog — always engaged with a medium that responded according to its own internal organization, always participated in a bilateral exchange in which the properties of the result emerged from the interaction rather than from the artist's intention alone. The painter's brush resists. The marble has a grain. The language has a syntax that pushes back against the writer's meaning, that bends the sentence in directions the writer did not intend, that sometimes produces felicities that the writer's conscious intention could not have generated. The medium is always a collaborator, and the result is always a joint production.
What AI changes is the sophistication of the collaborator, not the collaborative nature of the process. The brush is a simple collaborator — it resists, but its resistance is mechanical and predictable. Language is a more complex collaborator — it has historical depth, associative richness, syntactic constraints that interact with semantic intent in ways the writer cannot fully control. The AI is a still more complex collaborator — it brings to the bilateral exchange a vast internal organization trained on the full range of human textual production, an organization that responds to the human's input with outputs shaped by patterns the human cannot access independently.
The increase in the collaborator's complexity changes what the collaboration can produce. It does not change the fundamental nature of the collaboration. The author of *The Orange Pill* describes this with clarity when he writes about the moments when working with Claude produced insights that neither he nor the AI could have generated alone — connections between ideas, structural clarities, perspectives that emerged from the dialogue rather than from either side of it. These emergent properties are not new. They are the characteristic product of bilateral exchange. What is new is the bandwidth of the exchange — the range and richness of the collaborator's contributions, the speed at which the feedback loop operates, the density of the connections that the collaborator can draw.
Bateson studied bilateral exchange in the context that she knew best: the anthropological encounter. The anthropologist enters an unfamiliar culture. She brings her framework — her categories, her questions, her habits of observation. The culture responds with information that does not fit the framework. The anthropologist adjusts. The culture responds to the adjusted approach. The anthropologist adjusts again. The understanding that eventually emerges is not the anthropologist's understanding imposed on the culture, and it is not the culture's self-understanding transmitted to the anthropologist. It is a joint production — a pattern that exists in the space between them, generated by the interaction, belonging to neither alone.
This description of the anthropological encounter reads, almost word for word, as a description of the human-AI collaboration that *The Orange Pill* describes. The builder brings a framework — an intention, a set of questions, a half-formed idea. The AI responds with output that does not perfectly match the framework — that introduces connections the builder did not anticipate, that structures the idea in ways the builder did not intend, that sometimes distorts the idea in ways that reveal its hidden assumptions. The builder adjusts — refines the prompt, evaluates the output, feeds the evaluation back. The understanding that emerges belongs to the collaboration, not to either participant.
Bateson would have noted that the quality of the collaboration depends on the quality of the bilateral exchange — on whether both participants are genuinely contributing their internal organization to the interaction, or whether one participant is dominating and the other is merely executing. In a rich anthropological encounter, both the anthropologist and the informant are active contributors — both are sharing their frameworks, adjusting their responses, learning from the exchange. In a poor encounter, the anthropologist imposes her categories and the informant merely provides data. The result is a unilateral production dressed up as collaboration — a stone-kicking that pretends to be a dog-kicking.
The same distinction applies to the human-AI collaboration. In a rich collaboration, the human brings genuine intention — real questions, real uncertainty, real investment in the outcome — and the AI contributes its full internal organization — its capacity for pattern-finding, connection-drawing, structural analysis. The result is a joint production with properties that neither participant could have generated alone. In a poor collaboration, the human provides a perfunctory prompt and accepts the AI's output without genuine evaluation — without bringing her own internal organization to bear on the exchange. The result is a unilateral production by the AI, with the human serving as a passive receiver rather than an active collaborator.
Bateson's framework suggests that the quality of the human-AI collaboration is determined less by the sophistication of the AI than by the quality of engagement the human brings to the exchange. A sophisticated AI paired with a disengaged human produces sophisticated but ungrounded output — polished prose without genuine thought, elegant connections without real understanding. A sophisticated AI paired with an engaged human — one who brings real questions, evaluates output critically, feeds genuine judgment back into the loop — produces something that neither could produce alone.
This is the point at which Bateson's collaborative model connects to her concept of peripheral vision. The human's most important contribution to the collaboration is not the focused content of the prompt — the specific question, the explicit instruction. It is the peripheral awareness that the human brings to the evaluation of the output — the felt sense of whether the output captures the real intention or merely its surface expression, the vague unease that signals something important has been missed, the recognition that the AI's elegant connection is actually a distortion that reveals an assumption the human had not examined.
Peripheral awareness is the human's unique contribution to the bilateral exchange. The AI contributes comprehensive pattern-matching. The human contributes embodied, biographical, idiosyncratic sensitivity. The collaboration is richest when both contributions are fully active — when the AI's comprehensive scope is paired with the human's embodied discrimination, when the machine's reach is guided by the organism's judgment.
Bateson was also attentive to what happens when bilateral exchange breaks down — when one participant stops contributing and begins merely accepting. In anthropological fieldwork, this breakdown produces bad ethnography: the anthropologist stops listening and starts confirming her hypotheses, stops allowing the culture to challenge her categories and starts fitting the culture into them. In the human-AI collaboration, the same breakdown produces what the author of *The Orange Pill* describes as the seduction of the smooth — the moment when the human stops evaluating and starts accepting, when the polish of the output substitutes for the effort of judgment, when the collaboration degenerates from bilateral exchange into unilateral consumption.
Bateson would have recognized this degeneration as a failure not of the AI but of the collaboration — a breakdown in the bilateral structure that gave the exchange its generative power. The remedy is not better AI. The remedy is more engaged humans — humans who bring to the collaboration the full weight of their internal organization, their questions, their uncertainties, their embodied sense of what matters.
All creation is collaborative. The brush collaborates with the painter. The language collaborates with the writer. The culture collaborates with the anthropologist. And the AI collaborates with the builder. The question in every case is not whether the collaboration is real — it is always real, because every creative exchange is bilateral — but whether the collaboration is rich: whether both participants are fully contributing, fully responding, fully engaged in the exchange from which something genuinely new can emerge.
---
There is a moment in every significant inquiry when the ground gives way. The evidence points in two directions. The frameworks contradict each other. The data says one thing and the gut says another. The researcher stands at a fork and cannot determine which path leads forward, because each presents a credible case.
Most people experience this moment as a problem to be solved — an ambiguity to be resolved as quickly as possible so that work can proceed. The itch for resolution is almost physical. The mind does not like holding contradictory possibilities in suspension. It wants to choose, to commit, to collapse the ambiguity into a clear direction and move on.
Mary Catherine Bateson spent her career arguing that this itch is not a feature of good thinking. It is a hazard. The urge to resolve ambiguity prematurely — to choose a direction before the ambiguity has been fully explored, to collapse contradictory possibilities into a clean narrative before the contradiction has yielded its insights — is one of the most common and most destructive habits of Western intellectual culture. The person who resolves ambiguity too quickly gets a clear direction. She also loses the specific kind of understanding that only ambiguity can produce.
Bateson's argument is not that ambiguity is pleasant or that uncertainty is comfortable. Ambiguity is uncomfortable. That is its value. The discomfort of holding contradictory possibilities in suspension produces a cognitive state that comfortable clarity cannot produce — a state of heightened attention, of active searching, of openness to patterns that the premature resolution would have foreclosed. The person sitting in ambiguity is working harder than the person who has resolved the ambiguity, not because sitting is harder than choosing but because the cognitive work of holding multiple possibilities active simultaneously demands more of the mind than the relatively simple operation of selecting one possibility and suppressing the rest.
The AI discourse is a study in premature resolution. The triumphalists have resolved the ambiguity: AI is an expansion of human capability, and the appropriate response is enthusiastic adoption. The elegists have resolved it differently: AI is a degradation of human depth, and the appropriate response is resistance or mourning. Both resolutions are clean. Both provide clear direction. Both suppress the specific insights that the ambiguity, held open, would produce.
The ambiguity that the AI moment presents is genuine and irreducible. The tools are simultaneously an expansion and a risk. The productivity gains are real and the intensification documented by the Berkeley researchers is also real. The democratization of capability is happening and the erosion of depth is also happening. The builder who cannot stop working at three in the morning is experiencing something that is simultaneously flow and compulsion, and the inability to determine which it is at any given moment is not a failure of self-knowledge. It is an accurate registration of a genuinely ambiguous situation.
Bateson would have insisted that this ambiguity is a resource, not a problem. The ambiguity tells you something that the resolution would suppress: it tells you that the situation is more complex than any single framework can capture. The person who can sit with the ambiguity — who can hold both the excitement and the terror, both the expansion and the loss, both the flow and the compulsion without collapsing into either — is in a position to see the situation more fully than either the triumphalist or the elegist. She is paying the price of discomfort for the reward of comprehension.
The capacity to sustain ambiguity is not equally distributed. Bateson observed that certain biographical experiences cultivate it and certain others erode it. The women she studied who had navigated multiple discontinuities — who had been forced to hold contradictory possibilities about their own lives, who had lived through periods when they could not determine whether a disruption was a catastrophe or an opportunity — had developed a higher tolerance for ambiguity than their more linearly successful colleagues. The discontinuity had trained them, not through instruction but through experience, to sit with uncertainty long enough for the uncertainty to yield its specific insights.
This observation connects directly to the AI moment's practical demands. The builder who can sustain ambiguity about the AI's role — who can use the tool enthusiastically while also questioning its effects honestly, who can celebrate the productivity gains while also measuring the costs, who can be a collaborator and a critic simultaneously — is the builder who will navigate the transition most effectively. Not because ambiguity feels good — it does not — but because the ambiguity is the accurate map. The clean resolution, whichever direction it goes, is the simplification that misses the terrain.
AI systems themselves have no tolerance for ambiguity. A large language model does not sit with contradiction. It resolves it — immediately, confidently, often incorrectly. When asked a question that admits multiple valid answers, the model produces one answer, typically the most statistically probable one, without signaling that alternatives exist. When presented with evidence that points in two directions, the model synthesizes a coherent narrative that suppresses the contradiction rather than illuminating it. The model's design optimizes for resolution. Ambiguity is, from the model's perspective, noise to be eliminated rather than signal to be preserved.
This design characteristic has implications for the human-AI collaboration that Bateson's framework makes visible. The human who relies on the AI for resolution of ambiguous questions will receive resolutions that are confident, coherent, and systematically biased toward premature closure. The AI will not say, "This question admits multiple valid interpretations, and the tension between them is where the insight lives." It will say, "Here is the answer," and the answer will suppress the tension that makes the question interesting.
The human's role in the collaboration, in Bateson's framework, is to maintain the ambiguity that the AI's design eliminates — to notice when the AI has resolved a contradiction that should remain open, to reintroduce the suppressed possibilities, to insist on the discomfort that the AI's fluent coherence has smoothed away. This is not a natural role. The AI's confident resolution is easier to accept than the messy ambiguity it replaces. The polish of the output invites acceptance. The comprehensiveness of the synthesis suggests that the question has been adequately addressed. The human must actively resist these invitations — must maintain, through deliberate effort, the cognitive state of productive uncertainty that the AI's design works against.
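What that deliberate effort might look like can be sketched at the level of the prompt itself. The wording below is illustrative only, one hedged way of asking the machine to surface the alternatives its fluency would otherwise smooth away; `ask_model` is again a hypothetical stand-in rather than any real API.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an AI assistant."""
    raise NotImplementedError("wire this to a real model")


AMBIGUITY_PROMPT = """\
Question: {question}

Do not give a single answer. Instead:
1. List the distinct, defensible interpretations of this question.
2. For each, give the strongest case in its favor.
3. Name what evidence or assumption would decide between them.
4. State explicitly what you do not know.
"""


def hold_open(question: str) -> str:
    # Reintroduce the suppressed possibilities rather than accept
    # the single confident synthesis the model defaults to.
    return ask_model(AMBIGUITY_PROMPT.format(question=question))
```

No prompt makes the model wise. The enumeration it returns still requires a human to feel which tension matters, but the structure at least holds the question open long enough for that judgment to occur.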
Bateson would have connected this to her broader argument about the relationship between comfort and learning. Genuine learning, in her observation, almost always involves discomfort — the discomfort of encountering something that does not fit one's categories, the discomfort of holding contradictory possibilities in suspension, the discomfort of not knowing. A learning environment that eliminates discomfort eliminates the conditions for learning. An AI tool that resolves every ambiguity eliminates the conditions for the specific kind of learning that ambiguity produces.
This does not mean that AI should be designed to be deliberately ambiguous or unhelpful. It means that the human using the AI must understand that the tool's strengths — fluency, coherence, confident synthesis — are also its limitations, and that the limitations are precisely in the dimension that matters most for the kind of thinking the AI moment demands. The tool resolves. The human must wonder. The tool synthesizes. The human must question. The tool provides comfort. The human must maintain discomfort — not for its own sake, but because the discomfort is where the understanding lives.
Bateson would also have noted the generational dimension of this challenge. Children growing up with AI tools are developing their tolerance for ambiguity in an environment that systematically reduces ambiguity. Every question receives an immediate, confident answer. Every contradiction is smoothly resolved. Every uncertainty is replaced by a plausible synthesis. The cognitive muscle that allows a person to sit with not-knowing — to hold open a question long enough for the question to generate genuine inquiry — is not being exercised. It is atrophying in an environment that treats every ambiguity as a problem to be solved rather than a resource to be explored.
The educational implications are specific. The teacher who uses AI in the classroom must design the use to preserve ambiguity — to structure assignments around questions that the AI cannot cleanly resolve, to reward students for identifying the tensions that the AI's synthesis has suppressed, to create spaces where not-knowing is valued more highly than knowing. This is pedagogically demanding. It requires the teacher to be comfortable with ambiguity herself — to model the cognitive state she is trying to cultivate, to demonstrate through her own practice that sitting with uncertainty is not a failure of competence but an exercise of wisdom.
Bateson used a word for the capacity to sustain productive ambiguity that she inherited from her parents and inflected with her own meaning: wisdom. In her 2018 Edge.org conversation, she said directly that AI "lacks wisdom, because wisdom is more multi-dimensional" than the kind of intelligence AI possesses. Wisdom, in Bateson's usage, is not the accumulation of knowledge. It is the capacity to engage with what one does not know — to hold open the questions that do not have clean answers, to sustain the tensions that resolution would destroy, to live productively in the gap between what is known and what needs to be known.
Wisdom is what remains when the specific knowledge has been automated. It is the capacity that the twelve-year-old who asks "What am I for?" already possesses, because the question itself — open, irreducible, resistant to clean resolution — is an exercise of wisdom. The machine will never ask that question, because the question is a product of ambiguity, and the machine is designed to eliminate ambiguity. The human asks it because she lives in ambiguity — because her situation is genuinely uncertain, her future genuinely unknown, her identity genuinely in process.
The ambiguity is the resource. The discomfort is the signal that understanding is forming. The resolution, when it comes too quickly, is the loss.
---
The most important thinking Mary Catherine Bateson ever witnessed did not happen at a conference, a university seminar, or a research institute. It happened at the kitchen table, in the unremarkable domestic space where her parents talked, argued, drew diagrams on napkins, and worked through problems that would later appear in published form as if they had emerged from systematic research programs.
The kitchen table was not a metaphor for Bateson. It was a specific site of intellectual production with specific properties that formal institutions could not replicate. The kitchen table was informal — no one was performing for an audience. It was intimate — the participants knew each other well enough to be wrong in front of each other without cost. It was interrupted — by children, by meals, by the telephone, by the ordinary demands of domestic life that prevented any single train of thought from proceeding in an unbroken line. And it was, precisely because of these properties, more generative than the formal settings where the same people presented polished, defended, complete ideas to audiences of peers.
The informality mattered because it removed the performance pressure that shapes formal intellectual exchange. At a conference, the speaker defends a position. At the kitchen table, the thinker explores one. The difference is not trivial. Defense is a mode of certainty — you have arrived at a conclusion and you are protecting it from challenge. Exploration is a mode of uncertainty — you have noticed something interesting and you are following it, not yet sure where it leads, willing to be redirected by a question or an objection that you did not anticipate.
The intimacy mattered because it created the conditions for what Bateson called joint thinking — the specific kind of intellectual collaboration that requires vulnerability. Joint thinking is not debate. Debate is adversarial — two positions contesting for dominance. Joint thinking is cooperative — two minds contributing to a shared inquiry, each willing to modify their contribution in response to the other's. Joint thinking requires trust, because it requires each participant to expose their unfinished thoughts — the half-formed ideas, the intuitions that have not yet been tested, the hunches that might be wrong — to the other's examination. This exposure is risky in formal settings, where unfinished thoughts are treated as evidence of inadequate preparation. It is natural at the kitchen table, where everyone understands that the thinking is in process and that process includes uncertainty.
The interruptions mattered most of all, and in the way that Bateson's compositional framework would predict. The interruptions prevented linear development of ideas. They forced the thinkers to drop a train of thought, attend to something else, and then return to the thought from a different angle. This forced return — the resumption of an interrupted inquiry after the mind has been elsewhere — is one of the most productive cognitive operations available. The thought you return to is not the thought you left. The interruption has changed the context. The mind has processed the abandoned idea peripherally, in the background, while attending to the interrupting demand. When the thought is resumed, it is resumed with whatever the peripheral processing has added — new connections, new questions, new angles that the uninterrupted pursuit of the original line would not have generated.
Bateson recognized in this pattern the compositional structure she had identified in the lives of the women she studied. The interrupted career was the interrupted conversation writ large. The woman who left her research for five years of childrearing and returned with new capabilities was the thinker who left a problem for ten minutes of child-management and returned with a new perspective. The interruption was not a loss of productivity. It was a compositional operation — a forced change of context that enriched the subsequent engagement with the original problem.
The description of the human-AI collaboration in *The Orange Pill* has the quality of a kitchen-table conversation. The exchanges are informal — the human describes a problem in natural language, without the formality of a specification document or the precision of a programming language. The contributions are undefended — the human offers half-formed ideas, and the AI responds with suggestions that may or may not be right, and the human evaluates and redirects without either participant needing to protect a position. The thinking builds through iteration rather than through linear development — each exchange modifies the previous one, and the trajectory of the collaboration is not predetermined but emergent, shaped by whatever the successive exchanges produce.
But there is a critical property of the kitchen table that the AI collaboration lacks, and its absence matters more than the surface similarities suggest. The kitchen table had interruptions. The AI collaboration, as described throughout *The Orange Pill* and corroborated by the Berkeley research, tends toward the opposite: continuous, uninterrupted engagement that extends from morning to night, that colonizes lunch breaks and elevator rides, that fills every gap with another prompt, another response, another cycle of the feedback loop.
Bateson would have identified this continuity as a structural problem — not because continuous work is inherently pathological, but because the interruptions that the kitchen table provided were not obstacles to good thinking. They were components of good thinking. The interruption forced the peripheral processing that enriched the subsequent engagement. The interruption created the space in which the mind could wander, could notice connections at the edges of awareness, could process the abandoned thought in the background while attending to the foreground demand. Without the interruptions, the thinking proceeds in an unbroken line — efficient, rapid, productive, and deprived of the specific enrichment that only discontinuity provides.
This is a more precise version of the concern that the philosopher Han raises about the elimination of friction. Han argues that the removal of friction destroys depth. Bateson's framework suggests something more specific: the removal of interruption destroys peripheral processing, and peripheral processing is where the most important creative insights originate. The continuous, uninterrupted AI collaboration is not just intense — it is peripherally impoverished. The mind that is always engaged with the prompt-response cycle is a mind that never has the chance to process the abandoned thought in the background, to wander into unexpected territory, to notice the connection forming at the edge of awareness while the center of attention is occupied elsewhere.
The practical implication is architectural. The design of the human-AI workflow should include interruptions — not as concessions to human weakness, but as structural features that enable the kind of cognitive processing that continuous engagement prevents. The Berkeley researchers proposed something similar with their concept of "AI Practice" — structured pauses, sequenced rather than parallel work, protected time for human-only engagement. Bateson's framework provides the theoretical justification for these proposals: the pauses are not rest. They are processing. They are the spaces in which the peripheral mind does its most important work — the work of noticing, connecting, reframing, and composing that the focused mind, locked in the continuous cycle of prompt and response, cannot do.
Bateson would have extended this insight to organizational design. The most creative organizations, in her observation, were the ones that structured their work to include productive interruption — cross-functional conversations, informal encounters between people working on different problems, the kind of incidental contact that a well-designed physical space promotes and that remote work eliminates. These interruptions were not inefficiencies. They were the organizational equivalent of the kitchen table's domestic demands — forced changes of context that enriched the participants' subsequent engagement with their primary work by introducing peripheral information from other domains.
The AI-augmented organization risks eliminating these productive interruptions in pursuit of efficiency. If every worker is engaged in continuous collaboration with an AI tool, the incidental encounters that produce cross-pollination — the hallway conversation, the overheard remark, the coffee-line exchange that introduces a perspective from another domain — are replaced by the sealed loop of the individual-AI partnership. Each worker becomes more productive in isolation. The organization becomes less creative as a whole, because the peripheral channels through which ideas move between domains have been closed.
Bateson's kitchen-table model suggests a specific counter-design: workplaces structured to produce the kind of informal, interrupted, cross-domain exchange that the kitchen table provided. Not as an alternative to AI collaboration, but as a complement — a parallel structure that maintains the peripheral channels while the focused collaboration proceeds. The worker who spends three hours in deep AI-assisted work and then twenty minutes in an unstructured conversation with a colleague from a different team has not wasted twenty minutes. She has shifted contexts in a way that will enrich the next three hours of focused work — that will introduce the peripheral connections, the unexpected angles, the cross-domain patterns that the sealed loop of the AI collaboration cannot provide.
The kitchen table was not efficient. It was generative. The distinction matters, because the AI moment is producing a culture that increasingly confuses the two — that measures the quality of thinking by its speed and volume rather than by its depth and breadth. Bateson's kitchen table was slow. The conversations meandered. The interruptions were frequent. The thinking proceeded not in a straight line but in spirals, returning to abandoned ideas from new angles, incorporating the peripheral processing that the interruptions had enabled.
The thinking was also, by Bateson's testimony and the evidence of the ideas it produced, extraordinarily good. Good enough to generate the conceptual frameworks — ecology of mind, deutero-learning, the pattern that connects — that remain the most illuminating tools available for understanding the AI moment itself.
The lesson is not to replicate the kitchen table — that specific configuration of people, habits, and domestic arrangements cannot be reproduced. The lesson is to understand what the kitchen table provided — informality, intimacy, interruption, peripheral processing — and to ensure that these properties are preserved in the design of the environments where thinking happens, even as those environments incorporate tools of unprecedented power and speed.
The fastest thinker in the world, working without interruption, in a sealed loop with the most powerful AI in the world, will produce impressive output. What she will not produce is the specific kind of insight that requires the mind to be elsewhere for a while — to wander, to be interrupted, to process in the background the thoughts that the foreground cannot accommodate.
The kitchen table knew this. The algorithm does not.
---

Two musicians who have never met sit down to play together. The first plays a phrase. The second responds — not with a predetermined answer, but with something shaped by what she heard, inflected by her own musical history, offered back into the space between them as both response and invitation. The first musician hears the response, and his next phrase is different from what it would have been had the response been different. The music that emerges belongs to neither player. It belongs to the exchange.
Mary Catherine Bateson studied joint performance across cultures — the Balinese gamelan in which dozens of musicians coordinate without a conductor, the mother-infant interactions in which two organisms who do not share a language develop an intricate system of mutual cues, the conversations between anthropologist and informant in which understanding is built not through transmission but through a gradually tightening spiral of question, response, adjustment, and re-question. In every case, she found the same structural principle: the quality of the joint performance depends not on the individual skill of either participant but on the quality of the mutual adaptation between them.
Mutual adaptation is a specific kind of responsiveness. It is not imitation — the second musician does not simply copy the first. It is not opposition — the second musician does not contradict the first. It is complementary adjustment — each participant modifying their contribution in response to the other's, producing a joint output that reflects both contributions without being reducible to either. The mother adjusts her vocalizations in response to the infant's sounds. The infant adjusts its sounds in response to the mother's adjustments. The resulting "conversation" has a structure that neither participant designed — a rhythm, a turn-taking pattern, a gradual escalation and de-escalation of intensity that emerges from the mutual adaptation rather than from any plan.
The human-AI collaboration operates through a form of mutual adaptation that is structurally similar to these examples and substantively different in ways that matter enormously. The structural similarity is clear: the human contributes a prompt shaped by intention, the AI responds with output shaped by its training, the human evaluates the output and adjusts the next prompt in response, and the cycle repeats with each iteration producing a tighter fit between the human's intention and the AI's output. The collaboration has the turn-taking structure, the progressive refinement, and the emergent coherence that characterize all forms of joint performance.
The substantive difference is in the nature of the adaptation. In human joint performance, both participants adapt. The mother adjusts to the infant and the infant adjusts to the mother. Both organisms are modified by the exchange. Both carry forward, into the next cycle of the interaction, the effects of what happened in the previous cycle. The adaptation is mutual in the deepest sense: both participants are changed by the interaction, and the changes persist beyond the immediate exchange.
In the human-AI collaboration, the adaptation is asymmetric. The human adapts — she modifies her prompts, her expectations, her cognitive habits in response to what the AI provides. The AI, within a single conversation, adjusts its outputs in response to the conversational context. But the AI's adjustment is not adaptation in Bateson's sense. It does not carry forward the learning from this conversation into future conversations with other users. It does not develop a relationship with this particular human that deepens over time. It does not accumulate the history of mutual exchanges that, in human joint performance, produces the specific quality of understanding that comes from having worked together long enough to anticipate each other's contributions.
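The asymmetry is visible in the plumbing. As a general pattern (a sketch of how chat-style interfaces commonly work, not a description of any particular vendor's API), the model holds no memory between calls: the caller resends the entire history on every turn, and when the history is discarded, the relationship, from the model's side, never happened.

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def reply(history: List[Message]) -> Message:
    """Hypothetical stand-in for one model call. A real chat-style API
    receives the full message history with every single request."""
    raise NotImplementedError("wire this to a real model")


# Everything the model "knows" about this partnership is whatever the
# caller chooses to put back into `history` on each turn.
history: List[Message] = [
    {"role": "user", "content": "Let's pick up where we left off."}
]
# There is no "where we left off" unless the caller supplies it, and
# when this list is discarded, the accumulated exchange goes with it.
```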
Bateson would have identified this asymmetry as the most important structural feature of the human-AI collaboration — more important than the AI's capability, more important than its limitations, more important than the question of whether it is conscious or intelligent. The asymmetry means that the human bears a disproportionate share of the adaptive burden. The human must adapt to the AI's patterns, learn its characteristic strengths and weaknesses, develop strategies for eliciting its best contributions and compensating for its worst. The AI does not reciprocate this adaptation. It does not learn this particular human's patterns, strengths, and weaknesses. It does not develop strategies for eliciting this human's best contributions.
The consequence is that the human-AI collaboration, however productive, lacks the deepening quality that characterizes the best human joint performances. A jazz duo that has played together for twenty years develops a sensitivity to each other's musical intentions that approaches telepathy — each musician anticipates the other's next move, not through analysis but through the accumulated history of mutual adaptation. A research partnership that has endured for a decade develops a shared vocabulary, a set of intellectual shorthand, a capacity for joint thinking that neither partner could replicate with a new collaborator. These deepening relationships are products of sustained mutual adaptation — of two organisms modifying each other over time, building a shared history that enriches each subsequent exchange.
The human-AI collaboration does not deepen in this way. Each conversation may be rich. The human may develop increasing skill in working with the tool. But the tool does not develop increasing skill in working with this human. The relationship is, from the AI's side, perpetually new — perpetually beginning, perpetually without the accumulated history that gives human joint performances their specific quality of intimacy and mutual understanding.
Bateson would not have presented this asymmetry as a deficiency to be corrected. She would have presented it as a feature to be understood — a structural characteristic of the collaboration that shapes what the collaboration can and cannot produce. The human-AI collaboration can produce rapid, high-bandwidth joint output. It can produce connections between ideas that neither participant would have generated alone. It can produce the emergent insights that *The Orange Pill* describes — the moments when the circuit generates something that surprises both participants.
What it cannot produce is the specific quality of mutual understanding that comes from sustained bilateral adaptation — the quality that makes a long-term research partnership or a twenty-year musical collaboration qualitatively different from a first meeting between skilled strangers. That quality depends on both participants being changed by the exchange, and on the changes accumulating over time into a shared cognitive architecture that neither participant possesses independently.
This limitation has practical consequences for how the AI collaboration should be situated within a larger ecology of intellectual work. The collaboration provides breadth, speed, and associative range. It does not provide the depth that comes from sustained mutual adaptation. The human who relies exclusively on AI collaboration — who replaces human intellectual partnerships with AI partnerships — gains breadth at the cost of depth. The human who maintains human partnerships alongside the AI collaboration — who continues to develop the long-term, mutually adaptive relationships that produce the deepest forms of joint understanding — gains both.
Bateson's framework suggests that the optimal intellectual ecology includes both kinds of partnership: the AI collaboration for breadth and the human collaboration for depth. The AI partnership is like the encounter with a brilliant stranger — stimulating, surprising, capable of producing insights that the participants' established patterns would not have generated. The human partnership is like the long marriage — familiar, sometimes frustrating, capable of producing understanding that only accumulated mutual adaptation can build.
Neither kind of partnership is sufficient alone. The builder who works only with AI lacks the deepening mutual adaptation that produces the most profound forms of joint understanding. The builder who works only with humans lacks the breadth and associative range that AI provides. The composition of an intellectual life, in Bateson's framework, requires both — requires the strategic management of different kinds of partnership for different kinds of cognitive work, the same way the composition of a life requires the strategic management of different roles, relationships, and commitments.
The asymmetry also illuminates something about the nature of the AI's contribution that is easy to miss. Because the AI does not adapt in the mutual sense — does not carry forward the relationship, does not develop a deepening understanding of this particular human — its contributions have a specific quality that Bateson would have called generic. The AI responds to the prompt with patterns drawn from its training — patterns that are extraordinarily comprehensive but that are not calibrated to this particular human's specific intellectual history, emotional register, or characteristic blindnesses. The response is good. It is sometimes brilliant. But it is not addressed — not shaped by an understanding of this particular person that has developed through sustained interaction.
The human partner who has known you for a decade says things to you that are addressed. Her challenge to your argument is calibrated by her knowledge of your particular tendency to avoid certain conclusions. Her encouragement is calibrated by her knowledge of your particular vulnerability to certain kinds of doubt. Her contributions to the joint thinking are shaped not just by the content of the current exchange but by a deep, accumulated understanding of who you are as a thinker — an understanding that makes her contributions qualitatively different from the contributions of a stranger, however brilliant.
The AI is always a stranger. A brilliant, responsive, extraordinarily well-informed stranger — but a stranger nonetheless. The collaboration with a stranger can be thrilling. It can produce insights that the familiar partnership cannot, precisely because the stranger is not calibrated to your patterns and therefore does not reinforce them. But the collaboration with a stranger cannot produce the specific quality of addressed understanding that comes from mutual adaptation over time.
The fully composed intellectual life includes both the stranger and the intimate — both the AI collaboration that provides fresh perspective and the human collaboration that provides addressed understanding. The person who sacrifices the human partnerships for the efficiency and breadth of the AI partnership has made a compositional choice with consequences that may not be apparent until the absence of addressed understanding becomes felt — until the builder realizes that the brilliant stranger, however capable, does not know her the way a long-term partner does, and that the knowing matters for kinds of work that brilliance alone cannot accomplish.
Bateson would have said that the knowing is not a luxury. It is a structural component of the deepest forms of joint performance — the forms that produce not just insight but wisdom, not just output but understanding, not just answers but the kind of questions that only addressed knowledge of another mind can generate.
The duet that deepens over twenty years produces music that the first rehearsal cannot. The AI collaboration is always a first rehearsal — brilliant, surprising, full of potential. The question is whether the musicians will also invest in the long partnerships that produce the music only decades of mutual adaptation can make.
---
In 2018, three years before her death and four years before ChatGPT reached one hundred million users in two months, Mary Catherine Bateson offered what may be her most consequential observation about artificial intelligence. "The tragedy of the cybernetic revolution," she said, "which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in."
The observation locates the AI moment within a larger story — a story not about technology but about a civilizational choice between two ways of understanding complexity. One way, the computer science way, treats complexity as a problem to be solved through computation — more processing power, more data, more sophisticated algorithms. The other way, the systems theory way, treats complexity as a condition to be inhabited through understanding — through the recognition that the systems we live in are more complex than any model can capture, that our interventions in those systems produce consequences we cannot predict, and that the appropriate relationship to complexity is not mastery but stewardship.
Bateson's mother and father were both present at the birth of this choice. The Macy Conferences on Cybernetics, held between 1946 and 1953, brought together the people who would develop both paths — the computer scientists who would build the machines and the systems theorists who would develop the frameworks for understanding what the machines were doing to the systems they were embedded in. For a brief historical moment, the two paths were understood as complementary — as two dimensions of a single inquiry into the nature of information, feedback, and self-organizing systems.
Then the paths diverged. The computer science path produced products. Products attracted capital. Capital accelerated development. The systems theory path produced understanding. Understanding did not attract capital. Understanding was not marketable. The divergence widened with each decade until, by the time Bateson made her observation in 2018, the two paths had become two cultures — the builders and the thinkers, the people who made the machines and the people who worried about what the machines were doing — with almost no productive contact between them.
The AI moment described in *The Orange Pill* is a product of this divergence. The tools are extraordinary — Claude Code crossing a capability threshold, engineers achieving twenty-fold productivity gains, the imagination-to-artifact ratio collapsing to the width of a conversation. These are triumphs of the computer science path. They represent decades of accumulated engineering achievement, the compound interest of sustained investment in computational power, data infrastructure, and algorithmic sophistication.
But the understanding that would allow these tools to be used wisely — the systems-level comprehension of what happens when cognitive circuits expand to include AI, what happens to the ecology of ideas when a hyper-productive species enters the ecosystem, what happens to the deutero-learning of children growing up in environments saturated with confident, polished, peripherally impoverished AI output — this understanding has not kept pace. The tools have arrived. The wisdom to use them has not.
This gap is not an accident. It is the predictable consequence of the choice Bateson identified — the choice to invest in gadgets rather than understanding, in capability rather than comprehension, in the question "What can we build?" rather than the question "What are we building doing to us?"
The gap cannot be closed by building more gadgets. More powerful AI will not produce the understanding that the AI moment demands. The understanding must come from a different kind of inquiry — from the systems-level thinking that Bateson's parents practiced and that Bateson herself extended into the domains of learning, culture, and the composition of lives. The inquiry must ask not just "What can the tool do?" but "What does the tool do to the system that includes it?" Not just "How productive is the partnership?" but "What kind of deutero-learning is the partnership producing?" Not just "How fast is the output?" but "What is the output doing to the ecology of ideas, the quality of attention, the capacity for peripheral vision, the tolerance for ambiguity?"
These are systems questions. They require systems thinking. And systems thinking is, as Bateson observed, the neglected half of the cybernetic inheritance — the half that was sacrificed to the market's preference for products over understanding.
The gap shows in the response to every finding that complicates the triumphalist narrative. When the Berkeley researchers document that AI intensifies work rather than reducing it, the engineering response is to build better tools — tools that manage attention, that structure workflow, that prevent the task seepage the researchers documented. This response addresses the symptom with more technology. It does not address the system — does not ask why the feedback dynamics of the human-AI loop produce intensification in the first place, does not examine the cultural, economic, and psychological forces that convert productivity gains into additional labor rather than additional freedom.
The system-level analysis reveals that the intensification is not a bug to be fixed but an emergent property of a system in which productivity is the primary measure of value. As long as the culture measures worth by output, every tool that increases productivity will increase the demand for output, and the demand will consume whatever time the productivity gains freed. The tool is not the problem. The system is the problem. And you cannot fix a system-level problem with a component-level solution.
Bateson would have connected this to her framework of composing a life. The women she studied who composed most successfully were the ones who managed their systems — who understood that the quality of a life depends not on the volume of its output but on the pattern of its engagements, who made deliberate choices about what to include and what to exclude, who composed with an awareness of the whole rather than optimizing any single dimension. The women who composed least successfully were the ones who optimized — who maximized output in one dimension (career achievement, domestic management, social performance) at the expense of the pattern that connected all dimensions into a coherent life.
The AI moment demands composing, not optimizing. The tools make optimization seductive — they make it possible to maximize output along any dimension with unprecedented efficiency. But optimization along a single dimension produces the same pathology that Bateson observed in the lives she studied: brilliance in one area and impoverishment everywhere else, extraordinary productivity and atrophied capacity for the peripheral, the ambiguous, the interrupted, the collaborative, the slow.
The composition of a life with AI tools requires the same skills that the composition of a life without them required — peripheral vision, tolerance for ambiguity, the capacity for continuity through discontinuity, the practice of learning as a way of life. The tools are new. The compositional challenge is ancient. And the quality of the composition depends, as it has always depended, on the composer's capacity to work with the whole — to attend not just to the output but to the pattern that connects the output to everything else that makes a life worth living.
Bateson's observation about the tragedy of the cybernetic revolution is not a critique of technology. It is a critique of a civilizational choice — the choice to develop capability without developing the understanding that would allow capability to be used wisely. The AI moment is the consequence of that choice. The tools are here. The understanding is not. And the question of whether the tools will produce expansion or degradation depends on whether the understanding can be developed fast enough to guide the tools' deployment.
The understanding cannot be developed by the tools themselves. AI can process information, detect patterns, generate syntheses. It cannot provide the systems-level comprehension that Bateson called wisdom — the multi-dimensional, embodied, relationally embedded understanding of how systems work and what they need to remain viable. Wisdom is a property of organisms that have lived in systems, that have been shaped by the consequences of their actions, that have developed, through decades of compositional practice, the peripheral vision to notice what optimization misses and the courage to attend to it.
The twelve-year-old who asks "What am I for?" is asking a wisdom question. The senior architect who feels the loss of his craft is experiencing a wisdom problem. The parent who lies awake wondering whether the world she is bequeathing to her children will allow them to flourish is confronting a wisdom challenge. None of these questions can be answered by more computation. They can only be answered by the kind of multi-dimensional, embodied, relationally embedded thinking that Bateson spent her career studying and practicing.
Bateson said that AI "lacks humility, lacks imagination, and lacks humor." The observation is not a criticism of the technology. It is a description of what is missing — and what must be supplied by the humans who use it. Humility is the recognition that you do not know what you do not know. Imagination is the capacity to envision what does not yet exist. Humor is the perception of incongruity — the recognition that things do not quite fit, that the pattern contains a gap, that the official story is not the whole story. These are not computational capacities. They are compositional capacities — products of lives lived in complexity, shaped by discontinuity, enriched by the peripheral and the ambiguous.
The composition continues. The materials are new. The challenge is old. And the quality of what we compose with these extraordinary tools will depend, as it has always depended, on the wisdom we bring to the composing — the wisdom that no machine can provide and that no civilization can afford to neglect.
---
My mother called me last week. She does this every Sunday, and the conversations follow a pattern — her health, my kids, the weather, a memory from decades ago that surfaces without obvious cause. Somewhere in the middle of this particular call, she asked me what I was working on. I told her I was reading Mary Catherine Bateson.
She had never heard of her. I tried to explain — the daughter of Margaret Mead and Gregory Bateson, the anthropologist who studied how people compose their lives from interruption and accident — and I could hear myself rushing through it, compressing a framework that resists compression, trying to deliver the insight before the conversation moved on.
My mother said: "That sounds like what your grandmother did."
She was right. My grandmother composed a life from materials no one would have chosen — displacement, loss, a new country with a new language, children to raise without the structures she had grown up inside. She did not plan a life. She improvised one. The continuity was not in any particular skill or role. The continuity was in how she attended to whatever was in front of her — with a quality of engagement that I recognized, retroactively, as the thing Bateson spent her career describing.
What stayed with me from this cycle was not any single concept, though the concepts are powerful. Not peripheral vision, though the idea that the most important discoveries happen at the edges of awareness changed how I think about working with Claude. Not deutero-learning, though the distinction between learning specific things and learning how to learn reframed everything I believe about education. Not the kitchen table, though the argument that interruption is a component of good thinking rather than an obstacle to it hit me with the force of personal recognition.
What stayed with me was the word "compose."
We are all composing. Right now, in this historical moment, with these specific materials — tools of unprecedented power, economic pressures of unprecedented intensity, children watching us for cues about what kind of life is worth building. We are composing our response to a disruption that no one planned for and that no one can control. The composition is happening whether we attend to it or not. The only question is whether we compose with awareness or stumble through by accident.
Bateson's framework gave me language for something I had been feeling since the orange pill moment but could not articulate. The feeling was that the AI transition is not primarily a technology problem. It is a compositional problem. The technology is one set of materials. The human capacities — peripheral vision, tolerance for ambiguity, continuity through discontinuity, the collaborative nature of all creation — are another set. The composition that emerges from these materials will determine whether the AI moment expands human possibility or narrows it.
And the composition depends on the composer. On the quality of attention she brings. On her willingness to sit with ambiguity rather than resolving it prematurely. On her capacity to notice what is happening at the periphery while the center of attention is consumed by the spectacular capabilities of the tools. On her courage to interrupt the continuous loop of production long enough for peripheral processing to occur — for the connections forming outside awareness to surface, for the pattern to emerge from the noise.
Bateson wrote that the most important things she learned came not from looking directly at the subject but from peripheral vision. I think about this every time I close a session with Claude. The direct output — the code, the prose, the structural insights — is valuable. But the most important things I am learning from this partnership are peripheral. They are forming at the edges. They have to do with what kind of thinker I am becoming, what kind of attention I am cultivating, what kind of life I am composing from these extraordinary and dangerous materials.
The composition is unfinished. It will always be unfinished. That is what makes it a composition rather than a plan.
— Edo Segal
AI didn't just change what we can build. It broke the assumption underneath every career plan, every educational trajectory, every definition of professional identity: that the world holds still long enough for a plan to work. Mary Catherine Bateson spent decades studying people whose lives were interrupted, redirected, and composed from materials no one would have chosen — and she found that the capacity to compose from disruption, not the capacity to execute a plan, was the skill that determined who flourished. This book brings Bateson's anthropological framework into direct contact with the AI revolution, revealing why peripheral vision matters more than focused optimization, why interruption is a component of good thinking rather than an obstacle to it, and why the deepest human capacity in an age of artificial intelligence is the one no machine possesses: the practice of composing meaning from whatever the world provides.
A reading-companion catalog of the 26 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Mary Catherine Bateson — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →