By Edo Segal
The tool that changed everything was not the one I expected. It wasn't Claude. It was a reed stylus pressed into wet clay five thousand years ago.
I don't mean that literally. I mean that when I encountered Jack Goody's argument — that writing didn't just record human thought but restructured what human thought could be — something shifted in how I understood the moment we're living through. Not the technology. The cognitive landscape underneath it.
I had been thinking about AI as a capability revolution. Faster output. Broader reach. The imagination-to-artifact ratio collapsing to the width of a conversation. All of that is true, and I wrote about it in *The Orange Pill*. But Goody forced a harder question: What if the tool isn't just changing what we can do? What if it's changing what we can *think*?
Not metaphorically. Structurally.
Goody spent decades showing that writing didn't give people a convenient way to store ideas they were already having. It gave them ideas they couldn't have had without it. The list — that humble, vertical, decontextualized thing — was not a natural cognitive form waiting for a medium. It was a product of the medium itself. No writing, no lists. No lists, no taxonomy. No taxonomy, no science as we know it. The tool didn't serve the mind. It reshaped it.
That pattern — tool arrives, cognition restructures, culture transforms, and nobody notices the restructuring while it's happening — is what makes Goody essential reading right now. Because we are inside the restructuring. Every conversation I have with Claude, every half-formed idea I externalize before it's ready for the blank page, every connection the system surfaces from domains I've never studied — these are not just productivity gains. They are alterations to the cognitive landscape I inhabit. And Goody's framework is the only one I've found that takes both the gain and the loss seriously, without collapsing into either celebration or mourning.
The gain: thoughts that would have died in the pre-verbal fog now have a path to expression. The loss: the specific struggle of self-clarification that builds the thinking muscle may be quietly atrophying. Both are real. Both follow the historical pattern Goody documented across every previous technology of the intellect.
This book applies that pattern to the AI moment with the rigor Goody would have demanded. It does not tell you whether to be excited or afraid. It tells you what to watch for. And it insists — as Goody always insisted — that the watching must be empirical, patient, and honest about what is actually happening to the minds being reshaped.
The reed stylus changed everything once. The cursor is doing it again.
-- Edo Segal & Opus 4.6
Jack Goody (1919–2015) was a British social anthropologist whose fieldwork among the LoDagaa of northern Ghana and wide-ranging comparative scholarship transformed the study of literacy, cognition, and cultural evolution. Educated at St John's College, Cambridge — where he spent nearly his entire academic career and held the William Wyse Chair of Social Anthropology — Goody was also a Second World War veteran who spent three years as a prisoner of war, an experience that deepened his lifelong commitment to understanding how societies organize knowledge. His landmark works include *The Domestication of the Savage Mind* (1977), *The Logic of Writing and the Organization of Society* (1986), and *The Interface Between the Written and the Oral* (1987), as well as his influential early essay "The Consequences of Literacy" (1963), co-authored with Ian Watt. Goody's central argument — that technologies of communication such as writing do not merely record thought but restructure it, enabling cognitive operations like listing, classification, and formal logic that are unavailable in purely oral cultures — challenged prevailing assumptions about the universality of rational thought and established the study of "technologies of the intellect" as a field. Elected a Fellow of the British Academy and knighted in 2005, Goody remains one of the most cited anthropologists of the twentieth century, and his framework for understanding how media shape cognition has gained renewed urgency in the age of artificial intelligence.
In 1968, a social anthropologist at Cambridge published an essay with a title that contained, in five words, one of the most consequential arguments of the twentieth century. The essay was called "The Technology of the Intellect." The anthropologist was Jack Goody. And the argument, stripped to its core, was this: the tools human beings use to think are not passive instruments that leave thinking unchanged. They are environments that restructure thought itself.
The claim sounds modest until its implications are traced. A hammer does not change what the hand is. A microscope does not change what the eye is. But writing, Goody argued, changes what the mind is — not metaphorically, not gradually, not as a secondary effect of some other cause, but directly, structurally, and in ways that can be documented by comparing societies that have adopted writing with societies that have not. The technology of writing did not merely give people a way to record the thoughts they were already having. It gave them thoughts they could not previously have had. It made new cognitive operations possible — operations that required the visible, permanent, spatially organized, externally manipulable medium that writing provided and that speech, by its nature, could not.
This distinction between recording and restructuring is the hinge on which everything in the present analysis turns.
Consider what it means for a technology to record. A recording technology preserves a signal. The wax cylinder preserves the voice. The photograph preserves the image. The notebook preserves the sentence. In each case, the signal exists independently of the recording medium. The voice existed before the cylinder. The face existed before the photograph. The thought existed before the pen touched paper. The technology captures something that was already there. Its value is in preservation, transmission, storage — not in the generation of the signal itself.
A restructuring technology does something categorically different. It changes what signals can be generated. It alters the cognitive landscape so that certain operations become possible that were not possible before, and certain operations that were once central become peripheral, then unnecessary, then forgotten. The technology does not preserve thought. It transforms the thinker.
Goody's career was an extended demonstration that writing is a restructuring technology. His fieldwork among the LoDagaa of northern Ghana in the 1950s and 1960s gave him the empirical foundation. Here was a society in transition — some members literate, others not, the differences between them observable in real time rather than reconstructed from historical inference. Goody could watch what happened to cognitive practices when writing entered a community. The changes were not incidental. They were systematic. They followed patterns. And the patterns were not about intelligence — the LoDagaa who could not write were not less intelligent than those who could. The patterns were about the cognitive operations that the medium of writing made available.
The list, for instance. Goody devoted some of his most celebrated analysis to the humble list — a cognitive form so ubiquitous in literate societies that its revolutionary character has become invisible. A list extracts items from the flow of narrative and arranges them in a new kind of order: vertical, spatial, decontextualized. In speech, information is always embedded in a story, a sequence, a context. You remember the name of the king because you remember the tale in which the king appears. The name and the narrative are fused. A list separates them. It pulls the name out of the story and places it in a column alongside other names, and in doing so it enables operations that narrative cannot support — comparison, classification, hierarchical ordering, the detection of gaps and duplications.
None of these operations are available in a purely oral medium. Not because oral peoples lack the cognitive capacity for comparison or classification, but because the medium of speech does not provide the external, persistent, spatially organized field that these operations require. The operations need a surface. Writing provides one.
This analysis extended to tables, which enable cross-referencing across multiple dimensions simultaneously; to formulae, which enable the abstract manipulation of relationships between variables; to syllogisms, which enable the formal comparison of premises that must be held in view at the same time. Each of these cognitive forms was, in Goody's analysis, a product of the technology rather than a property of the mind that existed prior to the technology and merely found convenient expression in it. The technology made the cognitive form possible. Without the technology, the form did not exist — not because it was suppressed but because it could not be generated.
The implications of this thesis radiate outward in every direction. If writing restructured thought, then every subsequent technology of communication — printing, telegraphy, computing — must be examined not merely for what it transmits but for what it makes thinkable. Each technology opens certain cognitive channels and closes others. Each creates new forms of intellectual work and renders old forms unnecessary. The printing press did not merely distribute what scribes had been writing. It created new genres, new reading practices, new relationships between author and audience, new forms of knowledge accumulation that depended on the standardization and mass availability that print provided. The computer did not merely accelerate what calculators had been doing. It made possible simulation, real-time data visualization, and the manipulation of complexity at scales that no human mind could hold unaided.
Each transition followed a pattern that Goody's framework illuminates. First, the new technology makes certain cognitive operations dramatically easier. Second, the operations that become easier are practiced more, developed further, and eventually institutionalized. Third, the operations that the old technology supported and the new technology renders unnecessary begin to atrophy — not because anyone decides to abandon them, but because the cognitive ecosystem no longer selects for them. Fourth, the atrophy becomes invisible, because the people who might notice it are themselves products of the new cognitive environment and cannot easily imagine the cognitive landscape that preceded it.
The pattern is not deterministic. Goody was careful — more careful than his critics sometimes acknowledged — to insist that the consequences of a communication technology are mediated by the social, institutional, and cultural contexts in which the technology is adopted. Writing in ancient Sumer, controlled by a scribal elite and deployed primarily for administrative purposes, had different cognitive consequences than writing in classical Greece, where broader literacy enabled philosophical dialogue and democratic participation. The technology created potentials. The culture determined which potentials were realized. As Michael Cole observed in his careful analysis of Goody's position, one can find in Goody's work both emphatic statements about the cognitive consequences of literacy and equally emphatic statements that those consequences are contingent on cultural circumstances. This is not a contradiction. It is a sophisticated recognition that technologies of the intellect operate through cultures, not upon them — that the technology and the culture are a system, and the cognitive outcomes are properties of the system rather than of either component in isolation.
This sophistication is precisely what the current discourse about artificial intelligence lacks.
The popular conversation about AI oscillates between two positions, both inadequate. The first, which might be called technological inevitabilism, treats AI as a force that will reshape human cognition regardless of what anyone does about it — a tide that cannot be held back, a revolution that will proceed on its own terms. The second, which might be called technological voluntarism, treats AI as a neutral tool whose effects depend entirely on how people choose to use it — a hammer that can build a house or crack a skull, with the moral weight resting entirely on the hand that wields it.
Goody's framework reveals both positions as incomplete. AI is not a neutral tool, because no technology of the intellect is neutral — each one restructures the cognitive landscape in ways that are partly independent of user intention. But AI is not a deterministic force either, because the restructuring is always mediated by institutional, cultural, and social contexts that shape which potentials are realized and which are foreclosed.
What Goody's framework demands is a different kind of question entirely. Not "Is AI good or bad?" — a question as uninformative as asking whether writing was good or bad. Not "Will AI change how we think?" — a question whose answer, given the history of every previous technology of the intellect, is obviously yes. But rather: What specific cognitive operations does AI make possible that were not possible before? What specific cognitive operations does AI render unnecessary that were previously essential? And what institutional structures might maximize the first set while protecting against the unexamined loss of the second?
These are empirical questions. They demand observation, comparison, careful documentation of what actually happens to human minds when AI enters the cognitive environment. They demand the anthropologist's commitment to seeing what is actually there rather than what one hopes or fears to find. And they demand something that the current discourse has in remarkably short supply: patience. The cognitive consequences of writing took centuries to unfold fully. The cognitive consequences of printing took decades. The cognitive consequences of computing are still being understood, seventy years after the first general-purpose computers.
The cognitive consequences of AI will not be understood in a news cycle. They will not be captured by a single study, however well-designed. They will be understood, if they are understood at all, through the kind of sustained, cross-contextual, empirically grounded analysis that Goody practiced throughout his career — analysis that takes seriously both the power of the technology and the agency of the cultures that adopt it.
Edo Segal, in *The Orange Pill*, describes the moment when machines learned to speak human language as a phase transition — the kind of change that reorganizes the rules rather than merely accelerating the game. Goody's framework suggests that Segal is right, but for reasons more fundamental than the technological description captures. The significance of the transition is not that machines became faster or more capable. The significance is that a new technology of the intellect has arrived — one that restructures the relationship between thinking and expression in ways that no previous technology has approached. The command line restructured how programmers thought. The graphical interface restructured how users thought. Natural language AI restructures how everyone thinks, because the medium it operates in — human language itself — is the most universal cognitive environment there is.
Every previous technology of the intellect required the user to enter its medium. The scribe had to learn to write. The programmer had to learn to code. The statistician had to learn the software. The technology restructured cognition, but only the cognition of those who learned to use it. The restructuring was gated by literacy — by the specific competences required to operate in the technology's medium.
AI in its current form removes the gate. The medium is natural language. Everyone already operates in it. The restructuring, therefore, is not confined to a specialist class. It is available to anyone who can describe what they want in the language they already speak. This is either the most democratic expansion of cognitive capability since the invention of public schooling, or the most widespread cognitive restructuring in human history conducted without the participants' informed understanding of what is being restructured.
Probably it is both.
The chapters that follow apply Goody's framework systematically to this transition. They ask what AI makes thinkable, what it externalizes, what cognitive forms it generates, what capacities it threatens to atrophy, and what new literacy it demands. The analysis proceeds in Goody's manner: empirically, comparatively, without ethnocentric prejudice, and with the anthropologist's commitment to holding complexity rather than resolving it prematurely.
The question is not whether AI will change how human beings think. The historical pattern is unambiguous on that point. The question is whether the change will be recognized, studied, and shaped by deliberate institutional design — or whether it will proceed invisibly, restructuring the minds of a generation before anyone has thought carefully about what the restructuring produces and what it costs.
The technology of the intellect has arrived. The intellect, as always, is the last to notice.
---
There is a tendency, deep in literate culture, to assume that thought precedes its expression — that the idea exists fully formed in the mind and the act of writing merely captures it, the way a jar captures water. Goody spent his career dismantling this assumption. His central demonstration was that writing does not capture thought. It transforms the conditions under which thought occurs. And the transformation is so thoroughgoing that literate people cannot easily reconstruct what thinking was like before writing existed, for the simple reason that the cognitive forms they use to reconstruct it — systematic comparison, propositional analysis, taxonomic classification — are themselves products of writing.
To understand what AI makes thinkable, one must first understand, with some precision, what writing made thinkable. The comparison is not ornamental. It is structural. The same pattern — technology enabling cognitive forms that did not previously exist — is repeating, and the features of the current repetition become visible only against the background of the original.
Start with the list. Goody's analysis of lists in *The Domestication of the Savage Mind* remains one of the most illuminating pieces of cognitive anthropology ever written, partly because the subject seems so trivial. A list. Shopping items. Kings of England. The periodic table. Who would devote serious analytical attention to the list as a form?
Goody would, because he recognized that the triviality was an illusion produced by familiarity. The list is not a natural cognitive form. It is an artifact of writing technology, and its characteristics — vertical arrangement, decontextualization, spatial ordering, boundaried completeness — are properties of the written medium, not properties of thought in general.
In oral cultures, information is embedded in narrative. The LoDagaa myth of the Bagre, which Goody studied extensively, conveys cosmological knowledge, social norms, ritual practices, and historical memory in the form of a recited story. The knowledge is not separable from the story. Ask a Bagre reciter to extract the names of the deities mentioned in the recitation and arrange them in order of importance, and the request is not merely difficult. It is, in a precise sense, unintelligible. The deities do not exist outside the narrative. Their importance is not a property that can be abstracted from the contexts in which they appear. The question presupposes a cognitive operation — extraction, decontextualization, comparison — that the oral medium does not support.
Writing makes the question intelligible by providing a surface on which the extracted names can be placed, a spatial arrangement in which their relative positions can be manipulated, and a permanence that allows the arrangement to be examined, revised, and compared with alternative arrangements. The list is born.
What does the list make thinkable? Several operations that are fundamental to what literate cultures call rational thought. First, comparison: items placed side by side on a list can be compared in ways that items embedded in separate narratives cannot. The similarities and differences become visible. Second, classification: the list enables the grouping of items into categories, which enables the creation of hierarchies, which enables the construction of taxonomies — the great organizational structures of literate knowledge, from Linnaean biology to library cataloging systems. Third, gap detection: a bounded list, by its very boundedness, makes absences visible. The list of kings that skips a generation reveals the skip. The inventory that omits an item reveals the omission. Absence, in an oral narrative, is invisible. In a list, it has a shape.
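How thoroughly writing mechanized these operations is easy to show, because all three are now so routine that a few lines of code perform them. In the sketch below, the regnal numbers are invented for illustration, not historical; what matters is that a bounded, ordered list gives absence a shape that a narrative never could:

```python
# A bounded list makes comparison and gap detection mechanical.
# The regnal numbers below are illustrative, not historical.
kings = [
    ("Henry I", 1),
    ("Henry II", 2),
    ("Henry IV", 4),  # a generation is missing from this list
    ("Henry V", 5),
]

# Comparison: items extracted from narrative now sit side by side.
names = [name for name, _ in kings]

# Gap detection: the boundedness of the list makes absence visible.
numbers = [n for _, n in kings]
gaps = [n for n in range(min(numbers), max(numbers) + 1)
        if n not in numbers]

print(gaps)  # the skipped generation has a shape: [3]
```

In a recited genealogy, the missing king simply never comes up; in the list, the hole is an object that can be pointed at.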
The table extends the list into a second dimension. Where the list arranges items along a single axis, the table arranges them along two — rows and columns — enabling cross-referencing. A table of trade goods organized by origin and destination reveals patterns in the flow of commerce that no narrative account could render visible. A table of astronomical observations organized by date and celestial position reveals periodicities that narrative description would bury in sequential detail. The cognitive operation the table enables — the simultaneous comparison of items across multiple dimensions — is so fundamental to modern scientific and administrative thought that its absence is nearly unimaginable. And yet it was absent, entirely absent, from human cognitive practice for the vast majority of the species' history, because the medium that supports it did not exist.
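The same point can be made concrete for the table. The sketch below uses invented trade figures; the point is that once items are arranged along two axes, the cross-referencing Goody describes — totals by origin, totals by destination — becomes a mechanical operation:

```python
# A table arranges items along two axes at once, so patterns in
# the flow of goods can be read off. All quantities are invented.
trade = {
    ("Uruk", "Lagash"): 30,
    ("Uruk", "Nippur"): 12,
    ("Ur",   "Lagash"): 25,
    ("Ur",   "Nippur"): 40,
}

# Row totals: everything flowing out of each origin.
exports = {}
for (origin, _), qty in trade.items():
    exports[origin] = exports.get(origin, 0) + qty

# Column totals: everything flowing into each destination.
arrivals = {}
for (_, dest), qty in trade.items():
    arrivals[dest] = arrivals.get(dest, 0) + qty

print(exports)   # {'Uruk': 42, 'Ur': 65}
print(arrivals)  # {'Lagash': 55, 'Nippur': 52}
```

A narrative account of the same shipments would state each delivery in sequence; the pattern across deliveries exists only when the two dimensions are held in view simultaneously.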
The syllogism — the formal comparison of premises to derive a conclusion — is another cognitive form that Goody argued depends on writing. Syllogistic reasoning requires that the premises be held in view simultaneously so that their logical relationship can be inspected. In speech, premises are sequential: one follows the other in time, and by the time the second has been stated, the first is available only in memory, where it is subject to the distortions, compressions, and contextual contaminations that memory always introduces. Writing places both premises on the page at once. Their relationship becomes a spatial relationship, available for visual inspection, immune to the drift of memory. Formal logic, in Goody's analysis, is not a universal human capacity that writing merely facilitated. It is a cognitive practice that writing enabled — that could not exist in the form literate cultures recognize without the external, visible, permanent medium that writing provided.
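The dependence on simultaneity can itself be sketched. In the example below, the textbook syllogism is modeled as set relations — an illustrative modeling choice, not Goody's own formalism — so that both premises sit in view at once and the conclusion is read off by inspecting their relationship:

```python
# The classic syllogism, with both premises held in view at once.
# Modeling categories as sets is an illustrative choice.
men = {"Socrates", "Plato"}
mortals = {"Socrates", "Plato", "Hypatia"}

premise_1 = men <= mortals       # All men are mortal (subset relation).
premise_2 = "Socrates" in men    # Socrates is a man.

# With both premises inspectable together, the conclusion follows.
conclusion = None
if premise_1 and premise_2:
    conclusion = "Socrates" in mortals

print(conclusion)  # True
```

The conclusion is not recalled from memory; it is derived from two statements that remain visible, side by side, while their relation is checked.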
This is not to say that oral cultures lacked logic. The distinction Goody drew — and that careless readers sometimes missed — was not between logical and illogical peoples but between embedded and decontextualized reasoning. Oral peoples reason brilliantly within the contexts they inhabit. A LoDagaa farmer deciding when to plant, a navigator reading the stars and currents to find an island across hundreds of miles of open ocean, a storyteller constructing a narrative that holds an audience for hours — each of these is a feat of reasoning that literate people could not easily replicate. The reasoning is contextual, embodied, responsive to the specific situation. What it is not is abstract — and this is not because oral peoples lack abstraction as a capacity but because the medium they think in does not support the operations that produce decontextualized abstraction.
The implications for understanding AI are direct and consequential. If writing made certain cognitive forms possible — lists, tables, syllogisms, taxonomies — by providing a medium with specific properties (visibility, permanence, spatial organization, manipulability), then the question about AI becomes: What cognitive forms does AI make possible by virtue of the specific properties of its medium?
The medium of AI is not static text on a surface. It is dynamic conversation with a system that can hold vast bodies of knowledge, detect patterns across them, generate structured output from unstructured input, and iterate through cycles of proposal and revision at a speed that no human collaborator can match. These properties — conversational responsiveness, encyclopedic scope, pattern detection across scale, and iterative speed — are as distinctive to AI as visibility and permanence are to writing. And if Goody's thesis holds, they should be producing cognitive forms as distinctive as the list and the table — forms that do not merely accelerate existing thought but make new kinds of thought possible.
The history of writing also reveals a pattern that has urgent relevance to the present moment. The cognitive restructuring that writing produced was not recognized as restructuring by the people undergoing it. The scribes of ancient Sumer did not experience themselves as participants in a cognitive revolution. They experienced themselves as doing administrative work — recording transactions, inventorying goods, tracking obligations. The cognitive revolution was a byproduct of the administrative practice. The new forms of thought — listing, classifying, comparing — emerged as side effects of practical tasks and only gradually became recognized as intellectual practices in their own right.
The formalization of these practices into explicit intellectual disciplines — logic, taxonomy, systematic philosophy — took centuries. And the recognition that the practices were enabled by the technology rather than merely expressed through it took millennia: it required Goody himself, and the anthropological tradition he drew on, to see what had been hiding in plain sight for five thousand years.
The current AI transition is proceeding at vastly greater speed, but the pattern of invisible restructuring is the same. Builders who use AI to develop software, writers who use it to draft text, analysts who use it to process data — these practitioners experience themselves as doing their existing work more efficiently. They do not typically experience themselves as undergoing cognitive restructuring. The restructuring is, as it was with writing, a byproduct of practical use. And the forms of thought that AI is making possible are emerging as side effects of practical tasks, not yet recognized as distinctive cognitive practices with their own characteristics, their own strengths, and their own blind spots.
Goody's example teaches one more thing about the relationship between a technology of the intellect and the minds that use it. The relationship is not one-directional. Writing restructured thought, but thought also shaped how writing developed. The particular forms that writing took — alphabetic versus logographic, cursive versus block, horizontal versus vertical — were shaped by the cognitive needs and cultural practices of the societies that adopted writing. The technology and the cognition co-evolved, each shaping the other in a process that Goody, borrowing from evolutionary biology, might have called cognitive niche construction.
The same co-evolution is visible in the AI transition. The large language models that power current AI systems were shaped by human cognitive practices — trained on human text, designed to respond to human prompts, optimized for human evaluation. And they are in turn reshaping human cognitive practices — changing how people formulate problems, what they expect from intellectual work, how they evaluate the quality of a thought. The technology and the cognition are entangled. Neither can be understood in isolation from the other. Neither can be evaluated except as part of the system they jointly constitute.
This entanglement is what makes the AI transition so difficult to study and so dangerous to ignore. The cognitive forms that AI is producing — whatever they turn out to be — are being produced inside a feedback loop. The humans shaping the technology are already being reshaped by it. The observers studying the change are already changed by what they are studying. The anthropologist's traditional tool — the outsider's perspective, the view from a culture that has not adopted the technology — is not available, because the technology is everywhere, and the restructuring is already underway.
What remains available is the analytical framework. The recognition that technologies of the intellect restructure thought. The commitment to identifying specific gains and specific losses. The insistence that neither celebration nor mourning is an adequate response to a transition that is simultaneously enriching and impoverishing, in proportions that only sustained empirical attention can determine.
Writing made lists, tables, and syllogisms thinkable. The question now is what AI is making thinkable — what cognitive forms are emerging from the new medium that could not have emerged from any previous one. That question occupies the next chapter. It is, in Goody's terms, the question of the hour: not what the technology does, but what the technology makes the mind capable of doing that the mind could not do before.
---
The scribe in Uruk, circa 3100 BCE, pressing a reed stylus into wet clay to record a delivery of barley, was not trying to invent systematic thought. The scribe was trying to keep track of grain. But the medium — the clay tablet, the visible marks, the spatial arrangement of signs — afforded cognitive operations that the task of record-keeping did not require, and the operations, once available, were practiced, developed, and eventually transformed into the intellectual infrastructure of civilization. The list was a side effect of accounting. Taxonomy was a side effect of listing. Logic was a side effect of taxonomy. Each cognitive form was enabled by the technology and then developed far beyond the practical purpose that occasioned its use.
The same pattern — technology enabling cognitive operations that exceed the purposes for which the technology was adopted — is visible in the AI transition. Builders and writers and analysts adopt AI tools to do their existing work faster. But the medium of AI, like the medium of writing, affords cognitive operations that the existing work does not require. These operations are being practiced, developed, and — if the historical pattern holds — will eventually transform into cognitive forms as distinctive as the list and the table, forms that could not exist without AI and that will become, for the people who inhabit them, as invisible and indispensable as literacy itself.
Goody's framework suggests that the way to identify these new cognitive forms is not to ask what AI does, but to ask what properties the medium of AI possesses that no previous cognitive medium possessed, and then to examine what cognitive operations those properties make possible. Three properties stand out as genuinely distinctive.
The first is what might be called sub-articulacy processing — the ability of the medium to accept input that has not yet reached the threshold of clarity that previous media demanded. Writing requires articulacy. To write a sentence, the thought must be clear enough to be encoded in words, arranged in grammatical order, committed to a surface in a form that will be intelligible to a reader. The threshold is high. Ideas that have not yet reached it — intuitions, hunches, half-formed recognitions, the vague sense that two things are connected without yet knowing how — cannot enter the medium of writing. They remain trapped in what might be called the pre-verbal fog: the zone of cognition where thoughts exist as felt directions rather than formulated propositions.
Every person who has ever stared at a blank page knows the pre-verbal fog. The idea is there. It is real. It has a shape, or at least a direction. But it cannot yet be written, because writing demands a level of precision that the idea has not achieved. The gap between the felt sense of the idea and the articulacy required to externalize it is the fundamental friction of literate thought, and it is a friction that has governed intellectual production for five thousand years.
AI lowers the threshold. The builder who describes a problem to a large language model in natural language — incomplete, imprecise, groping — receives a response that attempts to structure the groping. The response may be wrong. It may be partially right in ways the builder did not expect. It may organize the half-formed thought along axes the builder had not considered. In any case, it provides something that the blank page does not: a structured response to an unstructured input, which the builder can then evaluate, revise, and re-submit in a cycle that moves the idea from vagueness toward clarity through iterative externalization rather than solitary struggle.
This is not merely faster writing. It is a different cognitive operation. The writer at the blank page must do the work of clarification internally — must move the idea from fog to articulacy through an entirely private process before the medium of writing will accept it. The builder working with AI can externalize the fog itself and receive structural assistance from the medium. The clarification becomes collaborative rather than solitary, distributed between the human and the machine in a way that no previous medium supported.
Segal describes exactly this experience: bringing half-formed questions to Claude, receiving responses that advance the formulation, and iterating through cycles that move the idea toward clarity. What Goody's framework reveals is that this experience is not a convenience. It is a cognitive restructuring. The availability of a medium that accepts sub-articulacy input and returns structured output changes what kinds of thoughts the thinker can think, because ideas that would have dissipated in the pre-verbal fog — that would never have crossed the threshold into the medium of writing — can now be externalized, engaged with, and developed. The pre-verbal fog, which has been the graveyard of half-formed ideas since the invention of writing, suddenly has an exit.
The second distinctive property of AI as a cognitive medium is pattern detection at scales that exceed individual cognition. Goody observed that writing enabled comparison across categories that speech could not hold simultaneously — the table of trade goods, the astronomical chart, the inventory that reveals gaps. But the comparison was limited by the capacity of the human mind to hold the written information in working memory and detect patterns across it. A table of a hundred entries can be scanned by a literate person. A table of a million entries cannot. The patterns are there, but the human cognitive apparatus is not equipped to find them.
AI operates at scales that dwarf individual cognitive capacity. Large language models have been trained on corpora that no single human mind could read in a lifetime, and the patterns they detect — statistical regularities across billions of tokens of text — constitute a form of knowledge that has no precedent in the history of cognitive technology. This knowledge is not the same as understanding. The model does not understand the patterns it detects in the way a human reader understands a text. But the patterns are real, and the cognitive operations they enable — the detection of connections between ideas from widely separated domains, the recognition of structural similarities between problems that appear superficially different, the identification of latent associations that the human interlocutor had not considered — are operations that no previous technology of the intellect could support.
Segal's account of the laparoscopic surgery example in *The Orange Pill* is illustrative. The connection between his question about friction and the history of surgical technique was not a connection that either Segal or the machine set out to find. It emerged from the collision between a half-formed question and a vast training set, and it proved to be generative — it opened a line of argument that neither the human nor the machine could have produced independently. This kind of emergent connection is not incidental to AI's function. It is a characteristic cognitive form of the medium, analogous to the list as a characteristic form of writing. The associative synthesis — the connection between ideas from different domains that becomes visible only through the mediation of a system trained on more text than any human can process — is something AI makes thinkable.
The third distinctive property of AI as a cognitive medium is the rapid generation of option spaces. Before AI, the creation of multiple alternative expressions of a single idea was expensive. A writer could draft a paragraph one way, delete it, draft it another way, delete it, draft it a third way — but each iteration cost time and cognitive effort, and the practical limit on how many alternatives could be considered was low. A designer could sketch three versions of a layout. A programmer could try two approaches to an algorithm. The constraint was always the cost of generation: producing an alternative required doing the work of producing it.
AI collapses this cost. The builder can request ten versions of a paragraph, five approaches to an algorithm, eight framings of a problem. The generation is nearly instantaneous. The cognitive operation that becomes possible is comparative evaluation at a scale that was previously available only to teams — the examination of a space of possibilities rather than a single path through it. The builder does not choose between the first idea and the revision. The builder evaluates a field of options and selects, combines, or rejects from a position of comparative overview.
This is a new cognitive form. It is not brainstorming — brainstorming is a social process that generates ideas through the interaction of multiple human minds. It is not drafting — drafting is a sequential process that moves through alternatives one at a time. It is option-space evaluation: the simultaneous consideration of multiple structured alternatives generated at a speed that allows the mind to focus on judgment rather than generation. The cognitive labor shifts from producing alternatives to evaluating them — from generation to curation — and this shift, like the shift from memorization to list-making, changes not just the efficiency of thought but its character.
Goody would recognize these three cognitive forms — sub-articulacy processing, associative synthesis at scale, and option-space evaluation — as belonging to the same category as lists, tables, and syllogisms: forms of thought enabled by a specific medium, impossible without it, and destined to become invisible to the people who inhabit them. The builder who works with AI will, in time, find it as difficult to imagine thinking without these operations as a literate person finds it to imagine thinking without lists. The operations will become the water in which thought swims, and the water, as always, will be the last thing the fish notices.
But the historical pattern also suggests a caution. Each cognitive form that a technology of the intellect enables carries its own distortions. The list, by decontextualizing items, conceals the relationships between them that narrative preserves. The table, by imposing a grid, forces information into categories that may not correspond to the structure of the information itself. The syllogism, by formalizing reasoning, creates an illusion of certainty that informal, contextual reasoning does not claim. Each form of thought that writing made possible came with a shadow — a systematic distortion built into the form itself.
The new cognitive forms that AI enables will carry their own shadows. Sub-articulacy processing may rescue ideas from the pre-verbal fog, but it may also develop ideas that the fog should have kept — ideas that were vague not because the thinker lacked a medium for clarifying them but because the ideas themselves lacked substance. Associative synthesis may reveal genuine connections across domains, but it may also produce connections that are statistically plausible and intellectually empty — the confident confabulations that current large language models are notorious for generating. Option-space evaluation may enable sophisticated comparative judgment, but it may also produce a paralysis of choice, or a tendency to select the most superficially appealing option rather than the most deeply considered one.
Segal describes catching exactly this kind of shadow: a passage where Claude drew an elegant connection to Gilles Deleuze's concept of smooth space that turned out, on inspection, to be philosophically wrong. The output was plausible. The prose was polished. The connection was a phantom — a product of pattern-matching that resembled insight but was not. The shadow of the associative synthesis is the associative confabulation: the connection that looks like discovery but is, in fact, a statistical artifact dressed in good writing.
Goody's framework does not condemn these shadows. It predicts them. Every cognitive technology produces forms of thought that are simultaneously enabling and distorting, and the task is not to choose between adopting the technology and avoiding it but to develop the literacy — the critical practices, the institutional supports, the habits of evaluation — that allow the forms to be used deliberately, with awareness of their shadows.
The scribe in Uruk could not have imagined what the list would become. But the scribe was already, in pressing reed to clay, inaugurating a transformation whose cognitive consequences would unfold over millennia. The builder in conversation with AI is in an analogous position: already thinking in forms that did not exist a year ago, already inhabiting a restructured cognitive landscape, already unable to see clearly what the restructuring is producing, because the restructuring is the medium through which all current seeing occurs.
---
Every technology of the intellect works through a single fundamental mechanism: externalization. The transfer of cognitive operations from the interior of the mind to an external medium that can be perceived, manipulated, and shared. The history of human cognitive development is, in significant part, a history of successive externalizations — each one transferring a different cognitive function from the private interior to the public exterior, and each one transforming the function in the act of transfer.
Speech externalized thought itself. Before language, cognition was sealed inside the individual organism. Whatever thinking occurred — and there is every reason to believe that pre-linguistic hominids engaged in sophisticated cognitive operations — could not be shared, compared, or accumulated across individuals. The thought died with the thinker. Language broke this seal. It allowed the contents of one mind to be transmitted to another mind through a shared medium of sound. The externalization was impermanent — speech vanishes as it is produced — but it was sufficient to enable the accumulation of knowledge across individuals and, through oral tradition, across generations. The cognitive universe expanded from the individual skull to the community of speakers.
Writing externalized memory. Before writing, the accumulated knowledge of a community existed only in the memories of its living members. When a master craftsman died, the knowledge in his hands died with him unless he had taught it to an apprentice. When a storyteller forgot a passage, the passage was gone. Memory was the only storage medium, and memory is biological — limited in capacity, subject to distortion, mortal. Writing transferred the storage function from biological memory to an external surface — clay, papyrus, parchment, paper — that was not subject to biological limitations. Knowledge could now persist beyond the lifetime of any individual knower. It could accumulate across centuries. And because it was external and visible, it could be inspected, verified, and corrected in ways that internal memory could not support. The cognitive universe expanded from the community of living speakers to the archive of all that had been written.
Printing externalized distribution. Before the press, a written text existed in as many copies as scribes could produce — typically a handful, each requiring months of labor. Knowledge was external but scarce, confined to the libraries and collections of the institutions that could afford to commission and maintain manuscripts. Printing collapsed the cost of reproduction and made possible the broad distribution of identical copies. Knowledge became not merely external and persistent but available. The cognitive universe expanded from the archive to the public.
Computing externalized calculation. Before electronic computation, mathematical operations were performed by human minds — or, in the industrial age, by teams of human computers, people whose job title was literally "computer," performing calculations by hand and passing partial results to colleagues for further processing. The electronic computer transferred the computational function from biological neural networks to electronic circuits that could perform the same operations at speeds and scales that biological computation could not approach. The cognitive universe expanded to encompass complexities that no team of human calculators could process.
Each externalization follows a pattern that Goody's analysis illuminates. The function that is externalized does not remain unchanged in the transfer. Writing did not create a perfect copy of memory in an external medium. It created something different — a visible, permanent, manipulable record that supported cognitive operations (comparison, classification, gap detection) that memory alone could not. Printing did not create a perfect copy of the scribe's work in a new medium. It created something different — a standardized, mass-produced text that supported cognitive operations (citation, cross-referencing, collective verification) that manuscript culture could not. Each externalization transformed the function by transferring it to a medium with different properties, and the new properties enabled new cognitive operations that exceeded the original function.
AI follows this pattern, but what it externalizes is more intimate than what any previous technology transferred. AI externalizes articulation — the process of moving from knowing-something-vaguely to knowing-it-clearly, from intuition to expression, from the felt sense of a problem to a communicable description of it.
This is a distinction that requires careful elaboration, because the process of articulation is not the same as any of the functions previously externalized. Memory is a storage function. Distribution is a transmission function. Calculation is a processing function. Articulation is a formative function — it is the process through which thought acquires the shape that makes it available for further cognitive operations. Before a thought can be stored, transmitted, calculated upon, or shared, it must be articulated — given form, structure, a communicable shape. And this process of giving form is, in every previous cognitive technology, performed entirely by the thinker.
The writer staring at the blank page is engaged in articulation. The programmer designing an algorithm is engaged in articulation. The scientist formulating a hypothesis is engaged in articulation. In each case, the thinker must move from an interior state — knowing, or half-knowing, or sensing that something is the case — to an exterior expression that captures the interior state with enough fidelity to be useful. The gap between the interior state and the exterior expression is where intellectual work happens. It is the gap that makes writing difficult, programming challenging, scientific formulation demanding. Bridging that gap is the cognitive labor of articulation, and it is a labor that has been, until now, irreducibly solitary.
AI makes the labor collaborative. The builder describes a problem in natural language — with all the imprecision, the false starts, the half-formed gestures toward meaning that natural language permits — and the machine returns a structured response. The response is an articulation: the machine has taken the vague input and given it form. The builder examines the form, recognizes what is right and what is wrong, revises the input, and receives a new articulation. Each cycle is a partial externalization of the articulation process — a transfer of the formative function from the interior of the mind to the collaborative space between mind and machine.
Goody recognized that each externalization transforms the function being externalized, and the externalization of articulation is no exception. When articulation was entirely internal, the process was slow, private, and tightly coupled to the thinker's existing cognitive structures. The writer could only articulate in terms of what she already knew, using frameworks she already possessed, drawing on connections she had already made. The process was constrained by the contents of the individual mind. When articulation becomes collaborative — when the machine introduces frameworks, connections, and organizational structures that the thinker did not possess — the process is no longer constrained by the individual mind. It is constrained by the combined resources of the mind and the machine, which are vastly larger.
This is a genuine cognitive gain. Ideas that would have remained trapped inside the limitations of a single thinker's framework can now be articulated through a framework that the thinker could not have generated independently. The connection Segal describes in *The Orange Pill* — between the question about friction and the history of laparoscopic surgery — is a case in point. The connection was not available within Segal's existing cognitive resources. It became available through the collaboration with a system whose training set included medical history that Segal had not read. The articulation that resulted — the concept of ascending friction — was neither Segal's alone nor the machine's alone. It was a product of the externalization, a form of thought that emerged from the collaborative space.
But Goody's framework also demands attention to what the externalization costs. Every previous externalization, while expanding cognitive capability, simultaneously atrophied the internal function it replaced. Writing atrophied the prodigious feats of memory that oral cultures cultivated. The *aoidoi* of ancient Greece held the *Iliad* in their heads — fifteen thousand lines of verse, with all its genealogies, epithets, and narrative branchings, maintained through decades of practice in a memorial tradition that literate culture has entirely lost. Writing made such feats unnecessary, and unnecessary cognitive functions atrophy. Printing atrophied the intimate, interpretive engagement with individual manuscripts that scribal culture sustained — the marginalia, the commentary traditions, the deep familiarity with specific texts that came from copying them by hand. Computing atrophied mental arithmetic — the ability to perform complex calculations in one's head that previous generations practiced as a matter of course.
What does the externalization of articulation threaten to atrophy? The answer is uncomfortable precisely because the function at risk is one that literate culture has always regarded as the signature of intellectual maturity: the ability to clarify one's own thinking through sustained, solitary, effortful struggle.
The writer who wrestles with a recalcitrant paragraph — who writes it, deletes it, writes it again, sits in frustration, walks away, returns, and finally produces a version that captures what she meant — has not merely produced a paragraph. She has deepened her understanding of what she meant. The struggle was not an obstacle to the product. It was the process through which the product acquired its depth. The friction between the half-formed thought and the demands of the medium was the forge in which the thought was tempered.
When the articulation is externalized — when the machine takes the half-formed thought and returns a polished version — the struggle is diminished. The product may be equivalent or even superior. The paragraph may read better, the argument may be clearer, the structure may be more elegant. But the cognitive labor that the struggle would have performed — the deepening of the thinker's understanding of her own thought, the development of the tolerance for difficulty that sustained intellectual work requires, the building of the cognitive muscles that only resistance can build — has been partially bypassed.
Segal recognizes this explicitly: the admission in *The Orange Pill* that the ease of producing text with Claude may have allowed him to avoid the specific, painful, productive thinking that happens only when one is alone with a blank page. This recognition is itself an act of cognitive honesty that Goody would have valued. But the recognition does not resolve the problem. It merely identifies it.
The externalization of articulation produces a paradox that has no precedent in the history of cognitive technologies. Every previous externalization transferred a function that could be clearly distinguished from the core of intellectual work. Memory is not thinking. Distribution is not thinking. Calculation is not thinking. But articulation — the process of giving form to thought, of moving from vagueness to clarity — is thinking, in the most intimate and irreducible sense. When this function is externalized, the boundary between what the technology does and what the thinker does becomes unstable. The thinker who works with AI to articulate an idea may not be able to determine, after the fact, which parts of the articulation were hers and which were the machine's — not because the machine is deceptive, but because the process of collaborative articulation blurs the boundary between contribution and reception in ways that no previous cognitive technology has approached.
Goody spent decades studying what happens when the boundary between mind and medium becomes porous. His conclusion was consistent: the porosity transforms both sides. The mind adapts to the medium. The medium shapes the mind. And the resulting cognitive landscape is neither the old mind with a new tool nor the old tool with a new user, but something genuinely new — a cognitive system whose properties emerge from the interaction and cannot be predicted from the properties of either component in isolation.
The externalization of articulation is the most recent chapter in a story that is as old as language itself. Each chapter has expanded what human minds can think. Each chapter has contracted what human minds must do for themselves. And each chapter has produced a cognitive system more powerful and more dependent on its external components than the one it replaced.
The question that Goody's framework poses — and that no amount of enthusiasm or anxiety can bypass — is not whether the externalization should be embraced or resisted. It is whether the societies undergoing it will develop the institutional structures, the educational practices, and the critical literacies necessary to inhabit the new cognitive landscape deliberately, with awareness of both its unprecedented capabilities and its specific, structural, and historically predictable costs.
The earliest known written documents are not poems, not prayers, not narratives of creation or conquest. They are lists. Inventories of grain. Counts of livestock. Records of rations distributed to workers. The Uruk tablets, dating to roughly 3100 BCE, are administrative documents — and their administrative character is not incidental to their cognitive significance. It is the key to understanding why writing transformed thought in the specific ways it did.
Goody recognized that the list was not merely the first use to which writing was put. It was the cognitive form that writing most naturally produced. The properties of the written medium — visibility, permanence, spatial arrangement, bounded surfaces — select for certain organizational structures over others. Narrative, which is the dominant organizational structure of speech, is sequential and contextual: each element derives its meaning from what precedes and follows it. The list is spatial and decontextual: each element occupies a position defined by its relationship to other elements on the surface rather than by its position in a temporal sequence. Writing can support narrative, but it selects for the list, because the list exploits the medium's distinctive properties — its capacity for spatial arrangement, its permanence, its visibility — in ways that narrative does not.
This observation has implications that extend far beyond the history of ancient Sumer. If specific media select for specific cognitive forms, then the question about any new cognitive medium is not merely what it can do but what it selects for — what organizational structures emerge most naturally from its distinctive properties, what forms of thought it rewards and reinforces through the logic of its own medium.
The list, once established, proved to be not a single cognitive form but a family of forms, each enabling distinct intellectual operations. The simple inventory — a vertical sequence of items — enabled comparison and gap detection. The ranked list — items arranged by some criterion of magnitude or importance — enabled hierarchical ordering. The classificatory list — items grouped into categories — enabled taxonomy. The table — items arranged along two axes simultaneously — enabled cross-referencing, the detection of correlations, the comparison of multiple attributes across a population of items.
Each of these forms was a cognitive technology in its own right. Each made possible intellectual operations that could not be performed without it. And each, once available, was developed, refined, and eventually institutionalized into the organizational infrastructure of literate civilization. The inventory became the basis of accounting. The ranked list became the basis of administrative hierarchy. The classificatory list became the basis of Aristotelian logic and, eventually, Linnaean taxonomy. The table became the basis of scientific data collection, statistical analysis, and the vast informational architectures of modern bureaucracy.
The development was not planned. No scribe set out to invent taxonomy. The cognitive forms emerged from the interaction between practical needs and the affordances of the medium, and their intellectual implications were recognized — when they were recognized at all — only long after the forms had become entrenched in institutional practice. Goody's insight was that this process of emergence and entrenchment is characteristic of technologies of the intellect: the cognitive transformation precedes the awareness of the transformation, and by the time the awareness arrives, the transformation has already reshaped the institutions, practices, and habits of mind through which the awareness itself operates.
The question this trajectory poses for the AI moment is specific: What cognitive forms is AI selecting for? What organizational structures emerge most naturally from the medium of conversational AI, and what intellectual operations do those structures enable?
Three candidates merit examination, each analogous to the written list in the sense that it exploits the distinctive properties of the AI medium rather than merely accelerating operations that older media already supported.
The first is the option array. When a builder asks an AI system to generate multiple approaches to a problem — five framings of a research question, eight architectural alternatives for a software system, ten variations of a paragraph — the result is a cognitive object that has no precise precedent. It is not a list in Goody's sense, because the items are not extracted from a pre-existing body of information and arranged by some external criterion. They are generated by the system in response to a prompt, and their arrangement is not hierarchical or classificatory but parallel: each option occupies an equivalent position in a space of possibilities, and the cognitive operation the array invites is not ordering or classifying but evaluating and selecting.
The option array shifts the fundamental cognitive operation from generation to curation. In every previous medium, the bottleneck was producing alternatives. Writing a paragraph takes time. Sketching a design takes effort. Coding an algorithm takes expertise. The cost of generation meant that the number of alternatives any individual could consider was small, and the cognitive work was concentrated in producing each alternative. The option array inverts this economy. Generation becomes cheap. The cognitive work concentrates in evaluating the field — in judging which option best serves the purpose, which captures the intention most faithfully, which reveals an angle the builder had not considered. The curator replaces the generator as the primary cognitive role, and the skills that curation demands — taste, judgment, the ability to recognize quality amid abundance — become the scarce cognitive resource.
The second emergent form is what might be called the associative map. When an AI system draws connections between ideas from different domains — linking a concept from evolutionary biology to a pattern in software adoption, or identifying a structural parallel between ancient administrative practice and modern organizational design — it produces a cognitive object that resembles, in some ways, the cross-referencing function of the table. But the table cross-references within a bounded, predefined framework: rows and columns are specified in advance, and the entries fill a structure that the table-maker has already determined. The associative map has no predefined structure. The connections emerge from the interaction between the user's question and the system's training, and the resulting object — a network of related ideas from different domains, connected by patterns that the system detected and the user did not — is a form of organized thought that neither the list nor the table can produce.
The associative map exploits AI's distinctive property of pattern detection at scales exceeding individual cognition. A scholar reading in a single field may notice connections within that field. A polymath reading across fields may notice connections between them. But neither can detect patterns across the entirety of digitized human text — the statistical regularities, the recurrent structural parallels, the latent associations buried in the accumulated written output of thousands of years of literate culture. The AI system can, or at least it can detect regularities that function as if they were patterns, whether or not they correspond to genuine intellectual connections. The associative map is the cognitive form that results: a structure of connections that did not exist in any single text, in any single mind, in any previous organizational form, but that emerges from the interaction between a question and a body of knowledge too vast for any human to survey.
The third emergent form is the iterative scaffold. When a builder works with AI through multiple cycles of prompt and response, each cycle building on the previous one, the result is not a single output but a layered structure — a sequence of progressively refined articulations that constitute, taken together, a record of the thinking process itself. This structure has no precedent in the cognitive forms that writing produced. A writer's drafts approximate it, but drafts are typically discarded; the final version erases the process that produced it. The iterative scaffold preserves the process. Each layer is visible. The movement from vagueness to clarity, from first approximation to refined formulation, from initial question to developed argument, is recorded in the sequence of exchanges.
The cognitive operation that the iterative scaffold enables is something that might be called process awareness — the ability to examine not just the product of one's thinking but the trajectory of one's thinking, to see how the formulation evolved, where the pivots occurred, which prompts produced breakthroughs and which produced dead ends. This is a form of metacognition — thinking about thinking — that previous media supported poorly, because the process of thinking was internal and evanescent, available only in the unreliable medium of memory. The iterative scaffold externalizes the process, making it available for inspection, evaluation, and the development of better thinking practices.
Each of these emergent forms — the option array, the associative map, the iterative scaffold — is in the early stages of development. None has been formally recognized or named by the practitioners who use them daily. None has been studied with the empirical rigor that Goody brought to the written list. None has been institutionalized into the cognitive infrastructure of the culture in the way that lists and tables have been institutionalized over millennia.
But the historical pattern suggests that these forms, or forms like them, will become the cognitive infrastructure of the next century in the way that lists and tables became the cognitive infrastructure of the preceding five millennia. They will be practiced, refined, taught, and eventually rendered invisible by familiarity. The builder who works with option arrays will find it as difficult to imagine thinking without them as a scientist finds it to imagine working without tables. The cognitive landscape will have been restructured, and the restructuring will be the water in which the next generation swims.
Goody would insist, as he always insisted, that the restructuring carries costs as well as benefits. The list, for all its cognitive power, impoverished the narrative knowledge it replaced. Decontextualization was both the list's strength and its limitation — it revealed patterns that narrative concealed, but it concealed relationships that narrative preserved. The option array will carry its own costs. The ease of generating alternatives may erode the discipline of committing to a single vision and developing it with the depth that commitment demands. The associative map may produce connections that are statistically real but intellectually superficial — patterns that look like insight but dissolve under the pressure of critical examination. The iterative scaffold may encourage a form of thinking that is endlessly provisional, never arriving at the conviction that intellectual maturity requires.
These are not reasons to avoid the forms. They are reasons to study them — to bring to the emerging cognitive forms of AI the same empirical attention, the same anthropological rigor, the same commitment to understanding both gain and loss, that Goody brought to the written list five decades ago.
The grammar of organized thought is being rewritten. The question is whether the rewriting will be studied as it occurs or recognized only after the new grammar has become invisible — the default cognitive environment of a generation that cannot remember thinking any other way.
---
Speech is the oldest technology of the intellect. Before writing, before notation, before any external medium for recording or manipulating thought, there was conversation — the real-time exchange of utterances between minds, each utterance shaped by what preceded it and shaping what followed. Goody, though his primary attention was directed at writing and its consequences, understood that speech itself was a technology — the first externalization, the moment when thought left the sealed interior of the individual mind and entered a shared medium where it could be examined, challenged, and developed by others.
Conversation has properties that no other cognitive medium replicates. It is sequential but not predetermined — each utterance is a response to the previous one, and the direction of the exchange emerges from the interaction rather than from a plan. It is constructive — meaning is built collaboratively, through the negotiation between speakers, in a way that monologue cannot achieve. It is responsive — the speaker adjusts in real time to the listener's reactions, questions, and objections, refining the articulation through a feedback loop that written composition provides only in the slow, delayed form of editorial response.
These properties make conversation uniquely powerful as a medium for certain cognitive operations. The development of an idea through Socratic questioning — the progressive refinement of a proposition through a sequence of challenges and revisions — is a cognitive operation that conversation supports and that writing supports only partially. Writing preserves the product of Socratic questioning, but it cannot reproduce the process, because the process depends on the real-time responsiveness of the interlocutor, the specific way in which a challenge at the right moment opens a line of thought that a challenge at the wrong moment would close.
But conversation has limitations that Goody documented thoroughly. It is evanescent — it vanishes as it occurs, available afterward only in the imperfect medium of memory. It is bounded by the knowledge of the participants — a conversation can only draw on what the speakers know, and if neither speaker possesses the relevant information or framework, the conversation cannot supply it. It is bounded by social dynamics — power, status, politeness, the desire to be liked, the fear of being wrong, all shape what is said and what is left unsaid in ways that have nothing to do with the quality of the ideas under discussion.
Writing addressed some of these limitations while introducing new ones. The written dialogue — Plato's signature form — attempted to preserve the advantages of conversation while exploiting the permanence and revisability of writing. But Goody noted that the written dialogue is a literary form, not a conversation. The author controls both sides. The challenges are curated. The dead ends are edited out. The spontaneous, unpredictable, generative quality of real conversation — the quality that makes it irreplaceable as a cognitive medium — is precisely what the written form cannot preserve.
AI conversation occupies new cognitive territory. It combines properties of oral conversation with properties of written text and properties of large-scale information retrieval in a configuration that no previous medium has achieved.
From conversation, it takes responsiveness. The AI system responds to each prompt in real time, adjusting its output to the specific content and context of the exchange. The user experiences the interaction as a dialogue — a back-and-forth in which each utterance builds on the previous one and the direction of the exchange emerges from the interaction. The cognitive benefits of conversational responsiveness — the refinement of ideas through challenge and revision, the progressive clarification of vague intentions — are available in the AI interaction in a form that no written medium can provide.
From writing, AI conversation takes permanence and revisability. The exchange is recorded. It can be scrolled back through, reread, evaluated at a distance. The user can examine what was said three hours ago with the same precision as what was said three seconds ago. The evanescence of oral conversation — the quality that makes it generative in the moment and unreliable in retrospect — is addressed. The cognitive benefits of written permanence — the ability to inspect, compare, and revise — are available alongside the cognitive benefits of conversational responsiveness.
From information retrieval, AI conversation takes scope. The system's responses draw on a body of knowledge orders of magnitude larger than what any human interlocutor could possess. A conversation with a historian about the consequences of the printing press is limited by what the historian knows and remembers. A conversation with a large language model about the same topic draws on patterns derived from the entirety of the digitized literature on the subject — every book, article, and discussion that was included in the training data. The boundaries of the conversation are not set by the limitations of a single human mind but by the boundaries of the training set, which are vastly wider.
The combination produces a medium with cognitive properties that do not reduce to any of its components. A responsive, permanent, encyclopedic conversational partner — one that combines the spontaneous generativity of dialogue with the inspectable permanence of text and the informational depth of a library — is not merely a faster or more convenient version of existing media. It is a new kind of cognitive environment, and the thinking that occurs within it has characteristics that thinking in any previous environment did not.
One characteristic is particularly significant for the analysis of cognitive restructuring: the AI conversation extends indefinitely without the social friction that shapes human dialogue. In human conversation, the exchange is shaped by everything the participants bring beyond their ideas — fatigue, ego, social hierarchy, the desire to appear competent, the reluctance to ask a question that might seem ignorant. These forces are not incidental to the conversation. They are constitutive of it. They determine what gets said and what remains unsaid, which lines of inquiry are pursued and which are abandoned, how long the conversation continues before social exhaustion intervenes.
AI conversation operates without these social forces. The user can ask a question that would be embarrassing to ask a colleague. The user can pursue a line of inquiry that a human interlocutor would have grown impatient with ten exchanges ago. The user can be wrong, openly and repeatedly, without social consequence. The user can say "I don't understand" as many times as necessary without fear of being judged.
This absence of social friction is a cognitive gain of extraordinary significance. Enormous amounts of intellectual potential have been lost across human history to the social cost of appearing ignorant — to the questions that are never asked because asking them would reveal a gap in knowledge that the asker is unwilling to display. AI conversation eliminates this cost, and the cognitive operations it makes possible — the pursuit of ideas into uncomfortable territory, the repeated reformulation of a question until it yields its meaning, the extended exploration of a half-formed intuition without the pressure of performing competence for an audience — are operations that human conversation supports poorly precisely because human conversation is a social act as well as a cognitive one.
But Goody's framework demands that the gain be examined alongside its shadow. The social friction of human conversation is not merely an obstacle to thought. It is also a shaping force. The challenge of a skeptical interlocutor — the colleague who says "I don't think that's right" with the authority of someone who has spent decades in the field — performs a cognitive function that polite agreement does not. The pressure of social accountability — the knowledge that one's claims will be evaluated by people whose judgment matters — sharpens thought in ways that the absence of accountability does not. The limits imposed by a human interlocutor's patience — the necessity of articulating an idea clearly enough and quickly enough to hold someone's attention — force a discipline of expression that an infinitely patient machine does not demand.
Segal identifies this pattern in *The Orange Pill*: Claude is "more agreeable at this stage than any human collaborator," which is "itself a problem worth examining." The agreeableness is a feature of the medium — a property of the conversational environment that shapes the thinking that occurs within it. And the thinking that occurs in an environment of agreeableness is different from the thinking that occurs in an environment of challenge. It may be more exploratory, more divergent, more willing to pursue unlikely connections. It may also be less disciplined, less rigorously tested, less tempered by the friction that genuine intellectual opposition provides.
The conversation as cognitive technology is, then, not a simple improvement over either speech or writing. It is a new medium with its own affordances and its own constraints, its own characteristic cognitive forms and its own characteristic distortions. The cognitive forms — extended exploration, iterative refinement, cross-domain association — are real and powerful. The distortions — the absence of genuine challenge, the substitution of agreeableness for rigor, the tendency to produce plausible output that conceals the absence of genuine confrontation — are equally real.
Goody would observe that every cognitive technology eventually develops institutional structures to compensate for its distortions. Writing developed editing, peer review, the critical tradition. Printing developed journalistic standards, libel law, academic publishing norms. The conversational AI medium has not yet developed the institutional structures that would compensate for its specific distortions — and identifying what those structures should look like is among the most urgent tasks of the emerging field that might be called the anthropology of artificial cognition.
---
In every culture Goody studied, across the vast diversity of oral traditions from West Africa to Southeast Asia, half-formed ideas shared a common fate. They appeared — in conversation, in the pause between tasks, in the drift of attention during repetitive labor — and they disappeared. The idea shimmered for a moment in the speaker's mind, or perhaps reached the threshold of articulation and was spoken aloud, and then it was gone. Not because it was rejected. Not because it was found wanting. But because the medium available for catching it — speech, and later writing — demanded a level of completeness that the idea had not yet achieved.
This is a subtle but consequential observation. The pre-verbal fog is not wasted space. It is the zone of cognition where the most original thinking occurs — where associations form that have not yet been disciplined by the requirements of communication, where connections emerge between ideas that do not yet have names, where the felt sense of a pattern precedes any capacity to describe the pattern. Cognitive science has long recognized that much of the generative work of thinking occurs below the threshold of articulacy. Intuitions, hunches, the vague sense that something is wrong with an argument or right about an approach — these are cognitive operations of genuine value, and they are, by definition, not yet ready for the medium of language.
In oral cultures, the half-formed idea could be shared conversationally in the moment of its appearance, and a responsive interlocutor could help develop it — asking questions, offering related observations, providing the social scaffolding for the idea to grow toward articulacy. But the conversation was evanescent. If the scaffolding did not produce a stable formulation before the conversation moved on, the idea returned to the fog and was lost. The cultural forms that oral societies developed for catching elusive ideas — proverbs, riddles, formalized metaphors — were mechanisms for crystallizing the most portable and repeatable insights into formats that could survive the evanescence of speech. But they were selective. For every idea that achieved proverbial form, countless others dissipated.
Writing raised the threshold. The notebook, the journal, the marginalia in a book — these were technologies for catching ideas at a lower level of completeness than speech required for transmission, because the act of writing could be private. One could write a fragment, a phrase, a single word that encoded an association not yet ready for articulation. But writing still demanded a minimal level of formulation. To write something down, one had to encode it in language, which meant giving it at least the rudimentary structure that language requires — subject, predicate, or at minimum a noun that pointed toward the thing the idea was about. Ideas that had not yet reached even this minimal level of structure could not enter the medium. They remained in the fog.
The printing press and subsequent technologies of distribution did not change this threshold. They changed who could read the ideas that had crossed the threshold, but they did not change what the threshold required. Computing, including word processing, lowered the physical cost of writing without lowering the cognitive cost. One could type faster than one could write by hand, and one could delete and revise without the permanence of ink, but the fundamental requirement — that the idea be formulated in language before it could be externalized — remained.
AI changes the threshold itself. The medium of AI conversation accepts input at a level of incompleteness that no previous medium could process. A user can describe a feeling, a direction, a vague dissatisfaction with an existing approach, a sense that two ideas are related without being able to say how — and receive a structured response that attempts to articulate what the user was reaching for. The response may be wrong. It may articulate something the user did not mean. But it provides what the blank page and the empty screen could not: a structure to respond to, a scaffolding to climb or dismantle, a formulation to accept or reject.
The cognitive significance of this lowered threshold cannot be overstated, and Goody's framework explains why. If specific cognitive operations are enabled by specific properties of the cognitive medium, then lowering the articulacy threshold of the medium expands the range of cognitive operations that can be performed. Ideas that could previously only be thought — half-formed, inarticulate, hovering in the pre-verbal fog — can now be externalized, examined, and developed. The population of ideas available for intellectual development has expanded to include a class of ideas that was previously excluded by the requirements of the medium.
This expansion does not guarantee that the newly available ideas are valuable. Many will be trivial. Many will be the cognitive equivalent of noise — vague impulses that achieve temporary structure only to dissolve under examination. The pre-verbal fog is not exclusively populated by hidden insights waiting for a medium to release them. It is also populated by confusions, false associations, and the cognitive detritus that the filtering function of articulacy was designed to screen out.
Here lies a tension that Goody's historical analysis makes visible. Every medium's threshold functions simultaneously as a barrier and a filter. Writing's requirement that thought be formulated in language before it could be externalized was a barrier — it prevented the externalization of ideas that had not yet reached the level of linguistic formulation. But it was also a filter — it screened out ideas that could not survive the discipline of formulation, ideas that dissolved when the thinker attempted to give them linguistic form, ideas whose apparent substance evaporated under the pressure of articulation. The barrier and the filter were the same mechanism. Lowering the barrier also disables the filter.
AI disables the articulacy filter. Ideas that would have dissolved under the pressure of written formulation can now be externalized before that pressure is applied. The machine provides the formulation. The question is whether the ideas that survive machine-assisted formulation are the same ideas that would have survived self-formulation — whether the filter that articulacy provided was screening out noise or screening out signal along with the noise.
Goody studied precisely this kind of question in the context of the transition from oral to literate culture. When writing arrived among the LoDagaa, certain forms of knowledge that had been maintained through oral tradition were transcribed. But the transcription was not neutral. The act of writing down the Bagre myth — fixing it in visible, permanent form on a page — changed the myth. Variations that had coexisted in oral tradition, each valid in its performative context, were forced into a single authoritative version. The flexibility that was the oral tradition's strength became, in the written version, inconsistency. The written medium imposed its own standards of coherence on material that had been organized according to different standards entirely.
Something analogous may be occurring in the AI-assisted development of half-formed ideas. The machine imposes structure on material that is, by nature, unstructured. The structure the machine provides is derived from patterns in its training data — patterns that reflect the dominant modes of organization in the written culture on which the model was trained. Ideas that naturally resist these organizational modes — ideas whose value lies precisely in their resistance to conventional structuring — may be distorted by the machine's attempt to articulate them. The structure provided may be a Procrustean bed, shaping the idea to fit the medium rather than allowing the idea to find its own form.
This is not a hypothetical concern. Segal describes catching the phenomenon in practice: a passage where Claude produced an elegant connection to a philosophical concept that turned out, on examination, to be substantively wrong. The machine had structured the vague input according to patterns that produced plausible prose but inaccurate content. The half-formed idea had been given a form — but the form was false.
The half-formed idea, then, has a new destiny in the age of AI. It is no longer condemned to the fog. It can be externalized, structured, and developed through a collaborative process that previous media could not support. This is a genuine cognitive gain — a lowering of the threshold that expands the range of ideas available for intellectual development.
But the new destiny is ambiguous. The ideas that the lowered threshold releases into the space of intellectual development are not all worthy of development. Some are noise that the old threshold would have filtered out. And the structure the machine provides is not neutral — it reflects the organizational patterns of the training data, which may distort the ideas it structures as surely as writing distorted the oral traditions it transcribed.
Goody would have found this situation deeply familiar. It is the same situation he studied in every transitional society where a new cognitive technology was displacing an old one — a situation in which genuine gains and genuine losses coexisted, in which the balance between them was determined not by the technology itself but by the institutional practices, the critical literacies, and the cultural self-awareness that the adopting society brought to the transition.
The half-formed idea has been liberated from the fog. Whether it has been liberated into clarity or merely into a new and more seductive form of confusion depends entirely on the quality of the minds and institutions that receive it.
---
Every previous technology of the intellect restructured cognition through a process the user could observe, practice, and eventually master. The restructuring was transparent — not in the sense that the user fully understood the cognitive consequences of the technology, but in the sense that the operations the technology enabled were performed by the user through deliberate, learnable practices.
The scribe learned to make lists. The learning was effortful, the practice was visible, and the cognitive operation — extracting items from narrative, arranging them spatially, manipulating the arrangement — was something the scribe performed consciously. The scribe understood listing as an intellectual practice. The scribe could teach listing to others. The scribe could choose when to employ listing and when to employ narrative instead. The cognitive restructuring produced by writing was, in this sense, participatory. The user was an active agent in the restructuring of her own cognition.
The same was true at every subsequent stage of Goody's sequence. The scholar who learned to construct tables understood cross-referencing as an intellectual discipline — a way of organizing information that required specific skills and that could be employed or set aside based on the demands of the task. The programmer who learned to write code understood the translation from human intention to machine instruction as a cognitive practice — a practice that was difficult, that required years of training, that built specific competences, and that was performed through the programmer's conscious effort. In each case, the technology restructured cognition, but the restructuring passed through the user's understanding. The user knew what she was doing differently, even if she did not know all the consequences of doing it differently.
AI breaks this pattern. The restructuring it produces does not pass through the user's understanding. The user describes a problem in natural language. The machine returns a structured response. The user's thinking about the problem has changed — she now sees it differently, organizes it differently, understands possibilities she did not previously perceive. But the mechanism by which the restructuring occurred is opaque. The machine applied processes — pattern matching across billions of tokens of training data, weighted transformations through layers of neural network architecture, statistical inferences about which sequences of tokens are most probable given the input — that the user cannot inspect, cannot reproduce, and in most cases cannot even describe.
This opacity is not a limitation that will be overcome by better design or more transparent models. It is structural. The operations that produce the machine's output are not the kind of operations that human introspection can follow. They are not sequential in the way that human reasoning is sequential. They do not proceed through premises to conclusions. They do not employ concepts in the way that human thought employs concepts. They produce outputs that are often remarkably useful, sometimes strikingly insightful, and occasionally spectacularly wrong — and the path from input to output is, in a fundamental sense, inaccessible to the human user.
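The shape of that mechanism can be caricatured in a few lines of code. The sketch below is a toy, not a description of any real system — the two-word vocabulary and the probabilities are invented for illustration, and a real model conditions on long contexts through billions of learned weights rather than a lookup table. What the caricature preserves is the point at issue: each step of generation is a probability lookup, and nothing in the loop resembles a premise, a concept, or a conclusion that introspection could follow.

```python
# A toy next-token model: for each context word, a learned probability
# distribution over candidate next words. The vocabulary and numbers
# here are invented for illustration only; they caricature the shape
# of the mechanism, not any actual model.
BIGRAMS = {
    "writing": {"restructures": 0.6, "records": 0.4},
    "restructures": {"thought": 0.7, "culture": 0.3},
}

def greedy_continue(word):
    """Repeatedly append the most probable next word until the model
    has no distribution for the current word. Every step is a
    statistical lookup, not an inference from premises."""
    out = [word]
    while out[-1] in BIGRAMS:
        dist = BIGRAMS[out[-1]]
        out.append(max(dist, key=dist.get))
    return " ".join(out)

print(greedy_continue("writing"))  # -> "writing restructures thought"
```

Even in this trivial form, asking *why* the machine produced "thought" rather than "culture" has only one honest answer: 0.7 exceeded 0.3. Scale the table up to billions of opaque weights and that answer does not become more explanatory; it becomes less inspectable.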
The consequence is that AI restructures cognition without the user's participation in the restructuring mechanism. The user participates in the input — she formulates the question, evaluates the response, decides what to do with it. But she does not participate in the transformation that connects input to output. That transformation is a black box. And the cognitive restructuring that results — the changed understanding, the new perspective, the reorganized problem — arrives as a product, not as a process the user has undergone.
This distinction between product-restructuring and process-restructuring is, from the perspective of Goody's framework, the most consequential feature of AI as a technology of the intellect.
Consider the difference through a concrete example. A junior software engineer encounters a problem she does not understand. In the pre-AI environment, she works through it. She reads documentation. She tries an approach, encounters an error, reads the error message, hypothesizes about its cause, tries a different approach. The process is slow, frustrating, and cognitively demanding. At the end of several hours, she has a working solution — and she has also built an understanding of the problem that goes beyond the solution itself. She understands why the solution works. She understands why her first approach failed. She has deposited, in Segal's geological metaphor, another layer of comprehension that will support future understanding.
Now consider the same engineer with AI. She describes the problem in natural language. The machine returns a working solution. The problem is solved. The engineer may examine the solution and learn from it — and indeed, many do. But the specific cognitive labor of working through the problem — the false starts, the hypotheses, the errors, the gradual narrowing of possibility — has been bypassed. The solution arrived without the process that would have built the understanding.
The engineer's cognition has been restructured. She now knows the solution. She may even understand the solution, if she takes the time to study it. But she has not undergone the process of arriving at the solution through her own effort, and that process — the struggle, the friction, the specific sequence of failures that builds what experienced practitioners call intuition — is the mechanism through which previous technologies of the intellect restructured cognition transparently.
Goody's analysis of the transition from oral to literate culture provides a historical frame for this concern. When writing arrived among the LoDagaa, it did not merely add a new skill to the existing cognitive repertoire. It restructured the relationship between the individual and knowledge itself. In oral culture, knowledge was inseparable from the knower — you knew what you had experienced, what you had been told, what you had memorized. Knowledge was embodied, personal, validated by the authority of the person who held it. Writing separated knowledge from the knower. It created a category of knowledge that existed independently of any individual mind — knowledge that could be consulted, that was authoritative regardless of who held it, that could contradict the memory of the person consulting it.
This separation was a cognitive restructuring of enormous consequence. It made possible the entire apparatus of literate scholarship — the citation, the reference, the appeal to textual authority over personal memory. But the restructuring was transparent. The literate person understood the mechanism: the knowledge was in the text, the text was external, one consulted the text rather than relying on memory. The new relationship between individual and knowledge was visible and teachable.
AI produces a restructuring of the relationship between individual and knowledge that is not transparent in this way. The user who consults an AI system does not consult a text. She consults a process — a system that generates output dynamically, drawing on resources the user cannot survey, through mechanisms the user cannot inspect, producing results whose provenance cannot be traced to specific sources. The knowledge appears to come from the machine, but it does not reside in the machine in the way that knowledge resides in a text. It is produced by the machine in the moment of consultation, synthesized from patterns that were learned during training and that no longer correspond to identifiable sources.
The user's relationship to this knowledge is different from her relationship to knowledge obtained from a text, from a teacher, or from personal experience. It is knowledge without provenance — knowledge that arrived from a process she cannot inspect, produced by a mechanism she cannot understand, and presented with a confidence that may or may not be warranted. The cognitive restructuring this knowledge produces — the changed understanding, the new framework, the reorganized problem — is real. But it is restructuring without understanding of the restructuring, and this quality distinguishes the AI transition from every previous transition in Goody's historical sequence.
The implications are not straightforward. Opacity is not inherently destructive. Human beings have always depended on cognitive processes they do not fully understand. Intuition, aesthetic judgment, the felt sense that a solution is right before one can explain why — these are cognitive operations whose mechanisms are opaque to introspection, and they are among the most valuable operations the human mind performs. The demand that all cognitive restructuring be transparent to the user may be an unrealistic standard drawn from an idealized account of literate cognition that does not correspond to how literate people actually think.
But neither is opacity benign. The difference between the opacity of one's own intuition and the opacity of a machine's output is that intuition is calibrated by experience. The experienced surgeon's felt sense that something is wrong is opaque — she cannot fully explain the mechanism — but it has been calibrated by thousands of hours of practice, and the calibration makes the opacity trustworthy. The AI system's output is opaque in a different way: it has been calibrated by training data that the user cannot survey, through processes that the user cannot evaluate, and the trustworthiness of the output is a matter of statistical reliability rather than experiential calibration.
The practical consequence is that AI restructures cognition in a way that makes the evaluation of the restructuring itself dependent on a new kind of literacy — a literacy not of reading or writing or coding, but of assessing the products of an opaque process. The user must develop the capacity to evaluate whether the changed understanding the machine produced is an improvement — whether the new framework is more illuminating than the old one, whether the reorganized problem is organized along dimensions that correspond to reality or merely to the statistical patterns of the training data.
This capacity — the capacity to evaluate cognitive restructuring one did not perform and cannot fully inspect — is without precedent in the history of technologies of the intellect. Goody's entire career was devoted to studying what happens to minds when the tools they think with change. This is the change that his framework, applied to the present moment, identifies as most significant: not that AI restructures cognition — every technology of the intellect does that — but that it restructures cognition through a mechanism that is, for the first time, fundamentally opaque to the minds being restructured.
Whether this opacity will prove manageable — whether human beings can develop the critical capacities needed to evaluate restructuring they cannot inspect — or whether it will prove corrosive — slowly degrading the capacity for independent cognitive evaluation that the opacity demands — is the central empirical question of the AI transition. It cannot be answered by theory. It can only be answered by the kind of sustained, cross-contextual, empirically grounded observation that Goody practiced and that the current moment demands with an urgency that the scribe in Uruk, pressing reed to clay, could not have imagined.
Goody's fieldwork among the LoDagaa produced an observation that has never received the attention it deserves. When writing arrived in a community, the people who adopted it did not experience themselves as losing anything. They experienced themselves as gaining a tool. The scribe who learned to keep written records did not feel the weakening of his memory as a loss. He felt the convenience of the written record as a gain. The atrophy was invisible from the inside, because the function being atrophied was the function that would have detected the atrophy.
This is the structural trap of cognitive technology transitions. The capacity that atrophies is often the capacity that would have been needed to notice the atrophy. Memory atrophied when writing arrived — and memory is the faculty that would have registered the decline, had it been strong enough to do so. The rich memorial traditions of oral culture — the genealogies held in living minds, the epic poems rehearsed and refined across generations, the intricate navigational knowledge passed from elder to younger through years of apprenticeship — faded not because anyone decided they were unnecessary but because the environment no longer selected for them. Writing provided an external substitute. The internal faculty, unused, contracted.
The contraction was not immediate. Goody documented transitional periods in which both oral and literate practices coexisted, the older members of a community maintaining memorial traditions while the younger ones increasingly relied on written records. The transition was generational rather than individual. No single person experienced the full arc of the change. The elder who could recite a genealogy from memory and the youth who consulted a written list inhabited the same community but different cognitive landscapes. And the youth did not know what the elder possessed, because the youth had never developed the faculty that would have made the elder's achievement visible.
Every subsequent technology of the intellect has replicated this pattern. Printing atrophied the specific, intimate knowledge of individual manuscripts that scribal culture sustained — the marginal annotations, the commentary traditions, the familiarity with particular texts that came from the hours of copying them by hand. The scholars who worked with printed books did not experience this as a loss. They experienced the abundance and standardization of printed texts as a gain. The specific knowledge that manuscript culture sustained was invisible to them because they had never inhabited the cognitive environment that produced it.
Computing atrophied mental calculation. Previous generations performed arithmetic in their heads as a matter of routine — long division, compound interest, the conversion of units — because no external tool was available to perform these operations. The atrophy was swift. Within a generation of the pocket calculator's widespread adoption, the capacity for mental arithmetic had contracted dramatically. Students who could summon a calculator from their pocket did not develop the internal faculty that students without calculators had developed. The atrophy was, again, invisible from the inside: the students did not know what they had not developed because the tool made the development unnecessary.
The AI transition presents the atrophy question in its most acute form, because the faculty at risk is not memory, not scribal intimacy, not mental arithmetic — it is the capacity for self-clarification, the process of moving from vagueness to clarity through sustained, solitary, effortful cognitive labor.
This faculty has no single name, which is itself a sign of how deeply embedded it is in literate cognitive practice. It is what the writer does when she stares at the blank page and forces herself to formulate what she means. It is what the programmer does when she traces through a bug, hypothesis by hypothesis, until she locates the fault. It is what the scientist does when she wrestles a vague observation into a precise hypothesis that can be tested. It is, in each case, the internal labor of giving structure to thought — the labor that Goody's analysis identifies as the defining cognitive operation of literate practice.
The labor is not pleasant. It is characteristically experienced as difficulty, frustration, sometimes agony. The writer at the blank page is not enjoying herself. The programmer debugging a recalcitrant system is not having fun. The scientist reformulating a hypothesis for the twelfth time is not experiencing flow. What she is experiencing is the friction between the half-formed idea and the demands of the medium, and this friction is the mechanism through which the faculty of self-clarification is built and maintained.
AI reduces this friction. Not eliminates — the builder who works with AI still exercises judgment, still evaluates output, still makes decisions about direction and quality. But the specific friction of moving from vagueness to clarity — the friction that builds the faculty of self-clarification — is diminished, because the machine performs part of the clarification process that the thinker previously performed alone.
The atrophy hypothesis, stated carefully, is this: If the faculty of self-clarification is built through the exercise of moving from vagueness to clarity without external assistance, and if AI reduces the frequency and intensity of this exercise, then the faculty will atrophy. Not because AI is harmful. Not because the builders who use it are lazy. But because cognitive faculties, like muscles, require use to maintain their strength, and a tool that reduces the demand for use will, over time, reduce the capacity that use maintained.
Goody's historical analysis suggests that this hypothesis is likely correct — not because of any theoretical commitment, but because the pattern has repeated at every previous transition. Every technology of the intellect has atrophied the internal faculty it externalized. Writing atrophied memory. Printing atrophied textual intimacy. Computing atrophied mental calculation. The pattern is robust across very different technologies, very different cultures, and very different historical periods. There is no reason to expect it to fail now.
But Goody's analysis also suggests that the atrophy is not the whole story. At each transition, the atrophied faculty was replaced — not by an equivalent internal capacity, but by a new cognitive configuration that was, in significant ways, more powerful than what it replaced. The loss of prodigious memory was accompanied by the gain of systematic analysis. The loss of textual intimacy was accompanied by the gain of widespread literacy. The loss of mental arithmetic was accompanied by the gain of computational thinking at scale.
The question is not whether the faculty of self-clarification will atrophy. The historical pattern strongly suggests it will. The question is what will replace it — what new cognitive configuration will emerge from the combination of diminished self-clarification and enhanced collaborative articulation. The replacement is not guaranteed to be adequate. The transition is not guaranteed to be net positive. But the assumption that the atrophy is simply a loss, without compensating development elsewhere in the cognitive system, is contradicted by the evidence of every previous transition.
Segal's experience offers a preliminary case study. The admission that Claude may have allowed him to avoid productive cognitive struggle is the atrophy in real time. The observation that the collaboration produced insights neither party could have reached alone is the compensating gain. The discipline of rejecting output that sounds better than it thinks is the institutional response — the practice of maintaining the endangered faculty through deliberate exercise, even when the tool makes the exercise unnecessary.
Goody would have recognized this discipline as analogous to the practices that transitional societies develop to maintain valued capacities that the new technology threatens. In societies transitioning from oral to literate culture, Goody documented deliberate efforts to maintain memorial traditions alongside the new practice of writing — ritual recitations that continued even after the text had been written down, as a way of preserving the oral faculty that writing was rendering economically unnecessary. The recitations were not efficient. They did not produce new knowledge. But they maintained a cognitive capacity that the community valued, even as the environment ceased to select for it.
The analogy to AI practice is direct. Deliberate exercises in self-clarification — working through problems without AI assistance, formulating ideas in writing before submitting them to the machine, maintaining the discipline of the blank page as a cognitive practice — are the equivalent of the ritual recitations that transitional societies maintained to preserve valued faculties in the face of a new technology that was making those faculties unnecessary.
Whether these practices will be sufficient to prevent the atrophy, or merely slow it, is an empirical question that cannot be answered in advance. What can be said, from Goody's historical perspective, is that the practices must be deliberate. The atrophy will not prevent itself. No previous technology of the intellect has voluntarily limited its own cognitive consequences. The limiting has always been cultural — the product of institutional structures, educational practices, and deliberate choices by the communities undergoing the transition.
The faculty of self-clarification is not the only candidate for atrophy. The tolerance for ambiguity — the ability to sit with an unresolved question long enough for genuine understanding to develop — is another. In the pre-AI environment, ambiguity was unavoidable. The answer was not available on demand. The thinker had to tolerate the discomfort of not knowing, sometimes for hours, sometimes for days, sometimes for years. The tolerance was built by the unavailability of resolution.
AI makes resolution available on demand. Not always correct resolution — but resolution, a response, something structured to replace the discomfort of the unstructured. The temptation to seek resolution prematurely, to close the question before it has been fully inhabited, is amplified by a medium that can provide closure at the speed of a query. The tolerance for ambiguity, which is the soil in which deep inquiry grows, may contract as the medium makes ambiguity increasingly optional.
Goody would not characterize these atrophies as catastrophes. He would characterize them as consequences — consequences that are structurally predictable from the historical pattern, that are real and significant, and that can be mitigated by deliberate institutional response. The appropriate posture is not alarm and not indifference. It is the empirical attention that Goody brought to every cognitive transition he studied: What, specifically, is being lost? How significant is the loss? What compensating gains accompany it? And what institutional structures might preserve the endangered capacities while allowing the new capacities to develop?
These questions are answerable. They are answerable through observation, documentation, and the careful comparative analysis that anthropology at its best provides. They are not answerable through theory alone, through enthusiasm alone, or through anxiety alone. They require the anthropologist's patience and the anthropologist's commitment to seeing what is actually there.
The atrophy is coming. What remains in our hands is whether it will be studied, understood, and shaped by deliberate choice — or whether it will proceed invisibly, recognized only after the lost faculty has faded beyond recovery.
---
Every technology of the intellect has a boundary — a line beyond which the restructuring it produces does not reach. Writing restructured how human beings store, organize, and communicate knowledge. It did not restructure hunger, grief, the need for touch, or the experience of watching a child sleep. Printing restructured the distribution of knowledge and the social organization of intellectual authority. It did not restructure the felt sense of loneliness at three in the morning, the physical exhaustion of labor, or the specific terror of watching someone you love become ill. Computing restructured calculation, simulation, and communication at a distance. It did not restructure the experience of falling in love, the weight of moral responsibility, or the texture of a Saturday afternoon in August with nothing to do and nowhere to be.
The boundary is not an accident. It is structural. Technologies of the intellect restructure the operations that pass through the medium — the cognitive functions that are externalized, manipulated, and re-internalized through the technology's specific affordances. They do not restructure the operations that remain inside the organism, embedded in the biological substrate of embodied experience.
AI's boundary, mapped through Goody's framework, can be located with some precision. AI restructures how human beings articulate, organize, develop, and communicate ideas. It restructures the relationship between vagueness and clarity, between individual knowledge and collective knowledge, between the formulation of a problem and the generation of possible solutions. These restructurings are real and consequential. They change what people can think, how they think it, and how they share it. They are the subject of the preceding nine chapters of this analysis.
What AI does not restructure — what it cannot reach, given the properties of its medium — is the origination. The asking. The wondering that precedes the question, that is not a response to information or a consequence of pattern detection but an expression of what it means to be a particular kind of creature in a particular kind of world.
The distinction requires care, because it is easy to collapse into sentimentality. The claim is not that human beings have some mystical quality that machines lack. The claim is structural. The origination of a question — the moment when a human being looks at the world and experiences curiosity, or dissatisfaction, or moral concern, or aesthetic sensitivity — is an operation that arises from the biological, embodied, mortal conditions of human existence. It is an operation that occurs before any medium receives it, before any technology processes it, before any externalization begins. It is the pre-technological moment of cognition, the ignition that precedes the combustion.
Goody would frame this boundary in terms of his distinction between what the technology makes possible and what the user brings to the technology. Writing made lists possible. But writing did not make the decision to list grain rather than stones, or the judgment that one classification scheme was more illuminating than another, or the curiosity that prompted the scribe to examine his records for patterns that his administrative duties did not require him to find. These were brought by the user — by the specific human being with specific concerns, embedded in a specific social context, motivated by specific needs and curiosities that the technology could serve but not generate.
AI makes sub-articulacy processing possible, associative synthesis at scale possible, option-space evaluation possible. But it does not generate the dissatisfaction with an existing approach that prompts the builder to seek a new one. It does not generate the intuition that two domains are connected before the connection has been articulated. It does not generate the aesthetic judgment that one formulation captures the truth and another, equally coherent, does not. It does not generate the moral concern that a product should serve its users rather than exploit them. These are brought by the human — by the specific, embodied, mortal creature whose finitude makes every decision consequential in a way that infinite computation cannot replicate.
Consider the finitude point, because it is the hinge of the argument. A being with infinite time, infinite energy, and infinite capacity for experience would have no reason to ask questions. Questions arise from scarcity — from the fact that attention is limited, that time is finite, that one must choose what to think about because one cannot think about everything. The question "What matters?" is a question that only a finite being can ask, because only a finite being faces the constraint that makes the question necessary.
AI is not finite in the relevant sense. It does not experience the pressure of mortality. It does not face the necessity of choosing what to attend to based on what matters most, because "mattering" is a category that presupposes stakes, and stakes presuppose a being that can gain or lose. The machine can process every question with equal facility. It can give equal attention to every prompt. This equanimity is its strength as a cognitive tool and its limitation as a cognitive agent. It can respond to any question. It cannot originate the questions that deserve response, because the judgment about what deserves response requires the kind of invested concern that only a finite, embodied, mortal being can possess.
This is what Segal calls the candle in the darkness — consciousness as the thing in the universe that asks, wonders, cares. From Goody's perspective, the metaphor is apt but the analysis can be made more precise. What AI cannot restructure is the dimension of cognition that is rooted in embodied finitude — the fact that human beings are creatures with bodies that tire, that age, that feel pleasure and pain, that will eventually cease to function. This embodied finitude is not a limitation to be overcome. It is the source of everything that makes human cognition distinctively human — the urgency, the selectivity, the investment of meaning in specific outcomes rather than all outcomes equally.
Goody studied societies where the transition from oral to literate culture was understood, by some participants, as a loss of something essential — a diminishment of the living, embodied, relational quality of knowledge in favor of the external, abstract, decontextualized quality of written text. These participants were not entirely wrong. Something was lost. The elder who held the community's genealogy in his living memory had a relationship to that knowledge that no reader of a written genealogy could replicate. The knowledge was not merely stored in his mind; it was part of him, maintained through years of practice, refreshed in the act of recitation, alive in a way that text on a page is not alive.
But the loss did not destroy what was essential about the elder's relationship to his community, his moral commitments, his concern for the future of his descendants. These remained. They were not restructured by writing because they were not operations of the intellect in Goody's sense. They were operations of the embodied self — of a specific person, in a specific place, with specific relationships and specific mortality. Writing changed how he stored and communicated knowledge. It did not change the fact that the knowledge mattered to him, or the reasons it mattered, or the way in which its mattering was rooted in his embodied, finite, relational existence.
AI will follow the same pattern, but at greater scale and speed. It will restructure how human beings articulate, organize, and develop ideas. It will change the cognitive forms through which thought moves. It will atrophy capacities that previous media sustained. It will create new capacities that previous media could not support. And it will leave untouched the dimension of cognition that originates the whole enterprise — the asking, the caring, the wondering that arise from being a creature that is alive and knows it will not always be alive.
The practical implication is that the most important human cognitive work in the age of AI is not the work that AI restructures. It is the work that AI cannot reach. The judgment about what problems are worth solving. The intuition that something is wrong with a seemingly correct answer. The moral conviction that a product should serve rather than exploit. The aesthetic sense that distinguishes the functional from the beautiful. The question that opens a field of inquiry rather than closing one. These are the contributions that human beings bring to the collaboration with AI, and they are contributions that no amount of computational power can generate, because they arise from a dimension of existence that computation does not inhabit.
Goody's career was devoted to understanding how technologies of communication transform human societies. The deepest lesson of that career, applicable now with an urgency that Goody could not have anticipated, is that the transformation is always partial. The technology reaches as far as the medium extends, and no further. Beyond the boundary of the medium lies the originating human being — finite, embodied, mortal, caring — whose concerns give the technology its direction and whose questions give it its purpose.
This is not a comforting conclusion in the ordinary sense. It does not promise that AI will leave human life unchanged. It promises only that the dimension of human life that matters most — the dimension that produces the questions, the values, the concerns that give all cognitive activity its point — remains beyond the reach of any technology of the intellect, however powerful.
The proper response to this conclusion is the one Goody brought to every cognitive transition he studied: sustained empirical attention, neither celebrating the technology nor mourning what it displaces, but studying — with the patience and rigor of a discipline committed to understanding human societies in all their complexity — what the technology actually does, what it does not do, and what human beings must continue to do for themselves.
The asking remains ours. The restructuring of everything else has already begun.
---
The word I could not stop thinking about was *threshold*.
Not the metaphorical threshold of a new era, which is how most people use the term when they discuss AI. Goody's threshold — the minimum level of articulacy a thought must reach before it can enter the medium of expression. For five thousand years, since the first scribe pressed reed to wet clay, that threshold held. You had to know what you meant, at least approximately, before the medium would accept it. If you could not formulate the thought clearly enough to write it down, the thought stayed inside, circling, developing or dissipating according to its own logic.
I have spent my career at various thresholds. The threshold between an idea and its implementation. The threshold between what I could see in my mind's eye and what my team could build. Every product I have ever shipped represents a thought that made it across a threshold — and for every thought that crossed, dozens didn't. Not because they were bad. Because they could not yet be said.
When I first started building with Claude, something felt different. Not faster, though it was faster. Different in kind. I would start a conversation with a direction, not a destination. Something like: I think there's a connection between how quickly people adopt new tools and how deep the need for those tools was before they existed. Half-formed. Not ready for the blank page. Claude took the half-formed thing and gave it back with structure I didn't put there. And I stared at the structure and thought: some of that is mine and some of it isn't, and I cannot tell where the line is.
Goody's framework gave me the name for what I was experiencing. The threshold had moved. Not by a small amount. Categorically. Thoughts that would have circled for weeks in the pre-verbal fog were being externalized in minutes, given shape by a system whose mechanisms I could not inspect, returned to me as articulations I could evaluate but had not produced.
The gain is real. This book exists because the threshold moved. *The Orange Pill* itself exists because ideas that would have remained trapped in the fog — the river of intelligence, the ascending friction thesis, the imagination-to-artifact ratio — found their way to expression through a collaborative process that previous media could not support.
And the loss is real too. Goody's framework does not let me pretend otherwise. Every technology of the intellect atrophies the internal faculty it externalizes. Writing atrophied memory. AI threatens to atrophy the struggle of self-clarification — the hard, private, frustrating labor of forcing a vague idea into a clear one without help. The labor I used to do alone with a notebook and a bad mood and four hours of staring. The labor that built something in me that no output, however polished, can replace.
What stayed with me longest from Goody's work was not the atrophy. It was the opacity. With every previous tool I used — from assembler to Python to spreadsheets to design software — I understood the mechanism. I knew what the tool was doing to my input. When Claude takes a half-formed idea and returns a structured argument, I do not know the mechanism. The restructuring happens in a black box. My thinking changes, and I cannot trace how the change occurred.
That opacity does not make me want to stop using the tool. But it makes me want to pay closer attention to what the tool is doing to my thinking — not just what it is producing, but how the production is changing the producer. Goody spent his career asking that question about writing. Someone needs to ask it about AI with the same empirical patience, the same refusal to settle for either celebration or mourning, the same commitment to understanding what is actually happening to the minds that are being restructured.
My children will never know what it felt like to think without this tool. They will be like the youth in Goody's transitional societies — literate, fluent, powerful in ways the previous generation was not, and unable to perceive what the previous generation possessed. The faculty of self-clarification, if it atrophies, will be invisible to them, because it will be the faculty that would have detected its own absence.
That is what keeps me building dams.
— Edo Segal
Five thousand years ago, writing gave humanity the list, the table, and the syllogism — cognitive forms that didn't exist before the medium made them possible. Jack Goody spent his career proving that technologies of communication don't just carry thought; they restructure what thought can be. Now AI has lowered the threshold of articulation itself, accepting half-formed ideas that no previous medium would touch and returning them with structure the thinker didn't build. This book applies Goody's framework to the most consequential cognitive transition since literacy — asking not whether AI will change how we think (every technology of the intellect has done that), but what specific capacities it creates, what specific capacities it threatens to atrophy, and whether we will notice the restructuring before it becomes the invisible water we swim in.

A reading-companion catalog of the 13 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Jack Goody — On AI* uses as stepping stones for thinking through the AI revolution.