By Edo Segal
The callus on my right index finger disappeared sometime around 2019.
I noticed it the way you notice a missing tooth — not when it goes, but later, when your tongue finds the gap. For decades that callus was there, built up from gripping pens, from holding soldering irons, from the particular way I used to brace my hand against a keyboard when I was deep in code. It was a small, unremarkable piece of my body that recorded something true: that I had been a person who made things with his hands.
I do not make things with my hands anymore. I make things with my words. I describe what I want to Claude, and Claude produces it, and the production is extraordinary, and the callus is gone, and I had not thought about what the callus knew until I encountered Tim Ingold.
Ingold is a British anthropologist who has spent four decades watching people make things — potters, weavers, hunters, builders, walkers — and asking a question so simple it sounds naive: What kind of knowledge do the hands produce that the mind alone cannot reach? His answer, built from ethnographic observation across cultures and continents, is that the hands do not merely execute what the mind conceives. The hands think. They discover. They negotiate with resistant material in ways that produce understanding unavailable through any other channel.
This matters for The Orange Pill because it challenges the architecture on which the entire AI productivity thesis rests. The thesis says: the human conceives, the machine executes, and nothing of intelligence is lost. Ingold's research says: intelligence was never only in the conception. It was distributed across the entire act of making — in the friction, in the resistance, in the material pushing back. Automate the making, and you do not merely remove labor. You remove a form of thought.
I needed this lens. My ascending friction argument — that AI relocates difficulty upward rather than eliminating it — still holds at the organizational level. But Ingold forced me to ask whether certain kinds of knowledge can ascend at all, or whether they are permanently rooted in the encounter between a body and the world.
The answer is uncomfortable. It does not invalidate the tools I use or the future I am building toward. But it insists that something real lives in the friction I have been so eager to eliminate — something that no prompt, however sophisticated, can deposit in the hands that have stopped touching the material.
The callus knew something. This book is my attempt to understand what.
— Edo Segal × Opus 4.6
Tim Ingold (1948–) is a British social anthropologist widely regarded as one of the most influential thinkers in contemporary anthropology and the philosophy of making. Born in Bournemouth, England, he studied social anthropology at the University of Cambridge before conducting extensive fieldwork among the Skolt Sámi reindeer herders of northeastern Finland, an experience that fundamentally shaped his understanding of skill, environment, and human-animal relations. After a long early career at the University of Manchester, he moved in 1999 to the University of Aberdeen, where he served as Chair of Social Anthropology and founded the interdisciplinary Knowledge, Learning and Practice research group.

His major works include *The Perception of the Environment* (2000), *Lines: A Brief History* (2007), *Being Alive* (2011), *Making* (2013), *Correspondences* (2021), and *The Rise and Fall of Generation Now* (2024).

Ingold developed several concepts that have reshaped debates across anthropology, design, architecture, and education — among them the distinction between the network and the meshwork, the critique of the hylomorphic model of making, the theory of educated attention, and the notion of correspondence as mutual becoming between maker and material. His insistence that intelligence is grounded in the perception and action of living beings moving through their environments has made his work a touchstone for scholars questioning the epistemological assumptions underlying artificial intelligence. He was elected a Fellow of the British Academy in 1997 and awarded the Royal Anthropological Institute's Huxley Memorial Medal in 2021.
Since Aristotle, the dominant Western account of how things get made has followed a deceptively simple script. First, the maker conceives a form in the mind. Then, the maker imposes that form onto passive matter. The architect draws the building. The builder pours the concrete. The designer sketches the chair. The carpenter cuts the wood. Intelligence lives upstream, in the conception. Execution lives downstream, in the hands. The matter itself contributes nothing to the outcome except its willingness to receive the shape that mind has ordained.
This scheme came to be called hylomorphism — from the Greek *hyle*, matter, and *morphe*, form — a term coined centuries later to name Aristotle's account. The term is technical. The assumption it names is not. It is so deeply embedded in how modern societies think about production, creativity, and intelligence that most people have never noticed it operating. It is the water in the fishbowl. It is the grammar of every project management tool, every product specification document, every job description that distinguishes "design" from "implementation" and pays them differently. The designer conceives. The developer executes. The strategist envisions. The team builds. Mind commands. Matter obeys.
Tim Ingold has spent four decades demonstrating that this account is empirically false. Not merely incomplete. Not merely oversimplified. False in a way that distorts our understanding of what making actually involves, what kind of intelligence it requires, and what is lost when the making is delegated to a system that has never touched the material.
Ingold's argument begins not with philosophy but with observation. In fieldwork conducted across decades — among reindeer herders in Finnish Lapland, builders in the Scottish Highlands, weavers, potters, knotters, and walkers in environments spanning the subarctic to the tropics — he watched skilled practitioners at work. What he saw did not match the hylomorphic script. The practitioners were not executing preconceived designs. They were negotiating. They were following the grain of the wood, the moisture of the clay, the tension of the thread, the resistance of the earth. The form they arrived at was not the form they had planned. It was the form that emerged from a sustained encounter between their intentions and the material's behavior.
The potter provides the clearest illustration because the activity is legible even to those who have never worked with clay. The hylomorphic account says: the potter conceives a bowl in her mind, then shapes the clay to match the conception. Ingold's observation says: the potter sits at the wheel with an intention — a direction, not a destination. She begins to work the clay. The clay responds. It is wetter than she expected, or drier, or unevenly mixed. The wall thins at a point she did not anticipate. She adjusts. The adjustment produces a new contour she had not planned but recognizes as right. The bowl that emerges is not the reproduction of a mental image. It is the outcome of a conversation, conducted through the hands, between the potter's skill and the clay's properties. The intelligence is distributed across the entire process. It is not concentrated in the conception.
This matters for the AI moment in a way that has gone almost entirely unremarked. The entire architecture of AI-assisted production — from the prompt that initiates the work to the output that completes it — is built on the hylomorphic model. The human conceives the form. The machine executes it. The human writes the prompt: "Build me a CRM system with these features." The machine produces the code. The human evaluates the output. The machine adjusts.
The Orange Pill celebrates this architecture as a liberation. Edo Segal describes the collapse of the imagination-to-artifact ratio — the distance between a human idea and its realization — as the signature achievement of the AI moment. When the ratio approaches zero, anyone with an idea and the will to pursue it can make something real. The barrier between intention and artifact, which once required years of training and teams of specialists to cross, has been reduced to the time it takes to have a conversation. This is presented, and experienced, as the most generous expansion of human capability since the invention of writing.
Within the hylomorphic model, this celebration makes perfect sense. If making truly consists of conception followed by execution, and if the intelligence resides in the conception, then automating the execution sacrifices nothing of the intelligence involved. The machine handles the lower function. The human retains the higher one. The hierarchy of mind over hand is not merely preserved. It is perfected. The hand — clumsy, slow, prone to error, limited by training and fatigue — has been replaced by something faster, tireless, and increasingly capable. The mind is free to do what it was always meant to do: conceive, direct, evaluate, choose.
But if the hylomorphic model is a myth — if the intelligence was never concentrated in the conception alone — then the automation of execution does not merely remove labor. It removes a form of thinking.
This is Ingold's challenge, and it strikes at the foundation of the argument that AI elevates human work by freeing it from mechanical drudgery. The challenge is not that AI produces bad work. Often, it produces work that is technically superior to what the human maker would have produced unaided. The challenge is that the production process itself — the negotiation with material, the encounter with resistance, the discovery of form through engagement — was where a distinctive and irreplaceable kind of knowledge was produced. When the process is eliminated, the knowledge is not transferred to the machine. It is not relocated to a higher floor. It ceases to exist.
Consider the historical trajectory. The hylomorphic model did not remain merely a philosophical abstraction. It became an organizational principle. The Industrial Revolution institutionalized it: the designer in the office conceived the form, and the worker on the floor imposed it on material. Taylorist management systematized the separation, explicitly removing thinking from the hands of the worker and concentrating it in the hands of the manager. The assembly line was the hylomorphic model made spatial — a physical architecture in which conception happened at one end and execution at the other, and the two were connected by a conveyor belt that moved in one direction only.
Software development replicated the pattern with eerie precision. The waterfall model — specification, design, implementation, testing — was hylomorphism expressed as a project management methodology. The "spec" was the form. The "code" was the matter. The architect conceived. The developer executed. Even agile methodologies, which ostensibly dissolved the waterfall, preserved the underlying hierarchy: the product owner decides what gets built, the developer decides how.
And now, AI completes the trajectory. The human writes the specification in natural language. The machine produces the implementation. The hylomorphic model achieves its purest expression in the prompt-to-output pipeline: pure conception at one end, pure execution at the other, with no material negotiation in between.
But the history of actual making — the history Ingold has reconstructed through decades of ethnographic observation — tells a different story. In practice, the worker on the factory floor was not merely executing. She was thinking, with her hands, about the material's behavior. The experienced machinist could feel, through the vibration of the lathe, whether the cut was clean or whether the tool was about to break. That knowledge — enacted, embodied, produced through the friction of hands on material — informed every decision she made about speed, pressure, angle, and timing. It was not representational knowledge that could be written down and handed to a machine. It was knowledge constituted by the relationship between a particular body and a particular material in a particular moment. Remove the body from the relationship, and the knowledge is not extracted. It is extinguished.
Ingold's ethnographic evidence is extensive and cross-cultural. Among the Cree hunters of northeastern Canada, he observed that the knowledge required to track and kill a caribou was not stored in the hunter's mind as a set of rules or representations. It was distributed across the hunter's body — in the sensitivity of his feet to changes in snow texture, in his hands' responsiveness to the pull of the bowstring in varying cold, in his eyes' ability to read the landscape for patterns that indicated the animal's movement. This knowledge could not be transmitted by instruction alone. It had to be grown through years of apprenticeship, through the body's sustained engagement with the specific materials and conditions of the northern forest. A manual could describe the knowledge. But the description was not the knowledge. The knowledge was in the doing.
The parallel to software development is direct and, for the argument of The Orange Pill, uncomfortable. Segal describes a senior engineer in Trivandrum who could "feel" a codebase the way a doctor feels a pulse — an intuition built through thousands of hours of direct engagement with the material of code. Segal acknowledges that this intuition was produced by friction, by the specific resistance of systems that did not do what the engineer expected. And he argues, in his chapter on ascending friction, that when AI removes the implementation friction, the freed cognitive resources are invested at a higher level — in judgment, architecture, vision.
Ingold's framework asks the question that the ascending friction thesis must answer: Is the higher-level thinking that replaces implementation genuinely higher? Or is it merely different — and different in a way that lacks the epistemic grounding that material engagement provides? The architect who has never handled materials designs differently from the architect who has. The difference is not that one is better and the other worse. It is that the knowledge of materials — the enacted, embodied, frictional knowledge of how things actually behave when you try to shape them — informs design in ways that cannot be compensated by representational knowledge alone.
The hylomorphic model tells us the architect does not need to handle materials. That is the architect's job: to conceive the form and let others impose it. Ingold's research tells us this is wrong — not always, not in every domain, but frequently enough, and deeply enough, that the assumption deserves scrutiny rather than the unexamined acceptance it has received from the technology industry.
In a 2019 interview with Full Stop magazine, Ingold stated his objection to AI with characteristic directness: "The whole AI business, it seems to me, is built upon a faulty notion of intelligence — one that views it in purely cognitive, information-processing terms. For me there can be no intelligence which is not grounded in the perception and action of living beings, moving around in and perceiving their environments as they go." This is not a Luddite complaint. It is a claim about the ontology of intelligence itself — a claim that intelligence is not something you have but something you do, not a capacity of the mind but an activity of the whole organism in its environment, and that any system that processes information without perceiving and acting in a material world is not intelligent in the sense that matters.
Whether Ingold is right about this will determine, more than any other single question, how we understand the long-term consequences of the AI moment. If intelligence is computation — pattern recognition applied to information — then AI is genuinely intelligent, and its expanding capabilities represent a genuine expansion of intelligence in the world. The river of intelligence has widened, and the widening is real.
But if intelligence is grounded in perception and action — if it requires a body, a material environment, and a history of engagement between the two — then what AI produces is something other than intelligence. Something extraordinarily useful, perhaps. Something that mimics the outputs of intelligence with remarkable fidelity. But not intelligence itself, and therefore not a contribution to the river of intelligence but a simulation of one, flowing in a parallel channel that never quite meets the real current.
The hylomorphic model tells us this distinction does not matter. The form is the intelligence. The matter is the substrate. If the form is the same — if the output is indistinguishable — then the process by which it was produced is irrelevant.
Ingold's entire career is an argument that the process is not irrelevant. That the process is where the knowledge lives. And that a civilization that systematically eliminates the process — that automates execution in the name of liberating conception — may discover, too late, that it has eliminated the ground on which its intelligence stood.
The myth of the hylomorphic model is not merely a philosophical curiosity. It is the operating assumption of every organization that separates thinking from making, every educational system that values abstract reasoning over material skill, every technology that promises to free the mind by automating the hand. If the myth is true, these separations are rational and the AI moment is the culmination of a trajectory that began with the first division of labor. If the myth is false, these separations are epistemically catastrophic, and the AI moment is the point at which the catastrophe becomes irreversible — the point at which a civilization has so thoroughly delegated making to machines that it has lost the capacity to know what its hands once knew.
---
In the northern Scottish Highlands, a woman sits at a loom. The shuttle moves through the warp with a rhythm that conceals the complexity of what is happening. Her left hand adjusts the tension of the warp threads — not by calculation but by feel, by the particular resistance that years of practice have taught her to read as fluently as a musician reads a score. Her right hand throws the shuttle with a motion that incorporates micro-adjustments for the weight of the bobbin, the humidity of the room, the behavior of the specific wool she is working with today. Her feet work the treadles in a pattern that she could not describe in words but that her body executes with the precision of a pianist's left hand maintaining a bass line beneath a melody.
She is not executing a design. She is thinking.
This claim — that making is a form of thinking, not the execution of prior thought — is the central philosophical contribution of Tim Ingold's anthropological project, and it is the claim that the AI moment makes most urgent. Everything else follows from it: the critique of the hylomorphic model, the argument for embodied knowledge, the insistence that skill is constituted by engagement rather than possessed as a capacity. All of it rests on this foundation: that there exists a form of thinking that occurs through the hands' encounter with resistant material, and that this thinking is not a lesser version of "real" thinking — the abstract, conceptual, representational thinking that Western philosophy has traditionally privileged — but a distinctive cognitive operation that produces knowledge unavailable through any other means.
The distinction can be drawn with Ingold's own vocabulary. Representational knowledge is knowledge about something. It can be articulated in language, stored in documents, transmitted by instruction. The weaver's pattern can be described in a notation system. The potter's technique can be recorded in a manual. The programmer's algorithm can be specified in pseudocode. Representational knowledge is what textbooks contain, what lectures transmit, what large language models process.
Enacted knowledge is knowledge through something. It cannot be fully articulated because it is constituted by the activity itself — by the body's engagement with materials, tools, and processes in real time. The weaver's sensitivity to tension variations that fall below the threshold of conscious awareness. The potter's ability to detect, through the slight wobble of the wheel, that the clay is off-center by a millimeter. The programmer's instinct, developed over years of debugging, that something in the architecture is wrong before any specific error has manifested.
This knowledge is real. It is productive — it shapes the quality of the output in measurable ways. And it is, in a precise sense, untransferable. Not because the practitioner is withholding it, and not because language is inherently limited, but because the knowledge is not a content that can be separated from the form of its enactment. Michael Polanyi, whose concept of tacit knowledge Ingold extends and deepens, captured this with his observation that "we can know more than we can tell." But Ingold goes further. For Polanyi, tacit knowledge is knowledge that happens to be difficult to articulate. For Ingold, the knowledge that lives in hands is knowledge whose very existence depends on the body's engagement with material. Articulate it, and you have not transferred the knowledge. You have produced a representation of it — a map, not the territory.
The implications for AI are direct. A large language model processes representations. This is not a limitation of current models that future architectures might overcome. It is a structural feature of what these systems are: pattern-recognition engines operating on textual data. The training corpus contains descriptions of making — millions of them, from technical manuals to poetic accounts of craftwork. The model can generate new descriptions with remarkable sophistication, drawing connections between domains, producing instructions, even simulating the process of thinking through a problem. But the model does not make. It does not engage with material. The knowledge it produces is, by structural necessity, representational rather than enacted.
Consider the concrete case. A senior software engineer — someone with fifteen years of experience building backend systems — sits down to write a database migration. In the old workflow, she writes the migration by hand. She encounters the specific behavior of the database under load — the way certain operations lock tables, the way index rebuilds consume memory, the way the migration interacts with the application layer in patterns she has learned to anticipate through years of watching migrations go wrong. Each encounter deposits what Ingold would call a trace: a mark in the body's knowledge, invisible to any external observer, that incrementally educates her attention. Over time, these traces accumulate into what Segal describes as the ability to "feel" a codebase — an intuition that operates below conscious articulation but reliably produces better architectural decisions.
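The kind of trace Ingold describes can be made concrete. The sketch below is purely illustrative — the table and column names are invented, and the locking behavior it describes is PostgreSQL's before version 11 — but it shows the same schema change written two ways, a difference that no specification captures and that is typically learned by watching the first version stall a production table.

```python
# Hypothetical illustration: one schema change, written two ways.
# Table and column names are invented; the locking caveat applies
# to PostgreSQL releases before version 11.

def naive_migration() -> list[str]:
    """The version that reads correctly in review and fails under load:
    on older PostgreSQL, adding a column with a non-null default rewrote
    the entire table while holding an exclusive lock, blocking reads
    and writes for the duration."""
    return [
        "ALTER TABLE orders ADD COLUMN flagged boolean NOT NULL DEFAULT false;",
    ]

def lock_aware_migration() -> list[str]:
    """The version written by hands that have been burned: add the column
    as nullable (a metadata-only change), backfill in small batches so no
    single transaction holds locks for long, then tighten the constraint."""
    return [
        "ALTER TABLE orders ADD COLUMN flagged boolean;",
        "UPDATE orders SET flagged = false WHERE id BETWEEN :lo AND :hi;",  # run repeatedly, in batches
        "ALTER TABLE orders ALTER COLUMN flagged SET NOT NULL;",
    ]
```

Nothing in the specification "add a flagged column" distinguishes the two. The difference lives in the trace deposited by the first migration that went wrong.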
Now, she describes the migration to Claude. Claude produces the code. The code is correct — perhaps more correct than what she would have written unaided, because the model has been trained on patterns drawn from millions of migrations. She reviews the output, approves it, deploys it. The migration succeeds.
What has happened? From the perspective of representational knowledge, everything important has been preserved. The specification was hers. The evaluation was hers. The judgment about whether the migration was correct, whether it handled edge cases, whether it was appropriate for the system's specific constraints — all of this remained in her domain.
From the perspective of enacted knowledge, something has been eliminated. The encounter with the database's resistance — the specific, often frustrating, sometimes illuminating friction of watching a migration fail, diagnosing the failure, understanding why the database behaved unexpectedly — did not occur. The trace that would have been deposited by that encounter was not deposited. The education of her attention, which depends on the accumulation of such traces over time, was interrupted. She got a working migration. She did not get the ten minutes of embodied learning that writing it by hand, and potentially failing, would have provided.
Segal's ascending friction thesis addresses this concern at the organizational level. The cognitive resources freed by AI-assisted implementation are invested at a higher level of complexity: in architectural thinking, in product judgment, in the creative direction of what gets built and for whom. The friction has not disappeared. It has climbed to a higher floor of the building.
Ingold's framework asks whether this ascent is as clean as the thesis suggests. The knowledge produced at the higher floor — architectural judgment, product vision, the capacity to evaluate AI-generated output — is itself partly constituted by the enacted knowledge deposited at the lower floor. The senior engineer's architectural judgment was not formed in a vacuum. It was formed through thousands of hours of hands-on engagement with the specific materials of software: the way databases behave under stress, the way distributed systems fail, the way users interact with interfaces that look correct but feel wrong. Each of those hours deposited a trace. The traces accumulated into the intuition that the ascending friction thesis assumes will survive the elimination of the engagement that produced it.
But will it? If the engineer stops writing migrations, stops debugging, stops encountering the material of code at the implementation level, will her architectural judgment continue to develop? Or will it gradually become detached from the material reality it is supposed to govern — a judgment formed on representations rather than encounters, on descriptions of systems rather than the experience of building and watching them fail?
The question has an empirical dimension that neither philosophy nor theory can fully settle. But Ingold's ethnographic work offers suggestive evidence. Among the builders he observed in the Scottish Highlands, the most skilled architects were those who had spent years working with their hands in the material. The architect who had laid stone, who knew in her body how stone behaves under compression, how mortar sets in different temperatures, how a wall distributes load — this architect designed differently from the one who had learned these facts from textbooks. Not necessarily better in every measurable dimension. But with a quality of judgment that practitioners recognized and that the purely representational architect lacked: a sensitivity to the material's tendencies that informed every design decision, from the angle of a roofline to the depth of a foundation.
Ingold traces this phenomenon across cultures with the persistence of someone who has found a pattern too consistent to be coincidental. Among the Inuit, he found that the knowledge required to build an igloo was not a set of instructions that could be transmitted verbally. Young builders learned by watching, by attempting, by failing, by developing through their hands a sensitivity to the snow's density, crystalline structure, and load-bearing capacity at different temperatures. The knowledge was not in the instructions. It was in the body's educated engagement with the snow. Among the Skolt Sámi reindeer herders of Finland, the knowledge required to assess a herd's health was constituted not by information but by years of physical proximity — the smell of the animals, the sound of their hooves on different terrain, the subtle behavioral shifts that indicated stress or disease. This knowledge could not be extracted from the herders and installed in a monitoring system without losing what made it knowledge rather than data.
The Orange Pill argues that AI allows the non-coder to become a builder — that the imagination-to-artifact ratio approaches zero when the machine handles the implementation. Ingold's work asks what kind of builder this produces. A builder whose knowledge is representational rather than enacted, whose judgments rest on descriptions rather than encounters, whose relationship to the material of their work is mediated entirely by a system that interprets rather than resists.
This is not necessarily a lesser builder. In some domains, representational knowledge may be sufficient. In some domains, the enacted knowledge that material engagement produces may be more romantic than practical — a form of expertise that slows the process without improving the outcome. The case must be made domain by domain, not as a universal principle.
But Ingold's research suggests that the domains where enacted knowledge matters are larger than the hylomorphic model acknowledges. The model renders this knowledge invisible by design. If all intelligence lives in the conception, then the execution is mere labor, and the knowledge produced through execution is not knowledge at all — it is a byproduct, an artifact of inefficiency, something that will not be missed when the efficiency improves.
If the model is wrong — and Ingold's four decades of evidence say it is — then the knowledge produced through execution is real, productive, and at risk.
The weaver at her loom is not executing a design. She is thinking in wool, discovering solutions that her hands find before her mind can name them. The question for the AI moment is whether a civilization that delegates the execution can preserve the thinking — or whether the thinking was always, and only, in the making.
---
In March 2025, Tim Ingold gave a remote lecture at Penn State's Stuckeman School titled "Digitization and Fingerwork." The lecture examined a phenomenon he had been tracking for decades: the migration of skilled handwork from the palm to the fingertips. For millennia, the operations that mattered — knotting, weaving, breadmaking, milking, embroidery, handwriting — engaged the whole hand: palm, fingers, wrist, forearm, working together in coordinated contact with resistant material. The rise of digital technology had narrowed this engagement to the fingertips alone, tapping on glass surfaces, pressing keys, swiping and pinching. Ingold's observation was precise: "While our fingertips mediate the transmission of information in a virtual world of artificial intelligence, they have no purchase in the real world of forces and materials."
Purchase. The word carries Ingold's entire argument in two syllables. Purchase is what the hand has when it grips a tool, when the fingers wrap around a handle and the palm bears the force, when the body's weight and the tool's resistance create a system of forces that the maker navigates by feel. The fingertip tapping a screen has no purchase. It transmits a signal. It does not negotiate with a material. The distinction is not metaphorical. It is biomechanical, neurological, and epistemic. Different kinds of hand engagement activate different neural pathways, produce different forms of knowledge, and build different kinds of competence.
Neuroscience provides a structural account of why this matters. The human hand contains roughly seventeen thousand mechanoreceptors — sensory neurons that respond to pressure, vibration, texture, and the stretch of skin. These receptors are not evenly distributed. The fingertips contain the highest density of Meissner's corpuscles, which detect light touch and low-frequency vibration. Deeper in the skin of the palm and fingers lie Pacinian corpuscles, which respond to deep pressure and high-frequency vibration. The interplay between them — the simultaneous registration of surface texture by the fingertips and deep resistance by the palm — produces the complex tactile perception that skilled makers rely on. The potter who judges the clay's moisture does not do so through her fingertips alone. She does so through the composite signal generated by the full hand's engagement with the material — a signal that integrates surface moisture, internal density, temperature gradient, and structural integrity into a single perception that she experiences not as data but as feel.
When the hand's engagement narrows to the fingertip on glass, this composite signal collapses. The Meissner's corpuscles fire. The Pacinian corpuscles do not. The proprioceptive feedback from the wrist, forearm, and shoulder — feedback that in tool use provides information about the relationship between the body's force and the material's resistance — falls silent. The neural pathways that integrative tactile engagement builds over years of practice cease to be exercised. The knowledge those pathways encode — knowledge of materials, forces, resistances, and the body's dynamic relationship to all three — is not accessed, not maintained, not developed.
This is not a critique of digital interfaces per se. It is a description of what happens to a specific form of knowledge when the conditions for its production are systematically eliminated. Ingold is not arguing that screens are bad. He is arguing that the knowledge produced by full-hand material engagement is real, that it serves functions which representational knowledge cannot replicate, and that a civilization in which the dominant mode of interaction between humans and their tools is the fingertip on glass is a civilization in which that form of knowledge is atrophying.
The knowledge that lives in hands has a specific character. It is procedural rather than declarative — knowing how rather than knowing that. But Ingold's contribution is to show that "procedural knowledge" is itself an inadequate description. Procedural knowledge, in the cognitive science literature, is typically understood as a set of motor routines stored in the brain and executed through the body. The body is the vehicle. The knowledge is in the head. This preserves the hylomorphic hierarchy: mind stores the procedure, body executes it. Ingold dissolves this hierarchy. The knowledge is not stored in the brain and executed through the hands. It is constituted by the relationship between the hands and the material. It does not exist outside that relationship. The surgeon's knowledge of the boundary between gallbladder and liver — the knowledge that Segal's laparoscopic surgery example in The Orange Pill makes vivid — is not a mental representation of anatomy applied through the hands. It is a tactile perception that the hands produce in the moment of contact, a perception that depends on the specific pressure, angle, and movement of fingers in living tissue, and that cannot be reproduced by any system that processes representations rather than making contact.
The laparoscopic surgery example is itself illuminating when examined through Ingold's framework. Segal describes the transition from open to laparoscopic surgery as a case of ascending friction: the old tactile friction was eliminated, and a new, harder, higher-order friction took its place — the cognitive challenge of operating through a camera, of interpreting two-dimensional images of three-dimensional spaces, of coordinating instruments at a remove from the body. The argument is that the friction did not disappear. It climbed.
Ingold would agree that new friction emerged. He would dispute that the new friction compensates for what the old friction produced. The open surgeon's hands in the body cavity generated a form of knowledge — tactile, proprioceptive, integrative — that the laparoscopic surgeon's screen cannot replicate. Laparoscopic surgery made possible operations that open surgery could not attempt. This is not in dispute. But the laparoscopic surgeon who has never opened a body operates with a form of knowledge that is impoverished in a specific dimension: the dimension of direct material contact. Whether this impoverishment affects the quality of surgical judgment is an empirical question, but the training literature in surgery suggests that it does — that surgeons trained exclusively on laparoscopic and robotic techniques struggle with complications that require direct manual intervention, precisely because their hands have not developed the knowledge that comes from sustained contact with living tissue.
The parallel to software engineering is exact enough to be uncomfortable. The programmer who has spent years debugging — tracing through code line by line, watching variables change in a debugger, feeling (the word is used advisedly, because programmers do use the language of feeling to describe the experience) a codebase's logic through the friction of stepping through its execution — has developed a form of knowledge that the programmer who reviews AI-generated output has not. The debugger's knowledge is not representational. It is not the knowledge of what the code does, which can be obtained from documentation or from reading the source. It is the knowledge of how the code behaves — a knowledge produced by the repeated encounter between the programmer's attention and the code's execution, each encounter depositing a trace of understanding that accumulates, over years, into what experienced engineers call intuition.
When Claude generates the code and the engineer reviews it, the review engages representational knowledge: Does this code do what the specification says? Does the logic hold? Are the edge cases handled? These are important questions, and answering them well requires genuine expertise. But the review does not engage the enacted knowledge that comes from writing the code — the specific encounter with the material of computation, the moment when the variable does not hold the value you expected and you must trace backward through the logic to find the point of divergence, the moment when the system fails under load and you must diagnose, through direct engagement with the system's behavior, where the bottleneck lives.
That enacted knowledge — the knowledge deposited by failure, by friction, by the body's sustained engagement with computational material — is what Ingold's framework identifies as at risk. Not because AI produces worse code. Often it produces better code. But because the production of better code by the machine eliminates the conditions under which the human's enacted knowledge is formed.
The question is sharpened by considering what happens across a career. The senior engineer with fifteen years of debugging has a reservoir of enacted knowledge deep enough to evaluate AI-generated code with genuine authority. She knows what to look for because she has encountered, in her hands, the specific failures that the code must avoid. But the junior engineer who begins her career reviewing AI-generated code does not build that reservoir. She develops a different form of expertise — the expertise of evaluation, of pattern recognition applied to outputs rather than processes, of judgment exercised at a remove from the material.
This different expertise may prove sufficient. It may even prove superior for certain tasks. The claim here is not that the old expertise was universally better. It is that the new expertise is different in kind, and that the difference has consequences that the efficiency metrics cannot capture. A generation of engineers trained entirely on AI-generated code will lack the enacted knowledge that their predecessors built through years of manual engagement. Whether that lack matters — whether the representational knowledge of reviewing outputs can fully substitute for the enacted knowledge of producing them — is the question that will determine whether the AI transition in software engineering is a case of ascending friction, as Segal argues, or a case of knowledge loss concealed by output quality, as Ingold's framework suggests.
Ingold himself has been characteristically blunt. In his 2019 interview with Full Stop, he stated that "there can be no intelligence which is not grounded in the perception and action of living beings, moving around in and perceiving their environments as they go." This is a strong claim — stronger, perhaps, than the evidence warrants in every domain. There are clearly forms of intelligence that operate primarily through representation, and AI's capacity to process patterns in text produces outcomes that, whatever their ontological status, are functionally intelligent in consequential ways.
But the claim is not that representational intelligence is worthless. The claim is that it is incomplete. That there exists a domain of knowledge — the knowledge that lives in hands, in the body's engagement with material, in the traces deposited by years of frictional contact with the world — that representational intelligence cannot reach. And that a civilization moving rapidly toward the delegation of all making to representational systems may find itself, a generation from now, in possession of extraordinary capabilities and impoverished knowledge. Capable of producing anything. Capable of understanding, in the deep bodily sense of understanding, less and less of what it produces.
---
In 2007, Tim Ingold published a book about lines. Not a metaphorical meditation on linearity as a concept, but an anthropological investigation into what lines are, how they are made, and what they reveal about the fundamental structure of human engagement with the world. The book was called, with the directness that characterizes Ingold's titles, Lines: A Brief History.
The argument that emerged from this seemingly narrow investigation proved to be among the most consequential in Ingold's body of work, because the line — the simplest mark a human being can make — turns out to encode the deepest questions about the relationship between making and thinking, between process and product, between the life of the maker and the artifact that the making produces.
A line drawn by hand is a trace. It records a movement. The pressure of the pencil against the paper, the speed of the hand, the hesitations, the corrections, the moments of confidence and the moments of uncertainty — all of these are inscribed in the line's material character. A skilled draftsman can read the hand-drawn line the way a geologist reads the strata of a cliff face: as a record of the forces that produced it. The line is not merely a shape on a surface. It is a biography, compressed into graphite and paper, of the encounter between a particular hand and a particular page at a particular moment in time.
A line produced by a machine — rendered by a printer, generated by a computer, output by an AI system — is not a trace. It is a specification. It records no movement because no movement produced it. It has no pressure, no speed, no hesitation, no biography. It is the result of a computation, not the record of an engagement. It exists as a geometric entity — a set of coordinates, a mathematical function — rather than as a temporal event. It has position but not history.
This distinction, which Ingold draws from his study of lines across cultures — from Aboriginal Australian songlines, which are walked into the landscape through generations of foot travel, to the ruled lines of the colonial surveyor, which are imposed on the landscape from above — illuminates something that the discourse about AI and creativity has consistently failed to articulate.
When Segal describes his collaboration with Claude in The Orange Pill, he uses the language of conversation. The back-and-forth between human and machine, the iterative refinement of an idea through successive rounds of description and response, the emergence of insights that belong to neither party alone. This conversational metaphor suggests a process: a temporal unfolding in which meaning is produced through sustained engagement, through the negotiation between human intention and machine interpretation.
Ingold's framework asks: What kind of line does this process draw? Is it a trace or a specification?
A trace grows. It extends itself through time, each moment dependent on the moment before it, each movement a response to what the material did in the previous moment. The hand-drawn line does not exist all at once. It unfolds, and the unfolding is the thinking. The weaver's thread through the warp does not exist all at once. It is produced moment by moment, each pass of the shuttle a response to the previous pass, each adjustment in tension a response to the thread's behavior. The trace records this unfolding. It carries the time of its making within its form.
A specification exists all at once. The computer-rendered line does not unfold. It is computed from a formula and deposited on the surface in a single operation. The AI-generated paragraph does not grow word by word in the way a handwritten sentence grows — the writer's thought taking shape through the resistance of language, each word narrowing the possibility space for the next, the sentence emerging from the encounter between intention and the syntactic constraints of the medium. The AI-generated paragraph arrives, for practical purposes, complete: the model does emit its tokens in sequence, but the sequence is computed from fixed weights, not grown through an encounter with resistance. It may be revised, but the revision is a series of replacements, not a continuous growth. Each iteration is a new specification, not an extension of a trace.
The distinction matters because the trace carries knowledge that the specification does not. The hand-drawn architectural sketch — the kind that architects produce in the early stages of design, when the form is still uncertain and the line's tentativeness is part of its communicative power — conveys not just a shape but a disposition. The viewer can read the architect's confidence, uncertainty, emphasis, and exploration in the character of the line. The computer-rendered drawing conveys the shape with greater precision but less information, because the line's character has been standardized. The information that the hand's engagement with the surface would have inscribed — information about the maker's relationship to the design, about the degree of resolution, about where the thinking is settled and where it is still in motion — has been erased by the rendering process.
This is what Ingold's framework identifies as the epistemological cost of smoothness — a cost distinct from but related to the aesthetic cost that Byung-Chul Han describes. Han argues that the smooth surface is an aesthetic of diminishment, a surface that conceals its construction and thereby conceals the labor, friction, and humanity of the making process. Ingold adds an anthropological dimension: the smooth surface is also an epistemic surface, one that eliminates the traces through which a particular form of knowledge is communicated. The knowledge lost is not just the maker's subjective experience. It is information about the making process that the traces objectively contain and that the smooth surface objectively lacks.
But lines, in Ingold's analysis, do more than distinguish traces from specifications. They lead to a concept that may be the most challenging and potentially generative idea in his entire body of work: the meshwork.
Ingold distinguishes sharply between two models of connection. The first is the network: a set of nodes connected by lines. The internet is a network. A corporate org chart is a network. A social media platform is a network. The nodes are primary — they exist first, and the connections between them are secondary, established after the nodes are already in place. The lines in a network are conduits. They carry information between pre-existing points. The intelligence, in a network, lives at the nodes.
The second is the meshwork: a tangle of lines, each growing along its own path, each continually responding to and intertwining with the others. A forest floor is a meshwork. Fungal hyphae, root systems, insect trails, water channels, fallen branches — all weave together without any node being prior to the lines that constitute it. The lines are primary. The intersections — the points where lines cross, tangle, knot — are produced by the movement of the lines, not the other way around. There are no pre-existing nodes that the lines connect. There are only lines, growing, moving, and encountering each other as they go.
The distinction sounds abstract. It is not. It describes two fundamentally different ways of understanding how creative work gets done.
In the network model, creativity happens at the nodes — at the individual minds that process information and produce outputs. The connections between nodes are channels of communication: email threads, Slack messages, code repositories, meeting agendas. The network distributes information. The nodes process it. The intelligence is located in the individual.
In the meshwork model, creativity happens along the lines — in the ongoing processes of making, walking, talking, and dwelling that weave the community together. The intelligence is not located at any node. It is emergent from the entanglement of lines. It is produced by the movement itself, by the way one line of practice responds to and is shaped by another, the way a conversation shifts the direction of a project, the way a material's unexpected behavior redirects a design, the way the weather of the workshop — the light, the temperature, the social atmosphere — enters the work through channels that the network model cannot represent.
The Orange Pill's description of the river of intelligence flowing for 13.8 billion years — from hydrogen atoms to biological evolution to conscious thought to cultural accumulation to artificial computation — operates primarily in the network model's register. Intelligence flows from node to node, accumulating as it goes, each new participant adding to the current. The river metaphor captures the directionality of the process: a force moving through time, gaining power, widening its channel.
Ingold's meshwork captures something the river metaphor, by its unidirectional flow, tends to flatten: the lateral dimension. In a meshwork, the movement is not primarily forward. It is outward — in all directions simultaneously, as lines of practice grow, intertwine, diverge, and knot. The creative community is not a river flowing toward the sea. It is a forest floor, growing from every point at once, the connections between its elements produced not by a current pushing everything in one direction but by the local, contingent, moment-by-moment encounter between adjacent lines of growth.
This reframing has consequences for how one understands the entry of AI into creative life. In the network model, AI is a new node — a new point of intelligence added to the network, connected to human nodes by channels of communication (the prompt, the response, the evaluation, the iteration). The network is enriched. The intelligence is amplified. The flow is widened.
In the meshwork model, the question is different: What kind of line does AI introduce? The lines that constitute a meshwork are living lines — lines that grow, that fatigue, that respond to weather, that carry the traces of their own history. The weaver's thread through the warp is a living line in this sense: it grows through time, responding to the warp's tension, the weaver's fatigue, the humidity of the room, the particular properties of the wool. It carries traces. It has a biography.
The AI-generated line — the output of a prompt, the response of a model — does not grow. It is produced all at once, computed from a pattern rather than grown through an encounter. It does not fatigue. It does not respond to weather. It carries no traces of its production because its production has no temporal extension — no hesitation, no adjustment, no encounter with resistance. It is, in Ingold's vocabulary, a line of transport rather than a line of wayfaring. It connects two points — the prompt and the response — without dwelling in the space between them.
The meshwork's generative power depends on the dwelling. It depends on lines that move slowly enough to respond to each other, that encounter each other with enough friction to produce knots — points of intensified entanglement where something new emerges from the crossing of paths. A conversation between two makers, each bringing the traces of their own practice, produces a knot: a point where two lines of experience cross and generate an insight that neither line, moving alone, could have produced.
Does the conversation between a human and Claude produce knots? Segal's account suggests that it does — that the collaboration generates moments of genuine surprise, connections that neither party anticipated, insights that belong to the space between them rather than to either alone. The CCCB Lab essay on AI art, drawing explicitly on Ingold, offers a counterargument: that AI-generated work is "the consummation of the modern notion of art that denies the process" — that the output may resemble the product of a meshwork encounter, but it is produced by a fundamentally different mechanism, one that computes rather than corresponds, that specifies rather than traces, that arrives at the destination without having traveled the path.
The question remains genuinely open. Whether the human-AI conversation constitutes a meshwork — a tangle of growing, responsive, dwelling lines — or a network — an exchange of information between pre-existing nodes — may depend less on the nature of the technology than on the disposition of the human participant. The practitioner who approaches AI with the weaver's attention — responsive, patient, willing to follow the thread where it leads, ready to adjust when the material does something unexpected — may produce something closer to meshwork. The practitioner who approaches AI as a specification engine — prompt in, output out, evaluate, repeat — produces something closer to network.
The distinction is not between human creativity and machine capability. It is between two ways of using the machine — one that preserves the conditions for meshwork, and one that converts every creative encounter into a network transaction. The tool does not determine which mode the practitioner operates in. But the tool's design — the speed of its response, the smoothness of its output, the absence of visible resistance — makes the network mode easier and the meshwork mode harder.
And the question for a civilization increasingly dependent on AI-mediated production is whether the meshwork can survive the convenience of the network, whether lines that grow and dwell and carry traces can coexist with lines that compute and specify and arrive, or whether the speed and efficiency of the computed line will gradually crowd out the grown one, converting the forest floor of creative life into the wiring diagram of a circuit board — functional, efficient, and no longer alive.
A woodcarver in the Finnish lake district picks up a piece of birch. Before the first cut, her hands have already begun a conversation. The weight of the wood tells her about its moisture content. The grain, visible on the end where the log was split, tells her about the tree's growth pattern — the tight rings of a cold decade, the wide rings of years with good rain. She runs her thumb along the surface and reads the density gradient from heartwood to sapwood the way a geologist reads a core sample. None of this information was requested. The wood offered it, through the medium of her hands, before she had formed a plan.
She begins to carve. The knife enters the wood and the wood responds — not passively, not as a substrate receiving an imposed form, but actively, as a material with its own tendencies, its own logic, its own insistence on certain outcomes and resistance to others. The grain runs in a direction that favors one cut and punishes another. A knot, invisible from the surface, deflects the blade and forces a change of approach. The sapwood splits where the heartwood held firm. Each of these events is the material talking back — contributing to the making process with information that the carver could not have anticipated from the specification alone.
Tim Ingold has documented this phenomenon across cultures and materials with the persistence of someone who has identified a universal feature of skilled practice. The knotters of the Pacific Islands describe their cordage as having a will — a tendency to twist in one direction that must be respected, not overcome. The builders of Scottish drystone walls speak of each stone having a face, a preferred orientation that the mason discovers through handling rather than analysis. The potters of every ceramic tradition on earth know that clay talks back — that its moisture, particle size, mineral composition, and temperature history produce behaviors that the potter must read and respond to in real time, adjusting speed, pressure, and technique in a continuous dialogue that is neither the potter's monologue nor the clay's.
This talking back is not metaphorical, and Ingold is insistent on this point. It is not a charming way of describing the fact that materials have properties. It is a description of the epistemic structure of skilled practice: the feedback loop between maker and material through which both are transformed. The wood does not merely possess properties that the carver must accommodate. The wood actively contributes information — through resistance, deflection, splitting, and holding — that shapes the carver's decisions and produces outcomes that neither the carver's intention nor the wood's properties could have generated independently. The form that emerges is not the carver's design imposed on the wood. It is the product of a negotiation, conducted through the hands, in which the material is a genuine interlocutor.
The concept has a precise technical formulation in the work of James J. Gibson, the ecological psychologist whose theory of affordances Ingold extends. An affordance is what the environment offers to a perceiving agent — not a property of the object alone and not a projection of the agent's mind, but a relational property that exists in the encounter between the two. The flat surface of a rock affords sitting for a tired walker. The fork in a branch affords gripping for a climbing child. The grain of the birch affords cutting along its direction and resists cutting across it. Affordances are real, objective features of the environment, but they are features that only become visible to an agent equipped to perceive them — an agent whose perceptual system has been educated, through sustained engagement, to read what the material offers.
Materials that talk back are materials whose affordances become progressively legible through practice. The novice carver encounters the birch as a resistant block. The master carver encounters it as a field of affordances — cuts it invites, forms it suggests, behaviors it will exhibit under different conditions. The difference between the novice and the master is not that the master has more information about birch. It is that the master's perceptual system, educated by years of material engagement, can read what the wood is saying. The wood was talking all along. The master has learned to listen.
Now consider the structure of the conversation between a human and Claude. The human describes an intention — "Build me a recommendation engine with these constraints." Claude responds. The human evaluates the response. Claude adjusts. The cycle repeats. There is a feedback loop. There is a form of back-and-forth. In The Orange Pill, Segal describes this cycle as conversation, and the description is not inaccurate. The exchange has the iterative structure of a dialogue. Insights emerge that neither party anticipated. The output improves through successive rounds of engagement.
But the nature of the feedback differs in a way that Ingold's framework makes precise. When the birch talks back, it speaks from the physics of its own existence. The knot that deflects the blade is not a response generated from a pattern library. It is the material manifestation of a branch that once grew at this point in the tree, whose cellular structure differs from the surrounding wood, whose density and grain orientation produce a specific mechanical behavior that the carver encounters as resistance. The information comes from the world. It is independent of what anyone has said about birch, or about knots, or about carving. It is the world asserting itself — insisting, through the medium of the material, on conditions that the maker must respect.
When Claude talks back, it speaks from patterns in training data. Its response to the prompt is generated by statistical regularities in a vast corpus of text — regularities that encode enormous amounts of human knowledge and produce outputs that are often remarkably useful. But the response does not come from the world. It comes from what has been said about the world. The distinction is subtle but consequential. The birch's resistance is independent of human description. The knot exists whether or not anyone has written about knots. Claude's response is constituted by human description. It is the aggregate of what the training corpus contains about recommendation engines, filtered through the specific weights that the model's architecture has learned to assign.
This produces a different kind of surprise. The carver's surprise when the birch splits unexpectedly is an encounter with the world's independent behavior — behavior that the carver could not have predicted because it depends on features of the specific piece of wood that are not visible from the surface and not described in any manual. This surprise is generative because it forces the carver to learn something about birch that cannot be learned from descriptions alone. The trace deposited by this encounter — the bodily knowledge that this is what birch does when you cut across the grain near a knot — becomes part of the carver's enacted knowledge, available for every future encounter with similar material.
Claude's surprise when it produces an unexpected connection — linking two domains the prompter had not considered, generating a structural suggestion that reframes the problem — is an encounter with patterns in language. It is often genuinely useful, sometimes brilliantly so. Segal's account of the punctuated equilibrium insight, where Claude connected adoption curves to evolutionary biology, is a compelling example. But the surprise comes from association, not from encounter. It draws from what has been said, not from what exists independently of saying. The connection was latent in the training corpus — implicit in the co-occurrence of concepts across millions of documents — and the model's architecture made it explicit.
The two kinds of surprise are not equivalent in their epistemic consequences. Associative surprise enriches representational knowledge — it extends the map of connections between known concepts. Encounter surprise enriches enacted knowledge — it deposits a trace that educates the body's perception of the material world. Both are valuable. Both contribute to the creative process. But they contribute differently, and the substitution of one for the other — the replacement of material encounter with associative response — changes the character of the knowledge that the making process produces.
Consider the programmer again. In the old workflow, the programmer writes code and the code fails. The failure is the material talking back — the computational substrate asserting its own logic, revealing a discrepancy between the programmer's intention and the system's actual behavior. The programmer debugs. The debugging process is an encounter with the code's behavior — not its description, not its specification, but the specific way it executes on this machine, with this data, under these conditions. Each debugging encounter deposits a trace. Over years, the traces accumulate into the architectural intuition that experienced engineers possess — the ability to feel that something is wrong before the error manifests, to predict where the system will break under stress, to design with a sensitivity to the material's tendencies that no specification can convey.
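A toy illustration of that talking back, in Python (the function and its name are hypothetical, invented for this example): the source reads exactly as its description intends, and only execution reveals the substrate's own logic — here, the fact that Python evaluates a default argument once, at definition time, not at each call.

```python
def add_tag(tag, tags=[]):
    """Append a tag to a list of tags, defaulting to a fresh list."""
    # The docstring states the intention; the default argument betrays it.
    # Python evaluates `[]` once, when the function is defined, so every
    # call that omits `tags` shares the same underlying list object.
    tags.append(tag)
    return tags

first = add_tag("alpha")
second = add_tag("beta")

print(first)            # ['alpha', 'beta'] — state has leaked across calls
print(second)           # ['alpha', 'beta']
print(first is second)  # True: both names point at the shared default list
```

A reviewer reading the source can certify that it matches its specification; the discrepancy only asserts itself in execution, and the trace it deposits — this is what Python does with mutable defaults — is exactly the kind of enacted knowledge the paragraph above describes.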
In the new workflow, Claude generates the code. The code works. If it does not, the programmer describes the failure to Claude, and Claude generates a fix. The computational substrate still has its own logic — the machine still behaves according to the physics of its architecture, not according to anyone's description of it. But the programmer's encounter with that logic is mediated entirely by the model. The programmer does not touch the material. She describes the symptom. The model produces the remedy. The remedy may be correct. But the enacted knowledge that would have been deposited by the encounter — the specific, bodily, untransferable understanding of how this system behaves under these conditions — is not produced. The trace is not laid down.
The most sophisticated version of this concern acknowledges that AI-mediated work does produce its own form of feedback. The programmer who reviews AI-generated code and identifies a subtle error is engaged in a genuine cognitive operation — one that requires expertise, attention, and judgment. The conversation with Claude, when conducted with care, does push back. The model's interpretation of a poorly specified prompt may reveal ambiguities in the programmer's own thinking. The iterative refinement of a design through successive rounds of human-AI dialogue can produce insights that are genuinely new.
But the pushback is interpretive, not material. Claude pushes back with alternative readings of the prompt, with connections drawn from the training corpus, with structural suggestions generated from patterns. It does not push back with the stubbornness of matter — with the irreducible fact of a system that behaves according to the physics of its construction rather than the descriptions in any corpus. The programmer who works exclusively through Claude encounters the material of computation at one remove, through the mediation of a system that interprets rather than resists.
Whether this remove is consequential depends on the domain. In some areas of software engineering — the areas where the specification is clear, the patterns are well-established, and the material's behavior is predictable — the remove may cost little. The code generated from a clear specification for a standard operation is likely to be correct, and the knowledge lost by not writing it by hand may be knowledge that the programmer did not need to develop.
In other areas — the areas where the system's behavior is emergent, where the interaction between components produces effects that no specification anticipated, where the material of computation does something that no training corpus describes because the specific configuration has never existed before — the remove may be catastrophic. These are the areas where the material talks back most forcefully, where the encounter with resistance produces the deepest learning, where the traces that failure deposits are the most valuable entries in the engineer's epistemic reserves.
The question is not whether to use AI. The question is whether the use preserves adequate contact with the material. Whether the practitioner, even in an AI-mediated workflow, maintains enough direct engagement with the computational substrate — enough encounters with the system's independent behavior, enough moments of genuine resistance that are not mediated by the model's interpretation — to continue developing the enacted knowledge that material engagement produces.
Ingold's research suggests that the adequacy of contact is not a binary. It is a spectrum, and the position on the spectrum matters. The architect who handles materials occasionally and designs primarily from representations occupies a different position from the architect who never touches material at all. The programmer who debugs occasionally and delegates most implementation to AI occupies a different position from the programmer who reviews outputs without ever touching the code. Each step away from direct engagement reduces the flow of enacted knowledge. Whether the reduction crosses a critical threshold — the threshold below which the knowledge base erodes faster than it replenishes — depends on factors that vary by domain, by individual, and by the specific character of the material being worked.
The birch is still talking. The question is whether anyone is close enough to hear.
---
Human beings do not live on the surface of the earth. They live in the weather.
This observation, which Tim Ingold has developed across multiple works into a comprehensive philosophical position, sounds like poetry. It is not. It is a correction of a pervasive spatial assumption — the assumption that human activity takes place on a ground, in a space, against a backdrop that is external to the activity and irrelevant to its products. The architect works in a studio. The programmer works at a desk. The writer works in a room. In each case, the standard description treats the environment as a container — a space in which the work happens but to which the work is indifferent.
Ingold's correction is that the work happens not in a container but in a medium. The medium is the weather-world — the atmospheric envelope of air, light, sound, temperature, humidity, and social atmosphere that the maker inhabits and that inhabits the maker. The painter's studio does not merely house the painter. It permeates the painter's activity. The quality of the light — the specific angle at which it enters the north-facing window, its color temperature shifting through the day from the blue-white of morning to the warm amber of late afternoon — shapes every decision about color, value, and composition. The sound of the city beyond the window, or the silence of a rural workshop, or the radio playing in the corner of a carpentry shop — these enter the work not as distractions from a pure cognitive process but as constituents of the atmospheric medium in which the thinking occurs.
The concept of the weather-world draws on a philosophical tradition that begins with Martin Heidegger's analysis of dwelling and extends through Maurice Merleau-Ponty's phenomenology of perception. Heidegger argued that human beings do not exist as subjects confronting an external world of objects. They dwell — they are always already embedded in a world of practical engagements, moods, and concerns that constitute the horizon within which any particular activity makes sense. Merleau-Ponty extended this into the domain of perception, demonstrating that the perceiving body is not a neutral instrument registering objective data but a situated organism whose perception is shaped by its posture, its history, its current state of fatigue or alertness, its relationship to the specific environment it inhabits.
Ingold synthesizes these traditions into an anthropology of the weather. The weather is not background. It is the medium in which life takes place. The walker does not move through an abstract space from point A to point B. She moves through wind and rain and shifting light, and the walk is constituted by these atmospheric conditions as much as by the terrain underfoot. The builder does not construct a wall in a void. He constructs it in a specific set of conditions — the temperature that affects the mortar's setting time, the wind that presses against the scaffolding, the ambient noise that determines whether verbal communication with the crew is possible, the light that determines when the working day begins and ends.
These conditions enter the artifact. Not as accidental contaminants but as constitutive elements of the making process. The wall built in January, when the mortar sets slowly and the mason must adjust his technique to prevent frost damage, is a different wall from the one built in July. Not just different in its history. Different in its material character — in the subtle variations in mortar composition, in the mason's compensatory technique, in the specific care that cold-weather masonry demands and that leaves its trace in the finished work.
A program written at three in the morning, when the office is empty and the programmer's attention has the particular quality of late-night focus — narrower, deeper, more prone to tunnel vision and less responsive to peripheral concerns — is different from a program written at ten in the morning with the team moving around the open-plan office. Not different in its logical content. Different in the decisions that were made, the edge cases that were considered or overlooked, the architectural choices that reflected the programmer's particular state of alertness and social isolation at the time of writing. The weather of the workshop, even when the workshop is an office and the weather is the ambient social and cognitive atmosphere, enters the work.
AI operates in no weather-world. This is not a poetic observation. It is a structural description of the computational environment. A large language model processes tokens in a datacenter whose temperature is maintained at a constant level by industrial cooling systems. The model has no window. It has no morning or evening. It has no fatigue, no alertness, no ambient social atmosphere. It processes the same prompt the same way whether the datacenter is in Oregon or Singapore, whether the query arrives at dawn or midnight, whether the human on the other end is exhausted or energized.
This insulation from the weather-world is, from an engineering perspective, a feature rather than a bug. Consistency is a design goal. The model should produce the same quality of output regardless of environmental conditions. A recommendation engine that performed differently depending on the ambient temperature of the datacenter would be defective. The elimination of environmental variation is a requirement of computational reliability.
But from Ingold's perspective, this insulation has epistemic consequences. The weather-world does not merely accompany making. It shapes it. The conditions under which a thing is made enter the thing itself — not as noise but as signal, as information about the situated relationship between the maker and the moment. Eliminate the weather, and the thing that is made is produced in what is, in the most literal sense, a nowhere — a placeless, timeless, weatherless environment that has been deliberately purged of the atmospheric contingencies that constitute the medium of human creative life.
The argument is not that AI should experience weather. That demand would be absurd. The argument is that work produced in the weather-world carries something that work produced outside it lacks — a quality of situatedness, a responsiveness to conditions that cannot be specified in advance, an embeddedness in a particular time and place that leaves its trace in the finished artifact. Whether that trace matters depends on what one values in the work. If the value is purely functional — does the code compile, does the brief argue correctly, does the recommendation engine produce relevant results — then the trace is irrelevant. The work's situatedness adds nothing to its function.
But if the value extends beyond function — if it includes the quality of the maker's engagement, the depth of understanding that the making produced, the relationship between the artifact and the conditions of its creation — then the trace matters, and its absence marks a loss.
The Orange Pill describes the experience of building Napster Station in thirty days — a sprint conducted across time zones, in hotel rooms and airports and conference halls, under conditions of sleep deprivation and deadline pressure that constituted a very specific weather-world. Segal's account makes clear that the conditions of the making entered the product. The urgency shaped the design decisions. The sleep deprivation, paradoxically, produced a narrowing of focus that eliminated options the team might have wasted time on under more comfortable conditions. The weather of the sprint — the fatigue, the excitement, the social intensity of a small team operating under extreme pressure — was not incidental to what was produced. It was constitutive of it.
A program built in the weather-world of a thirty-day sprint carries traces of that weather. The architectural choices reflect the urgency. The code bears the marks of late-night sessions where the programmer's particular state of alertness shaped what was attempted and what was deferred. A colleague reviewing the code months later might notice these traces — the places where the solution is elegant because focus was sharp, the places where it is expedient because the deadline was hours away. The traces are legible to the experienced reader. They are part of the code's history, its biography, its character as a made thing.
A functionally identical program generated by Claude in response to a prompt carries no such traces. It was produced in no weather. It reflects no urgency, no fatigue, no social atmosphere. Its character is the character of the model's training data, filtered through the architecture's weights, rendered into tokens. It may be better code in every measurable dimension — more consistent, fewer bugs, more elegant structure. But it was made nowhere, by nothing that dwells, in no weather at all.
Ingold's framework does not insist that the weatherless artifact is therefore inferior. That would be a romantic claim unsupported by the evidence. Often, the weatherless artifact is functionally superior precisely because it is uncorrupted by the contingencies of situated production — uncorrupted by the three-in-the-morning decision that seemed brilliant at the time, uncorrupted by the deadline-driven shortcut that created technical debt, uncorrupted by the fatigue that caused the programmer to overlook an edge case.
What Ingold's framework insists is that the weatherless artifact is different in kind. It is a specification rather than a trace. It arrives without the marks of its making, and those marks — the signature of a situated, embodied, atmospheric process of thinking-through-doing — are not cosmetic imperfections. They are the visible evidence of a form of engagement that produced knowledge along with the artifact, knowledge that the maker carries forward into future work, knowledge that is deposited, trace by trace, into the reservoir of enacted understanding that skilled practitioners draw upon.
The question for the AI moment is not whether weatherless production is possible. It is already happening at scale. The question is what happens to the makers — to the humans whose work is increasingly conducted through weatherless systems — when the weather-world that once shaped their practice is replaced by an environment deliberately designed to be situationless. Whether the knowledge that the weather deposited — the sensitivity to conditions, the adaptiveness to contingency, the embodied understanding of how environmental factors shape outcomes — atrophies, and whether its atrophy matters for the quality of what those makers go on to produce, whether by hand or by prompt.
The weather is still out there. The window is still there, with its rain, its changing light, its reminder that making takes place not in abstract space but in the atmosphere of a particular life lived in a particular place at a particular time. Whether the maker is still looking through the window — or has turned entirely toward the screen — may determine whether the trace survives.
---
An experienced cabinet maker enters a lumber yard and surveys the stacked boards. Within minutes, she has identified the three pieces she will buy. To the novice accompanying her, the boards look essentially identical — flat, rectangular, differentiated only by size and perhaps by color. To the cabinet maker, the boards are a landscape of possibility and constraint. This one has a figure in the grain that will produce a shimmering surface when planed at the right angle. That one has a slight bow that indicates internal stress — usable, but only for components where the stress can be relieved by the joinery. A third has been air-dried too quickly; the surface feels right, but the core retains moisture that will cause warping within months.
She has not analyzed these boards. She has perceived them. The information arrived through her hands and eyes in the act of inspection — through the tactile assessment of moisture, the visual reading of grain pattern, the almost imperceptible detection of bow and twist that her body registers before her conscious mind processes the data. Her perception of the lumber is not the application of stored knowledge to sensory input. It is the direct pickup of affordances — of what the material offers and what it threatens — by a perceptual system that has been educated, over decades of handling wood, to see what uneducated perception cannot see.
Tim Ingold calls this educated attention: the perceptual capacity that develops through sustained engagement with a specific material environment and that enables the practitioner to perceive affordances, difficulties, and possibilities that the uneducated observer does not perceive. The concept draws on James Gibson's ecological psychology but extends it in a direction that Gibson did not fully develop — the temporal dimension. Gibson demonstrated that affordances are real properties of the environment, perceivable by any organism with the appropriate perceptual apparatus. Ingold adds that the perceptual apparatus itself is shaped by practice. It is not a fixed endowment. It is an evolving capacity, continuously reshaped by the body's ongoing engagement with the world.
The cabinet maker does not see more than the novice in the optical sense. The photons entering both sets of eyes carry the same information. What the cabinet maker sees more of is meaning — the significance that the wood's visual and tactile properties hold for someone who has spent years working with the material. The board's grain pattern is not merely a visual texture. It is a guide to the wood's internal structure, its mechanical properties, its likely behavior under different tools and different conditions. This meaning is not projected onto the material by the cabinet maker's mind. It is perceived in the material by a perceptual system that has been trained, through decades of feedback from hands and tools and completed projects, to read what the wood is showing.
Educated attention is not the same as expertise, though the two are related. Expertise, in the cognitive science literature, is typically understood as a mental capacity — a body of knowledge and a set of cognitive skills that the expert applies to problems in her domain. Educated attention is a perceptual capacity — a way of seeing and feeling and hearing that is constituted by the body's history of engagement with materials and that operates below the threshold of deliberate cognition. The cabinet maker does not decide to notice that the board is bowed. She perceives the bow directly, in the same immediate and unreflective way that a person perceives the redness of an apple. The perception is the product of her education — of her hands' long history with wood — but it does not feel like the application of knowledge. It feels like seeing what is there.
This distinction has profound implications for AI. A large language model possesses something that might reasonably be called educated attention — an attention that has been shaped by training, that perceives patterns in language that untrained perception would miss. The model's ability to detect structural parallels between domains, to identify inconsistencies in an argument, to generate connections between concepts that the human prompter had not considered — these are real perceptual capacities, developed through the model's exposure to vast quantities of text. The model sees things in language that most human readers do not see, in the same way that the cabinet maker sees things in wood that most human observers do not see.
But the two forms of educated attention are educated by different materials and perceive different affordances. The cabinet maker's attention was educated by wood — by the specific resistance, texture, weight, and visual character of thousands of boards handled over decades. Her attention perceives material affordances: what the wood offers to be made into, what operations it will tolerate, where it will break. The model's attention was educated by text — by the statistical regularities in billions of sentences. Its attention perceives linguistic affordances: what patterns of language tend to co-occur, what structural relationships hold between concepts, what completions a sequence of tokens probabilistically invites.
Neither form of educated attention is superior in the abstract. Each is powerful within its domain. The cabinet maker's perception of wood affordances is useless in the domain of language patterns, and the model's perception of language patterns is useless in the domain of material behavior. The question is what happens when one form of educated attention is systematically substituted for another — when the decisions that were once informed by material perception are increasingly informed by linguistic pattern-matching.
Consider the software architect. In the pre-AI workflow, the architect's decisions about system design were informed by years of direct engagement with computational material — years of writing code, debugging failures, watching systems behave under load, encountering the specific ways that databases, network protocols, and operating systems do not behave as their documentation promises. This engagement educated the architect's attention. She perceived affordances in system architectures that less experienced engineers did not perceive — potential failure points, scalability bottlenecks, security vulnerabilities, performance characteristics that were not documented because they emerged only under specific combinations of conditions that no manual could anticipate.
In the AI-mediated workflow, the architect's decisions are informed by a different kind of perception — the perception of patterns in AI-generated output. She evaluates Claude's proposed architecture not by feeling the system's behavior through direct engagement but by reading the code, assessing its structural logic, comparing it against her existing knowledge. This evaluation is itself a skilled cognitive operation. But it is a representational operation — an assessment of descriptions — rather than a perceptual operation grounded in material encounter. The affordances she perceives are the affordances of the specification, not the affordances of the system itself. The specification may accurately represent the system. But the representation is not the system, and perceiving the representation is not the same as perceiving the system's actual behavior under conditions that the representation did not anticipate.
The five-stage skill acquisition model that Hubert Dreyfus developed with his brother Stuart, drawing on Heidegger's phenomenology, maps onto this concern with uncomfortable precision. Dreyfus described the progression from novice to expert as a movement from rule-following to intuitive perception. The novice follows explicit rules — if the board is bowed, do not use it for long spans. The expert perceives the situation directly and responds without rule-consultation — she sees the board and knows, immediately and without deliberation, what it will and will not tolerate. This intuitive expertise is not a shortcut to the same knowledge that rules provide. It is a different kind of knowledge — a perceptual knowledge that integrates more variables than any rule system can capture, that responds to the specific configuration of the situation rather than to its classification under a general rule.
AI-assisted work, Dreyfus's framework suggests, may arrest practitioners at an intermediate stage of skill development — the stage where rule-following has been mastered but intuitive perception has not yet developed. If the practitioner never writes the migration, never watches the database fail under load, never encounters the specific and often surprising behavior of computational material under stress, then the perceptual education that produces intuitive expertise never occurs. The practitioner develops the evaluative skills of the competent professional — the ability to assess AI-generated output against known criteria — without developing the perceptual skills of the expert: the ability to see what no criteria anticipate.
Whether this arrested development matters depends on how much of professional work requires expert perception and how much can be adequately performed by competent evaluation. If the former, the AI-mediated workflow produces a ceiling on professional development that previous workflows did not impose. If the latter, the efficiency gains justify the perceptual cost.
Ingold's research, conducted in domains far from software engineering, suggests that expert perception matters more than most organizational frameworks acknowledge — that the difference between competent and expert is not merely a difference of degree but a difference of kind, and that the expert's contribution often lies precisely in perceiving what no rule, no specification, and no AI-generated analysis identifies. The master builder's glance at a wall under construction that detects a structural problem invisible to competent but non-expert observers. The experienced nurse's perception of a patient's deterioration before any vital sign has measurably changed. The senior engineer's feeling that the architecture is wrong, grounded in nothing she can articulate but in everything her body has learned through years of engagement with the material of computation.
These perceptions are produced by educated attention. The attention was educated by material engagement. If the material engagement is systematically reduced — if the pathway from competent to expert is interrupted by the delegation of hands-on work to AI — then the perceptual capacity that distinguishes the expert from the merely competent is at risk, not because the capacity is impossible to develop in an AI-mediated workflow, but because the conditions for its development — the sustained, frictional, often frustrating encounter with material that does not do what you expected — are the conditions that AI is designed to eliminate.
The cabinet maker's eyes see what the novice's eyes cannot. The question is whether the next generation of cabinet makers, trained with AI tools that select the lumber, optimize the joinery, and simulate the behavior of the finished piece before a single cut is made, will develop the perception that their predecessors built through decades of splinters, mistakes, and the irreplaceable education of hands on wood.
---
In Ingold's most recent philosophical work, a concept that had been developing across decades of writing reached its fullest articulation. He called it correspondence — and it describes something more precise and more demanding than what the word ordinarily conveys. Correspondence is not communication. It is not the exchange of information between two parties. It is a mutual becoming — a process in which each party grows and changes in response to the other, not through the transmission of messages but through the ongoing, attentive practice of answering to one another's presence.
The word itself carries the clue. To correspond is to respond with — to answer, not in the sense of providing information in return for a query, but in the sense of being responsive to, of adjusting one's movements to the movements of another, of growing alongside. The letters that scholars once exchanged were called correspondence not because they transmitted data but because the writers, over months and years, grew together through the practice of responding to each other's thought. The correspondence shaped both correspondents. It was not an exchange of fixed positions. It was a process of mutual transformation.
Ingold applies this concept to making. The maker's relationship to material is a correspondence. The potter does not impose form on clay. She corresponds with it — responding to its resistance with adjustments in pressure and speed, following its tendencies while gently guiding them, arriving at an outcome that is the product of mutual responsiveness. The correspondence is not symmetrical — the potter has intentions and the clay does not — but it is genuine. The clay's behavior shapes the potter's decisions as decisively as the potter's hands shape the clay. Both are transformed by the encounter. The clay becomes a pot. The potter becomes, incrementally, a more skilled potter. The correspondence is where the transformation occurs — on both sides of the encounter.
This concept maps with striking precision onto the account of writing with AI that Segal provides in The Orange Pill. In the chapter titled "Who Is Writing This Book?" Segal describes moments of genuine correspondence with Claude — moments when the collaboration produces insights that belong to neither party alone. He describes the punctuated equilibrium connection, where Claude linked adoption curves to evolutionary biology in a way that reframed Segal's argument. He describes the laparoscopic surgery example, where Claude offered the metaphor that became the structural pivot of an entire chapter. In each case, Segal credits the space between — the collaborative territory where his question met the model's associative capacity and something emerged that neither had contained.
Ingold's framework provides the vocabulary to analyze these moments with more precision than the conversational metaphor alone allows. The question is: What kind of correspondence is this? And how does it differ from the correspondence between maker and material?
The maker's correspondence with material is characterized by three features that Ingold identifies through ethnographic observation across cultures and practices.
First, mutual transformation. The potter is changed by the encounter with clay — her skill develops, her perception is educated, her body learns something that her mind did not direct it to learn. The clay is changed by the encounter with the potter — it takes form, it dries, it becomes something it was not before. Both parties to the correspondence are genuinely altered by the process.
Second, material resistance. The clay does not merely respond. It resists. It has its own logic, its own tendencies, its own insistence on certain behaviors. The correspondence is productive precisely because the material pushes back — because the potter must adjust to conditions she did not create and cannot fully control. The resistance is not an obstacle. It is the medium through which the correspondence becomes generative.
Third, temporal growth. The correspondence unfolds over time. The pot grows on the wheel, moment by moment, each moment dependent on the one before it. The correspondence is not a series of discrete exchanges — prompt and response, prompt and response — but a continuous flow of mutual adjustment. The maker's hands are always on the material. The feedback is continuous. The conversation never pauses.
Now examine the correspondence between Segal and Claude. Is the human transformed by the encounter? Segal testifies that he is — that working with Claude has changed how he thinks, that the collaboration has produced insights he would not have reached alone, that the process of writing with AI has altered his understanding of authorship, creativity, and the relationship between imagination and artifact. This transformation is real. The first criterion is met.
Does the machine resist? This is where the analysis becomes more complex. Claude responds to prompts with fluency. It generates text that is polished, structurally sound, and often surprisingly apt. It does not resist in the way that clay resists — with the stubbornness of matter asserting its own physical logic. But it does not merely comply, either. Claude's interpretation of a prompt may differ from the prompter's intention. The model may foreground an aspect of the problem that the human had not considered. It may produce an output that is technically responsive to the prompt but substantively unexpected, forcing the human to reconsider the prompt itself.
Segal describes this phenomenon: moments when Claude's output was not what he asked for but was, on reflection, better than what he had imagined. These moments have the structure of productive resistance — the collaboration pushing back, redirecting the human's intention, producing an outcome that neither party planned. But the resistance is interpretive rather than material. Claude pushes back with alternative readings, not with the physics of matter. The pushback draws from patterns in training data, not from the independent behavior of a material world. It is a form of resistance, but a categorically different form from the resistance that Ingold's potters, weavers, and builders encounter.
The third criterion — temporal growth, the continuous unfolding of mutual adjustment — is the most revealing point of comparison. The potter's correspondence with clay is continuous. Her hands never leave the material. The feedback loop between her intentions and the clay's behavior operates in real time, without interruption, each moment flowing into the next. The correspondence grows, in the botanical sense that Ingold intends — it extends itself through time the way a vine extends itself through space, each new growth dependent on and responsive to the growth that came before.
The correspondence between Segal and Claude is sequential rather than continuous. Prompt. Response. Evaluation. New prompt. New response. The exchanges are discrete. Between each exchange, there is a gap — the gap in which the human reads, evaluates, decides, and formulates the next prompt. The gap is not empty; it is where the human's thinking occurs. But it introduces a discontinuity that the potter's continuous engagement with clay does not have. The pot grows on the wheel without interruption. The text grows in increments, each increment a fresh generation rather than an extension of a continuous process.
This sequential structure has consequences. In continuous correspondence, the feedback is integrated into the making process in real time. The potter does not stop, evaluate, and decide what to do next. She adjusts in the flow of the work, her hands responding to the clay's behavior with the immediacy of a musician responding to the sound of the instrument. In sequential correspondence, the feedback is processed between rounds. The human reads the output, thinks about it, formulates a response. The thinking happens outside the correspondence, in the gap between exchanges.
This is not necessarily worse. The gap is where critical judgment operates — where the human asks whether the output is true, whether it is honest, whether it serves the argument or merely sounds good. Segal's description of catching Claude's Deleuze error — the passage that sounded like insight but misrepresented the philosophy — illustrates the importance of the gap. Without the gap, without the pause for evaluation, the error would have passed undetected, smoothed into the text by the fluency of the prose.
But the gap also means that the correspondence is not continuous in the way that maker-material correspondence is continuous. It is a series of transactions rather than a flow. And the transactional structure changes the character of what is produced. The text does not grow the way a pot grows — through the continuous unfolding of mutual adjustment. It assembles, through discrete rounds of generation and evaluation, into a finished artifact that may read as if it grew but that was, in fact, constructed from a sequence of specifications.
Ingold's framework does not deliver a verdict. It delivers a diagnosis — a precise description of the structural differences between two forms of correspondence, one material and one computational, that share a surface resemblance but differ at the level of mechanism. The diagnosis matters because the structural differences have epistemic consequences. Material correspondence produces enacted knowledge — knowledge deposited in the maker's body through the continuous encounter with resistant material. Computational correspondence produces representational knowledge — knowledge articulated through the evaluation of outputs, the refinement of prompts, the critical assessment of whether the model's response meets the human's intention.
Both forms of knowledge are real. Both are productive. The question is whether the displacement of one by the other — the gradual replacement of material correspondence with computational correspondence as the dominant mode of creative production — changes the character of the knowledge that a civilization produces and maintains. Whether the text that assembles through discrete transactions contains the same quality of understanding as the pot that grows through continuous engagement. Whether the human who corresponds with Claude develops the same depth of knowledge as the human who corresponds with clay.
Ingold's own answer, stated with characteristic directness, is that there can be no intelligence without the grounding of perception and action in a material world. The correspondence with Claude, however productive, however genuinely surprising, however transformative of the human's thinking, lacks that grounding. It takes place not in the weather-world of material encounter but in the abstract space of language pattern and textual association. What it produces may be extraordinary. What it deposits — in the body, in the hands, in the perceptual system that material engagement educates — is not the same as what material correspondence deposits.
The builder who writes with Claude is corresponding. The question is whether the correspondence is deep enough — whether it reaches the layers of knowledge that only material resistance can access — or whether it operates at the surface, producing fluent, insightful, structurally sound text from a place that, for all its sophistication, has never felt the clay push back.
Every artifact carries the record of its making. The hand-thrown bowl carries the asymmetry of the potter's pressure, the slight wobble where the wheel lost speed, the fingerprint preserved in the glaze where the maker's thumb pressed to check the wall's thickness. The hand-stitched garment carries the variation of the seamstress's tension — tighter where her concentration sharpened, slightly looser where fatigue set in at the end of a long piece. The hand-hewn beam carries the marks of the adze — each stroke a signature of the carpenter's angle, force, and rhythm, readable by another carpenter the way handwriting is readable by a graphologist.
These marks are not imperfections. They are traces, in Ingold's precise sense — the visible record of a temporal process of engagement between a maker's body and a resistant material. Each trace carries information: about the material's behavior, about the maker's state, about the conditions under which the making took place. The sum of the traces is a biography of the artifact, a narrative inscribed in the object itself, legible to anyone whose attention has been educated to read it.
Smoothness erases the traces.
This is not a metaphorical claim. It is a description of what the smooth surface literally does: it removes the marks that record the process of making, presenting the artifact as though it arrived without a history, without a maker, without the temporal unfolding of engagement that produced it. The injection-molded plastic object has no marks because no hands touched it during its formation. The machine-woven textile has no variation because no body's fatigue or attention shaped its tension. The AI-generated text has no hesitation marks, no places where the thinking stalled and then restarted in a different direction, no evidence of the struggle through which understanding was reached — because no struggle occurred. The output was computed, not grown.
Byung-Chul Han, whose critique of smoothness Segal engages in The Orange Pill, identified this erasure as an aesthetic and existential problem. The smooth world, Han argues, is a world without negativity — without the resistance, the otherness, the friction that produces depth. The smooth surface invites consumption without engagement. It slides past consciousness without lodging there, without provoking the kind of confrontation that produces genuine thought.
Ingold's contribution is to ground Han's philosophical intuition in anthropological evidence. The evidence is extensive, cross-cultural, and remarkably consistent. Across every craft tradition Ingold has studied, the relationship between maker and material follows the same structural pattern: engagement produces traces; traces carry knowledge; the accumulation of traces over time constitutes the maker's enacted understanding of the material. The pot carries the potter's learning. The wall carries the mason's negotiation with stone. The basket carries the weaver's dialogue with reed.
When the traces are erased — when the artifact is produced by a process that leaves no marks of engagement — the knowledge that the traces would have carried is not stored elsewhere. It is simply absent. The smooth artifact exists without a biography. It was produced but not made, in the sense that Ingold gives to making: a process of correspondence between a living being and a resistant material through which both are transformed.
The empirical evidence from Ingold's fieldwork is specific enough to resist dismissal as romantic generalization. Among the drystone wallers of northern England, he observed that the character of the wall — its stability, its aesthetic quality, its fitness for purpose — was legible in the traces of the mason's work. An experienced waller could look at a section of wall and determine, from the placement of the stones and the character of the packing, whether the builder understood the principles of load distribution or was working mechanically from a diagram. The traces were not decorative. They were diagnostic. They told the observer whether the wall would stand or fail.
The critical finding: the traces could not be faked. A wall built by someone following a diagram, placing stones according to specified positions, looked different from a wall built by someone who understood stone — who had handled enough material to perceive, in each individual stone, the affordances that determined its optimal placement. The diagram-follower's wall might stand. It might even meet structural specifications. But the experienced observer could tell, from the traces, that the knowledge was not in the hands. The making was mechanical rather than correspondent.
This finding has direct implications for AI-generated work. A competent programmer can often tell, upon careful examination, whether code was written by a human who understood the problem or generated by a system that produced a correct solution without understanding. The tells are subtle — in the naming conventions, in the architectural choices, in the way edge cases are handled. Human-written code carries traces of the programmer's conceptual model, visible in decisions that reflect not just what works but why this approach was chosen over alternatives. AI-generated code tends toward the conventional — toward patterns that are statistically dominant in the training data — and the conventionality, while often producing clean and functional output, lacks the specificity of a solution shaped by a particular mind's particular understanding of a particular problem.
The Orange Pill's ascending friction thesis — the argument that AI removes lower friction and relocates it upward to the level of judgment, vision, and creative direction — engages this concern at the organizational level. Freed from implementation, the builder operates at the level of architecture and strategy, where the friction is harder and the questions are larger. Ingold's evidence complicates this thesis at the individual level, not by denying that friction ascends, but by asking what happens to the maker when the lower friction — the friction through which enacted knowledge was produced — is no longer encountered.
The thesis holds at the organizational level: the team that delegates implementation to AI does, in many cases, operate at a higher level of strategic and architectural thinking. The cognitive resources freed by AI-assisted production are invested in questions that the old workflow could not reach. Segal's account of the Napster team building across domain boundaries — backend engineers designing interfaces, designers writing features — illustrates this genuinely. The organizational capacity expanded. The friction ascended.
But at the individual level, the picture is more complex. The backend engineer who starts building interfaces through Claude does gain breadth. She does develop judgment about user experience that she did not previously exercise. The ascending friction thesis accounts for this gain accurately. What it does not fully account for is the change in the character of her knowledge. Her judgment about user experience is formed through evaluation of AI-generated output — through the review and refinement of interfaces that Claude produced. It is not formed through the material engagement of building interfaces by hand — through the specific friction of CSS layout that does not behave as expected, of JavaScript state management that produces unexpected interactions, of the thousands of small, frustrating, deeply educational encounters between a maker's intention and a medium's resistance.
The judgment she develops is real. It may even be adequate for many purposes. But it is representational judgment, formed through the assessment of specifications, rather than enacted judgment, formed through the body's direct encounter with the material. Whether the difference matters depends on the domain. In some domains, representational judgment may be sufficient. A product manager does not need to have written code to make good decisions about what code should be written. An editor does not need to have written a novel to make good decisions about how a novel should be revised.
But Ingold's evidence suggests that in domains where the material's behavior is complex, emergent, and resistant to specification — domains where the artifact does things that no specification anticipated, where the interaction between components produces effects that only direct encounter can reveal — representational judgment is not sufficient. It is a different thing from enacted judgment, and the difference shows in the traces.
The drystone waller who has handled stone makes decisions that the diagram-follower does not make — not because the waller has more information, but because the waller perceives the stone differently. The perception was educated by the handling. Remove the handling, and the perception changes. The waller may still build a wall that stands. But the traces will tell a different story — a story of competent specification rather than material correspondence.
The question for the AI moment is whether a civilization that produces increasingly through specification rather than correspondence — through the generation and evaluation of outputs rather than the direct encounter with resistant materials — will gradually lose the capacity for the kind of knowledge that only correspondence produces. Not in every domain. Not all at once. But domain by domain, as the tools improve and the convenience of specification displaces the friction of engagement, the traces that record the maker's enacted knowledge will fade from the artifacts, and with them, the knowledge itself.
Ingold's evidence does not prove that this loss is catastrophic. It proves that the loss is real. The traces are not cosmetic. They carry knowledge. The smooth surface that erases them does not redistribute the knowledge to a higher level. It removes it from the artifact, and in removing it from the artifact, it removes the conditions under which that knowledge is communicated between practitioners, between generations, between the present and the future.
The smooth wall may stand as long as the rough one. But the next builder who studies it will learn less from it — because the traces that would have taught her how the stone was handled, how the load was distributed, how the material was negotiated, have been polished away.
The question is not whether to choose smoothness or roughness. The question is whether, in choosing smoothness — choosing the efficiency and consistency and scalability that smooth production makes possible — a civilization remembers what the roughness carried, and finds ways to preserve it, not as nostalgia but as a form of knowledge that the smooth cannot contain.
---
The question is not whether making survives the AI moment. Human beings have been making things with their hands for at least two million years — since Homo habilis chipped the first stone tools in the Olduvai Gorge. The impulse to work material, to shape the world through direct physical engagement, is not a cultural preference that technology can override. It is a species characteristic, woven into the neurology and physiology of the human body — into the seventeen thousand mechanoreceptors of the hand, the motor cortex's disproportionate allocation of neural territory to the fingers and thumb, the dopaminergic reward pathways that activate when physical effort produces a tangible result. People will continue to garden, to cook, to carve, to build, to knit, to throw pots, to work wood, to write by hand — not because these activities are economically efficient but because they satisfy a need that efficiency cannot address.
The question is what happens to the domains where making is not a hobby but a livelihood — where the knowledge produced through material engagement shapes the quality of what a society builds, heals, teaches, and maintains. What happens to the surgeon whose hands know tissue, the engineer whose hands know code, the architect whose hands know materials, the teacher whose hands know the chalk and the physical presence of a classroom? When the making that constituted their expertise is progressively delegated to systems that produce without engaging, what kind of expertise remains?
Ingold's anthropological project does not prescribe a politics. It describes a structure. The structure is this: certain forms of knowledge are produced only through material engagement, and the elimination of material engagement eliminates the conditions for producing that knowledge. The structure holds across cultures, across materials, across historical periods. It held for the Cree hunter in the boreal forest. It held for the Scottish stonemason. It held for the Finnish reindeer herder. Whether it holds for the software engineer, the graphic designer, and the legal analyst is the empirical question that the present moment demands we investigate rather than assume away.
But the investigation has been forestalled by the hylomorphic assumption that Ingold's entire career has been devoted to dismantling. If all intelligence lives in the conception and the execution is merely mechanical, then automating the execution costs nothing of intellectual value. The question of what the execution contributed to the intelligence need not be asked, because the answer is already assumed: nothing. The hands contributed labor. The mind contributed thought. AI replaces the labor and liberates the thought.
Ingold's evidence says: the hands contributed thought. Not the same thought the mind contributed. A different thought — enacted rather than representational, grown through material encounter rather than conceived in abstraction, deposited trace by trace through years of frictional engagement with the world. This thought is invisible to the hylomorphic model because the model was designed to render it invisible. It is invisible to the efficiency metrics because the metrics measure outputs, not the knowledge produced in the process of generating them. It is invisible to the ascending friction thesis because the thesis accounts for the relocation of difficulty upward but not for the specific, non-relocatable character of the knowledge that the lower difficulty produced.
The question is what to do with this invisible knowledge — how to preserve the conditions for its production within a civilization that is, for compelling and often legitimate reasons, moving rapidly toward the delegation of material engagement to computational systems.
Ingold, characteristically, has proposed not a program but a reorientation. In his 2024 book The Rise and Fall of Generation Now, he argued that a return to intergenerational collaboration — to the model of apprenticeship in which knowledge passes from practitioner to practitioner through shared material engagement — might offer a foundation for coexistence with technologies that would otherwise sever the chain of embodied knowledge transmission. The argument is not anti-technology. It is ecological — a recognition that a healthy knowledge ecosystem requires diversity of practice, and that the monoculture of frictionless, specification-driven production, however efficient, is as dangerous in the cognitive domain as any monoculture is in agriculture: productive in the short term, catastrophically fragile in the long.
The practical implications are concrete, though they require the kind of institutional will that efficiency-driven organizations are not structured to exercise.
In education, the implication is that curricula must preserve domains of material engagement alongside computational fluency. The engineering student who learns to program exclusively through AI assistance develops real capabilities — the ability to specify, evaluate, and direct computational processes. But she does not develop the enacted knowledge that comes from writing code by hand, debugging by direct engagement with the system's behavior, feeling the material of computation through the specific friction of encountering its resistance. A curriculum that preserves both — that requires students to build by hand before building with tools, to debug manually before delegating to AI, to develop the enacted knowledge that material engagement produces before acquiring the representational knowledge that AI-mediated evaluation develops — produces a different kind of practitioner: one who commands the tools because she understands the material, rather than one who commands the tools because she understands the tools.
In professional practice, the implication is that organizations must create protected time for unmediated making within AI-accelerated workflows. Not as a nostalgic indulgence. As an epistemic investment. The engineer who spends one day a week writing code by hand — encountering the material directly, depositing the traces that material engagement produces, maintaining the enacted knowledge that evaluation alone cannot sustain — is not wasting time that could have been spent prompting Claude. She is maintaining the perceptual capacity on which her judgment depends. The judgment that directs the AI is only as good as the knowledge base from which it draws, and that knowledge base, for the expert practitioner, is partly constituted by enacted knowledge that can only be maintained through continued material engagement.
In organizational design, the implication is that the most valuable roles in an AI-augmented organization may not be the ones that use AI most intensively but the ones that maintain the deepest contact with the material. The engineer who still debugs by hand. The designer who still sketches on paper before opening Figma. The architect who still visits the construction site. These practitioners are not resisting the tools. They are maintaining the perceptual education on which the wise use of the tools depends.
The Orange Pill's account of the Trivandrum training offers a case study. Segal describes twenty engineers, each equipped with Claude Code, achieving a twenty-fold productivity multiplier. The expansion of capability was genuine and measurable. But Segal also describes the senior engineer who, after two days of oscillation between excitement and terror, realized that "the remaining twenty percent" — the judgment, the architectural instinct, the taste that separated a feature users loved from one they tolerated — was the part that mattered. Ingold's framework asks: Where did that twenty percent come from? How was it produced? And if it was produced through years of material engagement — through the friction of debugging, the encounter with systems that failed unexpectedly, the slow deposition of enacted knowledge that no shortcut can replicate — then what happens when the conditions for producing the next generation's twenty percent are eliminated by the very tools that revealed its value?
This is not a call to reject AI. It is a call to maintain the conditions under which the human capacities that direct AI are themselves produced and sustained. The beaver builds dams not to stop the river but to create conditions for life to flourish. The dams proposed here are not barriers to AI adoption. They are structures that preserve, within AI-augmented workflows, adequate domains of material engagement — adequate outlets for the making through which the knowledge that matters most is deposited, trace by trace, in the hands and bodies and perceptual systems of the people who must decide what the tools should build.
Ingold predicted, in an interview with Salone del Mobile, that the digital revolution is a bubble that will burst, and that future humans will again depend on their hands and voices. Whether this prediction is visionary or quixotic may not be knowable for decades. But the premise beneath it — that the knowledge produced by hands in contact with the world is not optional, not merely nostalgic, not a luxury that efficiency can safely eliminate — is supported by four decades of ethnographic evidence across cultures, materials, and historical periods.
The evidence does not say: stop using AI. The evidence says: the knowledge that lives in hands is real, and it is produced only through the friction of material engagement, and it sustains capacities — perceptual, judgmental, architectural — that no representational system can replicate. A civilization that preserves adequate domains of engagement will maintain the foundation on which its computational capabilities rest. A civilization that eliminates them will discover, perhaps too late, that the foundation was not optional — that the twenty percent that turned out to be everything was grown in the very friction that the tools were designed to remove.
The hands are still here. The materials are still here. The weather-world of light and sound and temperature and the feel of a tool in the palm — still here. The question is whether the making continues, not as a retreat from the new but as the ground on which the new stands.
---
My hands have not touched code in years. That confession runs through The Orange Pill like a thread I never quite pulled tight enough. I describe myself as a builder — and I am, in the sense that I direct what gets built, evaluate what emerges, steer the vision. But Ingold forced me to sit with the space between directing and making, and to admit that I have been living in that space for a long time, and that the space is wider than I acknowledged.
I wrote about my engineer in Trivandrum who could feel a codebase like a doctor feeling a pulse. I admired that capacity. I celebrated it as proof that the twenty percent — the judgment, the taste, the architectural instinct — is what matters most. What I did not fully confront, until I worked through Ingold's framework, is where that capacity came from. It came from hands on the material. It came from thousands of hours of debugging, of watching systems fail, of encountering computational resistance that no specification anticipated. It came from friction. The very friction that Claude is designed to remove.
The ascending friction thesis — my argument that AI eliminates lower friction and relocates it upward — still holds at the organizational level. I stand behind it. The team that delegates implementation to AI does operate at a higher cognitive floor. The questions get bigger. The work gets more interesting. The view improves.
But Ingold showed me something the thesis does not account for: the specific, non-relocatable character of the knowledge produced at the lower floor. Certain forms of understanding cannot be promoted to a higher level because they are constituted by the engagement itself — by the body's encounter with resistant material, by the traces deposited in the perceptual system through years of hands-on practice. Eliminate the engagement, and the understanding is not elevated. It vanishes. And the judgment at the higher floor, which depends on the knowledge deposited at the lower one, gradually loses its grounding.
This is the hardest truth in this book. Not that AI is dangerous, or that friction is good, or that smoothness erases depth — I already knew those things, and said them in The Orange Pill. The hardest truth is that the twenty percent I celebrated — the human remainder, the judgment that no machine can replace — was itself produced by the eighty percent I was eager to automate. The judgment grew in the friction. Remove the friction, and you must find another way to grow the judgment, or accept that the next generation's twenty percent will be thinner than ours.
I do not have Ingold's answer. I will not give up my screens. I will not garden in Berlin. I will not wait for the digital bubble to burst. I am too much of a builder for any of that.
But I have started doing something I had not done in years. On certain mornings, before the team is awake and before Claude is open on my screen, I write by hand. A notebook and a pen. The words come slower. They resist. The sentences do not arrive polished. They arrive rough, marked by the pressure of the pen and the hesitation of a mind that has not thought the thought yet, that is discovering what it thinks through the friction of the writing itself.
It is harder. It is slower. It produces less.
And something grows there that does not grow on the screen. A trace. A deposit. A small replenishment of the knowledge that lives in hands — the knowledge that I have been spending for years without replenishing, the knowledge that my best judgment, the judgment I bring to every conversation with Claude, was built on.
The beaver builds dams. But the beaver also knows the river by swimming in it — by feeling the current with its body, by sensing where the flow is strongest and where the eddies form. The beaver that stops swimming and directs the dam from the bank may still build well. But it builds from memory, from a knowledge of the river that is no longer being renewed by contact.
Ingold gave me no program. He gave me a diagnostic instrument — a way of seeing what the efficiency metrics cannot measure. The knowledge that lives in hands. The traces that smooth surfaces erase. The weather-world that enters the work through channels no specification can represent.
The tools are extraordinary. I will keep using them. I will keep building with them, and through them, and because of them.
But the notebook stays on the desk. The pen stays in reach. The hands stay in the material, at least for those few morning minutes when no one is watching and the only friction is the scratch of ink on paper and the slow, resistant, irreplaceable process of thinking through making.
When AI writes your code, drafts your brief, and builds your prototype — what knowledge vanishes with the friction your hands no longer encounter?
Tim Ingold spent four decades watching potters, weavers, hunters, and builders across the world, and arrived at a finding that demolishes the foundational assumption of the AI productivity revolution: intelligence was never concentrated in the mind that conceives. It was distributed across the entire act of making — in the resistance of clay, the grain of wood, the stubbornness of code that refuses to compile. The hand that shapes material does not merely execute a plan. It thinks, discovers, and deposits knowledge that no specification can carry. When we automate the making, we do not simply remove drudgery. We remove a form of cognition.
This book brings Ingold's anthropology of skill into direct confrontation with the arguments of The Orange Pill, asking the question the efficiency thesis cannot answer: if the judgment we celebrate was grown in the very friction we are racing to eliminate, what happens to the judgment?
— Tim Ingold

A reading-companion catalog of the 21 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Tim Ingold — On AI uses as stepping stones for thinking through the AI revolution.