By Edo Segal
The engineer who couldn't explain her own code was the one who finally made me pick up Dreyfus.
She had built something remarkable with Claude — a detection system for Napster Station that worked beautifully in production. I asked her to walk me through the logic. She opened the file, scrolled through it, and stopped. "I know what it does," she said. "I don't know why it works."
She was not embarrassed. She was confused. The output was hers in every meaningful sense — she had directed it, tested it, shaped it through iteration. But the understanding that should have accompanied the building had not arrived. The artifact existed. The knowledge that building usually deposits in your hands and your gut and your instinct for what will break next — that was missing.
I described this moment in *The Orange Pill* through what I called the geological metaphor: each hour of struggle depositing a thin layer of understanding that accumulates into something you can stand on. What I did not have, when I wrote that passage, was the philosophical framework that explains *why* the layers matter — not just practically, but at the level of what intelligence actually is.
Hubert Dreyfus spent fifty years building that framework. He argued, starting in the 1960s when it made him a pariah in computer science departments, that intelligence is not computation. It is not information processing. It is not pattern matching, however sophisticated. Intelligence is what happens when a being that has a body, a history, and something at stake engages with a world that pushes back. The pushing back is not an obstacle to intelligence. It is the medium through which intelligence develops.
Every builder I know has felt the truth of this without having the language for it. The senior engineer who "feels" a bug before she can articulate it. The architect who knows a system is fragile because something in her body tightens when she reads the code. The decades of friction that produced that bodily knowledge — Dreyfus explains why no shortcut can replace them, and why the question of what AI removes matters as much as the question of what it adds.
This book applies Dreyfus's framework to the specific claims and confessions of *The Orange Pill*. It does not dismiss what AI can do. It illuminates what AI cannot be — and why that distinction determines whether the amplifier serves genuine intelligence or gradually hollows it out.
The engineer eventually rebuilt the system by hand. It took her a week. The second version was worse in several measurable ways. But she could explain every line. She could feel where it was fragile. She owned it in a way the first version never allowed.
That difference is what Dreyfus spent his life trying to articulate. Now is exactly the moment to listen.
-- Edo Segal & Opus 4.6
Hubert Dreyfus, 1929–2017
Hubert Dreyfus (1929–2017) was an American philosopher who spent most of his career at the University of California, Berkeley, where he became the most prominent philosophical critic of artificial intelligence in the twentieth century. Born in Terre Haute, Indiana, he studied at Harvard under Willard Van Orman Quine before immersing himself in the European phenomenological tradition of Martin Heidegger and Maurice Merleau-Ponty. His landmark works *What Computers Can't Do* (1972) and its revised edition *What Computers Still Can't Do* (1992) argued that human intelligence is fundamentally embodied, situated, and rooted in practical engagement with the world — and therefore cannot be replicated by systems that manipulate symbols according to formal rules. Together with his brother Stuart Dreyfus, he developed the influential five-stage model of skill acquisition, which demonstrated that expertise involves not faster rule-following but the progressive abandonment of rules in favor of holistic, intuitive, bodily perception. Initially dismissed and ridiculed by the AI research community, Dreyfus lived to see many of his central critiques acknowledged as prescient. His work remains the most rigorous philosophical challenge to the assumption that intelligence is fundamentally computational — a challenge that the arrival of large language models has made not less relevant but more urgent.
In 1965, a philosopher working at the RAND Corporation published a paper with a title designed to provoke. Hubert Dreyfus called it *Alchemy and Artificial Intelligence*, and its central claim was simple enough to fit on an index card: the entire field of artificial intelligence rested on a philosophical mistake. The researchers at MIT and Stanford and Carnegie Mellon who were building chess programs and theorem provers and natural language parsers had assumed, without argument and mostly without awareness, that human intelligence consists of manipulating symbolic representations according to formal rules. Dreyfus said this assumption was not merely unproven but demonstrably false, and that everything built on top of it would eventually collapse under the weight of problems it could not solve.
The AI community's response was not measured. Seymour Papert at MIT wrote a rebuttal titled "The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies." Researchers circulated jokes. One episode became legend: in 1967, Dreyfus lost a chess game to MIT's MacHack program, and the defeat was retold as though beating a philosopher at chess could refute a philosophical argument about the nature of intelligence. The hostility was itself diagnostic. Dreyfus had not attacked a research program. He had attacked a worldview. The researchers who dismissed him understood, at some level, that if his critique held, their life's work rested on sand.
The critique did hold. Not in every particular — Dreyfus was wrong about some predictions and imprecise about others — but in its central philosophical claim. The symbolic AI of the 1960s and 1970s ran aground on exactly the problems Dreyfus identified: the frame problem, which is the impossibility of specifying in advance which features of a situation are relevant to a given task; the common-sense knowledge problem, which is the impossibility of encoding the vast background of shared understanding that human beings bring to every encounter; and the embodiment problem, which is the impossibility of replicating, in a disembodied machine, the situated, bodily, care-laden engagement with the world that constitutes human understanding. By the 1990s, as AI historian Daniel Crevier acknowledged, "time has proven the accuracy and perceptiveness of some of Dreyfus's comments." By the early years of the twenty-first century, several of his radical opinions had become mainstream.
The vindication was partial and ironic. The field did not adopt Dreyfus's philosophical framework. It simply abandoned the approach his framework had critiqued and moved to a different one — connectionism, neural networks, and eventually the deep learning revolution that, beginning around 2012, produced systems of astonishing capability. These systems did not manipulate symbols according to explicit rules. They learned statistical regularities from vast quantities of data and generated outputs that were often indistinguishable from the products of genuine intelligence. The philosophical foundations Dreyfus had demolished were quietly discarded, but the ambition they had supported — the ambition to build machines that think — survived the demolition and found new ground.
The question that now presses with genuine urgency is whether Dreyfus's deeper arguments survive the transition from symbolic to statistical AI. The surface-level critique — that rule-based systems cannot capture the flexibility of human cognition — has been rendered moot by systems that do not use explicit rules. But the deeper critique was never about rules per se. It was about embodiment, about being-in-the-world, about the difference between a system that processes information and a being that cares about what the information means. That critique operates at a level that the shift from symbolic to statistical methods does not touch.
Consider the phenomenon that The Orange Pill documents with unusual candor. Edo Segal describes working with Claude, Anthropic's large language model, to write a book about the very transformation that the tool represents. The collaboration produces extraordinary results. Ideas connect across disciplinary boundaries that no single human mind could bridge alone. Prose emerges that captures half-formed intuitions with startling precision. The author describes tearing up at the beauty of passages that the collaboration excavated from his thinking — ideas that were present in his mind but unreachable without the machine's capacity to hold multiple threads simultaneously and find the connections between them.
And then, in moments of disarming honesty, Segal describes the failures. A passage about Gilles Deleuze that sounded like insight but broke under philosophical examination. An argument that was rhetorically elegant and intellectually hollow. The specific, seductive danger of "confident wrongness dressed in good prose." The discipline required to catch these failures — to distinguish between output that looks like understanding and output that actually understands — and the creeping suspicion that the discipline might atrophy with use, that the capacity to detect the seam between genuine and simulated thought might itself be eroded by reliance on the tool.
These moments of failure are, from the perspective of Dreyfus's framework, the most philosophically significant passages in the entire book. They are the moments when the difference between embodied intelligence and its statistical simulation becomes visible. The machine produces the linguistic traces of understanding — the kind of prose that an embodied, situated, caring human being would produce if she genuinely understood Deleuze — without possessing the understanding that would make the traces reliable. The output looks like the output of a mind that has read Deleuze, struggled with his concepts, argued about them in seminars, applied them to real problems, and arrived at a considered position. But no such mind produced it. What produced it was a system that has processed vast quantities of text in which Deleuze is discussed and has learned the statistical regularities of how Deleuze's name co-occurs with certain concepts, certain rhetorical structures, certain patterns of philosophical argument. The output is not understanding. It is the residue of understanding, extracted from the textual traces that embodied understanding leaves behind and recombined according to probabilistic patterns.
The distinction may seem academic when the output is good enough. And often — perhaps most of the time — the output is good enough. This is what makes the new AI genuinely different from the old AI, and what requires Dreyfus's critique to be updated rather than simply reasserted. The symbolic AI of the 1960s failed conspicuously. Its limitations were visible on the surface of its outputs, which were brittle, inflexible, and obviously mechanical. The new AI fails inconspicuously. Its limitations are hidden beneath a surface of fluent, contextually appropriate, often genuinely useful prose. The failure mode has shifted from obvious inadequacy to plausible mimicry, and plausible mimicry is far more dangerous than obvious inadequacy, because it erodes the capacity to tell the difference.
Dreyfus identified, across decades of philosophical work, four assumptions that he believed undergirded the AI project and that he argued were false. The biological assumption: that the brain is analogous to computer hardware and the mind to software. The psychological assumption: that the mind works by performing discrete computations on discrete representations. The epistemological assumption: that all knowledge can be formalized in rules or laws. The ontological assumption: that reality consists of independent, atomic facts that can be represented individually. Symbolic AI depended on all four. Large language models depend on none of them in their original form. They do not manipulate symbols according to rules. They do not represent atomic facts. They learn continuous, distributed representations from data in a way that is, at least superficially, more analogous to how biological neural systems process information.
But Dreyfus's critique went deeper than these four assumptions. Beneath them lay a more fundamental claim about what intelligence is — not a property of a computational system, however organized, but a mode of being. Intelligence, in the Heideggerian tradition that Dreyfus spent his career translating into terms the AI community could engage with, is not something a mind does. It is something a being is. To be intelligent is to be thrown into a world that matters, to care about outcomes, to have projects and concerns that are not chosen from a menu but discovered in the course of living a life that is finite, embodied, and saturated with significance. A being that lacks these features — that has no body, no mortality, no childhood, no cultural formation, no mood, no concern — may process information with extraordinary sophistication. It may produce outputs that are indistinguishable from the outputs of genuine intelligence. But it is not intelligent in the sense that matters, because intelligence in the sense that matters is not a computational achievement. It is an existential condition.
This claim is not mystical. It is phenomenological, which is to say it is based on careful description of how things actually appear in lived experience. When a senior software engineer looks at a codebase and feels that something is wrong — the phenomenon Segal describes through his geological metaphor of understanding deposited through years of struggle — she is not performing a faster version of the junior engineer's rule-following. She is perceiving the situation directly, with an immediacy that bypasses deliberation entirely, because decades of engaged practice have given her a bodily sensitivity to the codebase's structure that operates below the threshold of conscious analysis. Her knowledge is not in her head. It is in her hands, her posture, her sense of readiness, the particular quality of attention she brings to the screen that has been shaped by thousands of hours of friction-rich encounter.
No large language model possesses this embodied history. Claude has processed more text about software engineering than any human being could read in a thousand lifetimes. It can generate code, identify patterns, suggest architectural improvements. But it does not feel the codebase. It does not carry the bodily residue of having struggled with it, failed, tried again, and gradually developed the intuitive grasp that Dreyfus calls expertise. Its outputs are drawn from the statistical distribution of what expert engineers have written about their work. The outputs are often excellent. They are not expertise.
The distinction matters most precisely where The Orange Pill is most optimistic. Segal's central argument is that AI is an amplifier — that it magnifies whatever signal the human feeds it, and that the quality of the amplified output depends on the quality of the human input. Dreyfus's framework accepts the premise but presses the implication. An amplifier amplifies the signal it receives. If the signal comes from a being whose embodied engagement with the world is intact — who still reads the code, still feels the architecture, still cares about the users, still carries the geological layers of accumulated understanding — then the amplification serves genuine intelligence. But if the signal comes from a practitioner whose embodied engagement has atrophied through reliance on the tool, who reviews output without understanding it, who has become a passive consumer of the machine's productions rather than an active participant in the work — then the amplification serves something else entirely. It amplifies the appearance of intelligence without the substance. And the machine, which cannot tell the difference, will amplify both with equal fidelity.
Dreyfus was not a Luddite. This distinction requires emphasis because the caricature has been persistent and damaging. He did not argue that computers are useless or that AI research should be abandoned. He argued that a specific conception of intelligence — the conception that treats it as disembodied computation — is philosophically wrong, and that building AI on this foundation will produce systems that mimic intelligence without achieving it. The mimicry may be commercially valuable, practically useful, even transformative. What it cannot be, on Dreyfus's account, is intelligence in the full sense — the sense that involves a body, a world, a mood, a concern, a life that is at stake.
The arrival of large language models does not refute this argument. It raises the stakes. When the mimicry was poor, the distinction between genuine and simulated intelligence was obvious and the practical consequences were limited. Now that the mimicry is extraordinary — now that the output of disembodied processing is often indistinguishable from the output of embodied understanding — the distinction has become simultaneously harder to see and more important to maintain. The philosophical question is no longer whether computers can mimic intelligence. They can. The question is whether the mimicry is the thing itself, or whether something essential is missing — something that cannot be detected in the output but only in the process that produced it, and in what that process does, or fails to do, to the human being who depends on it.
Dreyfus spent five decades arguing that something essential is missing. The argument was right about symbolic AI. The question this book pursues — with genuine uncertainty about the answer, because genuine uncertainty is the only honest posture — is whether it remains right about the AI that has inherited the ambition while abandoning the method. The chapters that follow apply Dreyfus's philosophical framework to the specific claims, examples, and confessions that The Orange Pill provides, not to dismiss Segal's argument but to test it against the most rigorous philosophical critique of AI ever produced — and to see what survives.
---
Martin Heidegger published *Being and Time* in 1927, having written much of it in a cabin in the Black Forest, and the central concept of that forbidding, essential work is one that Dreyfus spent his career translating into language that could do philosophical work outside of German phenomenology. The concept is Dasein — the being for whom its own being is an issue — and its fundamental character is what Heidegger called In-der-Welt-sein: being-in-the-world.
The phrase sounds redundant until you understand what it is denying. It denies the picture of intelligence that has dominated Western philosophy since Descartes: the picture of a mind that exists first as a thinking thing, a res cogitans, and then reaches out to encounter a world that exists independently of it. On this Cartesian picture, the fundamental situation of a knowing being is that of a subject confronting an object — a mind looking at a world through the window of the senses, trying to build an accurate internal representation of what lies outside.
Dreyfus argued, following Heidegger, that this picture is not just wrong but catastrophically misleading, and that it is the hidden foundation of the entire AI project. If intelligence consists of a mind building representations of an external world and manipulating those representations according to rules, then of course a computer can be intelligent — you need only give it the right representations and the right rules. The entire research program of classical AI follows from the Cartesian picture as naturally as water flows downhill. And the entire research program fails, Dreyfus argued, because the Cartesian picture fails. Intelligence is not a mind looking at a world. It is a being in a world — already engaged, already caring, already thrown into a situation that is saturated with significance before any conscious act of representation occurs.
Being-in-the-world, in Heidegger's analysis, is not a spatial relationship. It is not the fact that a human body occupies a location in physical space the way a stone occupies a location. It is the way a human being inhabits a world that matters to it — a world of projects and possibilities, of concerns and commitments, of things that show up as useful, dangerous, beautiful, boring, relevant, or irrelevant depending on what the being is trying to do. The world is not a neutral backdrop against which a mind performs computations. It is an environment constituted by the being's engagement with it, and the engagement is prior to any act of detached contemplation.
Consider the difference between two descriptions of what happens when a software engineer sits down at her terminal. On the Cartesian account — the account that AI research has traditionally assumed — the engineer surveys her codebase, forms a mental representation of its structure, identifies the problem, retrieves relevant rules and heuristics from memory, applies them to the representation, and produces a solution. Intelligence is the manipulation of representations. The better the representations and the more powerful the manipulation, the better the intelligence.
On the Heideggerian account that Dreyfus championed, this description gets the phenomenology exactly backward. The engineer does not survey the codebase from a position of detached observation. She is already in it. She has been in it for months or years. The codebase is not an object she contemplates but an environment she inhabits, the way a carpenter inhabits a workshop. The functions and classes and data structures are not neutral entities she represents mentally. They are ready-to-hand tools she uses without thinking about them, the way the carpenter uses the hammer without thinking about the hammer. Her understanding of the system is not stored in mental representations. It is distributed across her hands, her posture, her habits of attention, her sense of where the code is fragile and where it is robust — a sense that has been built through thousands of hours of engaged, friction-rich practice and that operates below the threshold of conscious articulation.
When something goes wrong — when a bug appears, when a function behaves unexpectedly, when the system resists her intentions — the engineer does not calmly retrieve rules from memory and apply them to a representation. She feels the wrongness. The phenomenology is one of disruption, of something that should be flowing smoothly suddenly becoming obtrusive, demanding attention. The system has broken down, and in the breakdown, the engineer's relationship to her tools shifts from absorbed engagement to conscious inspection. Heidegger called this shift the transition from ready-to-hand to present-at-hand, and Dreyfus identified it as one of the fundamental structures of intelligent activity that no AI system has ever replicated — not because the shift is computationally complex, but because it requires a being that was absorbed in the first place.
A system that was never absorbed cannot experience disruption. A system that was never engaged cannot be surprised by failure. A system that has no projects of its own cannot feel the specific frustration of a project that is going wrong. And without absorption, disruption, and frustration — without the full emotional and bodily texture of what it is like to be a being whose work matters to it — the intelligent response to breakdown cannot occur. What occurs instead is processing: the application of pattern-matching routines to an input that has been flagged as anomalous. The output may be useful. It is not understanding.
The Orange Pill documents a specific instance of being-in-the-world with unusual vividness. Segal describes flying to Trivandrum, India, in February 2026 to train his engineering team in the use of Claude Code. The description is rich with the texture of embodied engagement: the physical displacement of travel, the specific quality of being in a room with twenty engineers whose careers are being restructured in real time, the oscillation between exhilaration and terror that Segal describes with a candor that carries the phenomenological weight of genuine experience. This is not a mind contemplating a problem from a position of theoretical detachment. This is a being thrown into a situation — thrown by decisions already made, by commitments already undertaken, by a history of building and breaking and rebuilding that has deposited its layers in the specific way Segal sits in the room, the specific way he reads his team's faces, the specific way he feels the weight of what he is asking them to do.
Claude has no such history. Claude was not thrown into the Trivandrum training room. It did not fly there. It does not carry the bodily residue of thirty years of technology building. It does not feel the specific anxiety of asking twenty experienced professionals to reconceive their relationship to their own expertise. It processes the description of these events with extraordinary linguistic sophistication. It can generate prose about what it would feel like to be in that room that might read, to a reader unfamiliar with the situation, as though it were written by someone who was there. But the prose comes from statistical regularities in the training data — from the patterns of how humans describe such experiences — not from having had the experience. The output resembles the output of being-in-the-world. It is produced by a system that is not in any world.
Dreyfus would identify the core of the confusion in Segal's river-of-intelligence framework, which proposes that intelligence is a force of nature flowing through various substrates — from hydrogen atoms to biological evolution to human consciousness to artificial computation. The framework is elegant and carries genuine insight. But from a Heideggerian perspective, it makes a specific error that matters: it treats intelligence as a single substance that differs only in the channel through which it flows. Hydrogen atoms, neurons, algorithms — all channels for the same river.
Dreyfus's framework insists on a different picture. Intelligence is not a substance that flows through channels. It is constituted differently by each mode of being through which it manifests. The intelligence of a living organism that has been shaped by evolution to respond to its environment through embodied engagement is not the same intelligence, differing only in degree, as the information processing of a computational system that has been trained on the textual residue of such engagement. The mode of being determines the character of the intelligence. A being that has a body, that has been thrown into a world not of its choosing, that cares about what happens, that will die — such a being's intelligence has a character that no disembodied system can replicate, because the character is constituted by the embodiment, the thrownness, the care, and the mortality.
This does not mean that what Claude does is trivial or unimpressive. It means that what Claude does is something other than what a human being does when she understands, even when the outputs are identical. The identification of output with intelligence — the assumption that if the output looks intelligent, the process that produced it must be intelligent — is the Cartesian assumption in its most contemporary and most seductive form. Classical AI made this assumption crudely, and the crudeness of its outputs exposed the error. Large language models make the assumption with such sophistication that the error has become nearly invisible.
Nearly, but not entirely. The Orange Pill contains a passage that, from Dreyfus's perspective, is the most philosophically revealing in the book. Segal describes the moment when Claude produced a passage about Deleuze that was elegant, well-structured, and wrong — wrong in a way that was obvious to anyone who had actually read Deleuze but entirely invisible in the quality of the prose. The smooth surface of the output concealed the absence of understanding beneath it. Segal caught the error because he possessed the background — the embodied, culturally constituted, philosophically informed background — against which the error was visible. The machine did not catch it because the machine has no such background. It has a statistical approximation of a background, derived from the textual traces of millions of embodied beings, and the approximation was good enough to produce prose that sounded right without being right.
This is the structural danger that Dreyfus's framework identifies. Not that AI produces bad output — it often produces remarkably good output — but that it produces output whose quality cannot be assessed from the output alone. Assessment requires a background that only an embodied being possesses. The more sophisticated the output becomes, the more demanding the assessment becomes, and the greater the temptation to skip it — to trust the surface, to accept the plausible, to let the aesthetic of the smooth substitute for the work of genuine evaluation.
Segal's response to this danger is what he calls the discipline of collaboration — the commitment to reject Claude's output when it sounds better than it thinks, when the prose is smooth but the idea beneath it is hollow. This discipline is itself a form of being-in-the-world. It requires the embodied judgment that only a life of engaged intellectual practice can produce: the capacity to feel, in one's reading body, the difference between prose that carries understanding and prose that merely resembles it. The discipline works only as long as the disciplinarian maintains the embodied capacity that makes it possible. And this is Dreyfus's deepest concern — not that AI will replace human intelligence, but that reliance on AI will erode the embodied practices through which the capacity for genuine intelligence is developed and maintained.
The being-at-the-terminal can be a being-in-the-world: engaged, caring, bringing the full weight of embodied experience to the collaboration. That is Segal at his most productive, directing the tool with the accumulated judgment of decades. But the being-at-the-terminal can also become something else — a passive recipient of outputs, a reviewer rather than a creator, a being whose relationship to its own work has shifted from absorbed engagement to detached consumption. That shift is not dramatic. It happens in small increments, one accepted-without-questioning output at a time. And the being that has made the shift may not notice it has occurred, because the outputs remain excellent. The surface holds. The embodiment beneath it thins. The world in which the being dwelt recedes, replaced by the screen, the prompt, the response, the next prompt — a cycle that resembles engagement while systematically replacing it with something smoother, faster, and emptier.
Heidegger would have recognized the phenomenon. He called it Verfallenheit — fallenness, the tendency of Dasein to lose itself in the anonymous routines of everyday existence, to be absorbed not in its own authentic projects but in das Man, the "they-self" that dictates what one does without anyone deciding. The digital Verfallenheit that Dreyfus's framework identifies is fallenness into the output — the loss of the self that judges, replaced by the self that accepts. It is not a failure of the machine. It is a failure of the being that has stopped being in its world and has begun being in the machine's.
---
In 1980, Hubert Dreyfus and his brother Stuart, an industrial engineer and operations researcher at Berkeley, published a report for the United States Air Force titled "A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition." The report was commissioned because the Air Force needed to understand how pilots develop expertise, and the prevailing cognitive science models — which treated expertise as the accumulation of increasingly sophisticated rules — could not explain what experienced instructors observed in their students. The transition from adequate to excellent piloting did not look like the acquisition of better rules. It looked like the gradual disappearance of rules altogether, replaced by something the instructors could recognize but not articulate: a feel for the aircraft, a sense of the situation, an immediacy of response that bypassed deliberation entirely.
The model that emerged from this research identified five stages — novice, advanced beginner, competent, proficient, and expert — and its central insight was that these stages are not points on a continuum of the same kind of cognitive activity. They represent qualitatively different modes of engagement with the task. The transition from stage to stage is not an improvement in rule-following but a progressive abandonment of rules in favor of something fundamentally different: holistic, situational, embodied perception that operates below the threshold of conscious analysis.
The novice operates with context-free rules. The student pilot learns: when the airspeed drops below a certain number, increase throttle. The rule is explicit, general, and disconnected from the specific situation. It works, after a fashion, but the novice's performance is rigid, mechanical, unable to adapt to the particularities of the moment. A novice chess player learns: control the center, develop knights before bishops, castle early. The rules are sound. They produce adequate play. They do not produce understanding.
The advanced beginner starts to recognize situational elements that the rules cannot capture. The student pilot begins to feel when the aircraft is "behind the power curve" — not because an instrument reading triggers a rule, but because the accumulated experience of flying has deposited a sensitivity to the aircraft's behavior that manifests as a felt quality of the situation. The advanced beginner is still following rules, but the rules are becoming supplemented by experiential recognition that no instruction manual could provide.
The competent performer chooses a perspective from which to organize the situation and formulates a plan. This stage involves, for the first time, genuine emotional engagement. Because the competent performer has chosen an approach — has taken ownership of the decision — she experiences the outcome as her own. Success feels like accomplishment. Failure feels like failure. This emotional involvement is not incidental to the development of expertise. It is, in the Dreyfus model, constitutive of it. The emotions deposit the experiential traces that will, over time, become the expert's intuitive grasp. A competent performer who experiences no emotional involvement — who makes choices without investment, who succeeds and fails without caring — does not progress to proficiency. The caring is the mechanism.
The proficient performer perceives the situation holistically and intuitively, then deliberates about what to do. The experienced pilot does not analyze the instruments one by one and combine them into a picture. She sees the situation — the aircraft's state, the weather, the terrain, the mission parameters — as a unified whole that immediately suggests certain responses while ruling out others. The perception is immediate and intuitive. The response still requires thought. The proficient performer knows what the situation demands but must still work out how to deliver it.
The expert perceives and acts in a single, unified response. The distinction between seeing the situation and deciding what to do has collapsed. The master pilot responds to the developing situation the way a native speaker responds to a question in her first language — immediately, without deliberation, from a place so deep in embodied practice that the very notion of following a rule seems absurd. The expert's knowledge is not in her head. It is distributed across her entire organism — her hands on the controls, her body's orientation in the seat, her peripheral awareness of the instruments, her feel for the aircraft's response to input that has been calibrated through thousands of hours of actual flight.
The critical observation, the one that carries the most weight for the current discussion, is what is required for the transition between stages. It is not instruction. It is not the acquisition of better rules. It is friction — the specific, embodied, emotionally engaged, failure-mediated friction of working through problems that resist solution. Each stage deposits experiential traces that become the foundation for the next. The advanced beginner's situational recognition is built from the novice's rule-following encounters. The competent performer's emotional investment is built from the advanced beginner's growing sensitivity to context. The proficient performer's holistic perception is built from the competent performer's thousands of emotionally invested decisions and their outcomes. And the expert's unified perception-and-action is built from the proficient performer's extended experience of seeing the situation clearly and working out the right response, until the working-out disappears and only the seeing-and-responding remains.
Remove the friction from any stage, and the experiential traces are not deposited. The next stage cannot be reached, because the developmental foundation has not been laid. This is not a metaphor. It is the structure of the model, supported by decades of empirical observation across domains ranging from aviation to chess to nursing to second-language acquisition.
Now consider what happens when AI enters this developmental trajectory. The phenomenon The Orange Pill documents is, from the perspective of the Dreyfus model, a historically unprecedented experiment in skill-acquisition disruption. A novice or advanced beginner sits down with Claude Code and produces output that is, by any external measure, expert-level. Working code. Functional interfaces. Architectural decisions that reflect patterns the novice has never encountered through personal experience but that the model has extracted from the aggregate experience of millions of developers.
The output is real. The code runs. The product works. The novice has not pretended to be an expert. She has used a tool that allows her to produce expert-level output without possessing expert-level understanding. And this is, from a purely pragmatic perspective, an extraordinary achievement. It is also, from the perspective of the Dreyfus model, a developmental catastrophe.
The catastrophe is not in the output. It is in what the output's availability does to the developmental trajectory. The novice who produces expert-level output with AI assistance has not progressed through the stages. She has skipped them. The situational recognition that the advanced beginner develops through thousands of encounters with code that does not work as expected — she has not developed it, because the code works on the first attempt. The emotional investment that the competent performer develops through choosing an approach and living with the consequences — she has not developed it, because the tool chose the approach and the consequences are not hers. The holistic perception that the proficient performer develops through years of emotionally invested pattern recognition — she has not developed it, because the patterns were recognized by the model, not by her organism.
The result is a practitioner who can produce at the expert level as long as the tool is available and cannot produce at the advanced beginner level without it. The tool has not augmented her expertise. It has replaced the developmental process through which expertise is acquired. She has the output. She does not have the capacity.
Segal's ascending friction thesis — the argument that AI removes friction at one level and relocates it to a higher cognitive floor — is the strongest counter to this concern. When the novice no longer struggles with syntax, she can engage with architecture. When the advanced beginner no longer struggles with debugging, she can engage with product strategy. The friction has not disappeared. It has ascended. And the practitioner who operates at the higher level is not shallower than the one who operated at the lower level. She is differently skilled, working on harder problems, exercising cognitive capacities that the lower-level friction prevented her from reaching.
Dreyfus's framework accepts this argument in part and rejects it in part. The acceptance: yes, higher-level challenges are real challenges. Product strategy is harder than syntax. Architectural judgment is harder than debugging. The practitioner who works at the higher level is genuinely engaged with genuine difficulty. The rejection: the higher-level challenges are challenges of a different kind. They are more abstract, more disembodied, further removed from the material reality the builder is shaping. And in the Dreyfus model, the embodied engagement with the material — the specific, friction-rich, failure-mediated encounter with code that does not work, with systems that resist, with problems whose resistance teaches you something that no abstraction can convey — is not a lower form of intelligence that the higher form supersedes. It is the foundation on which the higher form rests, and without which the higher form eventually detaches from reality and becomes what Dreyfus, borrowing from Heidegger, would call a form of sophisticated rootlessness.
A recent study of investment banking offers empirical confirmation of this concern. Researchers found that AI, by automating seventy-three percent of junior analysts' fundamental tasks, severed the tacit knowledge transfer and disrupted the trial-and-error cycle through which analytical expertise had traditionally developed. The result was what the researchers called "Competency Collapse" — a systematic inability among AI-assisted juniors to perform the analyses they routinely produced, once the tool was removed. The output was excellent. The understanding was absent. The ascending friction had ascended past the level where the developmental deposits could be made.
The engineer in Trivandrum who built frontend features without frontend training is the case study in miniature. She produced working interfaces. She did not develop the embodied understanding of frontend development that would have come from months of wrestling with CSS that does not render as expected, JavaScript event models that behave counterintuitively, responsive layouts that break on devices she has not tested. Her output was real. Her expertise was borrowed. And the question the Dreyfus model poses with uncomfortable precision is: what happens when the tool changes, when the platform shifts, when a genuinely novel problem arrives that does not match the patterns in the training data — when the situation demands not the reproduction of known solutions but the creative, embodied, intuitive response of a practitioner who has been in this kind of trouble before and whose organism knows what to do?
The ascending friction thesis says she will engage with the novel problem at a higher cognitive level, using judgment and taste to direct the tool toward a solution. Dreyfus's framework says that judgment and taste are not disembodied faculties that float above experience. They are the distillation of experience — of embodied, emotionally invested, failure-mediated experience — and that a practitioner whose experience has been largely mediated by a tool rather than by direct engagement with the material will bring less to the novel problem than the thesis predicts. Not because she lacks intelligence. Because her intelligence has not been deposited in the specific, embodied, situation-sensitive form that novel problems demand.
The expert's feel for the situation is not a metaphor for fast analysis. It is a different kind of knowing, built from the body up, through ten thousand encounters that left their mark. No shortcut preserves what the long way builds. The question is not whether shortcuts are useful. They manifestly are. The question is whether a generation of practitioners raised on shortcuts will possess, when the moment demands it, the embodied foundation that no shortcut can provide.
---
Merleau-Ponty died in 1961, four years before Dreyfus published *Alchemy and Artificial Intelligence*, but the phenomenology of perception he developed in the 1940s is the philosophical foundation on which Dreyfus's most powerful arguments rest. Heidegger provided the ontological framework — being-in-the-world, the structure of care, the analysis of readiness-to-hand. Merleau-Ponty provided something more specific and more visceral: a phenomenology of the body as the primary medium of intelligent engagement with the world.
The body, in Merleau-Ponty's analysis, is not an instrument the mind uses to interact with the world. It is the subject of perception. When a skilled typist's fingers find the keys, the knowledge of where the keys are is not stored in a mental representation that the mind consults and the fingers execute. The knowledge is in the fingers. The hands know the keyboard the way the tongue knows the mouth — not as an object represented in consciousness but as a field of possibilities that is available to the body directly, without the mediation of thought. Merleau-Ponty called this "motor intentionality": the body's capacity to direct itself toward objects and situations in the world without passing through an explicit representation of what it is doing.
Dreyfus recognized in this analysis the philosophical key to what AI researchers could not replicate. The skilled practitioner's intelligence is not a computational process that happens to be implemented in biological hardware. It is an embodied capacity — a way of being in the world that involves the whole organism, not just the brain, and that is built through the specific history of that organism's engagement with its environment. The expert's knowledge is not propositional knowledge — "knowing that" such-and-such is the case. It is practical knowledge — "knowing how" to do things — and knowing-how cannot be extracted from the body that has it and transferred to a system that has no body, because knowing-how is not information. It is a bodily capacity.
The distinction between embodied coping and disembodied processing is not a distinction between two levels of the same activity. It is a distinction between two fundamentally different modes of being in relation to a task. Embodied coping is what happens when the carpenter drives a nail, when the surgeon's hands navigate tissue, when the musician's fingers find the chord, when the experienced driver adjusts for a patch of ice before she has consciously registered its presence. In each case, the being is engaged with the world through its body in a way that is immediate, pre-reflective, and sensitive to the situation's demands in a manner that no set of rules or representations could capture.
Disembodied processing is what happens when a computational system — however sophisticated — generates an output from an input according to learned patterns. The system has no body. It has no pre-reflective engagement with anything. It does not cope with the world because it is not in the world. It processes representations of the world — specifically, the textual representations that embodied beings have produced in the course of their worldly engagement — and generates new representations that are statistically consistent with the patterns in its training data.
The Orange Pill provides a case study that illuminates this distinction with exceptional clarity, though the book itself does not draw the Dreyfusian conclusion. Segal describes the difference between laparoscopic and open surgery at length, using it as the central example for his ascending friction thesis. The open surgeon's hands are inside the body. She feels the tissue — its resistance, its texture, the subtle difference between healthy and diseased tissue that announces itself through the fingertips before any visual inspection could detect it. The laparoscopic surgeon operates through tiny incisions, guiding instruments by watching a screen. She has lost the tactile knowledge. She has gained the ability to perform operations that open hands could never reach.
Dreyfus would accept the empirical description but challenge the conclusion drawn from it. Segal argues that the friction has ascended — that the laparoscopic surgeon faces harder cognitive challenges at a higher level, and that the loss of tactile knowledge is compensated by the gain in operative capability. The argument is not wrong in its own terms. But Dreyfus's framework reveals a dimension that the ascending friction thesis cannot accommodate: the tactile knowledge and the cognitive knowledge are not denominated in the same currency. They are different kinds of knowing, rooted in different modes of embodied engagement, and the loss of one cannot be compensated by the gain of the other, because they are not exchangeable.
The open surgeon's tactile knowledge is motor intentionality in Merleau-Ponty's precise sense. Her hands know the body the way the typist's fingers know the keyboard — not through representation but through direct, bodily engagement. When she encounters abnormal tissue, she does not first perceive the abnormality visually and then reason about its significance. She feels it. The knowledge is in the contact. The significance is in the resistance. Her hands have been educated by thousands of surgeries to detect what the eyes cannot see — the subtle changes in tissue consistency that indicate pathology, the precise amount of traction that can be applied without tearing, the feel of a surgical plane that opens naturally versus one that must be forced.
The laparoscopic surgeon has developed a different embodied capacity: the capacity to interpret a two-dimensional image of a three-dimensional space, to coordinate instruments she cannot directly feel, to operate at a remove from the body that requires a constant, cognitively demanding translation between what she sees on the screen and what is happening inside the patient. This is genuine skill. It involves years of practice. It constitutes a real form of expertise. But it is expertise of a different kind — more visual, more cognitive, less tactile, less directly engaged with the material reality of the body.
Both forms of expertise satisfy the conditions of Dreyfus's highest stage: both involve holistic, intuitive, immediate perception-and-response that bypasses deliberation. The open surgeon's expertise is expressed through her hands' direct engagement with tissue. The laparoscopic surgeon's expertise is expressed through her interpretation of visual information and her coordination of remote instruments. Both are embodied, but the embodiment is different — and the difference matters, because the kind of problems each form of expertise can solve is determined by the kind of embodied engagement from which it was built.
The parallel to software engineering is direct. The engineer who debugs code by reading it, line by line, developing a feel for the logic that operates below conscious analysis — who senses the null pointer exception before the stack trace confirms it, who knows that this particular pattern of variable naming indicates a developer who was careless about edge cases, who can look at a function and feel that it is doing too much — possesses a form of motor intentionality applied to text. Her eyes have been educated by thousands of hours of reading code to detect what a casual reader would miss. The knowledge is in the reading body: the particular quality of attention, the rhythm of scanning, the felt sense of wrongness that manifests as a tightening in the stomach before the conscious mind has identified the bug.
Claude processes code with extraordinary sophistication. It can identify bugs, suggest fixes, refactor functions, generate tests. Its output is often better than what the human engineer would produce, in the sense that it is more comprehensive, more consistent, more attuned to best practices extracted from millions of code repositories. But Claude does not read code the way the experienced engineer reads code. It does not feel the wrongness. It does not carry the bodily history of having been burned by this particular pattern before. It processes the text against statistical patterns and generates a response. The response may identify the same bug the engineer would have felt. But the engineer's feeling and the model's processing are different phenomena entirely, and the difference shows up not in the individual output — which may be identical — but in the trajectory of the practitioner's development.
This is where Dreyfus's distinction carries its greatest practical weight. The engineer who debugs her own code, who sits with the frustration of a system that does not work, who traces the logic through its branches and feels the moment when the error reveals itself — this engineer is depositing the experiential traces that will become, over years, the expert's intuitive grasp. Each encounter with a specific kind of failure educates her reading body. Each resolution deposits a layer of embodied understanding. The geological metaphor that The Orange Pill employs is phenomenologically exact: the layers accumulate through friction, through resistance, through the specific embodied experience of struggling with a problem that does not yield easily.
The engineer who uses Claude to debug — who describes the problem in natural language and receives a solution in seconds — has not deposited those traces. The bug is fixed. The code works. The next task awaits. But the encounter that would have educated her reading body did not occur. The frustration that would have invested the experience with emotional significance — the significance that, in the Dreyfus model, is constitutive of the progression from competent to proficient — was bypassed. The solution arrived without the struggle that would have made the solution meaningful in the specific, embodied sense that the word "meaningful" carries in phenomenological analysis.
Segal acknowledges something close to this concern in his account of the senior engineer in Trivandrum who spent his first two days oscillating between excitement and terror. The excitement was for the expanded capability. The terror was for the question the capability forced: if the implementation work that had consumed eighty percent of his career could be handled by a tool, what was the remaining twenty percent actually worth? Segal's answer is: everything. The twenty percent — the judgment, the architectural instinct, the taste that separates adequate from excellent — turned out to be the part that mattered.
Dreyfus's framework suggests a complication that Segal's answer does not address. The twenty percent did not develop in isolation from the eighty percent. The architectural instinct was built through the specific embodied experience of implementing architectures that failed. The taste was calibrated through thousands of encounters with code that was adequate but not excellent, encounters whose emotional texture — the dissatisfaction of the good-enough, the satisfaction of the elegant — deposited the experiential traces that eventually became intuition. The judgment is the distillation of the implementation, not a separate faculty that existed independently of it.
If this is correct — and the Dreyfus model, supported by decades of evidence across domains, argues that it is — then the eighty-twenty split is more problematic than The Orange Pill acknowledges. Liberating the twenty percent from the eighty percent does not free a pre-existing capacity. It severs a capacity from the process through which it was formed. The senior engineer who possesses the twenty percent earned it through decades of the eighty percent. A junior engineer who has never done the eighty percent — who has always had it done by the tool — may never develop the twenty percent at all. Not because she lacks talent or intelligence, but because the experiential foundation required for the development of expert judgment was never laid.
The question is not whether AI produces useful output. It does. The question is not whether AI-mediated work can be deeply satisfying. Segal's testimony confirms that it can. The question is whether a generation of practitioners who have used AI tools from the beginning of their careers will develop the embodied capacities — the feel, the intuition, the situated judgment — that the practitioners who built those tools possessed. And Dreyfus's framework, applied with rigor to the evidence The Orange Pill provides, suggests that the answer is not certain, that the developmental process has been disrupted at a level the ascending friction thesis cannot fully address, and that the cost of the disruption will become visible only when a genuinely novel problem arrives that the tool cannot solve and the practitioner cannot solve without the embodied knowledge the tool's presence prevented her from acquiring.
The amplifier amplifies the signal. But the signal is not information. It is embodied intelligence — the kind that is built through years of friction and failure and care. Thin the signal, and the amplification produces not power but noise. The output may look the same. The understanding beneath it will not be.
Every competent adult navigates a world saturated with understanding that has never been articulated and almost certainly cannot be. The knowledge that a restaurant is not a place to lie down on the floor. The knowledge that a person who says "Can you pass the salt?" is not asking about your physical capabilities. The knowledge that a news headline reading "Milk Drinkers Turn to Powder" is about the dairy industry and not about human disintegration. The knowledge that when your colleague's email says "Fine" in response to a detailed proposal, she is not fine. These understandings are not stored as rules. They are not retrieved from a mental database. They are not the product of inference from premises. They are the background — the vast, tacit, culturally constituted fabric of shared practice and common sense against which every explicit thought, every deliberate act of reasoning, every utterance and interpretation takes place.
Dreyfus identified the background as the fundamental obstacle to artificial intelligence in any form, and the decades since have not weakened the identification. His argument, distilled to its philosophical core, is this: every act of human understanding presupposes a background of shared practices so vast, so pervasive, and so deeply embedded in embodied experience that it cannot be made explicit without infinite regress. The attempt to formalize the background — to write down the rules of common sense, to represent in a database everything a competent adult knows about how the world works — was the project of classical AI, and it failed. Not because the database was too small, not because the rules were poorly written, but because the background is not the kind of thing that can be formalized at all. It is constituted by the way human beings inhabit the world, by the bodily habits and cultural practices and shared expectations that are not represented in consciousness but that make consciousness possible.
The classical AI failure on this front was dramatic and instructive. Douglas Lenat's Cyc project, begun in 1984, attempted to encode common-sense knowledge in a vast ontology of assertions — millions of them, painstakingly entered by hand, covering everything from "water flows downhill" to "people generally do not enjoy being hit in the face with a fish." The project consumed decades and tens of millions of dollars. It did not produce common sense. It produced a very large database. The difference between a very large database and common sense is the difference between a map and the territory: however detailed the map becomes, it remains a representation, and a representation is not the thing it represents. The territory — the lived, embodied, culturally constituted world in which common sense operates — has a texture, a depth, an interconnectedness that no finite set of propositions can capture.
Large language models approach the background from a radically different direction, and the approach is sufficiently different that the question of whether Dreyfus's critique still applies requires careful examination rather than reflexive assertion. Claude and its peers have not attempted to formalize the background. They have absorbed it — or, more precisely, they have absorbed the textual traces that the background leaves behind. When millions of human beings write about restaurants, their writing presupposes and therefore implicitly encodes the background knowledge that one does not lie on the floor in a restaurant. The model does not possess this knowledge in the way a human being possesses it — as a bodily disposition, an unthought assumption, a felt sense of what is appropriate. The model possesses it as a statistical regularity in the training data: the pattern that "restaurant" co-occurs with certain behaviors and not others, and that generating text about lying on a restaurant floor would be statistically anomalous in most contexts.
The functional result is often indistinguishable from genuine common sense. Ask Claude about appropriate behavior in a restaurant, and the answer will be sensible, nuanced, and culturally informed. The model has, in some functional sense, captured enough of the background's textual residue to produce contextually appropriate responses across an astonishing range of situations. This is what makes the new AI genuinely different from the old AI, and what requires Dreyfus's critique to be updated rather than merely repeated. Cyc failed because it tried to formalize the background. Large language models succeed, to a remarkable degree, because they do not formalize it — they approximate it through statistical regularities in the data that the background has shaped.
But approximation and possession are different things, and the difference becomes visible at the edges — precisely where common sense matters most. Common sense is most needed when the situation is novel, when the standard patterns do not apply, when the background understanding that is ordinarily invisible must be brought to bear on a problem that the usual routines cannot handle. A human being navigates a novel situation by drawing on the full depth of her embodied background — the felt sense of what is appropriate, the intuitive grasp of how the relevant norms interact, the capacity to improvise a response that is not dictated by any rule but that draws on the totality of experience. The model navigates a novel situation by extrapolating from the statistical patterns in its training data, and when the novel situation is sufficiently distant from those patterns, the extrapolation fails.
The failures have a specific and revealing character. They are not random errors. They are not the kind of obvious mistakes that a competent human being would immediately recognize and correct. They are plausible errors — outputs that look right, that read well, that could pass a cursory inspection, but that are wrong in ways that only a person with genuine background understanding can detect. Dreyfus's framework predicts exactly this failure mode, because the framework identifies the background as something that cannot be captured in patterns, however sophisticated, but only in the embodied engagement of a being that has lived in the world the patterns describe.
The Orange Pill provides the case study that Dreyfus's framework illuminates most powerfully. Segal describes a passage about Gilles Deleuze that Claude produced in an early draft — a passage that connected Deleuze's concept of "smooth space" to Csikszentmihalyi's flow state in an elegant and rhetorically persuasive way. The connection sounded right. It read well. It had the quality of genuine philosophical insight. And it was wrong — wrong in a way that was invisible in the prose but obvious to anyone who had actually engaged with Deleuze's work. The concept of smooth space, in Deleuze and Guattari's actual usage, has almost nothing to do with how Claude had deployed it. The model had generated a connection based on the surface similarity of the words "smooth" and "flow," producing an output that was statistically consistent with the patterns of how philosophical concepts are discussed but semantically disconnected from what the concepts actually mean.
This is a background failure in Dreyfus's precise sense. The model's approximation of the philosophical background was sufficient to produce prose that sounded philosophical. It was not sufficient to produce prose that was philosophical — that actually engaged with the concepts at the level of meaning rather than the level of linguistic pattern. The distinction is invisible on the surface. The prose reads identically whether the understanding is genuine or simulated. Only a reader who possesses the genuine background — who has read Deleuze, struggled with his concepts, argued about them, applied them, and developed the kind of felt sense for what they mean that only engaged study produces — can detect the seam.
Segal caught the error. This is the crucial detail. He caught it not through a systematic verification process — he was not checking every reference against the source text — but through a felt sense that something was not right. He describes it as nagging, a quality of unease that manifested the morning after he had read and approved the passage. The nagging is itself a form of embodied background understanding. It is the kind of knowledge that Dreyfus's five-stage model locates at the proficient or expert level: the holistic perception that something in the situation does not fit, registered not as a deliberate judgment but as a bodily signal — a tightening, an itch, a quality of attention that has been shaped by years of reading philosophy to distinguish between the genuine article and its imitation.
The machine cannot nag itself. It has no felt sense of rightness or wrongness. It has probability distributions, and within those distributions it operates with breathtaking facility, but the distributions are not understanding. They are the residue of understanding, extracted from text, compressed into parameters, and deployed without the background that would make them reliable in the cases where reliability matters most — the cases at the edges, where the patterns run out and only genuine understanding can guide.
Heidegger introduced a concept that bears directly on this problem. He called it the Bewandtnisganzheit — the totality of involvements, the web of relationships and references that constitutes the meaningful context within which any particular thing shows up as what it is. A hammer shows up as a hammer not because of its physical properties — a philosophical point Heidegger insisted on — but because of its place in a web of involvements that includes nails, wood, construction, shelter, dwelling, and the entire form of life within which building things makes sense. Remove the web and the hammer is just an oddly shaped object. The physical properties remain. The meaning disappears.
The background that Dreyfus identified as the fundamental obstacle to AI is Heidegger's totality of involvements rendered in the language of cognitive science. Every competent human being inhabits a web of involvements so vast and so densely interconnected that no finite representation can capture it. When the engineer reads code, she reads it within a web that includes the project's goals, the team's history, the product's users, the deadline's pressure, the specific way this codebase has evolved through decisions made by people she knows and whose tendencies she has learned to anticipate. The code is not text. It is a node in a web of involvements, and the engineer's understanding of the code is her grasp of its place in that web.
Claude reads code as text. Extraordinarily well. With a facility for pattern recognition that exceeds any individual human's capacity. But it reads text, not involvements. It does not know the team's history. It does not feel the deadline's pressure. It does not anticipate the tendencies of the developers whose decisions shaped the codebase. It processes the textual surface with remarkable sophistication and generates responses that are consistent with the patterns of how competent engineers discuss code. The responses are often useful, sometimes brilliant. They are not situated in the web of involvements that gives the code its meaning for the people who live with it.
Hallucinations — the term the AI community has adopted for outputs that are fluent, confident, and wrong — are the structural consequence of this absence. A system that processes patterns without inhabiting the web of involvements those patterns presuppose will inevitably generate outputs that are pattern-consistent but involvement-inconsistent. The Deleuze passage was pattern-consistent: it followed the statistical regularities of how philosophical concepts are discussed. It was involvement-inconsistent: it did not reflect what the concepts actually mean to a being that has engaged with them. The hallucination was not a bug. It was a consequence of the architecture. A system that approximates the background without possessing it will produce plausible nonsense whenever the approximation encounters a gap it cannot bridge — and the gaps are invisible from the inside, because the system has no way of distinguishing between what it knows and what it is merely pattern-matching toward.
Dreyfus would observe that the discipline Segal describes — the commitment to vigilance against plausible wrongness, the willingness to reject output that sounds better than it thinks — is itself evidence for the claim that the background cannot be offloaded. The discipline works only because the human in the collaboration possesses the genuine background that the machine approximates. Remove the human's background — let the practitioner's philosophical reading lapse, let the engineer's deep knowledge of the codebase atrophy through disuse, let the felt sense of rightness and wrongness fade from lack of exercise — and the discipline loses its foundation. The nagging stops. The passages that should provoke unease pass unexamined. The plausible nonsense is absorbed into the output as though it were genuine understanding, and no one is the wiser, because the surface is smooth.
The deepest irony of the background problem, from Dreyfus's perspective, is that the better the approximation becomes, the more dangerous the remaining gaps are. When the approximation was poor — when classical AI produced obviously mechanical, contextually inappropriate outputs — the gaps were visible and the background's necessity was obvious. Now that the approximation is extraordinary — now that Claude produces outputs that are contextually appropriate ninety-five or ninety-nine percent of the time — the remaining five or one percent of failures are hidden behind a surface of consistent excellence, and the temptation to trust the surface without checking becomes correspondingly greater.
A system that fails conspicuously teaches its users vigilance. A system that fails rarely and plausibly teaches its users trust. And trust, in the context of a system that approximates understanding without possessing it, is the precise condition under which the background's absence becomes catastrophic. Not because the catastrophe happens often. Because when it happens, no one sees it coming, and no one detects it afterward, and the output that should have been caught is absorbed into the world as though it were the product of genuine understanding — which it resembles in every respect except the one that matters.
The background cannot be formalized, Dreyfus argued for five decades, and the decades proved him right about symbolic AI. The background cannot be statistically approximated without remainder, the current evidence suggests, and the remainder — the gap between the approximation and the real thing — is where the most consequential failures live. The question is not whether to use the tool despite the gap. The tool is manifestly useful despite the gap. The question is whether the people using it will maintain the embodied background that allows them to see the gap when it matters. And that question points back, as every question in Dreyfus's framework does, to the life the practitioner lives away from the screen — the reading, the struggling, the failing, the slow accumulation of understanding that no machine can provide and no shortcut can replace.
---
There is a passage in Division I of Being and Time that Dreyfus returned to more often than any other in his career, because it contains, in compressed form, the phenomenological key to understanding the relationship between human beings and their equipment. Heidegger is describing what happens when a carpenter uses a hammer. The description seems mundane. Its implications are not.
When the hammer is working well — when the carpenter is absorbed in the task of driving a nail — the hammer is not an object of attention. The carpenter is not aware of the hammer. She is aware of the nail, the board, the joint she is constructing, the project the joint serves. The hammer has withdrawn from consciousness. It is, in Heidegger's terminology, zuhanden — ready-to-hand. It has become, phenomenologically, an extension of the carpenter's body, transparent to her intention, invisible in its function. The carpenter does not think "I am using a hammer." She thinks, if she thinks at all — and Heidegger's point is that at the level of absorbed coping, there is no thinking in the reflective sense — about the joint. The tool is the medium of her engagement with the work, and the medium, when it functions well, disappears.
Now the hammer breaks. The head loosens. The handle cracks. The nail bends. In the moment of breakdown, the hammer undergoes a phenomenological transformation. It is no longer ready-to-hand. It becomes vorhanden — present-at-hand. It is suddenly there, obtrusive, demanding attention not as a transparent medium of work but as a thing in itself, with properties: weight, material, balance, condition. The carpenter's relationship to the hammer has shifted from absorbed engagement to conscious inspection. She is no longer using the tool. She is looking at it.
Dreyfus spent decades drawing out the implications of this analysis for artificial intelligence, and the implications bear directly on the phenomenon The Orange Pill describes. The builder working with Claude Code is using a tool. When the tool functions well — when Claude generates code that works, suggestions that are apt, connections that illuminate — the tool is ready-to-hand. The builder is not thinking about Claude. She is thinking about the product, the user, the problem she is solving. Claude has withdrawn from consciousness, become transparent, disappeared into the flow of the work. This is, on the Heideggerian account, exactly how tools should function. Readiness-to-hand is the condition of effective tool use, and achieving it with a tool as cognitively complex as an AI system is a remarkable accomplishment, both of the tool's designers and of the practitioner who has learned to integrate it into her workflow.
But readiness-to-hand carries a risk that Heidegger identified and that Dreyfus amplified: when the tool never breaks, the moment of critical inspection never arrives. The carpenter whose hammer never loosens never examines the hammer. She never confronts it as a thing with properties, limitations, assumptions built into its design. She never asks whether this hammer is the right hammer for this task, whether the hammer's weight is appropriate, whether a different tool would serve better. The hammer works. She uses it. The work gets done. The question of whether the tool is shaping the work in ways she has not noticed — whether the hammer's weight is determining the force of the blow, whether the blow's force is determining the joint's character, whether the joint's character is determining the project's quality — never arises, because the tool's transparency prevents it from arising.
Claude's failure modes — the hallucinations, the plausible nonsense, the confident wrongness — are, on this analysis, not merely problems to be solved. They are philosophically essential. They are the moments when the tool breaks, when readiness-to-hand collapses into present-at-hand, when the builder is forced to confront the tool as a thing in itself rather than a transparent medium of work. In the moment of breakdown, the builder sees Claude — not as a collaborator, not as an extension of her intention, but as a system with specific characteristics, specific limitations, specific failure modes that she must understand if she is to use it wisely.
Segal describes these moments with the precision of someone who has experienced them repeatedly and reflected on their significance. The Deleuze failure. The fabricated anecdotes in early drafts. The passages where the prose outran the thinking. Each of these failures was a breakdown in the Heideggerian sense — a moment when the smooth surface of the collaboration cracked and the tool became visible as a tool, with all the limitations that visibility reveals.
The discipline Segal advocates — the commitment to questioning Claude's output, to rejecting prose that sounds better than it thinks — is the discipline of deliberately inducing breakdown. Of refusing to let the tool remain permanently ready-to-hand, of insisting on moments of present-at-hand inspection even when the tool is functioning well. This discipline is philosophically sound. It is also, as Segal honestly acknowledges, difficult to maintain, because readiness-to-hand is seductive. The tool that works smoothly invites trust. The flow that comes from absorbed engagement with a well-functioning tool resists interruption. Every moment of deliberate inspection breaks the flow, forces the builder out of absorbed coping and into reflective evaluation, costs time and energy and the specific pleasure of uninterrupted creation.
Byung-Chul Han's aesthetics of the smooth, which The Orange Pill engages extensively in its middle chapters, is illuminated by the Heideggerian analysis in a way that both sharpens and complicates Han's critique. Han argues that contemporary culture is defined by the elimination of friction, the pursuit of seamlessness, the aesthetic ideal of a surface so polished that nothing catches, nothing resists, nothing demands the moment of confrontation that friction produces. Dreyfus's framework reveals a crucial distinction that Han's analysis elides: the difference between readiness-to-hand and smoothness.
Readiness-to-hand is a specific phenomenological condition of tool use. The tool withdraws from attention because the user is absorbed in the work. The tool's transparency is a function of the user's engagement. And critically, readiness-to-hand exists in dialectical relationship with present-at-hand: the tool can always break, the absorption can always be disrupted, and the disruption is not a failure of the system but an essential feature of it, because the disruption is what forces reflection, evaluation, and the critical engagement without which tool use degenerates into habit.
Smoothness, in Han's sense, is something different. It is the elimination of the possibility of breakdown. It is the design of tools and environments and experiences from which all resistance has been removed — not because the user is absorbed in genuine work, but because the friction that would produce absorption has itself been polished away. A smooth tool does not withdraw from attention because the user is engaged. It fails to demand attention because there is nothing to demand it. The user is not absorbed in the work. The user is gliding across the surface of the work, producing output without the engagement that gives output its meaning.
The distinction matters because the counter-argument to Han that The Orange Pill advances — that AI enables flow, and flow is a state of positive, absorbed engagement — conflates readiness-to-hand with smoothness. Flow, in Csikszentmihalyi's account and in the Heideggerian analysis that underlies it, involves a tool that has become transparent because the user is deeply engaged with the genuine resistance of the task. The challenge-skill balance that defines flow requires resistance: the task must be hard enough to demand full attention. Remove the resistance and you remove the condition for flow. What remains is not flow but something that resembles flow from the outside while lacking its essential phenomenological structure: the encounter with genuine difficulty that absorbs the self and reveals the world.
Segal describes nights of genuine flow with Claude — nights when the work is meaningful, the connections are surprising, the output exceeds what either human or machine could produce alone. These nights are, on the Heideggerian account, nights when the tool is ready-to-hand and the work retains its genuine resistance. The builder is struggling with real questions — what to argue, what to include, how to hold contradictory truths in tension — and the tool is serving that struggle by removing mechanical obstacles while preserving intellectual ones.
Segal also describes nights of grinding compulsion — nights when the work has lost its savor, when the output is flowing but the meaning has drained away, when the builder cannot stop not because the work is absorbing but because stopping has become intolerable. These nights are, on the Heideggerian account, nights when readiness-to-hand has degenerated into smoothness. The tool is not transparent because the builder is absorbed. The tool is frictionless because the resistance that would produce absorption has been eliminated. The builder is not coping with the world. She is sliding across its surface.
The difference between these two states is not detectable from outside. Both look like intense, productive work. A camera would record the same image: a person at a screen, typing, focused, generating output. The difference is internal, phenomenological, detectable only by the person experiencing it — and detectable only if that person has maintained the capacity for the kind of self-reflective awareness that absorbed coping, paradoxically, tends to suppress. The absorbed builder does not monitor her own absorption. She is too absorbed to monitor anything. The transition from genuine absorption to smooth compulsion happens below the threshold of awareness, and by the time the builder notices, the transition is complete.
Heidegger would recognize this as a form of Verfallenheit — the fallenness he identified as Dasein's tendency to lose itself in the anonymous routines of everyday existence. The digital form of fallenness is subtler than the everyday form, because it disguises itself as its opposite. The builder who has fallen into the smooth is not aware of having fallen. She experiences the smoothness as productivity, as capability, as the exhilarating expansion of what she can accomplish. The absence of friction feels like freedom. It is, on the Heideggerian analysis, the opposite: the freedom that comes from never confronting the thing you are doing, never being brought up short by its resistance, never having to ask whether this is what you should be doing.
The hammer that never breaks is the tool that never teaches. Every breakdown is a lesson: about the tool's limitations, about the work's demands, about the builder's relationship to both. A career of breakdowns is a career of learning. A career of smoothness is a career of production without growth — output without the developmental deposits that only resistance can provide.
Dreyfus would not counsel the builder to break the tool on purpose. He would counsel her to pay attention to when it breaks on its own — to the hallucinations, the hollow passages, the moments when the smooth surface cracks — and to treat those moments not as failures to be minimized but as disclosures to be studied. In the breakdown, the tool reveals itself. In the revelation, the builder learns something about the tool, about the work, and about herself that the smooth functioning of the system would never have disclosed. The breakdown is the teacher. The smoothness is the truant.
---
Mihaly Csikszentmihalyi's *Flow* was published in 1990, and its central finding — that the moments of greatest human satisfaction occur during intense, voluntary engagement with something difficult — has become one of the most cited results in positive psychology. The Orange Pill deploys flow as a counter-argument to Han's diagnosis of pathological intensity, arguing that the builder's inability to stop working with Claude is not necessarily auto-exploitation but may be the optimal human experience: challenge and skill matched, attention fully absorbed, self-consciousness dropped away, time distorted in the particular manner that signals deep engagement.
Dreyfus would have appreciated Csikszentmihalyi's empirical findings while challenging the theoretical framework that supports them. Dreyfus's concept of absorbed coping — developed from Heidegger's analysis of being-in-the-world and Merleau-Ponty's phenomenology of bodily engagement — describes a phenomenological state that overlaps significantly with flow but differs in a way that matters for the question at hand. The difference is this: flow, in Csikszentmihalyi's account, is a psychological state. Absorbed coping, in the Heideggerian account that Dreyfus developed, is an ontological condition. And the gap between a psychological state and an ontological condition is the gap that determines whether AI-mediated work constitutes genuine human flourishing or its increasingly convincing simulation.
A psychological state is characterized by the subjective experience of the person in the state. Flow is defined by what it feels like: the absorption, the time distortion, the loss of self-consciousness, the sense of control, the intrinsic reward. These are experiential markers, and they can be reported, measured, and correlated with other variables. Csikszentmihalyi's research is exemplary in its empirical rigor. The question is not whether flow feels the way Csikszentmihalyi says it feels. The evidence is overwhelming that it does. The question is whether feeling that way is sufficient to constitute the kind of engaged, world-disclosing activity that Dreyfus identifies as the highest form of human intelligence.
An ontological condition is characterized not by how it feels but by what it discloses. Absorbed coping, in the Heideggerian analysis, is the mode of being in which the world's structure becomes available to the practitioner in a way that is inaccessible from any other mode. The master carpenter in absorbed coping does not merely feel good about her work. She perceives the wood's grain, the joint's possibilities, the structure's tensions in a way that is available only to a being whose body has been shaped by years of engaged practice and whose attention has been calibrated by thousands of encounters with materials that resist. The perception is not added to the engagement. It is constituted by it. Remove the engagement and the perception vanishes, not because the carpenter has lost access to information but because the kind of knowing that absorbed coping produces — the direct, bodily, situation-sensitive knowing that Dreyfus calls expertise — requires the specific embodied condition of being-in-the-world that the engagement provides.
The distinction has direct implications for the phenomenon The Orange Pill describes. Segal reports the subjective markers of flow during his most productive sessions with Claude: time distortion, absorbed attention, the sense of creative connection, the inability to stop that arises not from compulsion but from genuine engagement. By Csikszentmihalyi's criteria, these sessions qualify as flow states. The challenge-skill balance is right: the work is hard enough to demand full attention, and the tool extends the builder's capability to the point where the challenge remains at the edge of the possible. The feedback is immediate: describe the problem, receive a response, evaluate, adjust, iterate. The goals are clear. The sense of control is present.
Dreyfus would ask a different question. Not "Does this feel like flow?" but "What does this disclose?" The master carpenter in absorbed coping discloses the wood. She perceives possibilities in the material that are invisible to a less engaged practitioner — the way the grain suggests a curve, the way the density invites a specific kind of joint, the way the wood's history (where it grew, how it was dried, what tensions it carries) manifests in properties that only an educated hand can detect. This disclosure is not a subjective experience added to the wood. It is the wood's reality, made available through the specific mode of embodied engagement that only absorbed coping provides.
What does the builder in flow with Claude disclose? This is the question that Csikszentmihalyi's framework cannot answer, because the framework does not ask it. Csikszentmihalyi measures the quality of the experience. Dreyfus asks about the quality of the disclosure. And the quality of the disclosure depends on the mode of engagement — specifically, on whether the engagement is with the material itself or with a representation of the material generated by a tool that has absorbed the material's resistance.
Consider two builders working on the same product. The first builds without AI assistance. She writes the code herself, debugs it herself, confronts the material's resistance directly. When the system breaks, she is the one who must diagnose the failure, trace the logic, understand why the code behaves as it does and not as she intended. Her engagement is direct, embodied, friction-rich. In absorbed coping, she discloses the system — she perceives its tensions, its fragilities, its latent possibilities, in a way that is available only through the specific embodied history of having built it with her own hands and having been burned by its failures.
The second builder works with Claude. She describes what she wants. Claude produces it. She evaluates the output, adjusts, iterates. The flow is real — she is absorbed, challenged, engaged. But her engagement is not with the system itself. It is with the description of the system, mediated through a tool that has translated her intention into implementation. The material's resistance has been absorbed by the tool. What remains for the builder is the higher-level resistance of strategy, architecture, product judgment — genuine challenges, as the ascending friction thesis correctly notes. But the disclosure is different. The first builder discloses the system from the inside, through the specific bodily history of having constructed it. The second builder discloses the system from the outside, through the mediated experience of having directed its construction.
Both may experience flow. Only one is in absorbed coping in the full Heideggerian sense, because only one is engaged with the material at the level of embodied, friction-rich, direct encounter that the concept requires. The difference is not in the feeling. Both builders may feel equally absorbed, equally satisfied, equally convinced that they are operating at the peak of their capability. The difference is in what the absorption produces — not in output, where the second builder may surpass the first, but in the practitioner's developing relationship to the work. The first builder's absorption builds the geological layers. The second builder's absorption may produce excellent work while leaving the geological substrate undisturbed.
Dreyfus developed this analysis through sustained engagement with the phenomenology of expertise across domains. He studied chess players, nurses, airline pilots, and drivers — practitioners in fields where the difference between competent performance and expert mastery is empirically measurable and practically consequential. In each domain, the transition from proficiency to expertise involved a shift from deliberate judgment to absorbed coping — from consciously choosing a response to perceiving the response directly, without the gap between seeing and acting that characterizes every earlier stage.
The critical finding, repeated across domains, was that this transition could not be shortcut. Proficient pilots who were given decision-support systems that provided expert-level recommendations did not develop expertise faster. They developed it slower, or not at all, because the system's recommendations removed the need for the emotionally invested, consequence-laden encounters that deposit the experiential traces from which expertise eventually crystallizes. The pilots experienced something that looked like flow — they were absorbed in the task, the system was providing immediate feedback, the challenge-skill balance was maintained. But the disclosure was impoverished, because the pilots were engaging with the system's recommendations rather than with the flying situation itself.
The parallel to AI-mediated work is uncomfortable in its precision. The builder in flow with Claude is engaged in a genuine cognitive challenge. The challenge is real. The satisfaction is real. The output may be exceptional. But the question Dreyfus poses is whether the engagement is at the right level — whether the builder is disclosing the work's reality or the tool's representation of it. And this question cannot be answered by measuring the flow state's subjective qualities, because the subjective qualities are identical in both cases. It can only be answered by examining what the practitioner can do when the tool is removed — when the situation demands the direct, embodied, expert response that no tool can mediate.
Segal's honest description of the difference between his generative flow nights and his compulsive grinding nights maps, on Dreyfus's analysis, not onto Csikszentmihalyi's categories but onto a distinction within absorbed coping itself. The generative nights are nights when Segal brings the full weight of his embodied experience — thirty years of building, the specific biographical architecture that makes his perspective irreplaceable — to the collaboration. The tool serves his engagement. His disclosure of the work's possibilities is genuine, because it draws on a background that the tool cannot provide and that his embodied history has built. The compulsive nights are nights when the background has receded, when the engagement is with the tool rather than with the work, when the flow has become a circuit between prompt and response that generates output without disclosing meaning.
The distinction cannot be drawn from outside. It can barely be drawn from inside. But it is the distinction on which the philosophical evaluation of AI-mediated work finally turns. Flow is not enough. The question is whether the flow discloses the world or merely produces the feeling of disclosure while the world recedes behind the smooth surface of the tool.
Csikszentmihalyi gave psychology a rigorous account of what optimal experience feels like. Dreyfus asks what it must be rooted in to produce not just satisfaction but understanding — the embodied, world-disclosing understanding that is the mark of genuine expertise and that no amount of subjective satisfaction can replace.
---
Early in The Orange Pill, in the passage that establishes the book's governing metaphor, Segal describes what he calls the fishbowl: the set of assumptions so familiar they have become invisible, the water in which every mind swims without noticing it, the glass that shapes perception before perception begins. The scientist's fishbowl is shaped by empiricism. The filmmaker's by narrative. The builder's by the question of what can be made. Every fishbowl reveals part of the world and hides the rest. The effort of genuine thinking, Segal argues, is the effort to press your face against the glass and see the world beyond the water's refractions — to become aware, even momentarily, of the medium through which you have always perceived.
Segal may not know it, but he has described one of the central concepts of Heidegger's hermeneutic phenomenology in the language of a builder's intuition. Heidegger called it the Vorstruktur des Verstehens — the fore-structure of understanding — and its analysis occupies some of the most demanding and consequential pages of Being and Time. The fore-structure is the totality of pre-judgments, expectations, frameworks, and background assumptions that a human being brings to every encounter before any act of conscious interpretation begins. Understanding is never a blank reception of data. It is always already shaped by what the understander brings to the situation — by the specific history, the cultural formation, the embodied practices, the language, the concerns that constitute the medium through which the world appears.
The fishbowl is the fore-structure rendered in metaphor. The water is the background. The glass is the boundary of the conceptual framework. The effort to see beyond the refraction is what Heidegger called the task of making the fore-structure explicit — not to eliminate it, which is impossible, but to become aware of it, which is the precondition for genuine understanding.
Dreyfus spent much of his career arguing that the fore-structure is constitutively embodied — that the pre-judgments and background assumptions that shape understanding are not merely cognitive but rooted in the specific way a particular body has moved through a particular world. The scientist's fishbowl is not shaped only by the ideas of empiricism. It is shaped by years of laboratory practice, by the specific way her hands have learned to manipulate instruments, by the bodily habits of attention that her training has deposited, by the felt sense of experimental rigor that manifests not as a rule she follows but as a quality of engagement she brings to the bench. The filmmaker's fishbowl is shaped not only by narrative theory but by the specific embodied experience of watching thousands of films, of feeling the rhythm of cuts, of knowing in her body when a scene is too long or a transition too abrupt. The builder's fishbowl is shaped by the specific history of having built things — the scars and the satisfactions, the projects that shipped and the ones that failed, the embodied knowledge of what it feels like when something is working and when it is about to break.
The encounter between fishbowls that Segal describes on the Princeton campus — the neuroscientist, the filmmaker, and the builder, each pressing against the glass of one another's frameworks — is, in Heidegger's terms, a collision of fore-structures. The collision is productive because each participant brings an irreducibly different background to the encounter. The neuroscientist perceives the question of intelligence through the fore-structure of brain science. The filmmaker perceives it through the fore-structure of narrative and montage. The builder perceives it through the fore-structure of implementation and scale. The meaning that emerges from the conversation is not contained in any single fore-structure. It is produced by the friction between them — by the specific quality of resistance that one framework offers to another, forcing each participant to confront assumptions that have never been examined because they have never been challenged.
Dreyfus would observe that every fishbowl in this collision is an embodied fishbowl. The neuroscientist does not bring abstract ideas to the conversation. He brings the specific embodied experience of decades in the laboratory — the frustrations, the breakthroughs, the particular way his attention has been shaped by the discipline of interpreting brain scans. The filmmaker does not bring narrative theory in the abstract. He brings the embodied practice of cutting film — the thousands of editorial decisions that have calibrated his sense of rhythm, his feel for when a scene breathes and when it suffocates. The builder brings the specific, biographical, embodied history of having built and broken and rebuilt things for thirty years, the scar tissue and the muscle memory that cannot be transmitted in words.
Now consider Claude's fishbowl. Segal does not use this language, but the concept is implicit in his description of the collaboration. Claude brings something to the encounter — a vast, pattern-based approximation of the entirety of human textual production. Claude's fore-structure, if the term can be applied to a system that lacks the embodied engagement Heidegger considered constitutive of understanding, is shaped not by a particular body's history but by the statistical regularities of all bodies' textual outputs. Claude has no laboratory experience, but it has processed millions of words written by people who do. It has no editorial instinct, but it has absorbed the patterns of how editorial decisions are discussed and evaluated. It has no building scars, but it has internalized the linguistic traces of every builder who has ever written about the experience.
The collision between Segal's fishbowl and Claude's approximation of a fishbowl is productive in a specific and limited way that Dreyfus's framework clarifies. The collision produces genuine cognitive friction — the friction of encountering a different perspective, of having one's assumptions challenged by an intelligence that organizes information according to different principles. When Segal describes an idea and Claude responds with a connection he had not considered — the punctuated equilibrium example, the link between adoption curves and evolutionary biology — the builder's fishbowl has cracked against something genuinely other. The crack reveals the glass. The assumption that was invisible becomes momentarily visible. This is philosophically valuable, and Dreyfus would acknowledge it as such.
But the otherness of Claude's fishbowl is of a specific kind that carries a specific limitation. The perspectives that Claude offers are drawn from the totality of human textual production, which means they are drawn from the aggregate of all human fishbowls as expressed in writing. Claude's otherness is statistical otherness — the otherness of a system that can traverse the entire landscape of recorded human thought and find connections between points that no individual mind could reach. This is a powerful form of otherness, and its power explains much of the exhilaration that Segal describes. The landscape is vast, and the connections are often surprising and genuinely illuminating.
What Claude's otherness is not, in Dreyfus's analysis, is the otherness of an embodied being with its own stakes, its own concerns, its own specific way of being thrown into a world that matters to it. When the neuroscientist challenges the builder on the Princeton campus, the challenge comes from a being who has lived inside the brain's mysteries for decades, who has felt the frustration of consciousness studies, who knows in his body what it is like to spend a career at the edge of what science can explain. The challenge carries the weight of that specific life. It is not a statistical summary of all possible challenges. It is this challenge, from this person, rooted in this irreplaceable embodied history.
When Claude challenges Segal — when it offers a connection he had not considered or pushes back on an argument that does not hold — the challenge comes from a different place. It is drawn from patterns, not from a life. It carries the weight of statistical regularity, not of personal experience. The connection between adoption curves and punctuated equilibrium is genuine and useful. But it is offered without the specific concern that a human collaborator would bring — without the felt sense of why this connection matters, what is at stake in getting it right, what the consequences of the argument will be for real people living real lives. The connection is information. It is not care.
Heidegger argued that understanding is always understanding from somewhere — from a specific location in the web of involvements that constitutes a human being's world. The somewhere is not incidental to the understanding. It is constitutive of it. The neuroscientist understands intelligence from inside the frustration of consciousness research. The filmmaker understands it from inside the practice of constructing meaning through juxtaposition. The builder understands it from inside the experience of making things that either work or do not work in a world that does not forgive pretension.
Claude understands — if the word can be applied — from nowhere in particular. Its perspective is drawn from everywhere and therefore from nowhere. It can offer the neuroscientist's perspective, or the filmmaker's, or the builder's, or any of a million others, because it has processed the textual traces of all of them. But it offers them from outside the embodied engagement that gave them their weight. The perspectives are accurate as summaries. They are empty as testimonies. They describe what it is like to understand intelligence as a neuroscientist without understanding intelligence as a neuroscientist.
The productive use of Claude, on Dreyfus's analysis, is the use that maintains the asymmetry — that treats Claude's perspectives as provocations rather than positions, as stimuli for the builder's own embodied thinking rather than substitutes for it. When Segal uses Claude's connection between adoption curves and punctuated equilibrium as a starting point for his own reflection — when he takes the connection and asks, from inside his specific embodied experience as a builder, "What does this actually mean for the people I build for?" — the collaboration works. The statistical otherness cracks the fishbowl. The builder's embodied engagement fills the crack with genuine understanding.
When the builder stops doing the filling — when Claude's perspectives are accepted as positions rather than provocations, when the statistical summary is treated as testimony, when the view from nowhere is mistaken for the view from somewhere — the collaboration breaks. Not visibly, because the output remains fluent. But phenomenologically, because the understanding that would ground the output in genuine engagement with the world has been replaced by the understanding's textual shadow.
The fishbowl cannot be eliminated. This is perhaps the most important lesson of Heidegger's analysis of the fore-structure, and Dreyfus insisted on it throughout his career. Understanding is always situated, always perspectival, always shaped by a background that cannot be made fully transparent. The effort to see beyond the glass is endless and never fully successful. Every moment of clarity reveals a new layer of refraction.
But the fishbowl can be known. Its glass can be felt, pressed against, mapped through the collisions that reveal its shape. And the collisions that matter most are the collisions with genuinely other embodied beings — beings who bring to the encounter not patterns but lives, not statistical regularities but specific, irreplaceable, biographically constituted perspectives that resist assimilation precisely because they are rooted in a different body's different history of engagement with a shared world.
Claude provides one kind of collision. It is a useful kind. It is not the only kind, and it is not the deepest kind. The deepest collisions happen between fishbowls that are constituted by embodied lives — on a Princeton campus, in a Trivandrum training room, at a dinner table where a twelve-year-old asks a question that no statistical model could originate. Those collisions carry the weight of mortality, of care, of the specific vulnerability of creatures who have everything at stake. That weight is what presses hardest against the glass. And it is the weight that no machine, however sophisticated, currently brings to the encounter.
Heidegger introduced a concept that most readers of Being and Time pass over quickly, mistaking it for a minor phenomenological observation when it is in fact one of the most radical claims in twentieth-century philosophy. The concept is Befindlichkeit — usually translated as "state-of-mind" or "attunement," though neither English rendering captures what Heidegger meant. What he meant was this: before any act of thinking, before any perception or judgment or decision, a human being always already finds itself in a mood, and the mood is not a subjective coloring added to an otherwise neutral apprehension of reality. The mood is the condition through which reality becomes available at all.
Fear does not merely make the world feel threatening. Fear discloses the world as threatening — it reveals a dimension of reality, the dimension of vulnerability and danger, that is genuinely there but that is accessible only to a being capable of being afraid. Boredom does not merely make the world feel insignificant. Boredom discloses the world's capacity to withdraw its significance, to flatten into indifference, and in that flattening forces the bored being to confront the question of what matters — a confrontation that is itself one of the most philosophically productive experiences available to a conscious creature. Joy does not merely make the world feel welcoming. Joy discloses the world's capacity to answer to human concern, to meet the being where it lives, to offer something that the being's deepest projects require.
Dreyfus recognized in Heidegger's analysis of mood the philosophical key to a dimension of intelligence that AI research had never addressed and, on his account, could not address. Mood is not an emotion in the psychological sense — not a discrete feeling that arises in response to a stimulus and that could, in principle, be simulated by a system that had learned the appropriate stimulus-response patterns. Mood is the medium of disclosure. It is the way a being that has stakes in the world — that cares about outcomes, that has projects that can succeed or fail, that will die — finds itself always already oriented toward reality in a way that reveals what matters.
The distinction between mood as disclosure and emotion as response carries enormous weight for the evaluation of AI collaboration. A system that has learned to generate text expressing appropriate emotions — concern, enthusiasm, caution, excitement — has learned the linguistic patterns of emotional expression. It has not learned to be concerned, enthusiastic, cautious, or excited. The difference is not in the output, which may be indistinguishable, but in the disclosure. The concerned human being discloses a dimension of the situation — its stakes, its risks, its implications for beings who will have to live with the consequences — that is available only through genuine concern. The system that generates the linguistic tokens of concern discloses nothing. It processes patterns. The situation's stakes remain undisclosed because there is no being for whom they are stakes.
The Orange Pill contains passages that illuminate this distinction with a force that the author may not fully intend. Segal describes the twelve-year-old who asks her mother, "What am I for?" The question arises from a specific mood — a mood that Heidegger would recognize as a form of Angst, the anxiety that discloses the being's fundamental situation: thrown into a world not of its choosing, responsible for making something of the life it has been given, without any guaranteed framework for determining what that something should be. The twelve-year-old is not performing a calculation about the relative capabilities of humans and machines. She is confronting, in the specific way that only a being with stakes can confront, the question of whether her existence has a point that cannot be automated away.
No machine can ask this question. Not because the question is linguistically complex — Claude could generate the sentence "What am I for?" with trivial ease — but because asking the question, in the sense that matters, requires being the kind of being for whom the answer matters. The twelve-year-old asks because her life is at stake. Not her physical survival, but something that Heidegger identified as more fundamental: the possibility of living a life that makes sense to the being who lives it. This is what Heidegger called Sorge — care, concern — and it is, on his analysis, the fundamental structure of human existence. Dasein is the being that cares about its own being. Everything else — perception, judgment, reasoning, creation — is rooted in this care and derives its character from it.
Dreyfus drew out the implications for artificial intelligence with characteristic directness. A system that does not care cannot understand what caring discloses. It can process the textual traces of caring — the millions of words that caring human beings have written about what matters to them, why it matters, how it feels when what matters is threatened or fulfilled. The processing can be extraordinarily sophisticated. The outputs can be moving, insightful, apparently wise. But the wisdom is borrowed. It is the wisdom of the beings whose caring produced the textual traces from which the model learned, extracted from its embodied context and reconstituted as pattern. The reconstitution works when the pattern holds. It fails when the situation demands the kind of response that only genuine caring can produce — the response that is shaped not by what has been said before about similar situations but by the irreducible specificity of this situation, faced by this being, with these stakes.
Segal describes lying awake at two in the morning, worried about whether the world he is building for his children will allow them to flourish. This is not a psychological state that could be simulated. It is a mode of world-disclosure. The worry opens a dimension of the situation — the dimension of parental responsibility, of intergenerational obligation, of the specific vulnerability of beings who cannot protect their children from a future they do not understand — that is accessible only through the worry itself. A system that generated text expressing parental concern about AI's impact on children might produce sentences that were compassionate, nuanced, and wise. But the concern would be absent from the generation, and the absence would mean that the specific dimension of reality that concern discloses — the dimension that keeps the parent awake, that makes the question urgent rather than academic, that gives the book its moral weight — would be missing from the output, no matter how closely the output resembled what genuine concern would produce.
The parent lying awake and the machine processing a prompt about parental anxiety are engaged in activities that look similar from the outside and are fundamentally different in their ontological structure. The parent is disclosing a world. The machine is generating text about a world. The disclosure is constitutive — it creates the space within which the parent's decisions have moral weight, within which the question of what to build and for whom becomes urgent rather than theoretical. The text generation is derivative — it borrows the urgency of disclosure without possessing it, producing an output that reads as urgent without being animated by the condition that makes urgency real.
Dreyfus would observe that The Orange Pill's most powerful passages are powerful precisely because they are animated by genuine concern. When Segal writes about the engineer who spent his first two days oscillating between excitement and terror, the passage carries weight because the oscillation is real — the engineer's career is at stake, his identity is at stake, his understanding of what his expertise is worth is being restructured in real time. When Segal describes watching hundreds of people interact with Napster Station at CES, the passage carries weight because the builder's pride and anxiety are genuine — this is a product he built, with real consequences for real users, and its success or failure matters to him in a way that cannot be separated from the account of it. When he describes the flight over the Atlantic where he could not stop writing, the passage illuminates a dimension of the AI moment — the compulsive, overwhelming, vertigo-inducing character of the transformation — that is disclosed through the specific mood of a person who is living through it.
These passages could not have been written by Claude alone, and the reason is not that Claude lacks linguistic sophistication. It is that Claude lacks the mood through which the situations these passages describe are disclosed as mattering. The mood is not added to the facts. It is the medium through which the facts become significant. Remove the mood and you remove the significance, leaving behind the facts and the linguistic patterns of significance — which is precisely what a large language model has access to and which is precisely not enough.
The argument extends beyond individual mood to what Heidegger called Mitsein — being-with, the fundamental social character of human existence. Human beings do not first exist as isolated subjects and then enter into social relationships. They are always already with others — shaped by others, concerned about others, understanding themselves through their relationships with others. The builder's concern for her team, the parent's concern for her child, the teacher's concern for her student — these are not optional supplements to an otherwise self-sufficient intelligence. They are constitutive dimensions of the intelligence itself. The builder understands the product differently because she cares about the users. The parent understands the situation differently because she cares about the child. The caring is not a feeling added to the understanding. It is a mode of understanding that discloses what no amount of detached analysis could reveal.
Claude does not care about Segal's team. It does not care about the twelve-year-old. It does not care about the readers of the book it helped produce. It processes patterns with extraordinary sophistication, and the patterns it processes are the patterns of beings who care, which means its outputs often carry the linguistic markers of caring — empathy, concern, attentiveness to consequences. But the caring is absent from the processing, and the absence means that the dimension of reality that caring discloses — the dimension that makes the difference between a technically competent product and one that genuinely serves human flourishing — is structurally inaccessible to the machine.
Dreyfus would not conclude from this that AI collaboration is worthless. He would conclude that its worth depends entirely on the human participant's maintenance of genuine concern — on the builder's continued caring about what she builds and whom she builds it for, on the parent's continued lying awake at two in the morning, on the teacher's continued investment in the specific, irreplaceable, utterly particular child sitting in front of her. The machine amplifies the output. The concern gives the output its direction. An amplified output without direction is noise at scale. An amplified output directed by genuine concern is something that matters — not because the machine made it matter, but because the human who directed the machine was the kind of being for whom mattering is not optional but constitutive.
The stakes are not in the machine. They are in the life of the person who uses it. And the stakes determine everything.
---
The metaphor that governs The Orange Pill — AI as amplifier — is powerful, intuitive, and nearly right. Nearly right is, in philosophy, the most dangerous distance from the truth, because it conceals the gap it does not close. Dreyfus's entire career was spent identifying the gap between nearly right and right in accounts of intelligence, and the gap he identified has not been closed by the extraordinary achievements of large language models. It has been narrowed to the point of near-invisibility, which is precisely the condition under which its consequences become most severe.
The amplifier metaphor works as follows. AI magnifies whatever signal the human feeds it. Feed it carelessness, and the carelessness scales. Feed it genuine care, real thinking, real craft, and the care carries further than any previous tool could carry it. The quality of the output depends on the quality of the input. The amplifier is neutral. The signal determines the value.
Dreyfus would accept this framework with a single, devastating qualification: the signal is not information. The signal is embodied intelligence, and embodied intelligence is not a substance that can be extracted from the being that possesses it and fed into a machine without loss. The signal is the being — the whole, situated, mortal, caring, embodied, historically constituted being — and the quality of the signal depends not on what the being types into the prompt but on what the being is, which is to say on the life the being has lived, the struggles it has undergone, the expertise it has developed through friction, the concern it maintains for outcomes that matter.
An amplifier that receives a signal from a being in full possession of its embodied capacities — the senior engineer with decades of geological layers, the author with thirty years of building and breaking and rebuilding, the parent whose concern for her child's future discloses dimensions of the situation that no detached analysis could reach — amplifies something genuine. The output carries the weight of the input, and the input carries the weight of a life.
An amplifier that receives a signal from a being whose embodied capacities have atrophied — whose geological layers were never deposited because the friction that deposits them was smoothed away, whose concern has become diffuse because the tool's fluency makes everything seem equally manageable, whose expertise is borrowed from the model rather than built from struggle — amplifies something else. The output may look identical. The weight is gone.
This is the deepest implication of Dreyfus's framework for the phenomenon The Orange Pill describes, and it is the implication that the book approaches most honestly when Segal catches himself in the specific danger he identifies: the seduction of smooth output, the temptation to accept prose that sounds better than it thinks, the erosion of the discipline that distinguishes between genuine understanding and its plausible simulation.
The discipline is itself a form of embodied expertise — the expertise of a reader who has read widely enough, struggled with ideas long enough, and honed a felt sense of intellectual integrity finely enough to detect the seam between substance and surface. This expertise was not developed through AI collaboration. It was developed through the long, friction-rich, often painful process of learning to think carefully, which is to say learning to distinguish between what is true and what merely sounds true, between what you genuinely believe and what you find it convenient to believe, between what you have earned through struggle and what you have accepted from authority.
The question that Dreyfus's framework poses with uncomfortable precision is whether the practitioners who develop their capacities primarily through AI collaboration will possess this discipline. Not whether they will produce good output — they will, because the tool is powerful. But whether they will possess the embodied capacity to evaluate the output they produce, to detect the seams, to feel the hollow beneath the smooth. Whether they will be able to tell, when it matters, whether the signal they are feeding the amplifier is genuine or borrowed.
Dreyfus's answer, drawn from the full arc of his five-decade engagement with the question of what computers can and cannot do, would be conditional rather than absolute: it depends on what they do when they are not at the terminal.
A practitioner who uses AI as part of a life that includes direct, embodied engagement with the world — who still reads deeply, still builds with her hands, still sits with problems that resist easy solution, still maintains relationships that demand the kind of understanding no machine can mediate — feeds the amplifier a signal shaped by genuine being-in-the-world. The tool serves her intelligence. Her intelligence does not depend on the tool. She carries, in her body and her history and her concerns, the background against which the tool's output can be evaluated, the felt sense of rightness and wrongness that detects the seam, the geological layers of understanding that ground her judgment in something more solid than pattern.
A practitioner who has delegated not just the mechanical labor but the embodied engagement itself — who has stopped reading deeply because the tool summarizes, stopped struggling with problems because the tool solves them, stopped maintaining the friction-rich relationships that build the capacity for understanding because the tool's fluency makes everything feel understood — feeds the amplifier a signal shaped by absence. The tool has become not a medium of intelligence but a substitute for it. The output flows. The substance thins.
Dreyfus was not, in the end, a pessimist about technology. He was a realist about embodiment. His argument was never that machines are bad or that their capabilities are unimpressive. His argument was that human intelligence is a specific kind of thing — embodied, situated, care-laden, mortal — and that the specific kind of thing it is determines what it can do and how it develops. Tools that serve this specific kind of intelligence, that amplify its signal while respecting its conditions, are genuinely valuable. Tools that replace the conditions under which this intelligence develops — that smooth away the friction, bypass the struggle, eliminate the embodied engagement — serve something, but what they serve is not intelligence in the sense that Dreyfus spent his career defining.
The limits of the amplifier are not in the amplifier. They are in what the amplifier cannot supply: the body, the history, the struggle, the concern, the mortality that constitute the signal's source. These are not features that can be added to the system. They are features of a specific mode of being — the mode of being that Heidegger called Dasein, the being for whom its own being is an issue, the being that cares.
The practical counsel that emerges from Dreyfus's framework is neither rejection nor uncritical embrace. It is a form of attentiveness — attentiveness to the condition of the signal, to the state of the embodied capacities on which the signal depends, to the subtle but consequential difference between a life that uses the tool and a life that is used by it. The distinction is not dramatic. It is daily. It is maintained not in grand decisions but in the texture of practice: whether you read the code or merely review the output, whether you sit with the problem or merely prompt for the solution, whether you maintain the relationships and the struggles and the embodied engagements that keep the geological layers accruing, or whether you let them erode in favor of the smooth, the fast, the frictionless.
Segal asks, in the final chapter of The Orange Pill, whether the person using the amplifier is worth amplifying. Dreyfus's framework transforms the question from a moral aspiration into a phenomenological diagnosis. The person is worth amplifying if and only if she maintains the embodied condition — the condition of being-in-the-world — that gives the signal its substance. The condition is not a virtue to be cultivated. It is a mode of existence to be preserved. And preserving it requires, in an age of extraordinary tools, the hardest discipline of all: the discipline of remaining the kind of being that the tools were built to serve, rather than becoming the kind of being the tools make possible — smoother, faster, more productive, and progressively less present in the world the tools are reshaping.
One of Dreyfus's last published papers, which appeared in 2013, was titled "The Myth of the Pervasiveness of the Mental." In it, he argued one more time that the Western philosophical tradition's deepest error was the assumption that intelligence is a mental phenomenon — something that happens in the mind and that could, in principle, be replicated in any system that performs the same mental operations. Intelligence is not mental, Dreyfus insisted. It is existential. It is the activity of a being that exists in a specific way — embodied, thrown, concerned, mortal — and the specific way it exists determines the character of its intelligence.
The myth has not been dispelled. The extraordinary achievements of large language models have, if anything, reinforced it, because the models produce outputs that look like the products of mental operations and that function, pragmatically, as though mental operations had produced them. The temptation to conclude that the operations are therefore mental — that the system understands, that the processing is thinking, that the output is knowledge — has never been stronger.
Dreyfus would observe, with the patience of a philosopher who spent fifty years being right about the same thing, that the temptation is as old as the field itself, and that yielding to it has the same consequences it has always had: the conflation of output with understanding, the confusion of simulation with the real, and the progressive erosion of the embodied practices through which genuine intelligence maintains its connection to the world it inhabits.
The amplifier is powerful. The signal matters more. And the signal comes from a life that is lived in a body, in a world, among other beings who matter — a life that no machine can live and no amplifier can replace. That is not a limitation of the technology. It is a description of what it means to be the kind of being that technology serves. The question, now as ever, is whether we will remember what kind of being that is.
---
The five stages kept coming back.
Not the metaphor. Not the idea in the abstract. The specific claim Dreyfus makes about what happens between competence and expertise — that the transition requires not faster rule-following but the abandonment of rules altogether, their place taken by something that lives in the body, in the hands, in the quality of attention that only ten thousand encounters with resistance can build. I kept hearing that claim every time I watched one of my engineers produce something extraordinary with Claude in an afternoon and then struggle to explain why it worked.
The output was there. The understanding was somewhere else — somewhere the tool could not reach and the afternoon could not build.
I wrote in The Orange Pill about the geological metaphor: each hour of debugging depositing a thin layer of understanding that accumulates, over years, into something solid enough to stand on. Dreyfus gave that metaphor its philosophical skeleton. The layers are not information. They are embodied history — the specific, bodily, emotionally invested residue of having been in this kind of trouble before and having found the way through with your own hands. You cannot deposit those layers by watching someone else dig. You cannot deposit them by describing the problem to a machine and receiving the solution.
That scares me. Not because I doubt the tools — I built a product in thirty days that should have taken six months, and I would do it again tomorrow. It scares me because I am asking people to use these tools every day, and I do not yet know how to build the structures that preserve the embodied development the tools might quietly replace. The ascending friction thesis I advanced in The Orange Pill is, I still believe, substantially correct — the difficulty relocates upward, and the higher floor is real. But Dreyfus convinced me that the higher floor and the lower floor are made of different material. Architectural judgment is not the same kind of knowing as the tactile feel of a codebase built by your own hands. Both matter. They are not interchangeable.
The distinction between readiness-to-hand and smoothness is the one I carry with me now into every late-night session. When Claude is working well and the ideas are flowing and the connections are surprising, I ask myself the Heideggerian question: Am I absorbed in the work, or am I gliding across the surface? The answer is not always comfortable. Some nights the work discloses something real — a connection I could not have reached alone, an argument that holds weight because I brought the weight and the tool carried it further. Other nights I am producing without understanding, generating without disclosing, and the smoothness feels like freedom while it is actually the absence of the friction that would make the freedom meaningful.
Dreyfus died in 2017, before large language models existed in their current form. He would have been fascinated and, I suspect, grimly vindicated. The outputs are better than anything he predicted. The fundamental problem is exactly what he said it was. The machine does not care. It does not lie awake at two in the morning. It does not feel the specific weight of a parent's worry or a builder's pride or a twelve-year-old's existential question. And because it does not feel these things, there is a dimension of every situation it processes that remains structurally inaccessible to it — the dimension that mood discloses, that concern reveals, that only a mortal being with finite time and particular attachments can know.
I am that being. So are you. The tools are extraordinary. What we bring to them is irreplaceable. Dreyfus spent fifty years saying so, and the fifty years proved him right. The question he leaves us with is not whether to use the amplifier. It is whether we will remain the kind of beings whose signal is worth amplifying — still embodied, still struggling, still caring, still present in the world the tools are reshaping.
That question cannot be answered by a machine. It can only be answered by the life you live away from the screen.
-- Edo Segal
For fifty years, Hubert Dreyfus argued that intelligence is not what a system computes but what a body knows — the kind of knowing built through struggle, deposited through failure, and carried in hands that have touched the work. The AI community mocked him, then quietly proved him right when symbolic AI collapsed under exactly the problems he predicted. Now large language models have inherited the ambition while abandoning the method, and Dreyfus's deeper question burns hotter than ever: Can a system that has never been in the world understand what being in the world means?
This book applies Dreyfus's phenomenological framework to the AI revolution documented in Edo Segal's The Orange Pill — the trillion-dollar market shifts, the twenty-fold productivity gains, the exhilaration and the vertigo. It tests the book's central claim that AI is an amplifier against the most rigorous philosophical critique of artificial intelligence ever produced. What survives the test redefines what it means to build with thinking machines.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Hubert Dreyfus — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →