By Edo Segal
The thing I built in Trivandrum worked. Every feature, every function, every line of code the team shipped in those extraordinary days — it all worked. I described that week in The Orange Pill as proof that the imagination-to-artifact ratio had collapsed to nearly nothing. I still believe that.
But there is a question I did not ask clearly enough, and it took a physical chemist who died the year I was born to hand me the words.
The question is not whether the output works. The question is whether the person who produced it knows anything.
Michael Polanyi spent decades pulling apart what it actually means to know something — not to possess information, not to generate a correct answer, but to understand in the way a diagnostician understands a patient or a senior engineer understands a codebase that is about to break. He called it tacit knowledge: the vast, inarticulate substrate of understanding that operates beneath conscious awareness and that cannot be captured in any specification, no matter how detailed. "We can know more than we can tell." Eight words that reorganize the entire AI conversation.
I needed Polanyi because the discourse keeps measuring outputs. Faster briefs. More code. Higher productivity. The metrics all point upward. But Polanyi forces you to ask what is happening beneath the metrics — whether the tacit ground that makes human judgment reliable is being maintained or quietly eroded by the very tools that make the outputs so impressive.
In The Orange Pill, I confessed to the moment when Claude's prose outran my thinking — when I almost kept a passage that sounded like conviction without containing it. Polanyi gave me the framework to understand why that moment was dangerous. Not because the output was wrong. Because I had accepted it without performing the personal evaluation that makes knowledge mine rather than borrowed. The smooth surface concealed the absence of the knower's commitment.
This book is not a retreat from the arguments in The Orange Pill. The amplification is real. The democratization is real. The ascending friction is real. But Polanyi adds a dimension the technology discourse keeps missing: the dimension of what the knower actually knows, as opposed to what the tool produces on the knower's behalf. That distinction — invisible to the market, invisible to the metrics, visible only to a framework that insists on asking what lies beneath the surface — may be the most consequential distinction of our time.
The machines can tell more than they know. We can know more than we can tell. Hold both sentences in your mind. The space between them is where everything that matters about the AI moment lives.
— Edo Segal & Opus 4.6
Michael Polanyi (1891–1976) was a Hungarian-British physical chemist and philosopher whose work fundamentally reshaped the understanding of how human beings actually know things. Born in Budapest, he earned his medical degree and doctorate in chemistry before establishing himself as a leading researcher in physical chemistry and crystallography at the Kaiser Wilhelm Institute in Berlin and later the University of Manchester. In midlife, he turned from science to philosophy, producing Personal Knowledge: Towards a Post-Critical Philosophy (1958) and The Tacit Dimension (1966), works that introduced the concept of tacit knowledge — the vast body of understanding that operates beneath conscious awareness and resists explicit articulation. His famous dictum, "We can know more than we can tell," became a foundational principle in epistemology, organizational theory, and the philosophy of science. Polanyi argued that all knowledge is irreducibly personal, requiring the knower's commitment, judgment, and embodied engagement — a position that placed him in direct opposition to both positivism and the early claims of artificial intelligence. His 1949 debate with Alan Turing on whether a machine could represent the mind anticipated by decades the philosophical challenges that large language models now pose. His concept of "indwelling" — the process by which tools become transparent extensions of the body and mind — remains among the most precise frameworks for understanding how humans integrate technology into cognition. Polanyi's work continues to influence fields ranging from knowledge management to the philosophy of AI, and economist David Autor's identification of "Polanyi's Paradox" has made the concept of tacit knowledge central to contemporary debates about automation and the future of work.
On the twenty-seventh of October, 1949, in a seminar room at the University of Manchester, a physical chemist turned philosopher named Michael Polanyi sat across from a mathematician named Alan Turing and posed a challenge that artificial intelligence, in the seventy-seven years since, has never fully answered. Polanyi had prepared a text for the occasion — "Can the Mind Be Represented by a Machine?" — and circulated it to Turing and the mathematician Max Newman several weeks before the meeting. His argument was precise and uncompromising: "The terms by which we specify the operations of the mind are such that they cannot be said to have specified the mind. The specification of the mind implies the presence of unspecified and pro-tanto unspecifiable elements."
Turing's response, published the following year as "Computing Machinery and Intelligence," proposed what became the Turing Test — the idea that a machine could be considered intelligent if its behavior was indistinguishable from a human's. The test became the founding thought experiment of the field we now call artificial intelligence. But Polanyi had already identified the flaw in its logic: the test measured output, not knowing. A machine that produced indistinguishable outputs would satisfy Turing's criterion without possessing anything Polanyi would recognize as understanding. The disagreement between the two men was not about whether machines could be clever. It was about what knowledge actually is. And that disagreement, unresolved in 1949, has become the central philosophical question of the age of large language models.
Polanyi spent the rest of his life developing the answer he had begun to articulate in that seminar room. The answer is contained in a single sentence, the most cited sentence in his entire body of work: "We can know more than we can tell." The sentence appears in The Tacit Dimension, published in 1966, but the insight behind it had been accumulating for decades — through Polanyi's years as a working physical chemist, through his turn to philosophy, through his sustained engagement with the question of what it means to know something as opposed to merely possessing information about it.
The sentence sounds simple. It is not. It describes a structural feature of all human knowledge that has consequences reaching into every domain where artificial intelligence is now being deployed — and that the AI discourse has largely failed to reckon with.
Consider the act of recognizing a face. A person can pick out a friend in a crowd of ten thousand strangers. The recognition is instantaneous, confident, and reliable. But ask her to describe how she does it — to specify the features, the proportions, the chromatic values, the geometric relationships that enable the recognition — and she cannot. Not because she is inarticulate or lazy. Because the knowledge that enables the recognition is not the kind of knowledge that can be articulated. It operates below the level of explicit awareness. It is tacit.
Or consider the diagnostician who examines a patient and senses that something is wrong before she can identify the symptom. She sees the patient's skin color, posture, breathing pattern, the quality of eye contact, the tempo of speech — and from these subsidiary clues, she arrives at a focal judgment: something is not right here. If pressed, she might point to a specific symptom. But the pointing comes after the knowing. The judgment preceded the justification. The tacit integration happened before the explicit analysis began.
Or consider Polanyi's own experience as a scientist. He described how the researcher follows what he called an "intimation of a hidden pattern" — a pre-articulate sense that something significant lurks in the data, that the experiment is on the right track, that the hypothesis is worth pursuing. This sense cannot be formalized into a rule. It cannot be derived from the data by any explicit procedure. It is the product of years of immersion in a domain — years of looking at results, handling materials, sensing the behavior of systems through the accumulated sensitivity of embodied practice. The researcher commits to this intimation before she can justify the commitment, because the justification is the discovery itself. She follows a hunch into territory she cannot see, guided by tacit knowledge she cannot articulate.
This is not mysticism. Polanyi was trained as a physical chemist and spent decades in the laboratory before turning to philosophy. His claims about tacit knowledge are grounded in careful analysis of what actually happens when people exercise skill, make judgments, and produce discoveries. The tacit dimension is not a mysterious supplement to real knowledge. It is the foundation on which all explicit knowledge rests. Every explicit statement presupposes a tacit framework within which the statement makes sense. Every formalized rule presupposes an inarticulate understanding of what the rule means, when it applies, and how to interpret its results. As Polanyi wrote in Personal Knowledge, "A formal system of symbols and operations can be said to function as a deductive system only by virtue of unformalized supplements, to which the operator of the system accedes: symbols must be identifiable and their meaning known, axioms must be understood to assert something, proofs must be acknowledged to demonstrate something."
The implications for artificial intelligence are direct and devastating. AI systems process explicit information. They operate on formalized representations — tokens, vectors, probability distributions, the mathematical structures that encode the patterns extracted from training data. They are extraordinarily good at this processing. The large language models of 2025 and 2026 produce outputs of remarkable sophistication — legal briefs, software prototypes, analytical reports, creative texts — that meet or exceed the explicit standards of competence in their respective domains. The four percent of GitHub commits generated by AI in early 2026, the twenty-fold productivity multiplier observed in Trivandrum, the imagination-to-artifact ratio collapsing to the width of a conversation — these are real measurements of real capability expansion. The explicit dimension of knowledge work is being automated at a speed that has rightly produced the vertigo Edo Segal describes in The Orange Pill.
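For readers who want that last point made mechanical, a minimal sketch follows. It is invented for illustration, not drawn from the book or from any production system: a toy vocabulary, toy embedding vectors, and a softmax that turns scores into a probability distribution over the next token. Tokens, vectors, distributions: this is the whole explicit substrate such a system operates on, however large the real versions grow.

```python
import math
import random

# Toy vocabulary and toy embedding vectors, invented for illustration.
# A production model differs in scale, not in kind of representation.
vocab = ["the", "brief", "cites", "case", "controlling"]
random.seed(0)
dim = 4
embed = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}

def next_token_distribution(context_word: str) -> dict:
    """Map a context token to a probability distribution over the vocabulary.

    Raw scores are dot products between vectors; a softmax normalizes
    them into probabilities. Tokens, vectors, a distribution: nothing
    else is available to the system, no matter how capable it becomes.
    """
    ctx = embed[context_word]
    scores = {w: sum(a * b for a, b in zip(ctx, embed[w])) for w in vocab}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

print(next_token_distribution("brief"))
```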
But the tacit dimension is not being automated. It is not being automated because it cannot be formalized, and what cannot be formalized cannot be computed. This is not a contingent limitation of current technology — a problem that the next generation of models will solve. It is a structural feature of the relationship between tacit and explicit knowing. The tacit is, by definition, that which resists articulation. It operates through the body, through commitment, through the accumulated sensitivity of years of embodied practice. It is the ground from which explicit knowledge emerges, and it cannot be captured by any system that operates exclusively on explicit representations, no matter how sophisticated those representations become.
MIT economist David Autor gave Polanyi's observation a name in 2014: Polanyi's Paradox. In his landmark paper "Polanyi's Paradox and the Shape of Employment Growth," Autor noted that Polanyi's observation "largely predates the computer era, but the paradox he identified — that our tacit knowledge of how the world works often exceeds our explicit understanding — foretells much of the history of computerization over the past five decades." The tasks that proved stubbornly resistant to automation were precisely the tasks that required tacit knowledge: the adaptive judgment, the contextual sensitivity, the capacity to recognize when the rules do not apply and to improvise a response that no training data contained.
Some technological optimists argue that deep learning has overcome Polanyi's Paradox by learning patterns from data rather than from explicit rules. The argument has surface plausibility. AlphaGo learned to play Go at a superhuman level without being programmed with the rules of Go strategy. Large language models produce text of remarkable quality without being programmed with the rules of grammar, rhetoric, or reasoning. The machines appear to have captured tacit knowledge — to have extracted the inarticulate patterns that underlie skilled performance by statistical analysis of the outputs of skilled performers.
ASU computer scientist Subbarao Kambhampati offered a sharp corrective in his 2021 article "Polanyi's Revenge and AI's New Romance with Tacit Knowledge." The apparent success of machine learning in capturing tacit patterns has, he argues, created a new set of problems: "AI's romance with tacit knowledge has obvious adverse implications to safety, correctness, and bias of our systems. Many of the pressing problems being faced in the deployment of AI technology, including the interpretability concerns, the dataset bias concerns as well as the robustness concerns can be traced rather directly back to the singular focus on learning tacit knowledge from data." The machine has not understood the tacit patterns it has captured. It has extracted statistical regularities from the outputs of skilled performers without possessing the understanding that produced those outputs. It can reproduce the pattern. It cannot evaluate whether the pattern applies in a new situation, recognize when the pattern breaks down, or improvise a response when the situation departs from anything in the training data.
The senior software architect in The Orange Pill who could "feel a codebase the way a doctor feels a pulse" possesses tacit knowledge in the most precise Polanyian sense. His embodied intuition — built through twenty-five years of patient struggle, thousands of hours of debugging, countless encounters with systems that behaved unexpectedly — is the product of a developmental process that deposited layer after layer of subsidiary awareness beneath the surface of his conscious attention. He does not think about the codebase. He thinks through the codebase, attending from his accumulated understanding to the focal judgment that something is wrong. The wrongness registers not as an explicit proposition — "line 4,327 contains a null pointer exception" — but as a felt quality, a disturbance in the pattern, an intuition that precedes and guides the explicit analysis.
This knowledge is real. It is reliable. It is the product of genuine cognitive achievement. And it is precisely the kind of knowledge that AI cannot replicate — not because AI lacks sufficient training data, but because the knowledge exists in a dimension that training data, by its nature, cannot reach. The training data contains the outputs of tacit knowing — the code that the senior architect wrote, the decisions he made, the solutions he produced. It does not contain the tacit knowing itself — the embodied sensitivity, the felt sense of rightness and wrongness, the capacity for judgment that operates below the threshold of articulation.
The AI discourse has largely missed this distinction. The triumphalists celebrate the expansion of capability without asking what kind of knowledge the expanded capability rests on. The elegists mourn the loss of human skill without specifying what, precisely, is being lost. The silent middle — those who feel both the exhilaration and the unease — sense that something important is at stake but lack the vocabulary to name it.
Polanyi provides the vocabulary. What is at stake is the tacit dimension — the pre-articulate ground of understanding from which all explicit knowledge emerges and against which all explicit knowledge is evaluated. When AI produces a legal brief, the brief may be competent by every explicit standard. But the lawyer who produced briefs through her own tacit engagement with the law — who read cases until the logic of legal reasoning became part of her perceptual apparatus, who argued motions until the feel of a good argument became as recognizable as a familiar face — possesses something the AI does not: the capacity to evaluate the brief against a tacit standard of quality that no explicit metric can capture. She knows whether the brief is not merely correct but right — whether it captures the essence of the legal argument, whether it would persuade a judge who thinks in the specific way judges think, whether it embodies the kind of legal reasoning that the profession recognizes as excellent rather than merely adequate.
This evaluative capacity is tacit. It cannot be formalized. It cannot be computed. And it cannot be developed by a person who has never done the work herself — who has never struggled with a case, argued a motion, felt the resistance of legal material that refuses to yield to easy formulation. The tacit dimension is built through the friction of engagement with difficulty. Remove the friction, and the tacit dimension does not form. The surface looks the same — the briefs are competent, the code works, the analysis is plausible — but the depth beneath the surface, the accumulated layers of tacit understanding that constitute genuine expertise, has not been laid down.
Polanyi would not have been surprised by the winter of 2025. He would have recognized it as the moment when the explicit dimension of knowledge work was automated at scale — and when the tacit dimension, invisible to the market's metrics and to the machines themselves, began its quiet, unnoticed erosion. The question he would ask is not whether the machines can produce competent output. They can. The question is whether a civilization that optimizes for explicit output while neglecting the tacit ground from which all genuine understanding emerges can sustain the knowledge it depends on.
We can know more than we can tell. But we can also lose more than we notice.
---
A blind person navigating a crowded street with a cane does not feel the cane pressing against her palm. She feels the sidewalk beneath the cane's tip — its cracks, its curb edges, the subtle shift from concrete to asphalt, the vibration that signals a grating. The cane has disappeared from her conscious awareness. It has been absorbed into her body schema, functioning not as an object she manipulates but as a medium through which she perceives the world. She attends not to the cane but through the cane, and the world revealed through it is as vivid and textured as the world a sighted person sees through her eyes.
Polanyi called this process indwelling, and he considered it the fundamental structure of all tool use, all knowing, and all understanding. The concept is deceptively simple. A tool is indwelt when the user has incorporated it so completely into her way of engaging with the world that she no longer attends to the tool itself. The pianist does not attend to the keys. She attends through the keys to the music. The surgeon does not attend to the scalpel. She attends through the scalpel to the tissue. The driver does not attend to the steering wheel, the accelerator, the brake. She attends through these instruments to the road, the traffic, the destination. In each case, the tool has become what Polanyi called phenomenologically transparent — invisible to conscious awareness, functioning as an extension of the body rather than an object in the world.
The transparency is not incidental to skilled performance. It is constitutive of it. A pianist who attends to the keys — who consciously thinks about which finger strikes which key — cannot play. The attention to the tool disrupts the performance by making explicit what must remain tacit. The skill depends on the subsidiary awareness of the keys being integrated into the focal awareness of the music. The from-to structure of knowing — attending from the subsidiary to the focal — requires that the subsidiary elements remain subsidiary. When they become focal, the knowing collapses.
This is why a person stumbles when she thinks about how she walks. Why a speaker falters when she focuses on the grammar of her sentences rather than the meaning of her argument. Why a doctor's diagnostic confidence wavers when she shifts attention from the patient to the procedure. The subsidiary elements — the body's balance, the sentence's structure, the diagnostic protocol — must remain in the background, functioning as the medium of attention rather than its object. Indwelling is the condition in which this from-to structure operates smoothly, and the disruption of indwelling is the condition in which it breaks down.
The concept illuminates the experience of working with AI tools in ways that neither the triumphalists nor the elegists have fully articulated. When Segal describes the experience of building with Claude Code — the exhilaration of seeing his intention realized in real time, the difficulty of stopping, the sense that removing the tool would feel like voluntary self-diminishment — he is describing indwelling with a phenomenological precision that Polanyi would recognize immediately. The tool has been absorbed into the builder's perceptual apparatus. It has become an extension not merely of his productive capacity but of his way of seeing what is possible. He attends not to the tool but through the tool to the product he is building, the problem he is solving, the vision he is realizing. The AI has become transparent — phenomenologically invisible — functioning as a medium of creative perception rather than an object of conscious evaluation.
This is not dependency in the pathological sense that the critics diagnose. It is the natural consequence of skilled tool use. Every tool that has ever been successfully indwelt produces the same phenomenon: the expanded perception that the tool enables becomes the normal mode of engagement, and removing the tool contracts the perceptual field in a way that feels like loss. The surgeon who has mastered laparoscopic technique feels constrained, not liberated, when forced to return to open surgery. The photographer who has mastered a particular lens sees differently without it — not merely lacking a piece of equipment but lacking a way of seeing that the equipment had become. The carpenter who has worked with a specific set of tools for thirty years finds that new tools, however technically superior, do not sit in the hand the same way — that the old tools had been indwelt so thoroughly that they were no longer tools at all but extensions of the carpenter's body and mind.
The builder who turns off Claude Code does not return to his pre-tool state. He returns to a state that feels impoverished relative to the expanded state he has experienced. The landscape of the buildable has been expanded by the tool, and when the tool is removed, the landscape contracts. This is not a sign of weakness or addiction. It is the signature of successful indwelling — the proof that the tool has been genuinely incorporated into the builder's cognitive architecture rather than remaining an external device he merely operates.
But indwelling carries a risk that Polanyi identified and that the current AI moment makes acute. The risk is this: when a tool is indwelt, the user's critical evaluation of the tool's reliability is suspended. The blind person who has indwelt her cane trusts the information the cane provides. She does not, with each step, consciously evaluate whether the cane is accurately transmitting the texture of the pavement. The pianist who has indwelt the keyboard trusts that the keys will respond as expected. She does not, with each note, consciously verify that the instrument is in tune. The trust is not irrational. It is constitutive of the indwelling itself. The from-to structure requires that the subsidiary elements — the tool, the medium, the instrument — be trusted rather than scrutinized. The moment the user shifts from trusting the tool to scrutinizing it, the tool becomes opaque, the indwelling breaks down, and the skilled performance degrades.
In the case of a cane or a piano, the risk is manageable. The cane is mechanically reliable. The piano can be tuned. The tool's behavior is predictable, and the failure modes are well understood. But in the case of an AI system — a system that produces plausible outputs from opaque processes, that can be confidently wrong in ways that no mechanical tool can be, that hallucinates with the same fluency it reasons — the risk of uncritical indwelling is qualitatively different.
Segal identifies this risk with characteristic honesty when he describes the Deleuze fabrication — the passage where Claude produced an elegant connection between Csikszentmihalyi's flow state and a concept attributed to Gilles Deleuze that turned out to be philosophically wrong. The passage worked rhetorically. It sounded like insight. The prose was smooth, the connection was beautiful, and the philosophical reference was incorrect in a way that was invisible unless you had actually read Deleuze. "Claude's most dangerous failure mode," Segal writes, "is exactly this: confident wrongness dressed in good prose. The smoother the output, the harder it is to catch the seam where the idea breaks."
In Polanyi's framework, this failure is precisely what happens when indwelling meets unreliable subsidiary elements. The builder has indwelt the tool. He attends through the tool to the creative work. The tool's outputs arrive as subsidiary elements in his from-to structure of knowing — elements from which he attends to the focal meaning of his argument. Because the outputs are subsidiary, they are not scrutinized. They are trusted, integrated, and built upon. And because the tool is capable of confident wrongness — of producing outputs that possess all the surface markers of reliability without actually being reliable — the trust that indwelling requires can be misplaced in ways that are far more dangerous than the misplaced trust in a mechanical tool.
This is why Segal's practice of catching fabrications — of deliberately breaking the indwelling to scrutinize the tool's outputs — is epistemologically crucial even as it is phenomenologically disruptive. The skilled user of an AI tool must oscillate between two modes: the mode of indwelling, in which the tool is transparent and the creative work flows through it, and the mode of critical evaluation, in which the tool becomes opaque and its outputs are examined rather than trusted. The oscillation is cognitively expensive. It disrupts flow. It breaks the from-to structure that skilled performance requires. But it is necessary, because the alternative — uncritical indwelling of an unreliable tool — produces work that is smooth on the surface and fractured beneath it.
Polanyi did not live to see large language models, but he anticipated the structure of the problem with remarkable precision. In Personal Knowledge, he wrote: "This is the difference between machine and mind. A man's mind can carry out feats of intelligence by aid of a machine and also without such aid, while a machine can function only as the extension of a person's body" under the control of a human operator. The machine, in other words, can be indwelt — can become a medium of perception and action — but only as long as a human mind supplies the evaluative framework that the machine itself lacks. The machine does not know when its outputs are wrong. The human must know. And the human can only know if she periodically breaks the indwelling, shifts from the from-to mode to the evaluative mode, and subjects the tool's outputs to the kind of critical scrutiny that the tool's transparency would otherwise suppress.
The challenge for organizations deploying AI at scale is to build cultures that support this oscillation — that value the critical evaluation of AI outputs as highly as they value the productive flow that indwelling enables. The temptation, which the market's logic of efficiency reinforces at every turn, is to optimize for flow and suppress evaluation. Flow is productive. Evaluation is expensive. The builder in flow produces more. The builder who stops to question the tool's outputs produces less. The market rewards production. It does not reward the epistemological hygiene that prevents production from being built on foundations of confident wrongness.
But the cost of uncritical indwelling, accumulated across millions of builders and billions of outputs, is a civilization that has indwelt its tools so thoroughly that it can no longer distinguish between genuine knowledge and its statistical simulation. The surface looks the same. The briefs are competent. The code runs. The analyses are plausible. But the tacit ground of understanding — the capacity to evaluate whether the output is not merely plausible but true, not merely competent but right — has been delegated to a tool that does not possess it and cannot develop it.
Polanyi's blind person trusts her cane because the cane is mechanically faithful to the world it transmits. The cane does not hallucinate pavement that is not there. The cane does not produce the sensation of a smooth sidewalk when the actual surface is broken. The cane mediates between the person and the world with a reliability that earns the trust that indwelling requires.
The question that the AI moment forces upon Polanyi's framework is whether a tool that is capable of hallucination — of producing outputs that bear all the phenomenological markers of reliability without actually being reliable — can be safely indwelt at all. The answer, provisionally, is that it can, but only by a user who possesses the tacit knowledge to evaluate the tool's outputs independently of the tool — who can recognize when the cane is transmitting a sidewalk that does not exist, because she has walked enough real sidewalks to know what they feel like. The senior engineer can indwell Claude Code because his twenty-five years of embodied expertise give him the tacit ground against which to evaluate the tool's outputs. The junior developer who has never debugged by hand, who has never built a system from scratch, who has never felt the specific wrongness of a codebase that is about to fail — she lacks the tacit ground. She indwells the tool without the capacity to evaluate it, and the indwelling, absent the evaluative capacity, becomes not an extension of skill but a replacement for it.
This is the distinction on which everything turns. Indwelling that extends tacit knowledge is a genuine expansion of human capability. Indwelling that substitutes for tacit knowledge is its quiet destruction. The difference is invisible from the outside — both produce competent output — but the difference in the long run is the difference between a civilization that knows what it is doing and a civilization that merely looks like it does.
---
In 1958, Michael Polanyi published Personal Knowledge: Towards a Post-Critical Philosophy, a book whose title announces its central argument with deliberate provocation. Knowledge is personal. Not subjective — Polanyi was emphatic about the distinction — but personal. Every act of knowing involves the knower's commitment, her judgment about what counts as relevant, her evaluation of what constitutes evidence, her sense of what matters. The scientist who publishes a finding commits herself to its truth. She stakes her reputation, her career, her intellectual identity on the claim she makes. This commitment is not a flaw in the knowledge — a residue of human imperfection that a better method would eliminate. It is what makes the knowledge meaningful. It is what separates genuine understanding from the mere accumulation of information.
The lie that Polanyi spent his career dismantling is the lie of objectivity — the claim, central to the positivist tradition in philosophy of science, that genuine knowledge is impersonal, detached, and fully articulable in explicit propositions. The ideal knower, in the positivist account, is a mind without a body, a perspective without a position, a judgment without a judge. The ideal knowledge is a set of propositions that can be evaluated independently of the person who formulated them, verified by anyone who follows the correct procedure, and expressed in language so precise that no ambiguity or personal interpretation can intrude.
Polanyi regarded this ideal not merely as unrealizable but as incoherent. The propositions do not interpret themselves. Someone must know what the symbols mean, must understand what the axioms assert, must acknowledge what the proofs demonstrate. The evaluation of evidence requires judgment — judgment about what counts as relevant, what counts as sufficient, what counts as anomalous. The verification of results requires skill — the laboratory skills that produce reliable data, the interpretive skills that distinguish signal from noise, the intuitive skills that recognize when a result is significant and when it is merely curious. Every step of the supposedly impersonal procedure presupposes a personal element that the procedure itself cannot specify.
The AI moment has given Polanyi's argument a new and unexpected force. Large language models produce outputs that possess all the surface markers of impersonal knowledge. The text appears authoritative, balanced, comprehensive, and free from the visible marks of personal bias. The code is functional, well-structured, and documented. The analysis is organized, evidence-based, and grammatically polished. The outputs look like the products of the objective ideal — knowledge without a knower, judgment without a judge, understanding without someone who understands.
This appearance of impersonality is precisely what makes AI outputs both useful and epistemologically dangerous. They are useful because they meet the explicit standards of competence that organizations, professions, and markets use to evaluate work. They are dangerous because they encourage the mistaken belief that the explicit standards are the only standards that matter — that competence fully articulated is competence fully achieved.
Polanyi would identify the danger with precision. The AI output is impersonal in the most literal sense: no person has committed to it. No one has staked her judgment on its truth. No one has exercised the evaluative sensitivity — the connoisseurship, in Polanyi's term — that distinguishes the excellent from the merely adequate. The output is orphaned knowledge — knowledge without a parent, judgment without a judge, commitment without a committer. It arrives in the world bearing all the marks of authority and none of the substance.
The authority of genuine knowledge, in Polanyi's framework, derives not from the quality of the output but from the quality of the commitment that produced it. When a scientist publishes a finding, her authority rests on the scientific community's trust that she has exercised due diligence — that she has evaluated the data with care, considered alternative explanations, subjected her conclusions to her own best critical judgment before submitting them to the judgment of others. The finding may be wrong. Scientists are wrong frequently. But the commitment is real, and the commitment is what gives the finding its epistemic weight. A finding that no one commits to, that no one stands behind, that no one has evaluated with the full force of their personal judgment, has no epistemic weight regardless of how competent it appears.
This is not an abstract philosophical point. It has immediate practical consequences for every domain in which AI-generated work is being deployed. The lawyer who signs a brief produced by AI is performing an act of personal commitment — she is staking her professional reputation on the brief's accuracy and quality. But if she has not read the cases the brief cites, has not evaluated the arguments the brief makes, has not subjected the brief to the kind of scrutiny that her years of legal training have equipped her to exercise, then her commitment is hollow. She is committing to something she has not evaluated. She is signing her name to knowledge she does not possess. The personal element that Polanyi identified as constitutive of genuine knowledge has been removed, and what remains is a performance of authority without its substance.
The medical diagnostician who relies on an AI system to flag potential diagnoses faces a structurally identical problem. The AI system produces a list of possible conditions ranked by probability. The diagnostician reviews the list and selects the most likely diagnosis. The selection appears to involve personal judgment. But if the diagnostician has not examined the patient with the kind of sustained, embodied attention that produces the tacit awareness of something being wrong — if she has not felt the quality of the patient's skin, observed the pattern of the patient's breathing, sensed the subtle wrongness that years of clinical experience make perceptible — then her judgment is operating on a shallower foundation than it appears. She is committing to a diagnosis she has not personally arrived at. The personal knowledge that would give her commitment its authority has been bypassed.
The pattern extends across every domain. The executive who presents AI-generated strategic analysis to a board is making representations about the quality of the analysis. The architect who submits AI-generated designs is staking her professional identity on the designs' adequacy. The teacher who grades AI-generated lesson plans is endorsing the plans' educational value. In each case, the professional is performing the role of the personal knower — the person who commits to the knowledge as her own — without having done the cognitive work that makes the commitment meaningful.
Polanyi would call this the collapse of personal knowledge — the substitution of a performance of commitment for the reality of it. The collapse is invisible from the outside because the outputs look the same. The brief is competent. The diagnosis is plausible. The analysis is well-organized. But the personal element — the engagement, the evaluation, the committed judgment that transforms information into knowledge — is absent. And its absence matters, not because the current outputs are necessarily worse, but because the developmental trajectory that produces the capacity for genuine commitment is being eroded.
Segal captures this dynamic with striking honesty when he describes the moment he almost kept a passage that Claude had produced — a passage about the moral significance of democratization that was "eloquent, well-structured, hitting all the right notes." He realized, upon reflection, that he could not tell whether he believed the argument or merely liked how it sounded. "The prose had outrun the thinking." He deleted the passage and spent two hours at a coffee shop writing by hand until he found the version of the argument that was his — "rougher, more qualified, more honest about what I didn't know."
In Polanyi's terms, what happened in that moment was the recovery of personal knowledge from the seduction of impersonal fluency. The AI had produced an output that met every explicit standard of quality. The prose was smooth. The argument was structured. The rhetoric was persuasive. But the personal element — Segal's own evaluative engagement with the idea, his judgment about whether the argument was true and not merely plausible — had been bypassed. The output was information masquerading as knowledge. The recovery occurred when Segal insisted on doing the personal work of knowing: the slow, uncomfortable, hand-written process of figuring out what he actually believed.
This recovery is cognitively expensive. It requires the builder to resist the seduction of the smooth — to refuse the elegant output that the tool provides and insist on the rougher, more honest output that genuine personal engagement produces. The market does not reward this resistance. The market cannot distinguish between the elegant output that no one commits to and the rough output that represents genuine personal knowledge. Both meet the explicit standards. Only one carries the tacit authority that genuine knowing confers.
Polanyi's concept of connoisseurship names the evaluative capacity that is most at risk. A connoisseur is a person who has cultivated, through years of attentive engagement with a domain, the ability to distinguish quality from adequacy — to recognize excellence in a way that cannot be reduced to explicit criteria. The wine connoisseur knows when a vintage is exceptional, but she cannot fully articulate what makes it so. The literary editor knows when a sentence is right, but she cannot reduce her judgment to a set of rules. The experienced engineer knows when a system design is elegant, but the elegance resists specification.
Connoisseurship is a form of tacit knowledge — the capacity for evaluation that is built through the same kind of patient engagement that produces the tacit ground of all expertise. It is the most personal form of knowledge because it depends most completely on the knower's individual history of engagement with the domain. Two connoisseurs may disagree, and the disagreement may be irresolvable, because each is evaluating from a tacit ground that is irreducibly personal. But the capacity for connoisseurship — the ability to evaluate at all, to bring a trained sensibility to the judgment of quality — is what separates the knower from the information processor.
AI does not possess connoisseurship. It possesses pattern matching — the capacity to identify statistical regularities in training data and to produce outputs consistent with those regularities. The outputs may be excellent by explicit standards. They are never evaluated by the implicit standards that connoisseurship applies. The tool does not know whether its output is good. It knows whether its output is probable. And the difference between the good and the probable — a difference that is tacit, personal, and irreducible to explicit metrics — is the difference that connoisseurship detects and that AI, by its nature, cannot.
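The distinction between the probable and the good can be made concrete in a few lines of code. What follows is a deliberately tiny sketch, a word-bigram model over an invented three-sentence corpus, nothing from the book or from any real system. It can assign a probability to any sentence; it has, by construction, no function that could assign truth or quality.

```python
from collections import defaultdict
import math

# An invented three-sentence training corpus: the recorded OUTPUTS of
# skilled performers, not the tacit knowing that produced them.
corpus = [
    "the brief cites the controlling case",
    "the brief cites the leading case",
    "the brief misstates the controlling case",
]

# Word-bigram counts are the model's entire stock of "knowledge."
counts = defaultdict(lambda: defaultdict(int))
for line in corpus:
    words = ["<s>"] + line.split() + ["</s>"]
    for prev, cur in zip(words, words[1:]):
        counts[prev][cur] += 1

def log_probability(sentence: str) -> float:
    """Score how PROBABLE a sentence is under the training data.

    This is the model's whole evaluative repertoire. Note what is
    absent: there is no function here, and could be none, that
    reports whether the sentence is true, apt, or excellent.
    """
    words = ["<s>"] + sentence.split() + ["</s>"]
    logp = 0.0
    for prev, cur in zip(words, words[1:]):
        total = sum(counts[prev].values())
        # Add-one smoothing so unseen transitions get a small probability.
        p = (counts[prev][cur] + 1) / (total + 1000)
        logp += math.log(p)
    return logp

# A sound claim and an unsound one score almost identically:
print(log_probability("the brief cites the controlling case"))
print(log_probability("the brief misstates the controlling case"))
```

Run it, and the accurate sentence and the inaccurate one receive nearly indistinguishable scores. The gap between those two numbers and the judgment a trained lawyer would render is the gap that connoisseurship fills.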
The lie of objectivity told people that knowledge could be separated from the knower. Artificial intelligence has given that lie its most sophisticated embodiment: outputs that appear to be knowledge, that meet every explicit standard of knowledge, but that lack the personal commitment, the tacit engagement, the evaluative connoisseurship that transforms information into understanding. The corrective is not to reject the tools but to insist that the personal element — the commitment, the evaluation, the willingness to stake one's judgment on the claim — remains the irreducible foundation of everything the tools produce. The builder who uses AI wisely is the builder who never forgets that the machine provides information, and that knowledge requires something the machine cannot supply: a person who commits.
---
A pianist performing a Chopin nocturne does not think about her fingers. She does not consciously direct the fourth finger of her left hand to depress the E-flat below middle C at a specific velocity while simultaneously directing her right hand to shape a melodic phrase three octaves higher. If she did — if she attended to the mechanics of her fingers rather than the music they were producing — the performance would collapse. The fingers would stumble. The phrase would fracture. The music would disappear, replaced by the awkward, halting movements of a person consciously operating a complicated machine.
What the pianist attends to is the music. The fingers, the keys, the hammers, the strings — the entire mechanical chain from intention to sound — function as what Polanyi called subsidiary elements. She attends from them to the musical meaning that emerges through their integrated operation. The subsidiary elements are not absent from her awareness. She is aware of them — she must be, for without awareness of her fingers' position, pressure, and movement, she could not play at all. But the awareness is subsidiary: it serves the focal awareness of the music without itself becoming the object of attention. The moment it does become the object of attention — the moment she shifts from attending from the fingers to attending to the fingers — the from-to structure inverts, and the skill dissolves.
Polanyi identified this from-to structure as the universal architecture of all knowing. Perception operates through it: the viewer attends from the retinal impressions, the cognitive schemas, the contextual expectations that constitute her subsidiary awareness to the focal awareness of the object she perceives. A face is not perceived as a collection of features — two eyes, a nose, a mouth, arranged in a particular geometry. It is perceived as a face, a unified focal gestalt that integrates the subsidiary features into a meaning that exceeds their sum. If the viewer shifts attention to the individual features — if she tries to specify what it is about the nose, the spacing of the eyes, the curve of the jaw that makes this face recognizable — the recognition wavers. The subsidiary elements, made focal, lose their integrative power.
Diagnosis operates through the same structure. The physician attends from the patient's complexion, breathing, posture, speech patterns, and a hundred other subsidiary clues to the focal judgment that something is wrong. The clues are not processed sequentially, evaluated individually, and combined by explicit logic into a diagnosis. They are integrated tacitly — absorbed into a focal awareness that emerges from their joint operation without the physician being able to specify how the integration occurred or which clues contributed what. The integration is the skill. The skill is the from-to movement. And the movement depends on the subsidiary elements remaining subsidiary.
The implications for AI-augmented work are more specific and more actionable than the general observation that AI disrupts tacit knowledge. The from-to structure reveals precisely how the disruption operates and under what conditions it can be mitigated.
Consider the software engineer working with Claude Code. Before AI, her workflow involved hours of implementation — writing syntax, debugging errors, managing dependencies, configuring environments. This implementation work was not merely tedious overhead. It was the substrate through which she developed subsidiary awareness of the system she was building. Each debugging session deposited a layer of understanding about how the components fit together. Each dependency conflict taught her something about the architecture that no documentation specified. Each configuration challenge revealed a relationship between system elements that she had not previously seen. The implementation was subsidiary — she did not attend to it for its own sake but attended from it to the focal work of building a product. And the subsidiary awareness it generated was the foundation on which her architectural judgment rested.
When Claude Code takes over the implementation, it removes the subsidiary elements from which the engineer attends. The engineer now attends from a different set of subsidiaries — the prompts she crafts, the outputs she reviews, the high-level design decisions she makes — to the same focal goal: a working product. The from-to structure is preserved in form. But the content of the subsidiary awareness has changed. The engineer no longer develops the tacit understanding that implementation work produced. She develops a different tacit understanding — an understanding of how to direct the tool, how to evaluate its outputs, how to decompose a problem into prompts that produce useful results. Whether this new subsidiary awareness is adequate to support the same quality of focal judgment is the question that Polanyi's framework forces upon the AI discourse.
The honest answer is: sometimes yes, sometimes no, and the conditions that determine which are not well understood. The senior engineer in Trivandrum who discovered that the remaining twenty percent of his work was "everything" had already built the tacit ground that decades of implementation work produced. His subsidiary awareness was deep, layered, and stable. The AI tool added a new subsidiary element — the capacity for rapid implementation — to an already-rich foundation. His from-to structure was enhanced, not impoverished. He attended from a richer set of subsidiaries to a more ambitious set of focal goals. The tool extended his capacity because the capacity was already there to be extended.
The junior engineer who has never debugged by hand, who has never built a dependency tree from scratch, who has never spent an afternoon tracing a null pointer through six layers of abstraction, occupies a different position. Her subsidiary awareness is thin. The tacit ground that implementation work would have produced has not been laid down. The AI tool does not add a new subsidiary element to a rich foundation. It substitutes for the foundation itself. She attends from the tool's outputs — which she lacks the tacit ground to evaluate — to focal goals she cannot fully understand, because the understanding that would make them intelligible has not been developed through the engagement that the tool has replaced.
This is not a speculative scenario. Segal describes it happening in real time: the woman on his engineering team who had never written frontend code building a complete user-facing feature in two days using Claude Code. The accomplishment is real. The feature works. The user interface responds. The product ships. But the from-to structure through which the feature was built is qualitatively different from the from-to structure through which a frontend engineer with ten years of experience would have built it. The experienced engineer would have attended from her tacit awareness of CSS behavior, browser rendering quirks, accessibility requirements, and the thousand small conventions that make an interface feel right to the focal goal of a working feature. The engineer without frontend experience attended from Claude's outputs to the same focal goal. The focal goal was achieved. But the subsidiary awareness that would have informed the ten thousand micro-decisions embedded in the feature — choices about spacing, timing, interaction patterns, edge cases — was not the engineer's own. It was the tool's.
Whether this matters depends on what the feature is for and what happens next. If the feature is a prototype — a proof of concept that will be refined by experienced practitioners — the thin subsidiary awareness may be adequate. The prototype serves its purpose as a focal object that communicates the builder's vision. If the feature is a production system that users will depend on, the thin subsidiary awareness is a risk, because the micro-decisions embedded in the feature have not been evaluated by a sensibility trained to recognize when something is wrong. And if the engineer builds on this experience — if she takes the feature as evidence that she "knows" frontend development — the thin subsidiary awareness compounds. Each feature built without the tacit ground reinforces the illusion of competence while the actual competence remains undeveloped.
The from-to structure also illuminates the specific phenomenology of flow and disruption in AI-augmented work. The builder in flow has achieved a stable from-to structure: the tool is subsidiary, the creative work is focal, and the movement between them is smooth and self-sustaining. Each prompt produces a response that advances the focal goal. Each response generates the next prompt. The cycle accelerates. The builder loses track of time because the from-to movement has become automatic — the subsidiary elements are operating smoothly, the focal awareness is fully engaged, and the self-consciousness that monitors the boundary between the two has dropped away.
Disruption occurs when the tool's output fails — when the code doesn't work, when the analysis is wrong, when the passage breaks under scrutiny. At that moment, the tool shifts from subsidiary to focal. The builder must attend to the tool rather than through it. The from-to structure inverts. The flow breaks. And the builder experiences the specific cognitive jolt that accompanies every inversion of the from-to structure — the same jolt the pianist experiences when she notices her fingers, the driver when she notices the steering wheel, the speaker when she notices her grammar. The tool has become opaque, and the opacity demands the kind of conscious attention that the from-to structure, when functioning properly, makes unnecessary.
The Deleuze fabrication that Segal describes is a paradigmatic case of failed transparency. The AI produced a passage that maintained the appearance of subsidiary reliability — it sounded right, it felt like insight, it advanced the focal argument — while being substantively wrong. The from-to structure was preserved in form: Segal attended from the passage to the argument it supported. But the subsidiary element was unreliable, and the unreliability was concealed by the smoothness of the output. The failure was detected only when Segal broke the from-to structure — when he shifted from attending through the passage to attending to it, subjecting it to the kind of focal scrutiny that the passage's subsidiary smoothness had suppressed.
Polanyi's framework suggests that this kind of failure is not incidental but structural. The from-to structure requires trust in the subsidiary elements. Trust suppresses scrutiny. And a tool that is capable of confident wrongness — a tool whose failure modes are indistinguishable from its success modes — exploits this suppression in ways that no mechanical tool can. The cane does not hallucinate pavement. The piano does not hallucinate notes. But the large language model hallucinates knowledge with the same fluency with which it produces it, and the from-to structure that enables skilled use of the tool is the same structure that makes the hallucinations difficult to detect.
The practical prescription that emerges from this analysis is not that builders should refuse to indwell their AI tools. Indwelling is what makes the tools useful. The prescription is that builders must develop the tacit ground independently of the tools — that the subsidiary awareness from which they attend must be built through the kind of direct engagement with the domain that the tools are designed to bypass. The senior engineer can safely indwell Claude Code because his subsidiary awareness was built before the tool arrived. The junior engineer cannot safely indwell it because her subsidiary awareness has not been built at all.
This has implications that the current discourse has not fully absorbed. The question is not whether AI tools should be used in education, in professional training, in the developmental trajectory from novice to expert. They will be. The question is at what point in the developmental trajectory the tools should be introduced. Polanyi's from-to structure suggests that the tools should be introduced after the tacit ground has been built — after the novice has developed enough subsidiary awareness to evaluate the tool's outputs independently. Introducing the tool before the ground is laid produces practitioners whose from-to structure rests on the tool rather than on their own embodied understanding — practitioners who can produce competent outputs without possessing the tacit knowledge to evaluate whether those outputs are genuinely competent or merely statistically probable.
The from-to structure is not a philosophical curiosity. It is the architecture of all skilled performance, all genuine understanding, all productive tool use. The AI moment is restructuring this architecture for millions of knowledge workers simultaneously, and the restructuring is producing two radically different outcomes depending on the depth of the tacit ground that was in place before the restructuring began. Those with deep ground are experiencing genuine amplification — an expansion of what they can attend to because the subsidiary foundation supports the expansion. Those without it are experiencing what looks like amplification but is actually substitution — a replacement of tacit knowing with tool dependency that will reveal its fragility only when the tool fails or the situation demands judgment that the tool cannot provide.
The distinction between these two outcomes is invisible to the market, invisible to the productivity metrics, and invisible to the builders themselves until the moment of failure arrives. It is visible only to the framework that Polanyi built — the framework that insists on asking not what the output looks like but what the knower knows, not what the tool produces but what the builder attends from when the tool is in her hands.
Every act of understanding involves a sacrifice. Something must be given up so that something else can appear. The pianist who hears the music has surrendered conscious awareness of her fingers. The reader who follows the argument has surrendered conscious awareness of the typography, the syntax, the specific lexical choices through which the argument is conveyed. The scientist who grasps the significance of an experimental result has surrendered conscious awareness of the individual data points, the calibration procedures, the instrumental noise that the result integrates and transcends. In each case, the subsidiary elements — the fingers, the words, the data — have been sacrificed to the focal meaning that emerges through their integration. The sacrifice is not a loss. It is the condition of understanding. But it is a sacrifice nonetheless, and the AI moment is forcing a reckoning with what happens when the things being sacrificed are the very things that build the capacity for understanding in the first place.
Polanyi's distinction between subsidiary and focal awareness is not a binary. It is a gradient — a continuum along which elements of awareness can be shifted, with consequences that ripple through the entire structure of knowing. The medical student learning to use a stethoscope initially attends to the instrument: the cold metal against her hand, the earpieces that pinch, the unfamiliar pressure of the chest piece against the patient's skin. The stethoscope is focal. She hears noise — a confusion of thumps, whooshes, and ambient sounds that she cannot parse. Over weeks and months of practice, the stethoscope gradually shifts from focal to subsidiary. She stops attending to the instrument and begins attending through it. The noise differentiates into distinct sounds: the lub-dub of normal valve closure, the murmur that indicates regurgitation, the rub that suggests pericarditis. The sounds were always there. What changed was not the input but the structure of her awareness — the reorganization of attention that moved the instrument from focal to subsidiary and, in doing so, made the clinical meaning audible.
This reorganization cannot be shortcut. The student cannot skip the weeks of confused listening and arrive directly at skilled auscultation. The confusion is not a bug in the learning process. It is the process. The weeks of attending to noise, of failing to distinguish signal from background, of hearing the same chest and getting it wrong — these weeks are the period during which the subsidiary awareness is being built. Each failed attempt deposits a thin layer of discrimination. Each corrected error adjusts the perceptual filter. The accumulation of these deposits and adjustments is what eventually produces the reorganization — the moment when the stethoscope disappears from awareness and the heart sounds arrive as meaningful patterns rather than undifferentiated noise.
The deposit metaphor that Segal borrows from geology — every hour of debugging depositing a thin layer of understanding — is Polanyi's subsidiary-focal distinction expressed in material terms. The layers are layers of subsidiary awareness. Each one is too thin to be noticed on its own. Each one is too small to constitute, by itself, a recognizable advance in competence. But the accumulation, over months and years, produces a substrate that is qualitatively different from anything that could be constructed by assembling the individual layers explicitly. The substrate is not a list of facts about how codebases behave. It is a perceptual sensitivity — a capacity to attend from the accumulated understanding to the focal judgment that something is wrong, or right, or interesting, or dangerous. The sensitivity is tacit precisely because it is subsidiary: it functions only when it is not the object of attention, only when it operates below the threshold of conscious awareness, supporting the focal judgment without intruding into it.
AI tools interact with this subsidiary-focal structure in a way that is more complex and more ambiguous than either the triumphalists or the elegists acknowledge. The interaction has at least three distinct modes, each with different consequences for the builder's capacity for genuine understanding.
In the first mode, the AI tool handles subsidiary elements that the builder has already internalized. The senior engineer who uses Claude Code to implement a feature she has already designed in her mind is delegating subsidiary work that she could do herself. The implementation — the syntax, the debugging, the dependency management — was subsidiary to her design judgment before the tool arrived. The tool merely automates what was already beneath her focal attention. Her from-to structure is preserved intact. She attends from the same tacit ground — her accumulated architectural understanding — to the same focal goal — a working product. The tool has not removed anything from her subsidiary awareness. It has freed her attention from mechanical tasks, allowing her to bring the full force of her tacit knowledge to bear on the design decisions that constitute her genuine contribution.
This is the mode that produces the legitimate exhilaration Segal describes. The twenty-fold productivity multiplier, the imagination-to-artifact ratio collapsing, the builder seeing her intention realized in real time — these are the experiences of a person whose subsidiary ground is deep enough to support the expanded focal ambition that the tool enables. The tool has not hollowed out her knowing. It has amplified it. The amplification is real because the tacit ground is real.
In the second mode, the AI tool handles subsidiary elements that the builder has not internalized but would have, through the normal developmental trajectory of professional practice. The junior engineer who uses Claude Code to write frontend code she has never learned is delegating subsidiary work that she has not yet done. The delegation bypasses the developmental process through which the subsidiary awareness would have been built. The feature gets built. The product ships. But the perceptual substrate — the accumulated sensitivity to CSS behavior, browser rendering, accessibility patterns, the thousand small discriminations that a frontend engineer develops through years of practice — has not been deposited. The junior engineer's from-to structure rests on the tool's outputs rather than on her own subsidiary awareness. She attends from the tool to the product, and the tool, rather than her own understanding, determines the quality of the subsidiary elements that support the focal outcome.
This mode is where the genuine risk lives. The risk is not that the current output will be bad. It may be excellent — Claude Code is capable of producing frontend code that meets or exceeds the explicit standards of competence. The risk is that the developmental trajectory has been interrupted. The junior engineer has arrived at the destination without having traveled the road, and the road, in Polanyi's framework, is not merely the means of getting to the destination. It is the process through which the traveler develops the perceptual sensitivity, the tacit understanding, the subsidiary awareness that makes the destination meaningful. The engineer who has arrived without traveling does not know the landscape. She knows the coordinates.
In the third mode, the AI tool generates subsidiary elements that are genuinely new — connections, frameworks, possibilities that neither the builder nor the tool could have produced independently. This is the mode that Segal describes when he recounts the moment Claude connected his question about friction with the example of laparoscopic surgery — a connection that "neither of us owns." In Polanyi's terms, this is emergent meaning: focal awareness that arises from a subsidiary integration that neither contributor could have performed alone. The human brings tacit understanding of the problem domain. The machine brings statistical patterns drawn from a vast training corpus. The collision of the two produces a focal insight that transcends both inputs.
This third mode is the most interesting and the most philosophically challenging. It is the mode in which the human-AI collaboration most resembles the convivial process of discovery that Polanyi described — the process in which a community of knowers, each contributing subsidiary awareness that the others lack, produces focal insights that none could have reached independently. The collaboration is genuinely generative. The emergent meaning is real. And the from-to structure is operating at a level that extends both the human's tacit ground and the machine's pattern-matching capacity into territory that neither could reach alone.
But even in this third mode, Polanyi's framework introduces a crucial qualification. The emergent insight must be evaluated by someone who possesses the tacit ground to judge its validity. The laparoscopic surgery connection was generative — but its validity depended on Segal's capacity to evaluate whether the analogy held, whether the structural parallel was genuine, whether the connection illuminated or merely decorated the argument. The machine produced the connection. The human evaluated it. And the evaluation was an exercise of the personal knowledge — the committed, tacit, connoisseurial judgment — that Polanyi spent his career defending as the irreducible foundation of all genuine knowing.
Without that evaluation, the emergent insight is indistinguishable from the emergent fabrication. The Deleuze passage was also a connection that Claude produced — a link between Csikszentmihalyi and a concept the machine attributed to Deleuze. The connection was elegant. It advanced the argument. It emerged from the collision of the human's question and the machine's associative reach. And it was wrong. The difference between the laparoscopic surgery insight and the Deleuze fabrication was not in the process that produced them — both emerged from the same mode of human-AI collaboration — but in the evaluative judgment that the builder brought to bear after the emergence. One connection was evaluated against a tacit ground of genuine understanding and found valid. The other was initially accepted without adequate evaluation and found, the next morning, to be hollow.
The subsidiary-focal distinction clarifies a confusion that runs through the AI discourse: the confusion between the quality of the output and the quality of the knowing that produced it. The market evaluates outputs. It measures the brief's competence, the code's functionality, the analysis's coherence. These are focal evaluations — assessments of the thing that appears at the top of the from-to structure. But the quality of the output, in Polanyi's framework, is not the whole story. The quality of the knowing — the depth of the subsidiary awareness from which the output emerged, the strength of the personal commitment that the output embodies, the reliability of the tacit ground that supports the focal achievement — is what determines whether the output represents genuine understanding or its statistical simulation.
Two briefs can be identical at the focal level — same arguments, same citations, same structure, same persuasive force — and radically different at the subsidiary level. One was produced by a lawyer who attended from years of close reading, courtroom experience, and absorbed sensitivity to judicial reasoning to the focal goal of a persuasive argument. The other was produced by a machine that attended from statistical patterns in training data to the same focal goal. The briefs are the same. The knowing is not. And the difference in knowing will manifest — not in the current brief but in the next one, the one that requires the lawyer to recognize that the precedent has shifted, that the judge's reasoning has evolved, that the legal landscape has changed in ways the training data does not reflect. The lawyer with deep subsidiary awareness will feel the shift. The lawyer without it will not — and the machine will not tell her, because the machine does not know what it does not know.
The focal product is what the world sees. The subsidiary ground is what determines whether the product can be trusted to hold when the world changes. The AI moment is producing an enormous quantity of focal products — briefs, code, analyses, designs, strategies — of impressive explicit quality. The question that Polanyi's framework forces is whether the subsidiary ground that supports these products is being maintained, deepened, and transmitted to the next generation of practitioners — or whether the ground is eroding, silently and invisibly, beneath a surface that looks as solid as ever.
Polanyi would not have expected the market to answer this question. The market evaluates the focal product. The subsidiary ground is invisible to the market's metrics. The market cannot distinguish between the brief produced by deep subsidiary awareness and the brief produced by shallow tool dependency, because the distinction exists in a dimension that the market does not measure. The distinction is tacit. It resists articulation. It manifests only under conditions — changed circumstances, novel problems, situations that depart from the training data — that the market's current evaluation framework does not test for.
The institutions that protect the subsidiary ground — the educational systems, the apprenticeship structures, the communities of practice that build and transmit tacit knowledge across generations — are the institutions most at risk in the current transition. They are at risk not because anyone has decided to destroy them but because the market, evaluating only the focal product, sees no reason to invest in the subsidiary process. The product is competent. The process is expensive. The market eliminates the expensive process and celebrates the competent product. And the celebration continues until the day the product fails in a way that only the subsidiary ground could have prevented — a failure that no one sees coming because no one is looking at the ground.
---
In 1968, two years after the publication of The Tacit Dimension, Michael Polanyi published an essay titled "Life's Irreducible Structure" in the journal Science. The essay argued, with the precision of a physical chemist who understood reductionism from the inside, that higher levels of reality — life, consciousness, culture, meaning — emerge from lower levels but are irreducible to them. Biology emerges from chemistry but cannot be explained entirely in chemical terms. Consciousness emerges from neuroscience but cannot be captured entirely in neural descriptions. Culture emerges from individual behavior but cannot be predicted from the study of individuals in isolation. Each higher level has its own principles, its own regularities, its own form of organization that is not present in the components from which it emerges.
Polanyi drew a specific analogy to make the point concrete. The rules of grammar do not determine the content of a sentence. The laws of physics do not determine the design of a machine. The chemistry of ink does not determine the meaning of a text. At each level, the lower level provides the material conditions — the boundary conditions, in Polanyi's term — within which the higher level operates. But the higher level is not determined by the lower level. It harnesses the lower level's regularities while introducing organizational principles of its own. The machine harnesses the laws of physics. The sentence harnesses the rules of grammar. The meaning harnesses the chemistry of ink. But the machine is not reducible to physics, the sentence is not reducible to grammar, and the meaning is not reducible to chemistry. Each higher level has what Polanyi called a "dual control" — governed both by the laws of the lower level and by the organizational principles of the higher level, with neither set of laws sufficient to explain the whole.
The concept of emergence illuminates the authorship question that Segal raises in The Orange Pill with such candor and uncertainty. When Segal describes the moments that "keep him awake" — the moments when Claude made a connection he had not made, when the collaboration produced an insight that "neither of us owns" — he is describing emergent meaning: a higher-level pattern that arises from the interaction of human tacit knowledge and machine pattern-matching but that is irreducible to either contribution.
The emergence is real. It is not a trick of perception or a failure of attribution. The laparoscopic surgery connection genuinely did not belong to either Segal or Claude. It belonged to the interaction — to the higher-level system composed of a human mind with specific tacit knowledge and a machine with specific pattern-matching capabilities, operating together in a specific conversational context. The connection emerged from the from-to structure of the collaboration: Segal attended from his tacit understanding of the friction problem to the focal question of what replacing one kind of friction reveals, and Claude attended from its statistical associations across a vast training corpus to the focal task of producing a relevant response. The collision of these two from-to movements produced something that neither could have produced alone.
Polanyi would insist on two points about this emergence. The first is that it is genuine — not a mystification or a metaphor but a real property of complex systems in which higher-level organization arises from the interaction of lower-level components. The meaning that emerges from human-AI collaboration is as real as the meaning that emerges from a conversation between two humans, or from the interaction of instruments in a jazz ensemble, or from the collaboration of researchers in a laboratory. Emergence is not magic. It is a structural feature of systems whose components interact in ways that produce organizational properties not present in any individual component.
The second point is that emergence does not eliminate the need for evaluation by a mind capable of recognizing genuine meaning. A jazz ensemble produces emergent music, but the music still requires ears trained enough to distinguish the inspired from the chaotic. A laboratory collaboration produces emergent hypotheses, but the hypotheses still require the scientific judgment of someone who knows the field well enough to evaluate their plausibility. And a human-AI collaboration produces emergent insights, but the insights still require the personal knowledge — the tacit ground, the connoisseurship, the committed evaluation — of someone who can distinguish the genuine connection from the hallucinated one.
The emergence does not bypass the evaluator. It demands one. And the evaluator must possess the tacit ground that makes evaluation possible — the accumulated subsidiary awareness from which she can attend to the emergent meaning and judge whether it holds. Without the evaluator, the emergent meaning and the emergent nonsense are indistinguishable. Both arise from the same process. Both look the same at the surface. Only the evaluative judgment of a personal knower — someone who has committed herself to the domain, who has built the tacit sensitivity through years of engagement, who can feel whether the connection is right — can separate the one from the other.
This is why the authorship question that Segal raises cannot be resolved by decomposing the output into individual contributions. The emergent meaning exists at a level that is irreducible to the contributing levels. Asking whether Segal or Claude "owns" the laparoscopic surgery insight is like asking whether the trumpet or the piano "owns" the harmonic tension that arises from their interplay in a jazz performance. The tension belongs to the interaction. It exists at the emergent level. It cannot be decomposed without being destroyed.
But the irreducibility of the emergent level does not eliminate the asymmetry between the contributors. The human brings something the machine does not: personal commitment, tacit knowledge, the capacity for evaluative judgment that distinguishes genuine emergence from spurious coincidence. The machine brings something the human does not: the capacity to traverse statistical associations across a corpus of human knowledge vaster than any individual could absorb. The collaboration is genuine, but it is not symmetric. The human's contribution is irreplaceable in a way the machine's is not — not because the human is superior but because the human supplies the tacit ground and the evaluative commitment without which the emergent meaning cannot be recognized as meaning at all.
Polanyi's concept of emergence also clarifies a subtler point about what happens to professional communities when AI enables solo production. The author of The Orange Pill documents the emergence of the solo builder — the individual who, armed with AI tools, can produce revenue-generating products without teams, without institutional backing, without the collaborative infrastructure that previously constituted the minimum viable unit of significant production. The market celebrates this as efficiency. Polanyi would identify it as a loss of emergent capacity.
A team is not merely a collection of individuals who happen to work in proximity. It is an emergent system whose properties — its collective judgment, its shared tacit understanding, its capacity to solve problems that no individual member could solve alone — arise from the interaction of its members. The team knows things that none of its members know individually. The senior engineer's architectural intuition, the junior engineer's fresh perspective, the designer's sensitivity to user experience, the product manager's sense of market timing — these individual contributions interact, collide, challenge each other, and produce an emergent understanding that is qualitatively different from anything any contributor could produce alone.
When the solo builder replaces the team, the emergent level disappears. The solo builder may be extraordinarily productive. She may produce outputs that match or exceed the outputs of the team she replaced. But she cannot produce the emergent understanding that the team's interaction generated — the understanding that arose not from any individual mind but from the from-to structure of collaborative knowing, in which each member attended from her own tacit ground to a shared focal goal, and the integration of these multiple from-to movements produced insights that no single movement could reach.
The loss is invisible to the productivity metrics. The solo builder ships faster. The solo builder costs less. The solo builder does not require the coordination overhead, the meeting time, the interpersonal friction that teams inevitably produce. But the solo builder also does not produce the emergent judgment that arises from the specific friction of collaborative disagreement — the moment when the designer pushes back on the engineer's implementation, not because the implementation is wrong but because it does not feel right, and the ensuing argument produces a solution that neither the designer nor the engineer could have conceived independently. That moment is emergent. It arises from the interaction. It cannot be produced by a solo builder, no matter how productive, because it requires the collision of multiple tacit grounds, multiple from-to structures, multiple perspectives that no individual mind contains.
The implications extend beyond the workplace. Polanyi argued that knowledge itself is a social achievement — that the community of knowers, not the individual knower, is the primary locus of intellectual life. The scientist discovers alone, but her discovery has meaning only within the community that evaluates it, challenges it, builds on it, and integrates it into the shared body of understanding that constitutes a scientific discipline. The community is not merely the audience for individual discoveries. It is the emergent system within which discoveries become knowledge — the higher-level organization that transforms individual insights into collective understanding.
The AI-enabled dissolution of professional communities — the replacement of teams with solo builders, the substitution of AI feedback for peer review, the erosion of the collaborative structures through which tacit knowledge is transmitted — threatens this emergent level. The individual builder may be more productive. But the professional community, the emergent system that evaluates, integrates, and transmits knowledge across generations, is weakened. And the weakening is invisible because the individual outputs continue to appear competent, the same way a building continues to appear solid even as the foundation beneath it erodes.
Polanyi's framework does not counsel against AI-enabled individual production. It counsels against the illusion that individual production is a sufficient substitute for communal knowing. The emergent level matters. The team matters. The community of practice matters. Not because they are efficient — they are often spectacularly inefficient — but because they produce a form of understanding that no individual, however amplified, can produce alone. The protection of this emergent level — the maintenance of the collaborative structures through which tacit knowledge is built, evaluated, and transmitted — is not a sentimental attachment to the old way of working. It is an epistemological necessity, grounded in the structural features of how human beings actually know.
---
Science, Polanyi argued, is not what the textbooks describe. The textbook version presents science as a method — a procedure for generating hypotheses, designing experiments, collecting data, and drawing conclusions that any competent practitioner can follow to arrive at reliable knowledge. The method is impersonal, the results are objective, and the knowledge produced belongs to no one because it belongs to everyone who follows the same procedure. The textbook version is a lie. Not because the method is wrong — it is a useful approximation — but because it omits the dimension of scientific practice that makes the method work: the community of knowers within which the method is enacted, evaluated, and given its meaning.
Polanyi called this community convivial — using the word in its root sense of "living together." The scientific community is not merely a collection of individuals who happen to pursue similar inquiries. It is a living social organism within which knowledge is produced, evaluated, certified, and transmitted through processes that are irreducibly social. The scientist who publishes a finding submits it not to an abstract tribunal of logic but to a specific community of fellow practitioners who possess the tacit knowledge to evaluate it — who can sense whether the experimental design is sound, whether the data supports the conclusion, whether the finding is significant or trivial, whether the researcher has exercised the judgment that the community's standards demand. This evaluation is not fully articulable. The reviewer does not apply a checklist. She exercises connoisseurship — the cultivated capacity to distinguish the excellent from the adequate, the genuine from the spurious, the significant from the merely competent.
The community does not merely evaluate. It transmits. The most important function of the scientific community, in Polanyi's account, is the transmission of tacit knowledge from one generation of practitioners to the next. This transmission does not occur through textbooks, lectures, or explicit instruction. It occurs through apprenticeship — through the novice working alongside the master, absorbing through sustained proximity the tacit skills, the evaluative sensitivities, the ways of seeing and thinking that constitute the master's expertise. The apprentice watches the master design an experiment and gradually develops the sense of what makes a good design. She watches the master evaluate data and gradually acquires the ability to distinguish signal from noise. She watches the master follow a hunch and gradually learns to recognize the quality of intellectual intuition that indicates a promising direction. None of this learning is explicit. It cannot be reduced to rules or transmitted through documentation. It is conveyed through the specific social relationship of shared practice — through the daily, intimate, extended engagement of novice and master in the communal activity of knowing.
This conception of knowledge-as-community has immediate and uncomfortable implications for the AI transition. The AI tools that are transforming knowledge work operate on individuals. They augment the individual builder's capacity. They expand the individual practitioner's reach. They amplify the individual creator's productivity. But they do not, and cannot, replicate the communal structures through which tacit knowledge is transmitted, evaluated, and sustained.
The apprenticeship relationship is the most direct casualty. When the junior practitioner can obtain competent answers from an AI tool, the occasions for consulting the senior practitioner diminish. The junior lawyer who would have spent hours in the senior partner's office, absorbing through proximity the tacit dimensions of legal judgment — the sense of when a case is strong, the instinct for which arguments will resonate with which judges, the ethical sensibility that distinguishes aggressive advocacy from dishonesty — now obtains competent drafts from Claude and presents them for review. The review still happens. But the review is a thin interaction — a focal evaluation of the product — compared to the thick interaction of apprenticeship, which involved subsidiary exposure to the master's entire way of engaging with the work.
The difference between review and apprenticeship is the difference between evaluating a product and absorbing a practice. In review, the senior evaluates the junior's output and provides corrections. The corrections are explicit — change this argument, cite that case, restructure this section. In apprenticeship, the junior watches the senior work and absorbs the tacit sensibility that underlies the senior's choices. The absorption is not explicit. The junior does not learn rules. She learns a way of seeing, a way of thinking, a way of feeling her way through the material that no set of corrections can convey. The master's tacit knowledge is transmitted not through instruction but through the sustained proximity of shared practice.
When AI tools reduce the occasions for this proximity, the transmission is disrupted. The junior still learns. But what she learns is different. She learns how to use the tool. She learns how to prompt effectively, how to evaluate outputs, how to iterate toward a satisfactory result. These are genuine skills. But they are not the skills that apprenticeship transmits — the deep, domain-specific, tacitly acquired sensibility that constitutes the difference between a competent technician and a genuine practitioner.
Segal describes this disruption obliquely when he notes the dissolution of traditional team structures — designers writing code, engineers building interfaces, the boundaries between roles blurring as AI tools make cross-domain work accessible to individuals who lack domain-specific training. The blurring is celebrated as democratization, and the celebration has merit. The barriers between domains were often artifacts of implementation cost rather than genuine intellectual boundaries, and their dissolution enables forms of creative integration that the old structure could not support. But the blurring also dissolves the professional communities within which domain-specific tacit knowledge was transmitted. The designer who writes code does not participate in the community of programmers who would have evaluated her code, challenged her design choices, and transmitted the tacit standards of the programming profession. She participates in a community of one — herself, with her AI tool at hand — and the tool, however capable, does not possess the tacit knowledge that a community of practitioners transmits.
Polanyi's concept of mutual authority illuminates what is at stake. In a healthy community of practice, authority is distributed. The senior practitioner has authority over the junior in matters of professional judgment — she can recognize quality, identify errors, and evaluate competence in ways the junior cannot. But the junior has authority too — the authority of fresh perspective, of unfamiliar questions, of the outsider's ability to see what the insider has stopped noticing. The community functions through the interplay of these authorities — through the mutual challenge and correction that keeps the community's standards alive and responsive to new conditions.
AI tools disrupt this distribution. The tool has no authority and confers no authority. It produces outputs that the user must evaluate, but it does not participate in the evaluative relationship that constitutes a community of practice. It does not push back when the user's judgment is wrong. It does not challenge assumptions that the community would challenge. It does not transmit the ethical commitments, the professional standards, the evaluative sensibilities that constitute the community's shared tacit knowledge. The user and the tool exist in a relationship that is productive but not convivial — a relationship that generates outputs without generating the communal understanding that makes outputs meaningful.
The Berkeley researchers' finding that AI tools reduced delegation and blurred role boundaries captures one dimension of this dissolution. In the pre-AI workplace, delegation was a social act — an occasion for the transfer of tacit knowledge from the delegator to the delegate. The senior who delegated a task to the junior did not merely assign work. She communicated expectations, provided guidance, modeled standards, and created the conditions for the kind of thick interaction through which tacit knowledge is transmitted. When the senior delegates to AI instead, the social act is replaced by a technical one. The work gets done. The transmission does not happen.
The dissolution of communities of practice has consequences that extend beyond the individual workplace. Professions are communities of practice at scale. The legal profession, the medical profession, the engineering profession — each is a community that maintains standards, transmits expertise, certifies competence, and sustains the evaluative culture within which individual practice acquires its meaning. The standards are not merely rules. They are tacit understandings about what constitutes quality, what constitutes ethics, what constitutes the kind of practice that the community recognizes as belonging to the profession. These understandings are conveyed through the thousand daily interactions of professional life — the corridor conversation, the case conference, the peer review, the mentorship relationship, the shared experience of navigating difficult situations together.
When AI tools enable individual practitioners to operate independently of these communal structures — to produce competent work without submitting it to the community's evaluation, to develop skills without absorbing the community's standards, to build careers without participating in the community's transmission of tacit knowledge — the profession as a community erodes. The individual practitioners may remain productive. The communal understanding that made the profession something more than a collection of individuals pursuing similar occupations is diminished.
Polanyi would not have been surprised. He spent years arguing that knowledge is not the property of individuals but the achievement of communities — that the tacit dimension of all knowing is sustained by the social structures within which knowing occurs. The AI moment is testing this argument by providing individuals with tools powerful enough to simulate the outputs of communal knowledge without participating in the communal process that produces it. The simulation is impressive. The outputs are competent. But the communal process — the transmission of tacit knowledge, the mutual evaluation of professional judgment, the shared commitment to standards that no individual can enforce alone — is something that individual productivity, however amplified, cannot replace.
The challenge for the current moment is to build institutional structures that preserve the convivial dimension of knowledge work even as AI tools make individual production increasingly sufficient. This means maintaining apprenticeship relationships not because they are efficient — they are not — but because they are the medium through which tacit knowledge is transmitted. It means sustaining communities of practice not because they are productive — a solo builder with AI is often more productive — but because they are the social structures within which the evaluative standards that distinguish genuine knowledge from its simulation are maintained. And it means recognizing that the AI tool, however capable, is not a member of the community. It is a tool that the community uses. The distinction matters because the community's knowledge is not in the tool. It is in the relationships between the people who use the tool — in the shared practice, the mutual evaluation, the convivial knowing that makes the tool's outputs meaningful.
---
All knowledge rests on trust. Not the sentimentalized trust of self-help literature — the trust that everything will work out, that people are fundamentally good, that the universe is benevolent. The trust that Polanyi identified is harder-edged and more structural. It is the trust that the knower places in the framework within which her knowing occurs: trust in the reliability of her senses, trust in the competence of her teachers, trust in the validity of the methods her community employs, trust in the institutional structures that sustain the practice of knowing. Without this trust — which Polanyi called the fiduciary framework — no knowledge is possible. The scientist who doubts the reliability of her instruments cannot produce experimental results. The student who doubts the competence of her teachers cannot learn. The citizen who doubts the validity of every public claim cannot participate in democratic life. Some framework of trust must be accepted before any inquiry can begin.
The fiduciary framework is not blind faith. It is not the uncritical acceptance of whatever one is told. It is what Polanyi called a responsible commitment — a decision to trust that is made with awareness of the risk, that is revisable in the light of new evidence, but that must be made before the evidence can be gathered. The scientist commits to her instruments before the experiment begins. The student commits to her teachers before the lesson starts. The citizen commits to some framework of shared reality before the political conversation becomes possible. The commitment is a precondition of the knowing, not a conclusion derived from it. It is fiduciary in the legal sense: it involves accepting a responsibility, entering a relationship of trust, staking something on the reliability of a framework that cannot be fully verified in advance.
The AI moment is disrupting the fiduciary framework of knowledge work with a thoroughness that has not been adequately recognized. The disruption operates at multiple levels simultaneously, and each level compounds the others.
At the most basic level, the AI tool disrupts the fiduciary relationship between the practitioner and her own competence. Before AI, the lawyer who drafted a brief trusted her own knowledge of the law. She had read the cases. She had argued the motions. She had developed, through years of practice, the tacit sensitivity to legal reasoning that enabled her to construct arguments that held together under scrutiny. Her trust in the brief's quality was grounded in her trust in her own process — in her confidence that she had done the work, engaged with the material, exercised the judgment that her training had equipped her to exercise. The brief was an expression of her personal knowledge, and her trust in it was a trust in herself.
When the AI drafts the brief, this fiduciary relationship is disrupted. The lawyer reviews the output, signs her name, and presents it to the court. But her trust in the brief's quality is no longer grounded in her trust in her own process. It is grounded in her trust in the tool's process — a process she cannot inspect, cannot fully understand, and cannot evaluate against the tacit standards that her own engagement with the material would have produced. She trusts the brief because the brief looks right. But "looks right" is a focal evaluation — an assessment of the surface — that lacks the subsidiary depth of the evaluation she would have performed if she had done the work herself. She trusts the product without having trusted the process, and the gap between these two forms of trust is where the fiduciary framework fractures.
The fracture is compounded at the level of the professional relationship. The client trusts the lawyer. The trust is fiduciary in both the legal and the Polanyian sense: the client commits herself to the lawyer's competence, staking her legal interests on the lawyer's professional judgment. The client's trust is grounded in the assumption that the lawyer has done the work — that she has engaged with the case, exercised her judgment, brought the full force of her professional expertise to bear on the client's problem. When the lawyer delegates the work to AI, the client's trust rests on a foundation that is thinner than the client assumes. The lawyer has not done the work in the sense that the client's trust presupposes. She has reviewed the output of a tool whose process she cannot inspect. She has performed a focal evaluation of a product without having performed the subsidiary engagement that would give the focal evaluation its depth.
This is not a minor adjustment in the trust relationship. It is a structural change in the fiduciary framework of professional practice. The client trusts the lawyer. The lawyer trusts the tool. The tool trusts nothing — it has no fiduciary commitments, no personal stake, no evaluative framework grounded in years of engaged practice. The chain of trust that connects the client's interests to the quality of the legal work has been extended by a link that lacks the fiduciary character of the other links. The lawyer's trust in the tool is not the same kind of trust as the client's trust in the lawyer. The client's trust involves the assumption of personal commitment — the belief that the lawyer has engaged her professional self in the client's cause. The lawyer's trust in the tool involves no such assumption. The tool is not committed to the client's cause. It is not committed to anything. It produces outputs in response to prompts, and the outputs, however competent, carry no personal commitment, no fiduciary responsibility, no stake in their own reliability.
The disruption extends further. The professional community that evaluates the lawyer's work — the judges, the opposing counsel, the peers who read the briefs and assess their quality — has historically operated within a fiduciary framework that assumed the brief was the product of the lawyer's personal engagement with the law. The evaluation was not merely an assessment of the product. It was an assessment of the practitioner — of her competence, her judgment, her commitment to the standards of the profession. When the brief is produced by AI and reviewed by the lawyer, the evaluation is operating on a different object than it assumes. The community evaluates the brief as if it were an expression of the lawyer's personal knowledge. It is not. It is an expression of the tool's pattern-matching capacity, filtered through the lawyer's review. The community's evaluative framework — which is itself a fiduciary structure, grounded in shared assumptions about what professional work represents — no longer accurately describes what it is evaluating.
Polanyi would identify this as a corruption of the fiduciary framework — not because anyone has acted dishonestly but because the structural assumptions on which the framework rests have been altered without the framework itself being updated. The client still trusts the lawyer. The community still evaluates the brief. The professional standards still assume personal engagement. But the reality beneath these assumptions has changed, and the assumptions have not been revised to reflect the change. The fiduciary framework is operating on outdated premises, and the gap between the premises and the reality is where the epistemological risk accumulates.
The risk is not that any individual brief will be wrong. Most AI-generated briefs are competent. The risk is systemic. It is the risk that the entire fiduciary structure of professional practice — the chain of trust that connects client to lawyer to community to standard — will be gradually hollowed out as the personal engagement that grounds each link in the chain is replaced by tool-mediated production that lacks the fiduciary character the chain requires. The hollowing is invisible because the products remain competent. The briefs are filed. The cases proceed. The standards appear to be met. But the trust that connects the products to their supposed ground — the personal knowledge, the engaged judgment, the committed evaluation of a practitioner who has done the work — is eroding beneath the surface.
Segal identifies one dimension of this erosion when he describes the seductive quality of AI-generated prose — the way the smooth output can outrun the thinking, producing passages that sound like personal conviction without actually containing it. The seduction is a fiduciary risk. The reader trusts the text because the text presents itself as the expression of the author's considered judgment. When the text is actually the expression of the tool's pattern-matching, smoothed by the author's review but not grounded in the author's own wrestling with the material, the reader's trust is resting on a foundation that is thinner than the text's surface suggests.
The fiduciary framework extends to education, and the educational disruption may be the most consequential of all. The student trusts the teacher. The trust is fiduciary: the student commits herself to the teacher's guidance, accepting the teacher's evaluation of her work as a reliable indicator of her developing competence. The teacher's evaluation, in turn, is grounded in the assumption that the student's work represents the student's own engagement with the material — that the essay the student submits is the product of the student's struggle with the ideas, the student's effort to articulate understanding, the student's exercise of the developing judgment that the assignment was designed to cultivate.
When the student uses AI to produce the essay, the fiduciary framework collapses from both directions. The student's trust in the teacher's evaluation becomes hollow, because the evaluation is no longer measuring what it claims to measure — the student's developing competence — but the tool's capacity to produce competent-looking text. The teacher's trust in the student's work becomes unreliable, because the work no longer represents the personal engagement that the teacher's evaluation presupposes. The assignment itself — designed as a vehicle for the developmental process of writing, thinking, struggling, and learning — has been reduced to a transaction in which the student submits a product and the teacher evaluates it, with the developmental process that the assignment was designed to produce bypassed entirely.
Polanyi insisted that the fiduciary framework is not a luxury. It is a constitutive feature of all knowing. The scientist cannot produce knowledge without trusting her instruments, her methods, and her community. The student cannot develop knowledge without trusting her teachers, her assignments, and the evaluative process that certifies her progress. The professional cannot exercise knowledge without trusting her own competence, her community's standards, and the institutional structures that sustain professional practice. When the fiduciary framework is corrupted — when the trust that constitutes it rests on premises that no longer match reality — the knowledge that depends on it becomes unreliable, not because the outputs are wrong but because the process that connects the outputs to their supposed ground has been disrupted.
The repair of the fiduciary framework in the age of AI requires what Polanyi would call a responsible revision of commitments. The assumptions on which professional trust, educational trust, and institutional trust are grounded must be updated to reflect the reality of AI-mediated production. This means transparency about when and how AI tools are used. It means revising evaluative frameworks to account for the difference between personally produced work and tool-mediated work. It means building institutional structures that maintain the occasions for personal engagement — the direct, friction-rich, tacit-knowledge-building encounters with the material — even as AI tools make those encounters optional.
The alternative — maintaining the old fiduciary framework while the reality beneath it shifts — is a prescription for systemic epistemological failure. The trust holds until it doesn't. The framework functions until the gap between its assumptions and reality becomes too large to sustain. And the collapse, when it comes, will not be a single dramatic failure but a gradual, invisible erosion of the reliability of the knowledge that the framework was supposed to guarantee — an erosion that no one notices because the outputs continue to look competent, the metrics continue to show improvement, and the fiduciary structure continues to operate on premises that no longer describe the world it governs.
Discovery does not begin with a hypothesis. It begins with a disturbance — a sense, often inarticulate and always preliminary, that something is there. Not a conclusion waiting to be verified but a pattern waiting to be revealed. The scientist who follows a hunch into an experiment she cannot fully justify, the mathematician who pursues an approach she cannot yet defend, the engineer who redesigns a system because something about the current architecture feels wrong even though it passes every explicit test — each is exercising what Polanyi called the logic of tacit inference. Each has committed to an intimation before the intimation can be validated, because the validation is the discovery itself, and the discovery cannot occur without the prior commitment to pursue it.
This structure of discovery — intimation, commitment, pursuit, validation — is one of Polanyi's most radical claims, and it overturns the standard account of how knowledge advances. The standard account, inherited from positivism and codified in methodology textbooks, presents discovery as a process of conjecture and refutation: the scientist formulates a hypothesis, designs an experiment to test it, collects data, and evaluates whether the data supports or refutes the hypothesis. The process is explicit at every stage. The hypothesis is articulable. The experiment is specifiable. The data is measurable. The evaluation follows rules. There is no room in this account for the inarticulate, the pre-verbal, the felt sense that drives the scientist toward one line of inquiry rather than another.
Polanyi insisted that the standard account describes the last stage of discovery and mistakes it for the whole. The hypothesis that the scientist tests was not plucked from the space of all possible hypotheses by random selection. It was selected by a process of tacit evaluation — a process in which the scientist's accumulated understanding of the domain, her embodied sensitivity to patterns in the data, her intuitive sense of what is significant and what is noise, converged on an intimation of a hidden pattern that seemed worth pursuing. The selection is not arbitrary. It is guided by what Polanyi called a "fine sense of plausibility" — a tacit capacity to assess which directions are promising before any explicit evidence is available. "The imagination," he wrote, "does not work like a computer surveying millions of useless alternatives, but by producing ideas guided by a fine sense of their plausibility."
The computer, notably, does survey millions of alternatives. The large language model generates outputs by computing probability distributions across a vast space of possible token sequences. It does not experience intimation. It does not feel the pull of a promising direction. It does not commit to an approach before the evidence justifies the commitment. It computes the most probable output given the input and the training data, and the computation, however sophisticated, lacks the structure of discovery that Polanyi described — the movement from tacit intimation through personal commitment to articulate validation.
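To make the contrast concrete, here is a minimal sketch of the selection procedure described above, in plain Python. Everything in it is invented for illustration: a real model scores tens of thousands of candidate tokens and usually samples from the distribution rather than taking the maximum. But the structure is the point. Every alternative is scored, the scores are normalized into probabilities, and the probabilities are all there is.

```python
import math

# Toy illustration of next-token selection. The vocabulary and the
# raw scores (logits) are invented for this example; a real model
# produces such scores for tens of thousands of tokens at once.
vocab = ["pattern", "insight", "noise", "hunch"]
logits = [2.1, 1.3, 0.4, -0.7]

# Softmax turns the raw scores into a probability distribution
# over every candidate at once: the "survey of alternatives."
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Selection is then mechanical. Greedy decoding, shown here, takes
# the single most probable token; sampling would draw from `probs`.
# Either way, nothing is intimated. Everything is computed.
best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"most probable next token: {vocab[best]!r} (p = {probs[best]:.2f})")
```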
This distinction matters because discovery — genuine discovery, the kind that produces knowledge that did not previously exist — is not the production of probable outputs from existing data. It is the recognition of patterns that the existing data does not yet specify, the pursuit of connections that no computation can derive from the available information, the commitment to an insight that cannot be justified until the insight has been fully developed. Discovery requires the knower to go beyond the evidence — to stake herself on a possibility that the evidence suggests but does not confirm, to invest time, effort, and reputation in a direction that may prove fruitless, to exercise the kind of passionate commitment that Polanyi considered the motive force of all intellectual progress.
The builder's experience with AI tools captures this structure with a specificity that Polanyi's abstract analysis sometimes lacks. Segal describes bringing a "half-formed idea" to Claude — a problem he could not yet articulate, a connection he could feel but not specify. The idea was an intimation in Polanyi's precise sense: a pre-articulate sense that something was there, waiting to be developed. The AI responded with a concrete form — a structure, a framework, a set of connections drawn from its training data. The concrete form either confirmed the intimation or redirected it. The dialogue between the inarticulate and the articulate, between the felt sense and the specified form, is the structure of discovery operating in real time.
But Polanyi's framework reveals an asymmetry in this dialogue that the technology discourse tends to obscure. The intimation comes from the human. The commitment comes from the human. The evaluation of whether the concrete form captures the intimation or betrays it comes from the human. The AI provides the articulation — the rapid, concrete, pattern-derived response that gives the intimation a form it can be evaluated against. But the AI does not intimate. It does not commit. It does not evaluate in the Polanyian sense — the sense that involves staking one's judgment on the outcome, accepting responsibility for the direction taken, exercising the personal knowledge that transforms a plausible output into a genuine insight.
The distinction clarifies why the collaboration is productive and why it is limited. The collaboration is productive because the AI provides what the human often lacks: the capacity to traverse a vast space of articulated possibilities at a speed no human mind can match. The researcher who intimates a connection between two domains can ask the AI to explore the connection in ways that would take her months to explore independently. The builder who senses that a system architecture needs restructuring can ask the AI to generate alternative architectures that embody the restructuring in concrete form. The writer who feels that an argument needs a different structural support can ask the AI to propose supports drawn from domains the writer has not considered. In each case, the AI accelerates the articulation phase of discovery — the phase in which the inarticulate intimation is given concrete form.
The collaboration is limited because the AI cannot perform the other phases. It cannot intimate. It cannot exercise the fine sense of plausibility that selects promising directions from the space of all possible directions. It cannot commit — it has no stake in the outcome, no reputation to risk, no career that depends on the direction it suggests. And it cannot evaluate the way a personal knower evaluates — by bringing the full weight of her tacit understanding, her embodied sensitivity, her accumulated experience to bear on the question of whether the articulated form captures the intimation's truth or merely its surface.
Segal's description of the moment he spent two hours in a coffee shop writing by hand — recovering the rough, honest, personally committed version of an argument from the smooth, impersonal, tool-generated version — is a description of the evaluative phase of discovery operating against the seductive convenience of accepting the articulation phase as the whole. The tool had provided an articulation. The articulation was plausible. It met every explicit standard. But it did not capture the intimation — it did not embody the specific insight that Segal had been reaching for, the insight that could only be validated by the messy, uncomfortable, personally committed process of figuring out what he actually believed. The coffee shop was the site of genuine discovery — the place where the intimation was finally given a form that the knower could commit to, not because the form was smooth but because it was true.
The structure of discovery also illuminates the specific danger of AI-generated research and analysis. The danger is not that the outputs will be wrong in obvious ways. The danger is that the outputs will be right in probable ways — that the AI will produce analyses that are statistically consistent with the existing body of knowledge without revealing anything that the existing body of knowledge does not already contain. The outputs will be competent summaries, elegant syntheses, well-organized compilations of what is already known. They will not be discoveries, because discovery requires going beyond the data, and the AI, by its nature, cannot go beyond the data from which its patterns were derived.
Polanyi's concept of the fine sense of plausibility names what is lost when the intimation phase of discovery is bypassed. The scientist who has spent years immersed in a domain develops a sensitivity to the domain's unresolved tensions — the places where the existing theory strains against the data, the anomalies that the current framework cannot explain, the questions that the community has stopped asking because the answers seem intractable. This sensitivity is tacit. It cannot be computed from the data because it concerns the relationship between the data and the knower's understanding of what the data should look like if the current theory were complete. The anomaly is visible only to someone who has internalized the theory deeply enough to sense where it fails.
The AI cannot sense where the theory fails because the AI has not internalized the theory. It has processed the theory's outputs. It has learned the statistical patterns that the theory's application produces. But it has not developed the tacit understanding that would enable it to recognize when those patterns conceal a deeper pattern that the theory does not predict — the kind of recognition that drives genuine discovery. The AI can produce the expected. It cannot intimate the unexpected. And the unexpected is where discovery lives.
The implications extend beyond science to every domain of knowledge work. The lawyer who senses that a line of precedent is weakening, the physician who suspects that a standard treatment protocol is missing something, the architect who intuits that a design trend is producing buildings that fail in ways no one has yet articulated — each is exercising the structure of discovery. Each is intimating a pattern that the existing data does not specify, committing to the intimation before it can be justified, and pursuing the validation that will determine whether the intimation was genuine insight or mere noise. Each is exercising precisely the form of knowing that AI cannot perform — and that the market, with its focus on probable outputs and measurable competence, is least equipped to value.
Polanyi did not believe that discovery could be institutionalized. He believed that it could be cultivated — by protecting the conditions under which the fine sense of plausibility develops, by sustaining the communities of practice within which intimations are shared and evaluated, by maintaining the educational structures through which novices develop the tacit sensitivity that makes genuine discovery possible. The AI moment is a threat to these conditions not because AI suppresses discovery directly but because AI makes the articulation phase so productive, so efficient, so seductively satisfying that the intimation phase — the slow, uncertain, personally committed phase that cannot be accelerated — is devalued by comparison. The builder who can produce a working prototype in an hour has less incentive to spend a week in the inarticulate uncertainty that precedes genuine insight. The researcher who can generate a comprehensive literature review in minutes has less incentive to spend months immersed in the primary sources, developing the sensitivity to the field's unresolved tensions where discovery actually begins.
The protection of discovery in the AI age is the protection of the inarticulate — of the pre-verbal, the uncertain, the not-yet-formed. It is the protection of the right to not know, to be confused, to follow a hunch that cannot be justified, to commit to an intimation before the evidence arrives. These protections are not merely practical. They are epistemological. They preserve the conditions under which genuine knowledge — knowledge that goes beyond the probable, beyond the expected, beyond what the existing data already contains — can emerge. Without them, the AI-augmented civilization will be extraordinarily productive and profoundly incapable of surprise. It will produce more of what is already known, faster and more efficiently than ever before, and less of what is not yet known, because the knowing of the not-yet-known requires precisely the form of personal commitment, tacit intimation, and inarticulate pursuit that the market's logic of efficiency is most inclined to eliminate.
---
The deepest limitation of artificial intelligence is not that it lacks specific knowledge in some domain that future training will remedy. It is not that its reasoning is sometimes wrong, its outputs sometimes fabricated, its confidence sometimes misplaced. These are correctable limitations — problems of engineering, of data, of architectural refinement that the extraordinary ingenuity of the AI research community will continue to address with impressive results. The deepest limitation is structural. It concerns not what the machine gets wrong but what the machine cannot get right — not the errors in its outputs but the absence at its core.
The machine does not know what it does not know.
This sentence is not a paradox. It is a precise description of the difference between a system that processes information and a mind that possesses knowledge. The knower — the human being who has committed herself to understanding a domain, who has built the tacit ground through years of engaged practice, who exercises judgment by attending from her accumulated subsidiary awareness to a focal evaluation — knows the limits of her own competence. She knows where her understanding is deep and where it is thin. She knows when she is confident and when she is guessing. She knows the difference between a judgment grounded in her own experience and a judgment extrapolated from someone else's. This knowledge of the limits of knowledge — this meta-awareness that constitutes the knower's most important possession — is the thing the machine lacks entirely.
The machine produces outputs with uniform confidence. A large language model generating a legal brief, a medical diagnosis, a philosophical argument, or a piece of creative writing does not distinguish between domains where its training data is rich and domains where it is thin. It does not flag the moment when its output transitions from pattern-matching against a deep base of relevant examples to extrapolation from a sparse and possibly unrepresentative sample. It does not experience the sensation — familiar to every honest expert — of reaching the edge of its competence, the point where the solid ground of genuine understanding gives way to the uncertain territory of informed speculation. The machine does not have edges. It has probability distributions. And probability distributions do not know where they end.
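A crude sketch, again assuming only numpy, of why probability distributions do not know where they end: softmax turns any vector of scores into a well-formed, confident-looking distribution, whether those scores rest on dense training coverage or on none at all. The two inputs below are deliberately generated the same way; that is the point. From the output alone, nothing distinguishes the grounded case from the ungrounded one.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

rng = np.random.default_rng(0)
# Stand-ins for logits over well-covered territory and over territory
# the training data never touched. From the outside, they look alike.
grounded = rng.normal(0.0, 3.0, size=1_000)
ungrounded = rng.normal(0.0, 3.0, size=1_000)

for name, logits in [("grounded", grounded), ("ungrounded", ungrounded)]:
    p = softmax(logits)
    print(f"{name}: sums to {p.sum():.1f}, peak probability {p.max():.2f}")
# Both print a valid distribution with a comparable peak. Neither output
# carries a flag that says "this one was a guess."
```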
Polanyi would identify this absence as the definitive difference between information processing and genuine knowing. Knowing, in his framework, is personal — it involves the knower's commitment, her stake in the truth of what she claims, her responsibility for the judgments she makes. Part of what makes knowing personal is the knower's awareness of the limits within which her knowing is reliable. The physicist who says "I am confident about this result" is not merely reporting a probability. She is expressing a personal evaluation — grounded in her tacit understanding of the experimental conditions, the reliability of her instruments, the robustness of her theoretical framework — that the result falls within the domain where her competence is genuine. If the same physicist says "I am not sure about this," she is exercising the same personal evaluation in the opposite direction — identifying a point where her tacit ground is insufficient to support a confident judgment, where the honest response is not an answer but a question.
The machine cannot say "I am not sure about this" with the kind of personal commitment that gives the statement its epistemic weight. The machine can be programmed to express uncertainty — to preface its outputs with hedging language, to assign probability scores to its claims, to flag areas where its training data is thin. But these expressions of uncertainty are computational, not personal. They do not arise from the machine's awareness of its own limits. They arise from statistical calculations about the distribution of relevant training data. The difference matters because the human expression of uncertainty is itself a form of knowledge — a tacit evaluation of the boundary between what the knower knows and what she does not — while the machine's expression of uncertainty is merely a calculation about data density.
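The difference between programmed and personal uncertainty can be made concrete with a sketch (hypothetical names, not any real system's API). Hedging language can be bolted onto an output by thresholding a statistic, and the system that emits "I am not sure" has exactly as much self-knowledge as the one that does not: none.

```python
def hedged(answer: str, top_prob: float, threshold: float = 0.6) -> str:
    """Prepend hedging language when the peak output probability is low.

    The hedge is a calculation about data density, not an awareness of
    limits. Nothing here can tell a well-calibrated 0.55 from a spurious
    one, because nothing here knows anything.
    """
    if top_prob < threshold:
        return "I am not sure, but: " + answer
    return answer

print(hedged("The protocol is safe at the standard dose.", top_prob=0.58))
# -> "I am not sure, but: The protocol is safe at the standard dose."
```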
Segal captures a specific instance of this limitation when he describes Claude's hallucination of the Deleuze connection — the moment when the machine produced a philosophically sophisticated, rhetorically elegant, and substantively wrong passage that the builder initially accepted because it met every surface criterion of quality. The machine did not know it was wrong. It could not know it was wrong, because the machine does not possess the tacit understanding of Deleuze's philosophy that would enable it to recognize the misattribution. The machine produced the passage with the same computational confidence it produces accurate passages, because its confidence is a function of statistical probability, not of understanding. The passage was probable. It was wrong. And the machine had no way to distinguish between these two conditions.
Polanyi formulated this limitation in his 1949 debate with Turing, when the first stored-program computers had only just run their first programs. His argument was that minds possess what he called "unspecified and pro-tanto unspecifiable elements" — aspects of knowing that cannot be captured in any formal specification, no matter how detailed. Among these unspecifiable elements is the capacity for self-evaluation — the ability to assess one's own competence, to recognize one's own limits, to know when one is operating within the boundaries of genuine understanding and when one has strayed beyond them. This capacity is tacit. It arises from the knower's embodied engagement with the domain, from the accumulated experience of being right and being wrong, of having confident judgments confirmed and having confident judgments overturned. It is built through the specific developmental process of becoming a knower — and it cannot be replicated by a system that has never been wrong in the way a person is wrong, that has never experienced the disconfirmation that recalibrates the boundary between knowledge and ignorance.
Kambhampati's concept of "Polanyi's Revenge" adds a further dimension to this analysis. The AI community's success in training machines to capture tacit patterns from data has produced systems that are, in a specific sense, more dangerous than the earlier rule-based systems they replaced. The rule-based systems had explicit limits — they operated within the boundaries of the rules they were programmed with, and when a situation fell outside those boundaries, the system's failure was obvious. The pattern-based systems have no explicit limits. They produce outputs across an unlimited range of domains, with no mechanism for recognizing when they have crossed the boundary from areas where their patterns are reliable to areas where their patterns are spurious. The systems that learned to capture tacit knowledge from data now refuse to accept explicit knowledge that would constrain their outputs — a refusal that Kambhampati traces directly to the architecture's singular focus on learning from patterns rather than from understanding.
The result is a system that is maximally confident and minimally self-aware — that produces outputs of remarkable sophistication across every domain of knowledge work without possessing the evaluative capacity to distinguish its genuine competence from its sophisticated guessing. The system does not know what it knows. It does not know what it does not know. And it does not know that it does not know — which means that the responsibility for knowing falls entirely on the human who uses it.
This is the deepest practical implication of Polanyi's framework for the AI moment. The machine cannot evaluate itself. It cannot assess its own reliability. It cannot recognize when its patterns are applicable and when they are not. The human must perform all of these evaluations, and the human can only perform them if she possesses the tacit ground — the personal knowledge, the embodied understanding, the accumulated sensitivity to the domain — that enables her to assess the machine's outputs against a standard the machine itself does not have.
The senior engineer who has spent twenty-five years building systems can evaluate Claude's code against his tacit understanding of how systems behave. He can sense — not compute, not verify by explicit criteria, but sense — when the code is right and when it is subtly wrong. He can recognize the patterns that indicate robustness and the patterns that indicate fragility. He can feel the architecture's coherence or its hidden tension. This evaluative capacity is the thing he brings to the collaboration that the machine cannot supply — and it is the thing that his twenty-five years of struggle, failure, and accumulated understanding have produced.
The junior developer who lacks this tacit ground cannot perform the evaluation. She can check the code against explicit criteria — does it compile, does it pass the tests, does it produce the expected output — but she cannot evaluate it against the tacit standard that only embodied expertise provides. She cannot sense the subtle wrongness that indicates a system heading for failure under conditions the tests do not cover. She cannot feel the architectural fragility that will manifest only when the system is deployed at scale, under load, in the real world where the conditions are messier than the test environment. She accepts the code because it meets the explicit criteria. The explicit criteria are necessary but not sufficient. The sufficiency comes from the tacit evaluation that only personal knowledge can provide.
This asymmetry — the senior's capacity to evaluate and the junior's inability to do so — is the fault line along which the AI transition will produce its most consequential effects. If the senior generation transmits its tacit knowledge to the junior generation, the evaluative capacity will be sustained. The juniors will develop the embodied understanding that enables them to assess the machine's outputs against a tacit standard of quality. The human-AI collaboration will be genuinely productive — the machine providing rapid articulation, the human providing tacit evaluation, the combination producing knowledge that neither could achieve alone.
If the senior generation's tacit knowledge is not transmitted — if the apprenticeship structures are dissolved, if the developmental friction is eliminated, if the junior generation learns to use the tools without first developing the tacit ground against which the tools' outputs must be evaluated — the evaluative capacity will erode. The juniors will accept the machine's outputs because the outputs meet the explicit criteria, and the explicit criteria are the only criteria they possess. The tacit standard, the one that detects the subtle wrongness, the hidden fragility, the sophisticated fabrication — that standard will not be developed, because the developmental process that produces it has been bypassed.
The machine cannot know what it cannot know. Only the human can know this — and only the human who has built the tacit ground that makes the knowing possible. The protection of this ground — through education that values struggle, through communities that sustain apprenticeship, through institutional structures that maintain the developmental processes by which tacit knowledge is built — is the most important thing that any civilization deploying AI at scale can do. Not because the machines are dangerous. Because the machines are confident. And confidence without the capacity for self-evaluation — confidence without the awareness of limits, without the felt boundary between knowledge and ignorance, without the tacit understanding that tells the knower where her knowing ends — is the most dangerous form of ignorance there is.
Polanyi spent his career arguing that what we cannot tell is more important than what we can. The AI moment has made this argument urgent in a way that even Polanyi could not have anticipated. The machines can tell everything. They can articulate, formalize, specify, and produce outputs of extraordinary explicit quality across every domain of human knowledge work. What they cannot do is what makes knowledge knowledge rather than information: they cannot commit to the truth of what they produce, cannot evaluate it against a tacit standard of quality, cannot recognize their own limits, and cannot know what they do not know. These incapacities are not bugs to be fixed. They are structural features of a system that operates on explicit representations in a world where the most important knowledge is tacit.
The human who uses the machine wisely is the human who supplies what the machine lacks: the tacit ground, the personal commitment, the evaluative capacity, the awareness of limits. The human who mistakes the machine's output for knowledge — who accepts the confident articulation as a substitute for the tacit understanding that makes articulation meaningful — has lost the most important thing that Polanyi's philosophy was built to protect.
We can know more than we can tell. The machine can tell more than it can know. The difference between these two sentences is the difference between knowledge and information, between understanding and pattern-matching, between the personal commitment that gives a claim its authority and the computational confidence that gives an output its fluency. The AI age will be defined by whether the civilization that deploys these extraordinary tools can maintain the distinction — or whether, seduced by the fluency, it will forget that the distinction ever mattered.
---
The phrase I cannot shake is one Michael Polanyi set down in 1958, years before I was born: "The specification of the mind implies the presence of unspecified and pro-tanto unspecifiable elements." He had made the same argument to Alan Turing's face nearly a decade earlier, in a seminar room in Manchester, before there was an artificial intelligence field to argue about. The machines had not yet learned our language. They had barely learned to compute.
I keep returning to that sentence because it names the thing I have been circling around since December 2025 — the thing I tried to describe in The Orange Pill without having the right vocabulary. The vocabulary belonged to a physical chemist who turned philosopher because he realized that the most important things about knowing could not be captured by the scientific method he had spent his career practicing.
What Polanyi gave me was a way to understand why turning off the tool feels like losing a sense rather than putting down a hammer. The word is indwelling. When I work with Claude at three in the morning and the work is flowing, I am not operating a machine. I am perceiving through a machine. The tool has become transparent — part of my way of seeing what is possible. The landscape of the buildable has expanded, and the expansion has become my normal field of vision. Removing the tool does not return me to some prior state of adequacy. It contracts the world.
That is a real phenomenon. Polanyi's framework makes it legible. But his framework also makes legible the danger I confessed to in The Orange Pill — the moment when the prose outran my thinking, when I almost kept a passage because it sounded like conviction without actually containing it. Polanyi would say I was experiencing the collapse of personal knowledge: accepting the tool's output as my own without performing the tacit evaluation that makes knowledge mine. The smooth surface that Byung-Chul Han diagnosed and I could not fully explain — Polanyi explains it. The smoothness conceals the absence of the knower's commitment. The output arrives with all the marks of authority and none of its substance.
What unsettles me most about these chapters is the argument about what the junior developer lacks. My engineer in Trivandrum who built frontend features in two days without frontend experience — I celebrated that in The Orange Pill. I still celebrate the capability. But Polanyi forces me to ask: what subsidiary awareness was not deposited? What tacit ground was not laid? The features work. The product shipped. But the from-to structure through which the features were built rests on the tool rather than on the builder's embodied understanding. The difference is invisible now. It may not remain invisible forever.
The twelve-year-old who asked her mother, "What am I for?" — I wrote about her because the question haunts me as a parent. Polanyi's framework tells me why the question matters so precisely. The child is asking about her capacity to commit, to know personally, to possess the tacit ground that no machine possesses. The answer I would give her now, after working through these chapters, is not that she should learn to code or learn to prompt. It is that she should learn to struggle — to stay in the confusion long enough for the tacit dimension to form, to build the substrate of understanding that will let her evaluate whatever the machines produce.
We can know more than we can tell. That is what the machines will never have. That is what we must not allow the machines to make us forget we possess.
— Edo Segal
What if the most important thing about human knowledge is the part that cannot be written down, computed, or trained into a model? Michael Polanyi, a physical chemist who debated Alan Turing before artificial intelligence had a name, spent his career arguing exactly that. His concept of tacit knowledge — the vast, inarticulate foundation beneath every skill, every judgment, every act of genuine understanding — exposes a blind spot at the center of the AI revolution: the tools produce extraordinary outputs, but output is not knowledge, and the difference will define everything.
This book brings Polanyi's framework into direct contact with the arguments of The Orange Pill, examining what happens when tools that cannot know what they don't know are indwelt by builders who can — and what happens when they're indwelt by those who can't yet. From the erosion of apprenticeship to the fiduciary collapse of professional trust, Polanyi's philosophy reveals what the productivity metrics cannot measure.
These chapters are not a warning against AI. They are a map of what must be protected so that the amplification remains genuine — so that the extraordinary surface rests on ground that holds.
"We can know more than we can tell."
— Michael Polanyi, The Tacit Dimension (1966)

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Michael Polanyi — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →