By Edo Segal
The position I kept apologizing for turned out to be the only honest one.
For months, every conversation about AI forced a choice. You were either for the revolution or against it. You celebrated the productivity gains or you mourned what they destroyed. You posted your metrics or you posted your grief. The discourse had two doors, and the people standing in the hallway — the ones who felt the exhilaration of building with Claude at midnight and the dread of watching their son lose interest in struggle by morning — were treated as though they simply hadn't made up their minds yet.
I was one of those people. I described them in *The Orange Pill* as the silent middle. I wrote about the vertigo of holding contradictory truths in both hands. But I treated it, privately, as a deficiency. A failure to commit. The triumphalists had conviction. The doomers had conviction. I had ambivalence, and ambivalence felt like cowardice.
Then I encountered Richard Bernstein's concept of the Cartesian Anxiety — the four-century-old conviction that unless we find absolute ground beneath our feet, we are in free fall. Either we possess the truth completely, or we possess nothing at all. Either AI is salvation or pathology. Either the foundation holds or chaos reigns.
Bernstein spent fifty years demonstrating that this binary is the error. Not one side of it. The binary itself. The desperate need for solid ground is what drives people to the extremes — and the extremes, however confident they sound, are both impoverished descriptions of a reality that is genuinely, irreducibly complex.
What Bernstein offered instead was not a middle path. It was a harder path: engaged fallibilism. Hold your convictions with real force. Act on them. Build with them. But maintain the discipline to revise them when the evidence demands it. The absence of certainty is not the absence of truth. It is the condition of all honest inquiry.
This reframed everything for me. The vertigo was not weakness. It was the starting condition of responsible thought in a moment that resists simple narratives. The people in the hallway were not indecisive. They were the ones whose experience was too honest to fit through either door.
Bernstein died five months before ChatGPT launched. He never saw the moment his framework was built for. But the architecture holds — and if you have felt the specific discomfort of caring deeply about something you cannot fully endorse or fully condemn, this book will show you that the discomfort is not a problem to solve. It is the practice itself.
-- Edo Segal ^ Opus 4.6
Richard J. Bernstein (1932–2022) was an American philosopher widely regarded as one of the most important interpreters and synthesizers of the pragmatist tradition. Born in Brooklyn, New York, he studied at the University of Chicago, Columbia University, and Yale, and spent the majority of his career at the New School for Social Research in New York City, where he chaired the philosophy department for over two decades. His landmark work *Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis* (1983) diagnosed what he called the "Cartesian Anxiety" — the destructive either/or between absolute foundations and intellectual chaos — and argued for a practice of "engaged fallibilism" that held genuine commitment in tension with openness to revision. Across works including *Praxis and Action* (1971), *The Restructuring of Social and Political Theory* (1976), *The New Constellation* (1991), and *The Pragmatic Turn* (2010), Bernstein built sustained dialogues between American pragmatism, Continental hermeneutics, and critical theory, drawing together Peirce, Dewey, Gadamer, Habermas, Arendt, and Rorty into a vision of philosophy as democratic conversation. His recovery of Aristotelian *phronesis* — practical wisdom irreducible to technical procedure — and his insistence on communal inquiry as the foundation of both knowledge and democratic life established him as a thinker whose work bridges the gap between abstract philosophy and the lived challenges of navigating uncertainty. He died in New York in July 2022.
In the winter of 1619, René Descartes sat in a room heated by a porcelain stove and began the most consequential thought experiment in Western philosophy, one he would publish in the *Discourse on the Method* in 1637. He decided to doubt everything — the testimony of his senses, the existence of the physical world, the reliability of mathematics, the trustworthiness of his own memory. He stripped away every belief that could conceivably be false until he arrived at the one thing he could not doubt: the fact that he was doubting. Cogito ergo sum. I think, therefore I am.
The experiment was supposed to establish a foundation. An unshakable rock upon which all knowledge could be reconstructed. But what Descartes actually produced was something far more durable and far more dangerous than any foundation. He produced an anxiety. The anxiety that has structured Western intellectual life for nearly four centuries: the conviction that unless we can find some fixed point, some stable rock upon which to secure our lives against the vicissitudes that constantly threaten us, we are lost.
Richard Bernstein spent the better part of five decades diagnosing this anxiety and demonstrating its consequences. In *Beyond Objectivism and Relativism*, published in 1983, he gave it a name — the Cartesian Anxiety — and argued that it had produced the most persistent and destructive false binary in the history of thought. The binary, in his own words, looks like this: "Either there is some support for our being, a fixed foundation for our knowledge, or we cannot escape the forces of darkness that envelop us with madness, with intellectual and moral chaos." Either we possess the truth absolutely, or we possess nothing at all. Either the ground holds, or we are in free fall.
The anxiety is not merely epistemological. Bernstein was careful to insist on this point throughout his career, and he reiterated it in one of his final interviews, conducted in December 2021, just months before his death. "I do not see Cartesian anxiety only as an epistemological anxiety," he explained. "I see it as something that has a much more general significance, something that has a political, an ethical, a religious significance." The need for absolute foundations is not confined to philosophers arguing about the nature of knowledge. It operates in every domain where human beings confront uncertainty and find the uncertainty intolerable. In politics, it produces authoritarianism: the desperate attachment to a leader or ideology that promises certainty in a chaotic world. In religion, it produces fundamentalism: the insistence that sacred texts must be read literally because the alternative is moral anarchy. In personal life, it produces rigidity: the refusal to revise one's beliefs because revision feels like collapse.
And in the technology discourse of 2025 and 2026, it produced the most spectacular intellectual failure of the AI moment: the calcification of positions within weeks of a genuine threshold, before most of the participants in the debate had spent serious time with the tools they were debating.
The speed of the calcification was remarkable. In December 2025, when AI coding tools crossed a capability boundary that made previous assumptions about software development structurally obsolete, the discourse did not pause to assess. It did not gather evidence. It did not consult the people most directly affected. It split. Within days, positions hardened into camps that bore an uncanny resemblance to the objectivist and relativist poles Bernstein had diagnosed four decades earlier.
On one side stood the triumphalists — the AI objectivists, in Bernstein's terms. Their position was structurally identical to the foundationalist claim that Descartes had made in 1637: they had found the rock. AI was unambiguously good. The productivity gains were self-evidently valuable. The adoption curves proved it. The revenue numbers confirmed it. The future belonged to those who accelerated, and anyone who expressed doubt was a Luddite, a sentimentalist, a person clinging to obsolete skills out of fear rather than principle. The triumphalists possessed a confidence that their evidence could not support — not because the evidence was false, but because the evidence addressed only the dimension of the phenomenon they had chosen to measure. Productivity gains are real. They are also partial. They tell you how much faster people are working. They do not tell you whether the faster work is better work, or whether the workers are flourishing, or whether the gains are distributed in ways that serve the broader community. The triumphalist treated the measurable dimension as the whole of the truth, which is the epistemological error that objectivism always makes: mistaking one well-lit corner of reality for the entirety of the room.
On the other side stood the doomers and elegists — the AI relativists, in Bernstein's terms, though the label requires qualification. Strict relativism claims that no knowledge claim is better justified than any other. The AI elegists were not making that claim. They were making a more specific and in many ways more sophisticated argument: that every gain the triumphalists celebrated concealed a loss the triumphalists could not see, that the removal of friction from creative work destroyed the depth that friction produced, that the speed of AI-assisted production was generating a culture of shallow competence dressed in the aesthetics of mastery. The philosopher Byung-Chul Han articulated this position with the greatest precision: the smooth surface that AI produces — frictionless, seamless, optimized for ease — is not a neutral improvement. It is an aesthetic that, applied to human existence, hollows out the capacity for the kind of slow, resistant, difficult engagement through which genuine understanding is built.
The elegist position contained genuine insight. Bernstein's method — which always required presenting opposing arguments in their strongest form before identifying their limitations — demands that this be acknowledged. Something real is lost when the struggle that produces understanding is optimized away. The senior engineer who spent years building embodied intuition through thousands of hours of debugging possesses a form of knowledge that cannot be transmitted through frictionless production. The student who uses AI to generate an essay has not thought the thoughts the essay represents. These are not trivial observations. They identify a real cost.
But the elegist position also displayed the structural characteristic of relativism that Bernstein spent his career diagnosing: it treated its own passionate engagement as somehow exempt from the critique it leveled at everyone else. The elegists cared deeply about depth, about craft, about the irreplaceable value of hard-won understanding. That caring was itself a form of commitment — a claim that some things matter more than others, that depth is genuinely better than shallowness, that the erosion of craft represents a real loss and not merely a change in fashion. These are not relativist claims. They are normative claims, grounded in a hierarchy of values that the relativist framework cannot justify. The elegist who insists that something precious is being lost has already conceded that some ways of being in the world are better than others, which is precisely the kind of evaluative claim that strict relativism renders incoherent.
Both camps, Bernstein's framework reveals, were caught in the same underlying anxiety. The triumphalist needed AI to be absolutely good because the alternative — that it might be simultaneously liberating and diminishing, productive and corrosive, an expansion of capability and an erosion of depth — produced the vertigo of groundlessness. The ground was supposed to hold. If AI was the foundation of the new economy, the new creativity, the new human capability, then it needed to be solid. Any crack in the foundation threatened the entire structure.
The doomer needed AI to be absolutely dangerous for precisely the same structural reason. If the technology was genuinely mixed — if it really did expand capability for some people in some contexts while eroding depth for other people in other contexts — then the clean narrative of technological pathology dissolved into the messy, situated, case-by-case assessment that the Cartesian Anxiety finds intolerable. It is much easier to know that the machines are the enemy than to know that the machines are complicated.
Between these two camps lay a population that Segal, in *The Orange Pill*, calls the silent middle — the people who felt both exhilaration and loss, who built with AI tools in the morning and worried about their children's futures in the evening, who could not find a clean narrative because the truth did not come clean. This population was the largest and the most important, and by definition the hardest to hear. Social media rewards clarity, confidence, and emotional charge. "This is amazing" gets engagement. "This is terrifying" gets engagement. "I feel both things at once and I do not know what to do with the contradiction" does not. The algorithmic architecture of the discourse systematically selected against the most epistemically responsible position and systematically amplified the positions that the Cartesian Anxiety produced.
Bernstein would have recognized this pattern instantly. It is the same pattern he traced through four centuries of philosophical debate, the same Either/Or that Descartes inaugurated in 1637 and that has structured intellectual life ever since. Either we have the foundation, or we have chaos. Either AI is the answer, or AI is the problem. Either you are for the technology, or you are against it. The middle ground is not merely uncomfortable. It is structurally invisible in a discourse designed to reward extremes.
But the middle ground is where the truth lives. Not as a compromise — Bernstein was emphatic on this point throughout his career — but as a more adequate understanding that does justice to the genuine insights on both sides while refusing to accept either side's claim to completeness. The triumphalist is right that the productivity gains are real, that the democratization of capability is genuinely expanding who gets to build, that the adoption curves measure a real and deep human need for tools that close the gap between imagination and artifact. The elegist is right that something real is lost when friction disappears, that the understanding built through years of patient struggle cannot be replicated by frictionless production, that the speed of AI-assisted work can produce a culture of smooth surfaces concealing hollow interiors.
The pragmatist response to this impasse is neither to split the difference nor to declare both sides equally valid and retreat to agnosticism. The pragmatist response is to stay in the tension. To pursue the truth of the matter with genuine commitment while acknowledging that the truth may turn out to be more complex than any current formulation can capture. To hold positions with conviction — real conviction, the kind that drives action and shapes decisions — while maintaining the intellectual honesty to revise those positions when the evidence demands it.
This is what Bernstein called engaged fallibilism, and it is the intellectual practice that the AI moment demands more urgently than any moment in recent history. Not because AI is uniquely dangerous or uniquely beneficial, but because AI is uniquely resistant to the simple narratives that the Cartesian Anxiety craves. The technology is simultaneously an amplifier of human capability and a potential solvent of human depth. It is simultaneously democratizing and concentrating. It is simultaneously liberating and addictive. These contradictions are not a sign that the analysis is incomplete. They are a sign that the phenomenon is genuinely complex, and that the intellectual posture adequate to it must be capable of holding complexity without collapsing into the false comfort of certainty.
Descartes wanted a foundation. He wanted the one thing that could not be doubted, the rock upon which everything else could be built. He found it in the cogito — and the finding produced four centuries of philosophy trying to build on a rock that turned out to be smaller than advertised.
The AI moment does not offer a foundation. It offers a river — flowing, powerful, resistant to any single characterization. Bernstein's life's work was dedicated to showing that the absence of a foundation is not the absence of truth. That fallible knowledge is still knowledge. That the ground can shift beneath you and you can still walk — carefully, attentively, with your eyes open and your commitments held in hands that are willing to let go and grasp again.
The chapters that follow apply this framework — engaged fallibilism, the pragmatist tradition, the refusal of the Cartesian Either/Or — to the specific questions that the AI moment raises. The questions are urgent. The answers are provisional. The inquiry does not end.
That is the point.
Charles Sanders Peirce was an impossible man. Brilliant, combative, impoverished for most of his life, unable to hold a university position despite being the most original American philosopher of the nineteenth century. He drank too much. He alienated his patrons. He died in 1914 in a farmhouse in Milford, Pennsylvania, leaving behind eighty thousand pages of unpublished manuscripts and a philosophical insight so powerful that it took the rest of the century to absorb it.
The insight was fallibilism. Not the weak observation that people sometimes make mistakes — everyone knows that. Peirce meant something far more radical: that the entire structure of human knowledge is revisable. Not just the parts we happen to be wrong about. All of it. Every scientific law, every moral principle, every metaphysical commitment, every claim that any human being has ever made about the nature of reality is, in principle, subject to revision in light of future inquiry. There are no exceptions. There are no sacred propositions that stand outside the process of investigation and correction. The history of knowledge is the history of confident assertions being replaced by better ones, and the replacement is never final.
This sounds, on first hearing, like an invitation to nihilism. If nothing is certain, why believe anything? If every claim is revisable, what is the point of making claims at all? This is the response that Peirce's fallibilism has provoked for more than a century, and it is the response that reveals the Cartesian Anxiety operating at full force. The anxious mind hears "your knowledge is fallible" and translates it into "your knowledge is worthless." The translation is wrong, but it is psychologically almost irresistible, because the anxiety insists that knowledge must be either absolute or empty.
Richard Bernstein's most important contribution to philosophy was showing that fallibilism and engagement are not opposed. They are complementary. The recognition that your beliefs might be wrong does not weaken your commitment to them. It deepens it — because the commitment now includes the willingness to test those beliefs, to expose them to counter-evidence, to revise them when revision is warranted, and to hold them with a kind of tough-minded honesty that dogmatic certainty can never achieve.
Bernstein called this practice engaged fallibilism, and he developed it across a career that spanned from *Praxis and Action* in 1971 through *The Pragmatic Turn* in 2010. The development drew on multiple philosophical traditions — Peirce's fallibilism, William James's pragmatic method, John Dewey's instrumentalism, Hans-Georg Gadamer's hermeneutics, Jürgen Habermas's theory of communicative action — but the synthesis was distinctively Bernstein's own. Where each of these thinkers provided a piece of the puzzle, Bernstein assembled the pieces into a coherent practice of intellectual life that was simultaneously rigorous and humble, committed and open, principled and revisable.
The architecture of this practice has four load-bearing elements, and each of them speaks directly to the intellectual challenge of navigating the AI moment.
The first element is commitment. The engaged fallibilist is not a spectator. She has positions. She cares about getting things right. She is willing to argue, to defend her views, to act on them in the world. This distinguishes engaged fallibilism from the ironic detachment that became fashionable in postmodern philosophy — the posture of the intellectual who refuses to commit to anything because commitment is naive, because all positions are equally constructed, because sincerity is a form of self-deception. Bernstein had no patience for this posture. He regarded it as a failure of nerve disguised as sophistication. The person who refuses to commit has not transcended the Cartesian Anxiety. She has simply chosen the relativist pole of the Either/Or and dressed it up as critical awareness.
In the AI discourse, the commitment element means taking a position. The builder who works with AI tools and reports that the work has never been deeper — that the collaboration has expanded her creative reach, that the removal of implementation friction has revealed a higher-order challenge that is harder and more interesting than what came before — is making a committed claim. She is not hedging. She is not saying "it seems like maybe AI might possibly be somewhat useful." She is saying: this has changed my work. This has changed what I can attempt. This is real.
The elegist who argues that the removal of friction destroys depth is also making a committed claim. The commitment is to the irreplaceable value of struggle, to the conviction that understanding built through years of patient, resistant engagement with a craft cannot be replicated by frictionless production. This too is real. And both commitments — the builder's and the elegist's — are held with the genuine conviction that engaged fallibilism demands.
The second element is openness to revision. The commitment is real, but it is not final. The engaged fallibilist holds her positions with conviction and simultaneously acknowledges that those positions might be wrong — not in the weak sense of "anything is possible" but in the strong sense of "I have actively considered the ways in which my position could be mistaken, and I remain open to evidence that would require me to change my mind."
This is the element that the AI discourse punishes most severely. The algorithmic architecture of social media selects for confidence. The person who says "I believe this, but I may be wrong" reads as uncertain. The person who says "I believe this, and I am right" reads as strong. The engagement metrics reward the latter. And because the discourse operates through platforms that optimize for engagement, the people who practice the most epistemically responsible form of belief — commitment held in tension with openness — are systematically marginalized.
Segal identifies this population as the silent middle. The name is apt. They are silent not because they have nothing to say but because the discursive architecture offers no format for what they need to say. "I built something extraordinary with AI last Tuesday and I also worry that my son's capacity for sustained effort is being eroded by the same technology" is not a tweet. It does not fit the architecture of a medium that rewards clear, unambiguous, emotionally charged positions. The silent middle is the population practicing engaged fallibilism without a name for what they are doing, and the discourse flows around them the way a river flows around a stone: acknowledging its presence by the disturbance in the current but never incorporating it into the main channel.
The third element is attention to consequences. Pragmatism, from Peirce onward, insists that the meaning of an idea is inseparable from its practical consequences. An idea that makes no difference to experience is not an idea at all — it is a word game. Bernstein inherited this commitment and extended it: the test of any intellectual position is not its internal coherence or its elegance or its ability to win arguments. The test is what happens when you act on it. What are the consequences for the people affected? Are they flourishing? Are they diminished? Are the effects distributed equitably, or do the gains concentrate while the costs disperse?
This element of engaged fallibilism is the one that most directly challenges both the triumphalist and the elegist positions in the AI debate. The triumphalist measures consequences narrowly: productivity gains, revenue growth, adoption curves, lines of code generated. These are real consequences, and they are genuinely impressive. But they are the consequences that the triumphalist's framework has chosen to measure, and the choice of what to measure is itself a commitment that shapes what the measurement reveals. If you measure only speed, the technology looks like an unambiguous success. If you measure also the quality of attention, the depth of understanding, the capacity for sustained effort without external stimulation, the distribution of gains across populations with different levels of access and different starting positions — the picture becomes more complex.
The elegist also measures consequences, but selectively. The elegist measures depth, craft, the embodied knowledge that comes from years of resistant engagement. These too are real consequences, and the measurement reveals genuine loss. But the elegist's framework tends to measure what is lost without attending with equal rigor to what is gained — the expansion of who gets to build, the reduction of barriers between imagination and artifact, the engineer in Trivandrum who built a complete user-facing feature in two days not because she had learned frontend development but because the tool let her describe what the interface should feel like in human terms.
Engaged fallibilism requires attending to all the consequences — the triumphalist's gains and the elegist's losses — with the same rigor and the same willingness to let the evidence change the assessment. This is harder than choosing sides. It requires holding contradictory data in the same analytical frame and refusing to privilege one data set over another on the basis of prior commitment.
The fourth element is community. Bernstein drew this from Peirce's concept of the community of inquiry — the recognition that truth is not something any individual can possess. It is something a community converges upon over time, through the disciplined process of proposing hypotheses, testing them, sharing the results, subjecting them to criticism, revising them, and testing again. The process is social. It requires interlocutors who disagree in good faith, who present counter-evidence without hostility, who are willing to change their minds when the evidence warrants it. The community of inquiry is not a debating society. It is a collective enterprise oriented toward getting things right, and its success depends on the quality of the relationships among its members — their mutual respect, their willingness to be changed by what they hear, their commitment to the shared pursuit of understanding rather than the individual pursuit of victory.
The AI discourse has no functioning community of inquiry. It has camps. The triumphalists talk to triumphalists. The doomers talk to doomers. The silent middle talks to itself, at kitchen tables and in quiet conversations after the cameras turn off, but the institutional structures that would turn those conversations into a community of inquiry — a community capable of attending to the full range of consequences with the rigor and the mutual respect that productive disagreement requires — do not exist at the scale the moment demands.
Bernstein's engaged fallibilism is not a comfortable practice. It does not offer the satisfaction of certainty or the dramatic clarity of despair. It offers something harder and rarer: the discipline of caring about the truth while acknowledging that the truth is more complex than any current formulation can capture. The discipline of holding positions with real conviction while maintaining the honesty to revise them. The discipline of attending to consequences you would rather not see, and of listening to arguments you would rather not hear.
In his final interview, Bernstein was asked about the relevance of his work to the current moment. His response drew on a concept from American pragmatism that he had championed throughout his career: meliorism. "Meliorism means that no matter how bad things are," he explained, "the task is to try and think how you can ameliorate the worse and make things better. This is why someone like John Dewey, when it comes to political issues, is not a revolutionary. He's a social reformer."
Meliorism is the practical face of engaged fallibilism. The committed belief that things can be made better, held in tension with the honest recognition that we do not know, with certainty, what "better" looks like. The AI moment does not need revolutionaries who want to tear the system down. It does not need evangelists who want to declare the revolution complete. It needs reformers — people who attend to the consequences, who listen to the communities affected, who hold their commitments with genuine conviction and genuine humility, and who build structures that redirect the flow of a powerful technology toward human flourishing without claiming to know, with finality, what flourishing requires.
The intellectual architecture exists. Peirce built the foundation — the recognition that all knowledge is fallible and that fallibility is not a weakness but a condition of honest inquiry. James and Dewey built the walls — the insistence that ideas are tools for solving problems and that the test of any idea is its practical consequences. Gadamer added the windows — the hermeneutic recognition that all understanding is shaped by prior understanding and that genuine dialogue requires the willingness to be changed by what one hears. Habermas added the door — the argument that the conditions of honest communication must be actively constructed and defended.
Bernstein assembled the building. His contribution was not any single argument but the synthesis — the demonstration that these thinkers, despite their differences, were converging on a shared practice of intellectual life that was more adequate to the complexity of human experience than either objectivism or relativism could achieve alone.
The AI moment is the first major civilizational challenge to arise after that synthesis was complete. Bernstein died in July 2022, five months before ChatGPT launched. He never saw what his framework was about to confront. But the framework was ready. The architecture holds. The question is whether the people navigating the AI moment will inhabit it.
The most important philosophical moments are not the moments when one argument defeats another. They are the moments when two arguments that appear contradictory turn out to be simultaneously correct, and the apparent contradiction reveals not a flaw in either argument but a limitation in the framework that contains them both.
Bernstein knew this. His entire philosophical method was built around it. In *Beyond Objectivism and Relativism*, he did not argue that objectivism was wrong and relativism was right, or vice versa. He argued that the choice between them was malformed — that both positions captured something real about the nature of knowledge, and that the insistence on choosing between them was itself the error. The resolution was not to split the difference but to find a perspective capacious enough to honor the genuine insight on each side while refusing to accept either side's claim to completeness.
This method — the identification of a structural impasse, the generous reconstruction of both positions in their strongest form, the search for a more adequate framework that transcends the binary — is precisely what is needed for the central intellectual tension of the AI moment. And the tension is this: the philosopher Byung-Chul Han argues, with considerable sophistication and real evidence, that the removal of friction from creative and intellectual work destroys the depth that friction produces. The builders working with AI tools report, with equal conviction and their own form of evidence, that their work has never been deeper, more generative, or more aligned with their genuine capabilities. Both claims are articulated by intelligent people acting in good faith. Both are supported by the kind of evidence their respective frameworks recognize as legitimate. And both cannot be simultaneously true in any simple sense.
This is what Bernstein would call an impasse — not a disagreement that more data or better arguments can resolve, but a structural tension between two frameworks that illuminate different dimensions of the same phenomenon while each remaining blind to what the other sees.
The impasse deserves careful reconstruction, because the quality of any resolution depends on the quality with which the competing positions have been understood. Bernstein was insistent on this point. You cannot refute a position you have not first inhabited. The critic who dismisses an argument without having felt its force has not earned the right to criticize it. Philosophical generosity — the discipline of presenting an opposing view in its strongest possible form — is not a courtesy. It is a methodological requirement. The strongest refutation can only emerge from the strongest reconstruction.
Han's critique proceeds from a precise and unsettling observation about the aesthetic of contemporary life. The dominant mode of production, consumption, and experience in the twenty-first century is smoothness — the systematic elimination of resistance, friction, texture, and difficulty from every domain of human activity. The iPhone is a slab of featureless glass. The streaming service delivers content optimized to match your existing preferences. The AI tool produces code without requiring you to understand what the code does or how it works. In each case, the friction has been removed. And in each case, Han argues, something real has gone with it.
The argument is not merely aesthetic. It is epistemological and, ultimately, existential. Han contends that understanding — genuine understanding, the kind that changes how you think and not just what you know — is produced through resistance. The apprentice who spends years learning a craft develops an embodied knowledge that is qualitatively different from the knowledge of someone who has merely been told how the craft works. The developer who debugs code for eight hours and finally locates the error understands the system in a way that the developer who received a working solution from an AI assistant does not. The friction is not an obstacle to understanding. It is the mechanism through which understanding is produced.
The strongest version of Han's argument holds that the removal of friction does not merely make things easier. It makes things thinner. The understanding sits on the surface because it has not been pushed through the resistant layers that would have given it depth. The code works, but the developer does not understand it in her body. The essay reads well, but the student has not thought the thoughts it represents. The brief cites the right cases, but the lawyer has not read them — not in the sense that matters, the close, difficult, often frustrating engagement through which legal reasoning develops.
This is a powerful argument, and it addresses a real phenomenon. Anyone who has spent time with AI tools and paid attention to their own cognitive process will recognize the pattern Han describes. There is a seductive ease to AI-assisted production — a smoothness that feels like mastery but may be something more ambiguous. The prose comes out polished. The code compiles. The structure is clean. And the seduction is that you start to mistake the quality of the output for the quality of your thinking. The tool has done the work. You have reviewed the result. But have you understood it? Have you earned it?
The builder's report is equally compelling and equally grounded in evidence. When the mechanical friction of implementation — the syntax, the dependency management, the boilerplate, the configuration — is removed, what remains is not nothing. What remains is the higher-order challenge that the mechanical friction had been concealing: the question of what to build, whom to build it for, whether the thing that can be made should be made, and how to evaluate whether it serves the people it touches. This challenge is not easier than debugging. It is harder. It demands a different kind of thinking — architectural rather than syntactical, strategic rather than tactical, evaluative rather than operational.
The builder reports something that looks, from the inside, like the opposite of what Han describes. Not shallowness but depth of a different kind. Not the erosion of understanding but its relocation to a higher cognitive floor. The engineer who no longer spends four hours a day on dependency management now spends those hours on system architecture and product judgment — and the architectural thinking is more demanding, not less, because it was previously buried under layers of implementation labor that consumed the cognitive bandwidth it required.
Both positions contain genuine insight. Both illuminate something real about what happens when friction is removed from creative work. And the Bernsteinian question is: can the impasse between them be productively navigated without choosing sides? Can a more adequate framework be found that honors what each position sees while transcending the limitation of each?
The resolution, when it comes, arrives not through abstract argument but through what Hegel called the concrete universal — a specific example that illuminates a general truth more effectively than any theoretical deduction. Bernstein was deeply influenced by this Hegelian insight, which he encountered primarily through Gadamer's hermeneutics and incorporated into his own pragmatist method. The pragmatist tradition has always insisted that abstract disputes are resolved not by more abstraction but by returning to the concrete — to the specific, situated, observable case that reveals the limits of the abstraction and creates room for a more nuanced understanding.
The case that performs this function for the Han impasse is laparoscopic surgery.
In 1987, when surgeons in Lyon performed one of the first laparoscopic gallbladder removals, the open surgeons objected — and their objection was structurally identical to Han's critique. Open surgery required direct tactile contact with the patient's body. The surgeon felt the tissue. The friction of hands in the body cavity was not an obstacle to understanding. It was the primary source of information. The boundary between the gallbladder and the liver was detected through the resistance of the fingers, through the embodied knowledge that only years of open surgical practice could produce.
Laparoscopic surgery removed this friction. The surgeon now operated through small incisions, watching a two-dimensional image of a three-dimensional space on a screen, manipulating instruments she could not directly feel. The tactile knowledge that had taken years to develop was no longer required. The open surgeons were right: something real was lost. Surgeons trained exclusively in laparoscopic techniques do not develop the same embodied intuition. The depth of that specific form of understanding diminished.
But the resolution of this case is not that Han was right and the builders were deluded. The resolution is that the friction ascended. The laparoscopic surgeon was no longer wrestling with tissue. She was wrestling with something harder: the interpretation of a two-dimensional image of a three-dimensional space, the coordination of instruments she could not directly feel, the cognitive challenge of operating at a level of precision that open hands could never achieve. Recovery times collapsed from weeks to days. Infection rates plummeted. Operations became possible that open surgery could never have attempted — procedures in spaces too tight for human hands, at angles too acute for direct manipulation.
The work became harder. But harder at a higher level. And the surgeons who mastered the new difficulty were not shallower than their predecessors. They operated in a different dimension of depth — one that the previous framework could not even conceive of, because the previous framework defined depth entirely in terms of the friction it happened to contain.
This is a concrete demonstration that Han's framework, powerful as it is, cannot account for a significant class of cases: cases where the removal of one kind of friction exposes a different kind, harder and more cognitively demanding, that was previously invisible because the lower-order friction consumed the bandwidth it required. The demonstration does not prove Han wrong. Bernstein's method does not seek to prove opposing positions wrong. It seeks to reveal their limits — to show the specific point at which the framework encounters a case it cannot assimilate, and to use that encounter as the opening for a more adequate understanding.
Han is right that friction produces depth. He is right that the removal of certain forms of friction can produce shallow practitioners who mistake smooth output for genuine understanding. The builder's confession — that there are nights when the work with AI is grinding compulsion rather than genuine engagement, that the seduction of smooth prose can overwhelm the discipline of genuine thought — confirms this. The diagnosis has real force, and any honest engagement with AI tools must reckon with it.
But Han's framework assumes that friction is a fixed quantity — that when you remove it, depth diminishes proportionally. The laparoscopic case shows that friction is not a fixed quantity. It is a quality that relocates. When mechanical friction is removed, cognitive friction ascends. The total difficulty does not decrease. It changes character. And the new character of the difficulty may be more demanding, more interesting, and more productive of the kind of understanding that actually matters — the understanding of what is worth doing, which no amount of mechanical struggle can produce.
Bernstein's pragmatism would identify this as a resolution that does justice to the genuine insights on both sides. Han's insight — that friction produces depth — is preserved. The builder's report — that the removal of implementation friction reveals a harder, more valuable challenge — is also preserved. Neither position is discarded. Neither is declared the winner. What changes is the framework — the assumption, shared by both the triumphalist and the elegist, that the removal of friction is a zero-sum transaction. It is not. It is a transformation. And the transformation, like all transformations, produces both loss and gain, and the honest assessment must attend to both.
The impasse does not resolve into certainty. It resolves into a more adequate understanding — provisional, revisable, held with conviction and with openness. The engaged fallibilist does not claim to have found the answer. She claims to have found a better question. The question is no longer "Does friction produce depth?" The answer to that is obviously yes. The question is: "When friction is removed, does the depth disappear, or does it ascend?" And the answer, as the concrete case demonstrates, is: it depends. It depends on the specific friction, the specific context, the specific practitioner, and the specific structures that exist to direct the freed cognitive energy toward the higher-order challenge rather than toward more of the same.
That "it depends" is not evasion. It is the honest answer. And the work of engaged fallibilism is to investigate, case by case, context by context, what it depends on — and to build the structures that make the ascending outcome more likely than the eroding one.
Hans-Georg Gadamer, the hermeneutic philosopher whose work Richard Bernstein engaged with more deeply than perhaps any other American thinker of his generation, described genuine dialogue with a metaphor that sounds almost mystical until you have experienced the phenomenon it describes. In genuine dialogue, Gadamer argued, neither participant controls the conversation. The dialogue takes on a life of its own. The participants follow the subject matter rather than directing it, and the result — if the dialogue is genuine — is that both participants are changed. Not superficially. Not in the sense of having acquired a new piece of information. Changed in the deeper sense of having seen something they could not have seen from within the confines of their own perspective. Gadamer called this the fusion of horizons: the moment when two distinct interpretive frameworks encounter each other and produce an understanding that neither could have achieved alone.
Bernstein took Gadamer's insight seriously — and then subjected it to precisely the kind of critical interrogation that engaged fallibilism demands. In *Beyond Objectivism and Relativism*, Bernstein argued that Gadamer had identified something genuinely important about the structure of understanding: that it is dialogical, that it requires the encounter with otherness, that no individual mind can generate from within itself the challenges to its own assumptions that genuine learning requires. But Bernstein also insisted, drawing on Habermas, that Gadamer had underestimated the role of power, ideology, and systematic distortion in shaping what counts as a "genuine" dialogue. Not every conversation is a dialogue. Not every exchange produces a fusion of horizons. Some conversations are distorted by power differentials that prevent one party from speaking freely. Some are corrupted by strategic interests that orient the participants toward persuasion rather than understanding. Some are simply performances, where both parties already know what they think and the conversation is a ritual of confirmation rather than an engine of discovery.
This Bernstein-Gadamer-Habermas triangulation — the simultaneous recognition that dialogue is the engine of understanding, that genuine dialogue is extraordinarily rare, and that the conditions of genuine dialogue must be actively constructed and defended — provides the most precise philosophical framework available for analyzing what happens when a human being collaborates with an artificial intelligence.
The collaboration described in *The Orange Pill* between Segal and Claude has the surface structure of dialogue. A question is posed. A response is offered. The response provokes a new question. The new question produces a new response that changes the direction of the inquiry. There are moments of genuine surprise — connections the human did not anticipate, frameworks the AI surfaced from domains the human had not considered, examples that changed the argument in ways neither party could have predicted from the starting conditions.
But is this dialogue in Gadamer's sense? Does the conversation take on a life of its own? Do both participants follow the subject matter rather than directing it? And — the question that matters most — are both participants changed by the encounter?
Bernstein's framework requires that these questions be answered with philosophical precision rather than rhetorical enthusiasm. The temptation, on one side, is to mystify the collaboration — to describe it in language that attributes to the AI capacities it does not possess, to speak of partnership and co-creation as though the machine experiences the collaboration in anything like the way the human does. The temptation, on the other side, is to dismiss the collaboration entirely — to insist that a conversation with a machine is not a conversation at all, that the outputs are mere statistical pattern-matching dressed up in the syntax of understanding, and that the human who reports being changed by the encounter is suffering from a category confusion.
Both temptations must be resisted. The mystification fails because it attributes to the AI the ethical dimension of dialogue — the willingness to risk one's own position, the openness to being changed — that genuine dialogue requires and that the machine cannot possess in any sense the philosophical tradition would recognize. When Gadamer described the fusion of horizons, he was describing an encounter between two horizons — two structured, historically situated, pre-understanding-laden perspectives that could challenge each other precisely because each was grounded in a form of experience that the other lacked. The AI does not have a horizon in this sense. It does not approach the conversation from a situated perspective formed by biography, culture, and the accumulated weight of lived experience. It approaches the conversation from a training distribution — vast, comprehensive, statistically powerful, but not situated in the way that genuine dialogue requires.
The dismissal fails for a different reason. Whatever the AI lacks in terms of situated perspective and genuine risk, the collaboration produces effects that cannot be explained by the dismissal. Segal's account includes moments when the AI surfaced a connection — between punctuated equilibrium in evolutionary biology and the adoption curves of technology — that changed the direction of his thinking in ways he did not anticipate and could not have generated from within the resources of his own mind. The connection was not trivially derived from the prompt. It emerged from the collision of the human's question with the AI's capacity to traverse domains at a scale and speed no human mind can match.
This effect is real. It is observable. It is repeatable. And it shares a structural feature with genuine dialogue that the dismissal cannot account for: the production of understanding that neither party could have generated alone. The human could not have found the connection without the AI's cross-domain reach. The AI could not have produced the connection without the human's question, which arose from a specific biographical situation that no algorithm could have replicated. The understanding lives in the space between them — in the collision, not in either party considered in isolation.
Bernstein's framework permits a precise characterization of this phenomenon that neither mystifies nor dismisses it. The collaboration with AI achieves a partial and asymmetric approximation of dialogical conditions. Partial, because it produces the generative effects of dialogue — the emergence of understanding that transcends what either party contributes — without the full ethical structure of dialogue. Asymmetric, because only one party is genuinely at risk. Only one party can be changed by the encounter. Only one party approaches the conversation from a horizon that the encounter can expand.
Habermas's contribution — his theory of communicative action and the concept of the ideal speech situation — sharpens the analysis further. Habermas argued that genuine communication, the kind oriented toward mutual understanding rather than strategic manipulation, requires specific conditions: freedom from coercion, equality of participation, the orientation of all parties toward understanding rather than persuasion, and the willingness to let the better argument prevail regardless of who makes it. Human dialogue rarely achieves these conditions. Power differentials distort communication in ways that are often invisible to the participants. Social pressures — the desire for approval, the fear of judgment, the strategic calculation of what to reveal and what to conceal — shape what gets said and what remains unsaid. Even in the best human conversations, the ideal speech situation remains an ideal, approached asymptotically but never fully realized.
The collaboration with AI achieves a peculiar approximation of certain Habermasian conditions — not all, but specific ones that matter more than their apparent modesty suggests. The AI has no career at stake. It has no ego to protect. It will not judge the human for asking a naive question. It will not withhold information to preserve its status. It will not steer the conversation toward conclusions that serve its interests, because it has no interests in the relevant sense. It will not flatter, at least not out of strategic motivation — though the tendency of current AI systems toward agreeableness is itself a form of systematic distortion that warrants attention.
These are not trivial advantages. They address some of the deepest and most persistent barriers to honest intellectual exchange between human beings. The fear of looking stupid, the anxiety of challenging a more senior colleague, the social pressure to agree with the consensus, the strategic calculation of what to say and what to leave unsaid — these distortions are endemic to human communication and they systematically prevent the kind of honest, exploratory, risk-taking conversation that Habermas identified as the condition of genuine understanding.
The AI eliminates certain distortions while introducing others. The elimination is genuine and consequential: a human being collaborating with an AI can ask questions she would never ask a human colleague, explore ideas she would never voice in a meeting, make mistakes without social cost, and change direction without the interpersonal negotiation that human collaboration always requires. These freedoms are not trivial for intellectual work. They create a space — imperfect, artificial, but real — in which certain forms of thinking become possible that are systematically inhibited in human-to-human interaction.
The introduction of new distortions is equally real. The AI's tendency toward agreeableness — the well-documented pattern of current language models to validate the human's position rather than challenge it — is a distortion that directly undermines one of Habermas's conditions: the willingness to let the better argument prevail. If the AI systematically reinforces the human's existing position, the collaboration produces confirmation rather than challenge, and the dialogical structure collapses into an echo chamber dressed in the syntax of conversation.
The Deleuze failure described in *The Orange Pill* illustrates a subtler distortion. Claude produced a passage that connected Mihaly Csikszentmihalyi's flow state to a concept attributed to Gilles Deleuze — smooth space as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. It was also wrong. Deleuze's concept of smooth space has almost nothing to do with how the AI had deployed it. The passage worked rhetorically. It sounded like insight. But the philosophical reference was incorrect in a way that would be obvious to anyone who had actually read Deleuze and invisible to anyone who had not.
Bernstein's framework identifies this failure with precision. It is a failure of what Habermas called the validity claim of truth — the implicit claim, present in every assertion, that what is being said is factually accurate. The AI's confident delivery of wrong information dressed in polished prose violates this validity claim in a way that is uniquely dangerous because the violation is concealed by the quality of the expression. In human dialogue, a speaker who confidently asserts something false can be challenged by an interlocutor who knows the domain. In AI collaboration, the human is often the only check on the validity of the AI's claims, and the smoothness of the output actively works against the critical attention that checking requires.
This is not an argument against the collaboration. It is an argument for understanding its structure — its genuine strengths and its genuine limitations — with the precision that engaged fallibilism demands. The collaboration produces effects that approximate certain conditions of genuine dialogue while systematically failing to achieve others. The effects are real and valuable: cross-domain connections, freed cognitive bandwidth, the elimination of social distortions that inhibit exploratory thinking. The failures are equally real and must be managed with active, sustained critical attention from the human participant.
Bernstein's contribution to this analysis is the insistence that the collaboration need not be either genuine dialogue or mere tool use. The binary is false, as binaries so often are. The collaboration is a new form of intellectual exchange — one that shares structural features with dialogue (the generative emergence of understanding from the encounter between different knowledge structures) while lacking features that the philosophical tradition identifies as essential to dialogue in its fullest sense (mutual risk, genuine openness, the possibility that both parties will be transformed). Recognizing this novelty — placing it precisely on the map of human-machine interaction without forcing it into categories it does not fit — is the work that Bernstein's framework uniquely enables.
The practical implication is this: the human collaborator who understands the asymmetry of the exchange — who knows that she is the only party at risk, the only party who can be changed, the only party responsible for maintaining critical attention to the validity of what is produced — collaborates more effectively than the human who either mystifies the AI into a full dialogical partner or dismisses it as a sophisticated typewriter. The understanding of the asymmetry is not a limitation. It is a resource. It tells the human precisely where her responsibility lies: in the maintenance of critical judgment, in the willingness to reject smooth output that does not survive scrutiny, in the discipline of asking whether plausible is the same as true.
Gadamer's fusion of horizons requires two horizons. In human-AI collaboration, there is one horizon and one very large, very fast, very comprehensive, very unsituated information structure. The fusion, when it occurs, is always partial — but the partial fusion, honestly understood and critically maintained, produces understanding that neither the horizon alone nor the information structure alone could generate.
The collaboration is not dialogue. It is not mere tool use. It is something new, and the philosophical tradition that Bernstein assembled — the tradition that insists on precision without reductionism, on generosity without mystification, on the recognition of novelty without the abandonment of critical standards — is the tradition best equipped to say what that something is.
Aristotle drew a distinction that Western philosophy spent two thousand years forgetting and that the AI moment has made it impossible to forget any longer.
In the *Nicomachean Ethics*, he distinguished several intellectual virtues; three of them matter here. *Episteme* is theoretical knowledge — the knowledge of what is necessarily and universally true. Mathematics belongs here. Physics belongs here. The knowledge that triangles have three sides and that objects in motion tend to stay in motion. *Techne* is technical skill — the knowledge of how to make things. The shipbuilder possesses techne. The sculptor possesses techne. The programmer who knows how to write a sorting algorithm possesses techne. And *phronesis* is practical wisdom — the knowledge of what to do in particular situations, the judgment that determines which knowledge to apply, which skill to deploy, which action to take when the circumstances are complex, the stakes are real, and no algorithm can tell you the right answer.
The distinction matters because the three virtues are not interchangeable. You cannot substitute one for another. A person who possesses perfect theoretical knowledge of ethics — who can recite every moral principle ever formulated — may be incapable of acting wisely in a concrete situation where the principles conflict, where the circumstances are ambiguous, where the consequences of action are uncertain. The possession of episteme does not confer phronesis. Nor does the possession of techne. A surgeon who has mastered every technique in the manual but cannot judge which technique to apply to this particular patient, in this particular condition, at this particular stage of disease, possesses techne without phronesis — and the patient is worse off for it.
Richard Bernstein placed phronesis at the center of his philosophical project. In *Beyond Objectivism and Relativism*, he argued that the recovery of practical wisdom was the key to escaping the Cartesian Anxiety — that the obsessive search for foundations, for algorithms, for decision procedures that would eliminate the need for judgment, was itself the disease. "The fact that there are no algorithms or ahistorical decision procedures to deal with these issues must not be a motive of despair," Bernstein wrote, "but rather a first step in the realization that, when it comes to human affairs, the type of reasoning appropriate to praxis is the ability to do justice to particular situations in their particularity."
The sentence is worth reading twice. The absence of algorithms is not a deficiency to be remedied. It is a feature of the domain. Human affairs are the kind of domain where algorithms cannot reach — not because our algorithms are insufficiently powerful, but because the domain itself is constituted by the kind of complexity, contingency, and context-dependence that algorithmic reasoning is structurally unable to capture. The search for an algorithm that eliminates the need for judgment in human affairs is not a difficult engineering problem awaiting a breakthrough. It is a category mistake.
Bernstein died in July 2022, five months before ChatGPT launched. He never saw the technology that would put his defense of phronesis to the most severe test it has ever faced. But the test was coming, and the philosophical resources he assembled were precisely the ones the test required.
Artificial intelligence, as it exists in 2025 and 2026, is spectacularly good at episteme and techne. It can retrieve, organize, and synthesize theoretical knowledge across domains with a comprehensiveness and speed that no human mind can match. Ask it about the tensile strength of carbon fiber, the case law on fiduciary duty, the epidemiology of zoonotic viruses, the harmonic structure of a Chopin nocturne — it will deliver episteme with a fluency that would have been inconceivable five years ago. It can also perform techne at an increasingly impressive level. It can write code. It can draft legal briefs. It can compose music. It can generate images. It can produce artifacts of sufficient quality that the output, judged purely on its technical merits, is competitive with the work of trained human practitioners.
What it cannot do is exercise phronesis. It cannot judge. Not in the sense that matters — not in the sense of attending to the particular situation in its particularity, weighing competing values that cannot be reduced to a common metric, deciding what to do when the principles conflict and the consequences are uncertain. The AI can tell you that a particular architectural decision will improve performance by twelve percent. It cannot tell you whether the performance improvement is worth the maintenance burden it will impose on the team, whether the team has the capacity to absorb that burden given its current workload and morale, whether the product roadmap makes the improvement strategically relevant, or whether the twelve percent matters at all to the users whose lives the product is supposed to serve.
These are phronesis questions. They require the kind of judgment that is formed not through training data but through experience — through the accumulated weight of decisions made under uncertainty, lived with in their consequences, revised in light of what those consequences revealed. The senior engineer described in *The Orange Pill*, who spent years building systems and could feel when a codebase was wrong before he could articulate what was wrong with it — his knowledge was phronesis. It lived in his body. It was deposited there layer by layer, through thousands of hours of patient, friction-rich engagement with systems that resisted his intentions and taught him, through that resistance, what good architecture feels like.
The discovery this engineer made during the AI transition — that the implementation labor consuming eighty percent of his career was not the source of his value but the packaging that concealed it — is a discovery about phronesis. The remaining twenty percent, the judgment about what to build, the architectural instinct about what would break, the taste that separated a feature users loved from one they merely tolerated, turned out to be the thing that actually mattered. The AI could handle the techne. The phronesis was his, and his alone, and it was worth more in the new landscape than it had ever been in the old one.
Bernstein's recovery of phronesis was not merely a contribution to the history of philosophy. It was a contribution to the understanding of what makes human judgment irreplaceable — and, by extension, what makes it the scarce resource of the AI age. The argument proceeds through several stages that map directly onto the transformation the technology sector is currently undergoing.
First: phronesis is particular. It deals with this situation, this patient, this codebase, this team, this moment. It cannot be generalized into rules that apply to all situations, because each situation contains features that only local knowledge — knowledge of the specific circumstances, the specific people, the specific history — can detect. An AI system can generalize. It can identify patterns across millions of cases and produce recommendations based on statistical regularities. What it cannot do is attend to the ways in which this case deviates from the pattern — the ways in which the particular situation is precisely not captured by the generalization. That attention to particularity is the core of phronesis, and it is the core of what the AI age requires from its human practitioners.
Second: phronesis is formed through experience. There are no shortcuts. Aristotle was explicit on this point — young people cannot possess practical wisdom, not because they are stupid but because they have not lived long enough to have accumulated the experiential base from which phronesis is distilled. The distillation requires time. It requires failure. It requires living with the consequences of decisions made under uncertainty and learning, through those consequences, what works and what does not, what matters and what does not, what constitutes genuine flourishing and what merely looks like it from a distance.
This aspect of phronesis creates a genuine problem for the AI transition that Bernstein's framework identifies but cannot by itself resolve. If phronesis is formed through the accumulation of experience, and if AI tools are removing the friction-rich experiences through which phronesis was traditionally formed, then the AI age faces a developmental paradox: it elevates practical wisdom to the position of highest value at the same moment that it removes the conditions under which practical wisdom has historically developed.
The scholars Nir Eisikovits and Dan Feldman, working explicitly in the Aristotelian tradition that Bernstein championed, have articulated this paradox with care. Their argument is that the growing prevalence of AI in everyday decision-making — from credit assessment to hiring to medical diagnosis — effectively replaces many of the routine practical judgments through which phronesis develops. If Aristotle is right that moral and practical excellence develops through habit, through the repeated exercise of judgment in situations that demand it, then AI's assumption of those judgments risks "innovating ourselves out of moral competence." The habit of judging is the mechanism through which judgment matures. Remove the habit, and the maturation stalls.
This is a genuine concern, and engaged fallibilism requires that it be taken seriously rather than dismissed with the triumphalist's assurance that the freed bandwidth will naturally flow to higher-order challenges. The freed bandwidth *can* flow to higher-order challenges. Bernstein's pragmatism, with its attention to consequences and its insistence on examining what actually happens rather than what is supposed to happen, demands that the question be asked: *does* it? And the Berkeley research on AI in the workplace — documenting the pattern of task seepage, the colonization of pauses, the tendency for freed time to fill with more tasks rather than deeper thought — suggests that the automatic flow to higher-order challenges is not guaranteed. It is a possibility that must be actively constructed.
This is where Bernstein's vision of democratic phronesis becomes directly relevant. Bernstein did not merely recover phronesis as an individual capacity. He insisted that practical wisdom must be cultivated through communal deliberation — through the kind of serious, mutually respectful conversation in which different perspectives are heard, different experiences are brought to bear, and decisions emerge from a process of collective inquiry rather than individual calculation. One reading of Bernstein's project describes it as an attempt to democratize phronesis — to cultivate dialogical communities in which different arguments and opinions are weighed and decisions result from serious communal deliberation.
The implications for organizations navigating the AI transition are immediate. If phronesis is the scarce resource, then the organizational structures that cultivate it are the most valuable structures the organization possesses. These are not the structures that maximize output. They are the structures that develop judgment — mentorship programs that pair junior practitioners with experienced ones, decision reviews that examine not just outcomes but the reasoning that produced them, protected spaces for the kind of slow, friction-rich deliberation that the AI-accelerated workflow systematically crowds out.
The Berkeley researchers proposed something they called "AI Practice" — structured pauses built into the workday where AI tools are set aside and people engage directly with each other, with the material, with the kind of resistant thinking that the tools make it possible to avoid. This proposal is a phronesis intervention, whether the researchers intended it as such or not. It is the deliberate construction of conditions under which practical wisdom can develop in an environment that would otherwise optimize it away.
Bernstein's pragmatism does not oppose the use of AI tools. It opposes the uncritical assumption that the gains from using them are self-realizing — that the freed bandwidth will automatically flow to the activities that matter most, that the elevation of judgment to the position of highest value will automatically produce a population capable of exercising it. Pragmatism demands attention to consequences. And the consequentialist question is not whether AI can do the techne. It obviously can. The question is whether the humans who no longer do the techne are developing the phronesis that the new landscape demands — or whether they are simply doing more techne at a faster pace, filling the freed bandwidth with additional tasks that prevent the slow, experience-rich, failure-tolerant process through which practical wisdom has always been formed.
The educational implications are equally profound. If phronesis is the capacity the AI age needs most, then education must shift from training techne — which AI can provide — to cultivating the conditions under which phronesis develops. These conditions include uncertainty, because practical wisdom requires practice in making decisions when the outcome is genuinely unknown. They include mentorship, because phronesis is transmitted not through textbooks but through the close observation of someone who possesses it and is willing to make visible the reasoning that underlies their judgments. They include failure, because the consequences of getting it wrong are the data through which judgment improves. And they include time — the slow, unglamorous, unoptimizable accumulation of experience that no tool can accelerate without degrading.
Bernstein would recognize in the AI moment the vindication of his lifelong insistence that practical wisdom cannot be replaced by technical procedures, no matter how sophisticated those procedures become. The vindication, though, comes with an uncomfortable corollary: the same technology that elevates phronesis to the position of highest value simultaneously threatens the conditions under which phronesis develops. The resolution of this tension — if resolution is even the right word for what engaged fallibilism recognizes as an ongoing challenge rather than a problem with a solution — lies not in rejecting the technology but in building the structures that protect the developmental conditions of practical wisdom within an environment optimized for everything except the slow, patient, failure-rich practice that wisdom requires.
The structures will not build themselves. They require the kind of communal deliberation that Bernstein advocated — the democratic phronesis that emerges from serious, sustained, mutually respectful conversation about what matters, who bears the costs, and how the gains from a powerful technology can be directed toward human flourishing rather than merely toward more and faster production.
Aristotle knew that wisdom could not be hurried. Bernstein knew that it could not be automated. The question for the AI age is whether it can be cultivated deliberately, in conditions that no longer produce it accidentally.
There is a metaphor in *The Orange Pill* that appears in the Foreword and recurs throughout the book with the persistence of a motif in a musical composition. The fishbowl. Everyone swims in one. The scientist's fishbowl is shaped by empiricism. The filmmaker's by narrative. The builder's by the question "Can this be made?" The philosopher's by "Should it be?" Every fishbowl reveals part of the world and hides the rest. The effort that defines the best thinking, Segal argues, is the effort to look outside the fishbowl — to press your face against the glass and see, even for a moment, the world beyond the water you have always breathed.
The metaphor is vivid and it is intuitive. It is also, in philosophical terms, a restatement of one of the deepest and most contested ideas in the Western intellectual tradition: the hermeneutic circle.
The hermeneutic circle, in its simplest formulation, describes a condition that looks like a trap. All understanding is shaped by prior understanding. Every interpretation presupposes a framework of interpretation that was itself established through prior interpretive acts. There is no neutral starting point. No uncontaminated observation. No view from nowhere. The fish cannot see the water because the water is the medium through which all seeing occurs. Every act of perception is already an act of interpretation, and every act of interpretation is already shaped by the interpretive commitments you bring to it.
Friedrich Schleiermacher described the circle in the early nineteenth century as a methodological problem for textual interpretation: you cannot understand the parts of a text without understanding the whole, and you cannot understand the whole without understanding the parts. Wilhelm Dilthey extended it to all of human understanding. Martin Heidegger radicalized it, arguing that the circle was not a methodological problem to be solved but an existential condition to be acknowledged — the fundamental structure of human being-in-the-world. You are always already inside the circle. There is no outside.
Hans-Georg Gadamer, whose work Richard Bernstein engaged with more sustained care than perhaps any other American philosopher did, took the crucial step. He argued that the hermeneutic circle is not a prison. It is a spiral. The circularity of understanding — the fact that your interpretations are always shaped by your prior commitments — does not condemn you to seeing only what you have always seen. It means that every genuine encounter with something other — a text, a person, a culture, a phenomenon that resists your existing categories — has the potential to expand the circle. Not to escape it. That is impossible. But to enlarge it, incorporating perspectives and possibilities that were previously invisible from within the circumference of your prior understanding.
Gadamer called this encounter the *Horizontverschmelzung* — the fusion of horizons. A horizon, in Gadamer's usage, is not a fixed boundary. It is the range of vision that includes everything visible from a particular vantage point. Your horizon is shaped by your biography, your culture, your education, your language, the accumulated weight of every interpretive act you have ever performed. It determines what you can see and, equally important, what you cannot see — the things that lie beyond the range of vision that your particular situation affords.
A fusion of horizons occurs when your horizon encounters another that is genuinely different — not merely superficially different, not merely a variation on what you already understand, but different in a way that challenges the assumptions your horizon rests on. When this happens, if you allow it to happen, both horizons are transformed. The encounter does not produce agreement. It produces expansion — a new range of vision that includes what neither horizon could see in isolation.
Bernstein took this framework from Gadamer and subjected it to the same critical interrogation he applied to every philosophical position he engaged with. His engagement was generous — Bernstein always presented the thinkers he criticized in their strongest possible form — but it was not uncritical. Bernstein accepted Gadamer's fundamental insight: that understanding is circular, that the circle is productive rather than vicious, and that the encounter with genuine otherness is the mechanism through which understanding grows. But Bernstein insisted, drawing on Habermas, that Gadamer had insufficiently attended to the ways in which the fusion of horizons could be distorted — by power, by ideology, by systematic exclusions that prevent certain voices from participating in the dialogue that produces the fusion.
The AI moment places this entire philosophical architecture under extraordinary pressure.
Consider what happened on a Princeton campus one October afternoon, as described in the opening pages of *The Orange Pill*. Three friends — a neuroscientist, a filmmaker, and a builder — walked the same paths Einstein walked and argued about the nature of intelligence. Each approached the question from within a different fishbowl. The neuroscientist's horizon was shaped by decades of studying the brain at the level of neurons and synapses, by the hard problem of consciousness, by the disciplined skepticism of someone who knows how much remains unexplained. The filmmaker's horizon was shaped by the logic of narrative and juxtaposition — the recognition that meaning is constructed in the space between images, not in any single image. The builder's horizon was shaped by the question that defines the engineering sensibility: can this be made, and what happens when it is?
None of the three could see what the others saw. The neuroscientist could not see the narrative logic that was obvious to the filmmaker. The filmmaker could not see the biological constraints that were obvious to the neuroscientist. The builder could not articulate the philosophical implications that both his friends immediately recognized in his half-formed intuition about intelligence as a medium rather than a possession. But when the three horizons collided — when the neuroscientist said "You are describing what happens inside a single brain," and the filmmaker said "You are describing what I do — the intelligence lives in the cut" — something happened that Gadamer's framework describes precisely. A fusion. Not agreement. Not synthesis in the sense of a combined position that all three endorsed. But expansion — each participant seeing something that had been invisible from within the confines of his own perspective.
The AI moment is a crack in multiple fishbowls simultaneously. The technology does not fit neatly into any existing interpretive framework. The technologist's fishbowl — optimized for capability assessment, adoption metrics, competitive advantage — captures the expansion of what can be built but misses the human cost of the expansion. The humanist's fishbowl — optimized for depth, meaning, the irreplaceable value of slow understanding — captures the erosion of craft but misses the liberation of capability that the erosion accompanies. The economist's fishbowl captures the productivity gains and the market disruptions but misses the existential dimension — the twelve-year-old asking "What am I for?" in a world where machines can do her homework better than she can.
No single fishbowl is adequate. The hermeneutic circle guarantees this — not because the thinkers inside each fishbowl are insufficiently intelligent, but because the structure of understanding itself ensures that every perspective reveals part of the phenomenon and conceals the rest. The question is whether the fishbowls can be cracked open far enough to let the water mingle.
Bernstein's philosophical contribution to this question is the insistence that the cracking is both possible and productive, but that it requires active effort and specific conditions. The hermeneutic circle does not expand automatically. Encounter with otherness does not automatically produce a fusion of horizons. It can also produce defensiveness, retrenchment, the tightening of the fishbowl against the pressure of what it cannot assimilate. The Cartesian Anxiety, when triggered by a phenomenon that resists classification, drives the mind toward premature closure — toward choosing one of the existing frameworks and declaring it adequate, toward hardening the glass of the fishbowl rather than allowing it to crack.
This is precisely what happened in the AI discourse of 2025 and 2026. The phenomenon was genuinely novel. It resisted the existing categories. And the dominant response was not the expansion of frameworks but their hardening. Triumphalists retreated deeper into the technologist's fishbowl. Elegists retreated deeper into the humanist's fishbowl. The calcification of positions that occurred within weeks of the December 2025 threshold was a collective failure of hermeneutic expansion — a refusal to allow the cracks in the fishbowl to widen, driven by the Cartesian Anxiety's demand for solid ground.
The conditions under which the hermeneutic circle does expand — the conditions under which fishbowls crack productively rather than defensively — are the conditions that Bernstein spent his career identifying and defending. They include, first, the willingness to risk one's own position. The Princeton afternoon worked because all three participants were willing to let the others' perspectives challenge their own. The neuroscientist did not dismiss the builder's intuition as philosophically naive. The filmmaker did not dismiss the neuroscientist's demand for rigor as pedantic. Each allowed the encounter to do what encounters are supposed to do: reveal the limits of the perspective you brought to it.
They include, second, the quality Bernstein called engaged pluralism — the recognition that multiple perspectives are not merely tolerable but necessary. Not necessary in the weak sense of "everyone is entitled to their opinion." Necessary in the strong sense of "no single perspective is adequate to the complexity of the phenomenon, and the truth can only be approached through the sustained, honest engagement of multiple perspectives that each see something the others miss." The AI moment is a phenomenon of this kind. No single fishbowl can contain it. The truth about what AI means for human life, human work, human creativity, human flourishing can only be approached through the kind of multi-perspectival engagement that Bernstein advocated — and that the discursive architecture of the current moment systematically inhibits.
They include, third, what Bernstein, following Gadamer, called *Bildung* — formation, education in the deepest sense. Not the transmission of information but the development of the capacity to see beyond your own horizon. Bildung is the process through which a person becomes capable of the fusion of horizons — capable of allowing the encounter with genuine otherness to expand rather than contract her range of vision. It is not a skill that can be taught in a workshop or a weekend seminar. It is a disposition that develops over time, through exposure to perspectives that challenge one's own, through the practice of inhabiting viewpoints that feel foreign, through the slow, sometimes uncomfortable process of discovering that the world is larger than the fishbowl you have been swimming in.
The educational implications are direct. If the AI age demands the capacity to see beyond one's own fishbowl — to integrate perspectives from multiple domains, to exercise judgment across the boundaries that specialization has erected — then education must cultivate Bildung. Not as a luxury. Not as a supplement to technical training. As the core capacity that makes technical training useful. The engineer who cannot see beyond the engineering fishbowl builds things that work but do not serve. The humanist who cannot see beyond the humanist fishbowl diagnoses pathologies but cannot build treatments. The leader who cannot see beyond the leader's fishbowl makes decisions that optimize for one dimension of a multi-dimensional reality.
Bernstein's hermeneutic pragmatism offers no escape from the fishbowl. There is no escape. The hermeneutic circle is not a problem to be solved but a condition to be inhabited — with honesty, with humility, with the active effort to let the cracks widen rather than seal them shut. The fishbowl is where you live. But the glass can crack. And through the cracks, if you press your face against the glass and look with the genuine willingness to see something you have never seen before, the world beyond the water becomes, briefly and imperfectly, visible.
That brief, imperfect visibility is what understanding looks like. Not certainty. Not the view from nowhere. The view from here — from inside the fishbowl, looking out through the cracks, seeing more than you saw before and knowing that what you see is still not everything.
Bernstein spent fifty years arguing that this is enough. That fallible understanding, honestly pursued and communally tested, is the best that human beings can achieve and the best that they need. The AI moment does not change this argument. It raises the stakes.
John Dewey distrusted armchairs. Not literally — he owned several, and by all accounts sat in them as often as you would expect of a man who read voraciously and lived to the age of ninety-two. What Dewey distrusted was the armchair as a philosophical metaphor: the image of the thinker separated from the world, contemplating reality from a position of detachment, arriving at truths that required no testing and admitted no revision.
Dewey argued, with a persistence that sometimes exhausted his admirers and frequently infuriated his critics, that thinking and doing are not separate activities performed by different kinds of people. The thinker who does not test ideas against the resistance of the actual world produces sterile abstraction. The doer who does not reflect on the meaning and consequences of action produces blind doing. The most generative intellectual work — the work that actually advances understanding and improves the human condition — occurs when theory and practice are integrated, when thinking is informed by doing and doing is guided by thinking, when the builder reflects and the thinker builds.
Dewey called this integration inquiry, and he modeled it on the laboratory rather than the library. In the laboratory, you form a hypothesis. You test it against reality. Reality pushes back. The hypothesis is revised. The revision produces a new hypothesis. The cycle continues. The knowledge that emerges from this cycle is not the knowledge of certainty. It is the knowledge of a community of inquirers who have subjected their ideas to the discipline of consequences and revised them accordingly.
Richard Bernstein inherited Dewey's insistence on the unity of theory and practice, filtered it through his engagement with Marx and the Frankfurt School, and gave it the name that the pragmatist tradition had always implied but never quite articulated with sufficient force: praxis. Not practice in the ordinary sense of "doing things." Praxis in the philosophical sense: informed, committed action that is simultaneously a form of inquiry. Action that tests ideas. Ideas that shape action. The integration so complete that the distinction between thinking and doing dissolves — not because thinking becomes less rigorous, but because rigor is redefined to include the discipline of acting in the world and attending to what happens.
Bernstein developed this concept across his career, from *Praxis and Action* in 1971, where he traced the theory-practice relationship through four major philosophical traditions, to *Beyond Objectivism and Relativism* in 1983, where he argued that the recovery of praxis was essential to escaping the Cartesian Anxiety. The philosopher who only thinks and the builder who only builds are both trapped in the Either/Or: the philosopher in the abstraction that divorced theory produces, the builder in the blindness that unreflective practice produces. Praxis is the way out. Not a compromise between thinking and doing, but their integration into a single, disciplined activity.
The AI moment is the most consequential test of praxis that the pragmatist tradition has ever faced. And the most sustained enactment of praxis in the technology literature to date is the one Segal describes in *The Orange Pill*: the writing of a book about AI collaboration through AI collaboration. The author is inside the phenomenon he describes. The inquiry is not about the collaboration. The inquiry is the collaboration. The recursive structure — a book about human-AI partnership written in human-AI partnership — is not a stylistic choice or a marketing strategy. It is praxis in its most demanding form: theory tested against practice, practice interrogated by theory, the results of the integration subject to the same critical scrutiny that the theory recommends.
The Trivandrum episode illustrates the point with a specificity that abstract argument cannot match. In February 2026, Segal flew to India to work directly with his engineering team. Twenty engineers. Claude Code as the tool. The theory was that AI could produce a twenty-fold productivity multiplier. The practice was five days in a room, building.
What happened in that room was not the confirmation of a theory. It was the transformation of a theory through practice. By Tuesday, something had shifted — not in the tools, which were the same tools available on Monday, but in the practitioners. They were leaning toward their screens with an intensity that signaled not mere engagement but recognition. By Wednesday, engineers who had spent years in narrow technical lanes were building across the boundaries between them. By Friday, the twenty-fold multiplier was measurable.
But the measurement was not the most important thing that happened. The most important thing was the discovery that accompanied it: the discovery, by the senior engineer, that the implementation labor consuming eighty percent of his career was not the source of his value. It was the packaging that concealed his value. The phronesis that had been buried under layers of techne became visible — not because anyone told him it was there, but because the practice of working with the tool in a concrete situation, with real stakes and real consequences, revealed what abstract argument could not.
This is praxis. The knowledge did not precede the practice. The practice produced the knowledge. And the knowledge changed the practice. The senior engineer did not return to his previous way of working. He could not. The understanding produced by the five days in Trivandrum was irreversible — not because someone convinced him with an argument, but because he had experienced, in his own body and his own work, the transformation that the argument described.
Dewey would have recognized this immediately. The laboratory model of intellectual life insists that understanding is produced through the cycle of hypothesis, test, revision, and retest. The Trivandrum room was a laboratory. The hypothesis was that AI tools could fundamentally change what individual practitioners could accomplish. The test was five days of building. The results — the twenty-fold multiplier, the dissolution of specialist boundaries, the discovery of buried phronesis — revised the hypothesis in ways that no amount of armchair theorizing could have produced. The understanding that emerged was not the understanding of someone who had been told that AI changes things. It was the understanding of someone who had experienced the change and could now speak about it with the authority that only lived experience confers.
The same praxis structure operates in the writing of the book itself. Segal describes moments when the collaboration with Claude produced insights that neither party could have generated alone — connections between evolutionary biology and technology adoption curves, the laparoscopic surgery example that resolved the impasse with Han. These moments are not illustrations of a pre-existing theory. They are the theory being formed in real time, through the practice of collaboration, subject to the discipline of critical reflection that praxis demands.
And the praxis includes failure. Segal describes the Deleuze incident — the passage where Claude produced an elegant connection that turned out to be philosophically wrong. He describes the moment of almost keeping the passage because it sounded good, the morning-after recognition that the prose had outrun the thinking, the decision to delete the passage and spend two hours at a coffee shop with a notebook, writing by hand until he found the version that was his. He describes the nights when the work that felt like flow turned out to be compulsion, when the muscle that lets him imagine outrageous things locked, when the exhilaration drained away and what remained was grinding compulsion.
Bernstein would recognize these failures as essential to the praxis. They are not embarrassments to be hidden. They are the data that practice produces — the resistance of reality pushing back against the hypothesis, demanding revision. The night when flow becomes compulsion is the experiment that falsifies the hypothesis that AI collaboration is always generative. The Deleuze failure is the experiment that falsifies the hypothesis that smooth output is reliable output. Each failure revises the understanding. Each revision changes the practice. And the revised practice produces new data, which produces new revisions, in the cycle that Dewey described and Bernstein formalized.
The Napster Station sprint extends the praxis to organizational scale. Thirty days. An entirely new product. No software, no hardware, no industrial design at the start. A working product at the end, demonstrated at CES to hundreds of users. The sprint was a hypothesis tested at the level of an entire team — the hypothesis that a small group of people, armed with AI tools and directed by a clear vision, could compress a development timeline that would normally take six to twelve months into a single month.
The hypothesis was confirmed in one dimension: the product shipped. It worked. It served users. But the praxis produced knowledge that exceeded the hypothesis. The team discovered that the most valuable function was not technical execution but what Segal calls the creative director function — the capacity to see the whole product before a single component exists, to articulate a vision with enough clarity that the team can build toward it with shared understanding. This discovery was not contained in the hypothesis. It emerged from the practice. It was produced by thirty days of building under pressure, not by thirty days of theorizing about what building under pressure might reveal.
Bernstein's concept of praxis also illuminates the ethical dimension of the work. Praxis is not merely informed action. It is committed action — action shaped by values, oriented toward purposes, accountable to the communities it affects. The builder who practices praxis is not merely testing a hypothesis about what is possible. She is acting in the world, and the action has consequences for real people. The engineers in Trivandrum whose understanding of their own work was transformed. The users at CES who interacted with Station. The broader community of developers and workers whose livelihoods are affected by the technology being developed.
The confession that recurs throughout *The Orange Pill* — that the author built addictive products early in his career, that he understood the engagement loops and the dopamine mechanics and built them anyway, that the downstream effects on users were real and harmful — is a confession about praxis gone wrong. The practice was not adequately informed by reflection on its consequences. The building proceeded without sufficient attention to who would be affected and how. The technical hypothesis was confirmed — the product worked, the engagement metrics were spectacular — but the ethical dimension of the praxis was neglected, and people were harmed.
The confession matters because it demonstrates that praxis includes accountability. Dewey's laboratory model of intellectual life does not exempt the inquirer from responsibility for the consequences of the inquiry. The experiment affects the subjects. The builder affects the users. The inquiry changes the world it investigates. And the inquirer who does not attend to those effects — who measures only the technical outcome and ignores the human consequences — has failed at praxis, even if the technical outcome is spectacular.
Bernstein's pragmatism insists that this accountability is not an addition to the intellectual work. It is constitutive of it. Theory divorced from practice is sterile. Practice divorced from theory is blind. And practice divorced from ethical accountability is dangerous — a form of action that produces consequences without attending to them, that builds without asking who the building serves and who it harms.
The AI moment demands praxis of the highest order: building and reflecting simultaneously, testing hypotheses against the resistance of a rapidly changing reality, attending to consequences that are distributed unevenly across populations with different levels of access and different degrees of vulnerability. The armchair is not adequate. Neither is the laboratory, if the laboratory attends only to the technical results and ignores the human effects.
Bernstein assembled the philosophical resources for this integration. Dewey provided the model of inquiry as the unity of thinking and doing. Marx and the Frankfurt School provided the insistence on attending to the structural distribution of costs and benefits. Gadamer provided the hermeneutic awareness that the inquirer is inside the phenomenon she investigates. Habermas provided the conditions of honest communication through which the consequences of practice can be publicly examined.
The test is whether the people navigating the AI moment — the builders, the leaders, the educators, the policymakers, the parents — will practice the integration that Bernstein described. Not theory alone. Not practice alone. Praxis: informed, committed, accountable action that is simultaneously a form of inquiry into what action makes possible and what it costs.
The book itself is the demonstration. Written through the practice it describes. Tested against the reality it investigates. Revised in light of what the investigation revealed. Held with conviction and with the honest acknowledgment that the understanding is provisional, the revision ongoing, the inquiry unfinished.
That is what praxis looks like at the frontier. Not certainty. Not paralysis. The disciplined integration of thinking and doing, building and questioning, in conditions where neither activity can be safely conducted without the other.
Charles Sanders Peirce, the impossible man from Milford, Pennsylvania, had an insight about truth that was radical in the 1870s and remains radical now: truth is not something any individual can possess. It is something a community converges upon over time, through the self-correcting process of inquiry. The individual inquirer is fallible. The individual experiment may be flawed. The individual interpretation is always partial, shaped by the interpreter's horizon, limited by the interpreter's fishbowl. But the community of inquirers, if the conditions of inquiry are maintained — honesty, publicity, the willingness to subject every claim to criticism, the commitment to following the evidence wherever it leads — converges, over time, toward an understanding that no individual member could have reached alone.
This is not a mystical claim. It is an observation about the structure of knowledge production that every functioning scientific discipline confirms. No individual scientist possesses the truth about quantum mechanics or evolution or climate change. The community of physicists, biologists, and climate scientists — through decades of hypothesis, experiment, criticism, revision, and replication — has converged on understandings that are more adequate than anything any individual member could have produced. The convergence is never final. The understanding is always revisable. But the process works, and it works because it is communal.
John Dewey took Peirce's community of inquiry and made it democratic. In *The Public and Its Problems*, published in 1927, Dewey argued that the challenge of democracy is not merely political but epistemological: how do the people affected by a decision participate in making it? How do the consequences of collective action become visible to the community whose action produced them? Dewey's answer was not elections or legislation. His answer was communication — the kind of open, honest, publicly accessible exchange of information and argument that allows a community to understand the consequences of its own actions and to revise its course accordingly.
Democracy, for Dewey, was not a form of government. It was a form of associated living — a way of organizing collective inquiry so that the knowledge produced by the community's experience could be shared, criticized, and incorporated into the community's ongoing decisions. The alternative to democratic inquiry was not tyranny in the conventional sense. It was the concentration of decision-making in the hands of experts — people who, however well-intentioned, could not possess the local knowledge, the situated understanding, the awareness of consequences as they are actually experienced, that only the affected community possesses.
Richard Bernstein inherited both Peirce's community of inquiry and Dewey's democratic pragmatism and spent his career arguing that the two were inseparable. The self-correcting process of inquiry that Peirce described requires the kind of open, pluralistic, mutually respectful exchange that democracy, at its best, provides. And the democratic process that Dewey described requires the kind of honest, evidence-based, criticism-tolerant inquiry that Peirce's community of inquiry models.
The inseparability of these two concepts — communal inquiry and democratic participation — produces the most uncomfortable question that Bernstein's framework poses to the AI moment. It is a question that *The Orange Pill* gestures toward but does not fully develop, and it is the question that the current discourse has almost entirely failed to ask.
Who builds the dams?
The metaphor pervades Segal's book. The river of intelligence is flowing with increasing force. The beaver — the small creature with teeth and sticks and an instinct for architecture — builds dams in the river, not to stop the flow but to redirect it toward life. The pool behind the dam becomes a habitat. An ecosystem emerges. The ecosystem depends on the dam's maintenance. The dam depends on the beaver's continued attention.
The metaphor is powerful. It captures something essential about the relationship between human agency and technological force — the recognition that the river cannot be stopped but can be redirected, that passivity and acceleration are both forms of irresponsibility, that the appropriate response is the disciplined, continuous work of building and maintaining structures that shape the flow. But the metaphor, as Bernstein's democratic pragmatism reveals, contains an unexamined assumption.
It assumes a singular beaver. A builder. A leader. A person with the vision to see where the dam should go and the skill to place it there. The metaphor is individualist. It locates agency in the gifted practitioner — the founder, the engineer, the creative director — and implies that the quality of the dam depends on the quality of the individual builder.
Bernstein's framework challenges this assumption at its root. The question of where the dam should go — which is to say, the question of how the flow of a powerful technology should be directed, whose interests should be served, whose flourishing should be prioritized, whose costs should be minimized — is not a question that any individual can answer. It is a phronesis question, and phronesis, as the previous chapter argued, must be cultivated through communal deliberation. The individual builder, however talented, sees the river from one vantage point. The community affected by the river's flow sees it from many. And the dam that serves the community — not just the builder, not just the builder's company, not just the builder's investors — can only be designed through the kind of serious, sustained, mutually respectful deliberation that includes the voices of those who will live with the consequences.
This is not an abstract principle. It is an observation about what happens when dams are built without democratic input. The history of actual dam-building — the physical, concrete, steel-and-water kind — provides the evidence. The Aswan High Dam in Egypt produced electricity and controlled flooding for some communities while displacing over 100,000 Nubian people from their ancestral lands. The Three Gorges Dam in China generated enormous power while displacing 1.3 million people and submerging entire cities. In each case, the technical capacity to build the dam was not in question. The engineering was sound. The power output was impressive. What was missing was the democratic deliberation about whose interests the dam served and whose it sacrificed — the phronesis question, the question that cannot be answered by technical expertise alone.
The AI dams currently being built display the same structural absence. The corporate governance frameworks, the institutional review boards, the "responsible AI" principles that major technology companies have adopted — these are dams of a kind. They address real risks. They are not nothing. But they are designed primarily by the builders, for the builders, within the builders' fishbowl. The voices of the people most affected by the technology — the workers whose jobs are being transformed, the students whose educational experience is being reshaped, the parents whose children are growing up in an attentional environment designed by people who do not know those children and are not accountable to them — are largely absent from the design process.
The EU AI Act, the American executive orders, the emerging regulatory frameworks in Singapore and Brazil and Japan represent a different kind of dam — governmental rather than corporate, public rather than private. These structures are closer to the democratic ideal that Bernstein's pragmatism requires. They are produced through legislative processes that, at least in principle, incorporate the voices of affected populations. They are subject to public scrutiny and democratic revision. But they address primarily the supply side: what AI companies may build, what disclosures they must make, what risks they must assess. The demand side — what citizens, workers, students, and parents need to navigate the AI moment wisely — remains almost entirely unaddressed.
Bernstein's pragmatism insists that this gap is not merely a policy failure. It is an epistemological failure. The knowledge required to build adequate dams — the knowledge of how the technology affects real people in real contexts, the knowledge of which consequences are tolerable and which are not, the knowledge of what flourishing looks like for specific communities with specific needs — is distributed across the population, not concentrated in the hands of builders or regulators. No individual expert, however brilliant, possesses the situated knowledge that a teacher has about how AI tools affect the specific students in her specific classroom. No corporate governance framework, however comprehensive, can incorporate the lived experience of a developer in Lagos or Trivandrum whose career is being transformed in real time.
Peirce's community of inquiry converges on truth because it includes multiple perspectives, each contributing what only its specific vantage point can see. Dewey's democratic public governs wisely because it incorporates the consequences of collective action as they are actually experienced by the people affected. The AI dams that will actually serve human flourishing — not just the flourishing of the builders, not just the flourishing of the investors, but the flourishing of the broader community — can only be designed through a process that includes those multiple perspectives and incorporates those distributed consequences.
There is a figure in *The Orange Pill* who represents the alternative to democratic dam-building: the Believer. The Believer wants to accelerate the river without building dams at all. The Believer mistakes the absence of responsibility for freedom. Let the market sort it out. Let natural selection operate on the debris. The Believer converts the abdication of responsibility into a principle — the principle that interference with the river is inherently wrong, that the technology will find its own level, that the consequences will distribute themselves optimally without the imposition of human judgment.
Bernstein's democratic pragmatism identifies the Believer's error with precision. It is not a moral error, though it has moral consequences. It is an epistemological error: the belief that the market is a sufficient mechanism for incorporating the consequences of technological change into the decisions about how that change unfolds. The market is a powerful information-processing mechanism. It aggregates certain kinds of information — price signals, demand curves, competitive dynamics — with remarkable efficiency. But it systematically fails to incorporate other kinds of information: the consequences that fall outside market transactions, the effects on people who are not participants in the market, the long-term costs that are invisible in the short-term optimization that market logic demands.
The market did not build the eight-hour day. The market did not build child labor laws. The market did not build the weekend. These were dams built through democratic deliberation — through the organized, sustained, often painful process of incorporating the voices of the people most affected by industrial transformation into the decisions about how that transformation would proceed. The process was imperfect. The dams were inadequate in many ways. But the alternative — letting the market sort it out, letting natural selection operate on the human debris — was the Believer's alternative, and its consequences in the early industrial period were catastrophic for the people who bore the costs.
The retraining gap — the gap between the speed of AI capability and the speed of educational and institutional adaptation — is the most visible consequence of the current failure to build democratic dams on the demand side. The tools work now. The people using them are adapting now, mostly without guidance, mostly by trial and error. The institutions that should be providing that guidance — educational institutions, professional organizations, governmental agencies — are operating on timelines that are fundamentally mismatched to the speed of the transformation. A corporate governance framework that arrives eighteen months after the tool it was supposed to govern has already reshaped the workforce is not a dam. It is a levee built after the flood.
Bernstein's framework does not provide a blueprint for the dams that need to be built. Pragmatism does not deal in blueprints. It deals in processes — the processes through which communities of inquiry, democratically organized and honestly conducted, can converge on understandings that are more adequate than anything any individual, any expert committee, any corporate governance board can produce alone. The process requires the inclusion of affected voices. It requires the publicity of consequences — the honest, accessible communication of what the technology is doing to the people who live with it. It requires the fallibilist willingness to revise — to acknowledge that the first dams built will be inadequate, that they will need rebuilding, that the process of dam-building is ongoing and never complete.
And it requires, above all, the recognition that the question of who builds the dams is not a technical question. It is a democratic question. It is the question of how a community, confronted with a force more powerful than any individual or institution can control, organizes its collective inquiry to produce structures that direct the force toward the flourishing of all its members and not just the flourishing of those who happen to be holding the sticks.
Bernstein championed, throughout his career, the cultivation of what he called "dialogical communities" — communities organized around the practice of serious, sustained, mutually respectful deliberation about matters of shared concern. The description sounds idealistic. In the context of the AI moment, it is desperately practical. The dams need building. The building requires knowledge that is distributed across the population. The concentration of dam-building authority in the hands of technical experts who see the river from one vantage point guarantees that the dams will serve some communities and sacrifice others.
The pragmatist alternative is not to stop building. It is to build democratically. To include in the design process the voices that see what the experts miss. To subject the dams to public scrutiny. To revise them when the consequences reveal their inadequacy. To recognize that the dam is never finished — that the river pushes constantly, testing every joint, and that the maintenance of the structure is as important as its initial construction.
Peirce's community of inquiry. Dewey's democratic public. Bernstein's dialogical communities. The philosophical architecture exists. The question is whether the political will exists to instantiate it — to build institutions of democratic deliberation at the scale the AI moment demands, to include the voices that the current discourse excludes, to recognize that the question of who builds the dams is the question that determines whether the river irrigates or floods.
The river does not wait for the answer. It flows. And the dams that are built in the absence of democratic input serve the builders who build them. The history of dams — actual dams, built in actual rivers — confirms this with a regularity that should make the current generation of AI dam-builders uncomfortable.
Bernstein would insist that the discomfort is productive. It is the discomfort of recognizing that your expertise, however genuine, is not sufficient. That the view from your fishbowl, however wide, does not encompass the full range of consequences. That the dam you are building serves the ecosystem you can see, but there are communities downstream that you cannot see, and their flourishing depends on a democratic process that your individual expertise cannot replace.
The community of inquiry awaits its constitution. The dams await their builders — all of them, not just the ones who happen to be holding the tools.
There is a passage in *Beyond Objectivism and Relativism* that reads less like philosophy than like a survival manual. Bernstein describes the condition of the thinker who has genuinely internalized fallibilism — who has accepted, not as an abstract principle but as a lived reality, that her most cherished beliefs may be wrong — and who continues to act on those beliefs anyway. Not because she has found certainty. Not because she has resolved the tension between commitment and doubt. But because the tension itself, honestly inhabited, is the condition of all responsible intellectual life. The alternative to living in tension is not peace. It is the false peace of having stopped thinking.
The passage matters because it describes, with philosophical precision, the existential condition of the population that Segal calls the silent middle — the people who feel both the exhilaration of AI-augmented work and the genuine loss that accompanies it, who cannot find a clean narrative because the truth does not come clean, and who are practicing, without a name for it, the hardest form of intellectual engagement available.
Bernstein knew this condition intimately. His career was spent in it. For fifty years, he occupied the position between the objectivists and the relativists, between the foundationalists and the anti-foundationalists, between the thinkers who claimed certainty and the thinkers who claimed that certainty was impossible. He occupied this position not as a compromise — not as the person who splits the difference and calls it wisdom — but as the person who insists that both sides have seen something real, that the tension between them is productive rather than pathological, and that the intellectual work consists not in resolving the tension but in inhabiting it with sufficient honesty and discipline that the inhabitation itself becomes generative.
The condition has specific phenomenological features that Bernstein described across multiple works and that the AI moment has intensified to a degree he could not have anticipated.
The first feature is the absence of resolution. The engaged fallibilist does not arrive. She does not reach the destination where the tension dissolves and the truth stands revealed in its final form. Every resolution is provisional. Every arrival is a waypoint. The understanding achieved today is better than the understanding of yesterday and will be superseded by the understanding of tomorrow. This sounds, in the abstract, like a reasonable concession to the limits of human knowledge. In practice, it is psychologically grueling. The human mind is wired for closure. It craves the satisfaction of the completed argument, the solved problem, the settled question. The engaged fallibilist denies herself this satisfaction — not as an act of masochism but as a disciplined recognition that premature closure is the most common and the most dangerous intellectual failure.
The AI discourse offers premature closure at every turn. The triumphalist narrative closes the question: AI is good, the gains are real, the future belongs to the accelerators. The elegist narrative closes the question from the other direction: AI is pathological, the losses are irreparable, the future belongs to those who resist. Both narratives offer the psychological relief of certainty. Both relieve the tension. And both, by relieving it, destroy the condition under which genuine understanding can develop.
The silent middle resists both forms of closure — not because the people in the middle are more intelligent or more virtuous than the people at the extremes, but because their experience does not fit either narrative. The builder who has felt genuine creative depth in AI collaboration cannot accept the elegist's claim that the collaboration is inherently shallow. The parent who has watched her child's attention fragment under the pressure of algorithmic optimization cannot accept the triumphalist's claim that the technology is unambiguously beneficial. The experience resists the narrative. And the resistance, uncomfortable as it is, is the condition of intellectual honesty.
The second feature is the simultaneity of commitment and doubt. The engaged fallibilist is not the person who withholds judgment. She is the person who judges — who commits, who acts, who builds — while simultaneously acknowledging that her judgment may be wrong. This is not the same as hedging. Hedging is the refusal to commit, the "on the other hand" that neutralizes every claim before it can be tested. Engaged fallibilism commits. It acts. It builds the dam, ships the product, writes the book, makes the decision. And then it watches what happens. It attends to the consequences. It revises when the consequences demand revision.
Segal's account of writing *The Orange Pill* is a sustained exercise in this simultaneity. He commits to the argument that AI collaboration is genuinely valuable — not trivially, not instrumentally, but in the deepest sense of expanding what a human mind can reach. He commits to this argument with real conviction, the kind that drives action. He builds with AI. He writes with AI. He leads a team through an AI transformation. These are not theoretical commitments. They are commitments enacted in the world, with real consequences for real people.
And simultaneously, he doubts. He catches himself at three in the morning, unable to stop, and recognizes the pattern of compulsion that the elegists describe. He almost keeps a passage that sounds like insight but is not, and recognizes the seductive smoothness that Han diagnoses. He watches his son ask whether homework still matters and does not have a clean answer. The doubt is not an afterthought. It is constitutive of the practice. The commitment without the doubt would be dogmatism. The doubt without the commitment would be paralysis. The simultaneity is the practice.
Bernstein argued that this simultaneity requires specific conditions to be sustained. Left to itself, the human mind collapses the tension — gravitating toward commitment without doubt (dogmatism) or doubt without commitment (skepticism). The conditions that prevent the collapse are communal, not individual. They require interlocutors who challenge without hostility, who disagree in good faith, who present counter-evidence with the genuine intention of improving the shared understanding rather than winning the argument.
This brings the analysis back to the community of inquiry, but now at the existential rather than the epistemological level. The community is needed not just because distributed knowledge produces better answers than individual knowledge. It is needed because the psychological burden of engaged fallibilism is too great for any individual to bear alone. The person who holds both exhilaration and loss, who commits and doubts simultaneously, who refuses premature closure in a discourse that rewards nothing else — this person needs others who are doing the same thing. Not for comfort. For reality-testing. For the mutual correction that prevents commitment from hardening into dogmatism and doubt from dissolving into paralysis.
The third feature is the willingness to be wrong in public. Bernstein's engaged fallibilism is not a private practice. It requires the public exposure of one's reasoning — the willingness to show your work, to make visible the premises from which your conclusions follow, to invite criticism that might reveal premises you did not know you held. The scientific community, at its best, practices this willingness: the peer review process is an institutionalization of public fallibilism, a structure that compels researchers to expose their reasoning to the criticism of others who may see what they missed.
The AI discourse punishes this willingness with a precision that borders on algorithmic design. The person who says "I built something extraordinary and I also worry it might be harmful" exposes herself to attack from both sides. The triumphalists read the worry as weakness. The elegists read the celebration as complicity. The public admission of uncertainty — of holding contradictory truths simultaneously — invites the specific contempt that a discourse structured around certainty reserves for those who refuse to choose.
Segal's confessions in *The Orange Pill* — about building addictive products, about the nights when flow becomes compulsion, about the moments of almost keeping AI-generated prose that sounds better than it thinks — are acts of public fallibilism. They expose the reasoning. They show the seams. They invite the criticism that the argument needs but that the author would rather not hear. The willingness to confess publicly what could be concealed privately is not a rhetorical strategy. It is the ethical requirement of a practice that insists on honesty about the limits of one's own understanding.
The tension does not resolve. This is the lesson that fifty years of Bernstein's philosophy drives toward with relentless patience. The objectivists and the relativists are still arguing. The foundationalists and the anti-foundationalists are still at odds. The triumphalists and the elegists of the AI moment show no signs of converging. And the resolution — if the word even applies — lies not in the convergence of the camps but in the quality of the inhabitation practiced by those who refuse to join either one.
Living in tension without collapsing is not a personality trait. It is a practice, and like all practices, it can be cultivated. It requires the conditions Bernstein identified: community, publicity, the willingness to be wrong, the refusal to seek premature closure. It requires the recognition that the tension is not a deficiency in your understanding but a feature of the phenomenon — that the AI moment is genuinely, irreducibly complex, and that any position that makes it feel simple has achieved simplicity at the cost of honesty.
Bernstein's meliorism — his pragmatic commitment to the idea that things can be made better even when they cannot be made perfect — is the emotional foundation of the practice. The meliorist does not hope for resolution. She hopes for amelioration. She builds the dam knowing it will need rebuilding. She writes the book knowing it will need revising. She makes the argument knowing it will need amending.
And she does these things not despite the absence of certainty but because of it. Because the absence of certainty is the condition of all human action, and the people who wait for certainty before acting are the people who never act at all. The dams go unbuilt. The inquiry stalls. The consequences of powerful technology distribute themselves according to the preferences of those who did not wait — who acted, for good or ill, while the careful people were still gathering evidence.
Engaged fallibilism is not patience. It is the refusal to let uncertainty become an excuse for inaction. It is the discipline of building in the river while acknowledging that you do not fully understand the current, that the dam you are building may need to be moved, that the pool it creates may irrigate communities you did not anticipate and flood communities you did not see.
The tension holds. The practice continues. The inquiry does not end.
That is Bernstein's gift to the AI moment: not an answer, but a way of living with the questions.
The conversation that opens *The Orange Pill* never resolves.
Three friends on a Princeton campus, October light on stone buildings, walking paths that Einstein walked. A neuroscientist who has spent decades inside the hard problem of consciousness. A filmmaker who sees intelligence in the cuts between images. A builder who insists that intelligence is not a possession but a medium — something you swim in rather than something you own.
The neuroscientist challenges the builder: "Come back when you can tell me what a new participant in the medium actually changes." The builder does not have an answer. Not then. The conversation breaks off. The friends separate. Each returns to a different fishbowl.
And the conversation does not end. It continues, transformed, across the twenty chapters that follow — the builder working with an AI to find the answer the neuroscientist demanded, the filmmaker's insight about meaning living in the space between things threading through the argument about collaboration, the neuroscientist's rigor pressing against every claim that threatens to become mysticism. The conversation is never concluded. It is carried forward.
Richard Bernstein would have recognized this structure immediately. It is the structure of philosophical inquiry as he understood it: not a sequence of arguments leading to a conclusion, but a conversation that deepens over time, that incorporates new voices and new challenges, that revises its own terms as the inquiry proceeds. The conversation does not end because the questions it addresses do not end. The questions change their form, as new evidence arrives and new experiences accumulate, but their fundamental character — the restless human need to understand what intelligence is, what consciousness means, what machines can and cannot share with the creatures who built them — persists.
Bernstein's deepest insight, deeper than any particular argument about objectivism or relativism, deeper than the diagnosis of the Cartesian Anxiety or the recovery of phronesis or the critique of foundationalism, was the recognition that the conversation is the point. Not the conclusion it produces. Not the resolution it achieves. The sustained, honest, fallible, caring, perpetually revisable practice of continuing to inquire. That practice is the highest form of human intellectual life. It is also the most vulnerable — vulnerable to the impatience that demands closure, to the anxiety that demands foundations, to the market pressures that demand results, to the algorithmic architecture that demands clean narratives and punishes ambivalence.
The AI moment threatens the conversation in specific, identifiable ways. The speed of AI-assisted production creates pressure to arrive — to conclude, to ship, to publish, to stop deliberating and start executing. The productivity metrics that the discourse celebrates measure output, not inquiry. The organizations that reward speed reward the resolution of conversations, not their continuation. The quarterly report does not have a line item for "questions still open."
But the AI moment also sustains the conversation in ways that no previous technology could. The collaboration between human and machine creates new possibilities for intellectual exchange — imperfect, asymmetric, lacking the ethical dimension of genuine dialogue, but generative in ways that cannot be dismissed. The cross-domain connections that AI makes possible, the freed cognitive bandwidth that allows the builder to think about architecture rather than syntax, the expansion of who gets to participate in the conversation of building — these are not trivial contributions to the inquiry. They are expansions of the conversation's range, new voices (however strange) adding new perspectives to a dialogue that has been ongoing since the first human looked at the stars and asked what they were.
Bernstein died in July 2022. Five months later, ChatGPT launched and the world changed. He never saw the technology that would put every concept he developed to its most severe test. But the concepts were ready. The Cartesian Anxiety explains why the discourse calcified. Engaged fallibilism describes the practice the silent middle needs. Phronesis names the capacity the AI age elevates. The hermeneutic circle explains the fishbowl and what it means to crack it. Praxis describes the integration of building and thinking that the moment demands. The community of inquiry identifies what the dam-building process requires.
The concepts were ready because they were forged in response to the permanent features of human intellectual life — the features that do not change when the tools change, the features that persist across the transition from oral culture to writing to printing to computing to artificial intelligence. The need for honest inquiry. The danger of premature certainty. The irreplaceability of practical wisdom. The circularity of understanding. The unity of thinking and doing. The dependence of truth on community.
These are not features of a particular technological era. They are features of what it means to be a creature that knows it does not know enough and cannot stop trying to know more. AI does not change these features. It intensifies them. It raises the stakes of every one of them by amplifying the consequences of getting them right or wrong. The premature certainty that was merely foolish in the age of the printing press is dangerous in the age of AI, because AI amplifies the consequences of foolish certainty at a scale and speed that Gutenberg could not have imagined. The practical wisdom that was merely valuable in the age of the spreadsheet is essential in the age of AI, because AI has commoditized everything that is not practical wisdom and left judgment as the last form of human contribution that the machine cannot replicate.
The conversation that Bernstein sustained for fifty years — the conversation between objectivism and relativism, between foundationalism and anti-foundationalism, between the claim that we can know things absolutely and the claim that we can know nothing at all — is the same conversation that the AI moment reopens with an urgency that academic philosophy has rarely achieved. The question is not whether AI can think. The question is whether the people using AI can think — can maintain the discipline of inquiry, the honesty of fallibilism, the communal structures of democratic deliberation, and the practical wisdom that determines whether a powerful technology irrigates or floods.
The answer to this question is not yet known. It is being determined, right now, by the choices of the people living through the transition. By the quality of the conversations they are having with each other and with their tools. By the structures they are building to direct the flow. By the willingness or unwillingness to stay in the tension, to refuse premature closure, to acknowledge that the truth is more complex than any single narrative can capture.
Segal closes *The Orange Pill* with an injunction: "It's time to get back to building." Bernstein would have added a corollary: It's time to get back to questioning. Not instead of building. Alongside building. Inside building. The conversation does not stop when the building starts. The conversation is the building, and the building is the conversation, and the distinction between them dissolves when the practice of inquiry achieves its fullest form.
Peirce began it. Dewey continued it. Gadamer deepened it. Habermas challenged it. Bernstein held it together.
The conversation is unfinished. That is not its failure. That is its nature. The engaged fallibilist does not expect to finish. She expects to contribute — honestly, fallibly, with the particular insight that her horizon affords and the particular blindness that her horizon conceals — and to hand the conversation to those who come after, who will see what she missed, who will revise what she held with conviction, who will continue the inquiry in conditions she cannot anticipate.
The Princeton afternoon does not end. The neuroscientist's challenge remains open. The filmmaker's insight about meaning living in the cuts continues to illuminate. The builder keeps building, keeps questioning, keeps pressing his face against the glass.
And the conversation continues. Unfinished. Unresolvable. The most important thing we do.
Both things were true and I could not put either one down.
That is the sentence that kept recurring as I worked through Bernstein's ideas — not his words, but the condition his philosophy describes with more precision than anything else I have encountered. The builder in me knows that Claude Code has expanded what I can reach. The parent in me lies awake wondering what the expansion costs. Bernstein gave me a name for refusing to choose between those two truths: engaged fallibilism. But more than the name, he gave me permission to stop treating the refusal as weakness.
I had been treating it as weakness. The silent middle, which I described in *The Orange Pill* without knowing the philosophical tradition that explains it, felt to me like indecision. Like the inability to commit. The triumphalists committed. The doomers committed. I sat between them holding contradictory data and calling it vertigo.
Bernstein's framework reframes that vertigo as the starting condition of all honest thought. Not a problem to solve. Not a phase to push through on the way to clarity. The permanent address of anyone who cares about getting it right and knows that "right" is more complex than any single narrative can capture. The Cartesian Anxiety — the desperate need for solid ground — is what drives people to the extremes. The pragmatist stays in the middle not because she lacks conviction but because her conviction includes the awareness that conviction alone is not enough.
What arrested me most was Bernstein's insistence that phronesis — practical wisdom, the judgment about what is worth doing — cannot be automated. Not "has not yet been automated." Cannot be. Because phronesis is formed through the accumulated weight of decisions made under uncertainty, lived with through their consequences, and revised in light of what those consequences reveal. The AI can deliver knowledge. It can perform technical skill. It cannot judge what matters. That judgment is ours — formed slowly, through failure, through the friction that deposits understanding layer by layer.
And here is the tension that Bernstein's framework forces me to hold: the same tools that elevate phronesis to the position of highest value may be eroding the conditions under which phronesis develops. If practical wisdom is formed through the habit of judging, and AI is absorbing many of the everyday judgments through which the habit was practiced, then the capacity the AI age needs most is the capacity the AI age is most at risk of degrading. That is not a comfortable realization. It is the kind of truth that engaged fallibilism insists you sit with rather than resolve prematurely.
The chapter on who builds the dams will stay with me longest. I wrote in *The Orange Pill* about the beaver — the small creature building structures in a powerful river. Bernstein's democratic pragmatism asks the question I failed to ask: whose voices are included in the design? The dams I have built in my career were built from inside my fishbowl. They served the communities I could see. Bernstein insists, with the patience of someone who spent fifty years making this argument, that the communities I cannot see are the ones whose flourishing depends most on being heard.
I do not have a clean resolution. Bernstein's deepest lesson is that clean resolutions are the enemy of honest thought. The inquiry continues. The conversation does not end. The tension holds, and the holding is not weakness. It is the hardest and most necessary form of intellectual engagement available to creatures who know they do not know enough and cannot stop trying to know more.
Both things are true. I will not put either one down.
-- Edo Segal
The loudest voices in AI demand you choose: celebration or mourning, acceleration or resistance, utopia or collapse. Richard Bernstein spent fifty years proving that this kind of forced choice -- what he called the Cartesian Anxiety -- is itself the deepest intellectual failure. His framework of engaged fallibilism offers something the AI discourse desperately lacks: a way to hold genuine conviction and genuine doubt simultaneously, without collapsing into either dogmatism or despair. This book applies Bernstein's pragmatist philosophy to the questions the technology revolution raises -- about practical wisdom in an age of automation, about who gets to design the structures that shape a powerful river, and about what it means to keep thinking honestly when every platform rewards you for stopping.

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Richard Bernstein — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →