By Edo Segal
The argument I could not win was with myself.
For months I had been making the case — in boardrooms, at dinner tables, in the pages of *The Orange Pill* — that AI is an amplifier, that the river of intelligence is widening, that the question is not whether to build but what to build and for whom. I believed it. I still believe it. But somewhere over the Atlantic, laptop open, Claude humming alongside me, I caught myself unable to answer a simpler question: How would I know if I was wrong?
Not wrong about a detail. Wrong about the whole frame. Wrong about whether the gains outweigh the losses. Wrong about whether ascending friction is real or just a story I tell myself so I can keep building at three in the morning without confronting what the building might be costing.
That question — how would I know — is the question Larry Laudan spent his career making unavoidable.
Laudan was a philosopher of science, which sounds like a discipline safely removed from the chaos of the AI transition. It is not. What Laudan studied was exactly what we are living through: the moment when competing frameworks collide, when the same evidence gets claimed by both sides, when smart people look at identical data and reach opposite conclusions and each thinks the other is irrational.
The triumphalists see the productivity data and declare progress. The elegists see the burnout data and declare catastrophe. Both cite real evidence. Both are internally coherent. And neither can explain why the other is wrong without smuggling in the very assumptions the other side rejects.
Laudan built the tools for navigating exactly this deadlock. Not by picking a winner, but by asking a harder question: Which framework solves more of the problems we actually face? Which one generates anomalies it cannot explain? Which one is growing to meet new evidence, and which one is shrinking to avoid it?
These are operational questions. They do not require you to choose a side before you have evaluated the evidence. They require you to specify what evidence would change your mind — and then to be honest when that evidence arrives.
That discipline — the discipline of evaluating before committing, of holding your own framework to the same standard you hold everyone else's — is the rarest commodity in the AI discourse right now. It is also the most necessary.
Laudan gave me the vocabulary to ask the question I had been avoiding. Not "Am I right?" but "What would wrong look like, and am I watching for it?"
That is why this book exists. Another lens. Another floor of the tower.
— Edo Segal ^ Opus 4.6
Larry Laudan (1941–2022) was an American philosopher of science whose work fundamentally reshaped how scholars evaluate scientific progress and rational theory change. Born in Austin, Texas, he studied at the University of Kansas and received his doctorate from Princeton University. Over a career spanning five decades, he held positions at the University of Pittsburgh, Virginia Polytechnic Institute, the University of Hawaiʻi, and the National Autonomous University of Mexico (UNAM). His landmark work *Progress and Its Problems* (1977) introduced the problem-solving model of scientific progress, arguing that theories should be evaluated not by their approximation to truth but by their capacity to solve empirical and conceptual problems. In *Science and Values* (1984), he developed the reticulated model of scientific rationality, demonstrating that theories, methods, and aims evolve together in a web of mutual adjustment rather than through appeal to fixed, paradigm-independent standards. His later work extended into philosophy of law and epistemology of risk. Laudan's insistence that progress is measurable, comparative, and never guaranteed — that it depends on the quality of evaluation rather than the conviction of the evaluators — established him as one of the most rigorous and practically consequential philosophers of science of the twentieth century.
Every significant intellectual dispute eventually reveals itself to be a dispute about standards. Not about facts — the facts are often shared, or at least shareable — but about what the facts are supposed to demonstrate. The participants marshal evidence, cite data, point to the same phenomena, and arrive at irreconcilable conclusions. The onlooker concludes that someone must be irrational. Usually both sides reach the same conclusion about the other.
The philosophy of science encountered this problem long before the AI discourse replicated it. For most of the twentieth century, philosophers assumed that scientific disagreements could be resolved by appeal to a neutral standard — logic, evidence, the correspondence of theory to reality. When Thomas Kuhn demonstrated in 1962 that scientists working within different paradigms literally could not see the same data the same way, that the standards of evaluation were themselves products of the paradigm rather than independent arbiters between paradigms, the philosophical establishment responded with something approaching panic. If there was no neutral standard, was science rational at all? Were paradigm shifts just mob psychology dressed in lab coats?
Larry Laudan spent his career dismantling both the comfortable assumption and the panicked response. The comfortable assumption — that there exists a fixed, paradigm-independent standard for evaluating scientific theories — was untenable. Kuhn had shown that much convincingly. But Kuhn's alternative — that paradigm shifts are fundamentally arational, driven by generational change and social pressure rather than evidence — was equally untenable, because it could not explain why science so manifestly works. Bridges stay up. Vaccines prevent disease. Rockets reach the moon. If theory change were mere fashion, these outcomes would be miraculous.
Laudan's solution was characteristically operational. Abandon the search for a fixed standard. Abandon the conclusion that no standard exists. Instead, evaluate competing frameworks by what they actually do: solve problems. A research tradition — Laudan's deliberately more flexible replacement for Kuhn's paradigms — is progressive when it solves more problems than its competitors while generating fewer anomalies. It is degenerative when the anomalies accumulate faster than the solutions. The evaluation is comparative, not absolute. There is no God's-eye view from which to pronounce one tradition true and the other false. There is only the auditable, revisable question: which tradition handles more of the problems we actually face?
This framework, developed in *Progress and Its Problems* (1977) and refined in *Science and Values* (1984), was designed for scientific disputes. It applies with unsettling precision to the dispute *The Orange Pill* documents.
The AI discourse of 2025 and 2026 is not a debate between the informed and the ignorant. It is a competition between research traditions in Laudan's technical sense — traditions that share much of the same evidence base but operate with different problem sets, different standards of evaluation, and different conceptions of what counts as progress. Identifying these traditions with precision is the first step toward evaluating them rationally rather than ideologically.
The triumphalist tradition measures progress by a specific and internally consistent set of metrics: productivity gains, adoption speed, capability expansion, and the democratization of access. Its core problems — the problems it is designed to solve — are problems of capability and reach. Can more people build more things? Can the gap between imagination and artifact be compressed? Can barriers of skill, capital, and institutional access that previously gated participation in the building of technology be lowered or eliminated?
By its own standards, the triumphalist tradition is spectacularly successful. The data it cites is real. Claude Code's adoption curve is real. The twenty-fold productivity multiplier Segal documents in Trivandrum is real. The developer in Lagos who can now prototype a product that would have required a team and a year of runway is real. The engineer who reached across disciplinary boundaries she could never have crossed without AI assistance is real. The Death Cross of SaaS valuations — a trillion dollars of market repricing in weeks — is real evidence that the market, whatever its other failures, has recognized a genuine shift in where value resides.
The triumphalist tradition solves these problems convincingly. It has a theory — that AI is an amplifier of human capability — that accounts for the evidence, generates testable predictions, and provides operational guidance for builders, investors, and policymakers. Evaluated purely by the problems it claims as its own, the tradition is progressive.
But every research tradition generates anomalies — problems it cannot solve without abandoning its core commitments — and the triumphalist tradition's anomalies are severe.
The first anomaly is the grinding emptiness that the tradition's own success produces. Segal documents this with an honesty unusual in technology literature: the inability to stop building, the confusion of productivity with aliveness, the recognition that the exhilaration had drained away and what remained was compulsion. The Berkeley study documents it with empirical rigor: work intensification, task seepage, the colonization of rest by AI-assisted productivity. The triumphalist tradition cannot explain these phenomena without undermining its central claim. If AI is an amplifier of human capability, and human capability is being amplified, why are the most enthusiastic users reporting symptoms indistinguishable from addiction? Why does the spouse of a builder write a viral essay titled "Help! My Husband is Addicted to Claude Code"? The triumphalist tradition handles this anomaly the way degenerating research traditions always handle anomalies: by minimizing it, reframing it as a temporary adjustment cost, or dismissing the people who report it as insufficiently adapted to the new reality. None of these responses solve the problem. They defer it.
The second anomaly is the erosion of depth. Triumphalist metrics measure output — lines of code, products shipped, revenue generated. They do not measure understanding. The engineer who uses Claude to build a system she cannot explain, the lawyer who submits a brief citing cases she has not read, the student who produces an essay articulating ideas she has not thought — these are anomalies for the triumphalist tradition because the output metrics show success while something essential to the tradition's own long-term viability is being consumed. Segal captures this through the metaphor of geological deposition: every hour spent debugging deposits a layer of understanding, and Claude skips the deposition. The surface looks the same, but the foundation is not being built. The triumphalist tradition has no mechanism for registering this loss, because depth is not among its metrics. It measures what it values and therefore cannot see what it is losing.
The elegist tradition measures progress by an entirely different set of standards: the preservation of depth, craft, embodied knowledge, and the formative friction of struggle. Its core problems are problems of meaning and identity. What happens to expertise when implementation is automated? What happens to understanding when the struggle that produced it is optimized away? What is lost when the imagination-to-artifact ratio approaches zero?
The elegist tradition also solves its problems convincingly. The loss of tactile knowledge when open surgery becomes laparoscopic is real. The erosion of debugging intuition when Claude writes the code is real. The calligrapher's embodied understanding of letterform, the framework knitter's feel for fiber tension, the senior engineer's architectural instinct built through thousands of hours of patient failure — these forms of knowledge are genuinely threatened by tools that bypass the process through which they develop. The elegist tradition provides a framework for seeing these losses, naming them, and arguing that they matter.
But the elegist tradition generates its own anomalies, and they are equally severe.
The first anomaly is democratization. If the friction of struggle is formative and its removal is pathological, then the barriers that previously prevented the developer in Lagos from building — barriers of skill, capital, institutional access — were, in some sense, features rather than bugs. They ensured that only those who had undergone the formative struggle could participate. The elegist tradition, taken to its logical conclusion, defends a gatekeeping function that it cannot defend morally. The tradition that mourns the loss of depth must account for the fact that depth was available only to those with the privilege of access, and that the tools it criticizes are expanding access to people for whom the previous barriers were not formative but exclusionary. Segal makes this point forcefully: a philosophy of friction that cannot account for the rising floor has told only half the truth, the privileged half.
The second anomaly is stagnation. If the elegist prescription — resist the tools, preserve the friction, maintain the old ways of building — were followed, the result would not be the preservation of a golden age. It would be the calcification of a set of practices optimized for a problem set that no longer exists. The Luddites who broke machines did not save their trade. They hardened hostility toward their movement and ensured that the transition happened without their input. The elegist tradition, in its strongest form, repeats this error: it advocates a response to structural change that structural change will overwhelm, leaving the elegists precisely where the Luddites ended — displaced, bitter, and absent from the conversation that determined the shape of the new world.
Laudan's framework does not ask which tradition is correct. Traditions are not correct or incorrect. They are progressive or degenerative — evaluated not by their correspondence to some fixed truth but by their capacity to solve problems while managing anomalies. By this standard, both the triumphalist and the elegist traditions are partially progressive and partially degenerative. Each solves problems the other cannot. Each generates anomalies the other does not face.
The population that Segal calls the silent middle — the people who feel both the exhilaration and the loss, who use the tools and worry about what the tools are doing, who lack a clean narrative because no clean narrative can accommodate both truths — occupies, in Laudan's framework, the most epistemically rational position. Not because ambivalence is a virtue, but because the silent middle is the population attempting to take on the full problem set. It refuses to dismiss the triumphalist's anomalies, and it refuses to dismiss the elegist's anomalies. It holds both sets of unsolved problems in view, which is uncomfortable, narratively unsatisfying, and methodologically progressive.
The silent middle does not get engagement on social media, because social media rewards the clarity that comes from ignoring half the evidence. The silent middle does not get invited to give keynotes, because keynotes reward conviction, and the silent middle's conviction is precisely that conviction in either direction is premature. But the silent middle is where the evaluation must happen — the slow, unglamorous, empirically grounded work of determining which problems are being solved, which anomalies are accumulating, and whether the transition as a whole is progressive or degenerative.
Laudan would add one further observation. Research traditions do not merely compete. They evolve. A tradition that takes its anomalies seriously and develops the internal resources to address them becomes more progressive over time. A tradition that dismisses its anomalies or redefines them as non-problems becomes degenerative. The triumphalist tradition can become more progressive by developing the tools to measure and address the depth erosion and work intensification that its own success generates. The elegist tradition can become more progressive by developing an account of friction that distinguishes between formative friction and exclusionary friction, between the struggle that builds understanding and the barrier that prevents access.
Whether either tradition will evolve in these directions is an empirical question, not a philosophical one. Laudan's framework does not predict outcomes. It provides the machinery for evaluating them. The evaluation has barely begun.
The discourse is young, the data is thin, the positions are hardening faster than the evidence warrants, and the most rational participants are the quietest — not because they lack conviction, but because they understand that the conviction the moment demands is the conviction to keep evaluating rather than the conviction to stop.
---
The word "progress" carries more concealed assumptions than almost any term in the contemporary lexicon. When the triumphalist tradition declares that AI represents progress, it means something specific: that more people can build more things faster than before, that the imagination-to-artifact ratio has compressed, that capability has expanded. When the elegist tradition denies that this constitutes progress, it also means something specific: that speed is not depth, that capability is not understanding, that the capacity to produce is not the capacity to comprehend what one has produced. Both uses of the word are internally coherent. Both are supported by evidence. They are irreconcilable because they measure progress against different standards — and neither tradition can justify its standard without circular appeal to its own commitments.
Laudan confronted precisely this structure in the philosophy of science. Positivists measured scientific progress by the accumulation of confirmed predictions. Kuhnians measured it by the internal coherence of paradigms. Realists measured it by approximation to truth. Each standard was internally justified and externally contested. The debates were interminable because the participants were arguing about evidence while actually disagreeing about what evidence was for.
Laudan's intervention was to replace the search for the correct standard with a pragmatic alternative: the problem-solving model of progress. Progress, in this framework, is the increase in solved problems and the decrease in anomalies. A theory, a tradition, a framework is progressive when it handles more of the problems it faces than the alternatives do. It is degenerative when the problems it cannot handle accumulate faster than the problems it solves.
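Laudan's own gloss in *Progress and Its Problems* was roughly this: the overall effectiveness of a theory is assessed by counting the number and importance of the empirical problems it solves, and deducting the number and importance of the anomalies and conceptual problems it generates. A schematic rendering (the notation is mine, not Laudan's; the weights stand in for importance judgments the formula cannot supply on its own):

$$
E(T) \;=\; \sum_{p \,\in\, \mathrm{Solved}(T)} w_p \;-\; \sum_{a \,\in\, \mathrm{Anomalies}(T) \,\cup\, \mathrm{Conceptual}(T)} w_a
$$

A tradition $T$ is progressive when $E(T)$ rises over time relative to its rivals, degenerative when the second sum grows faster than the first. The rendering settles nothing by itself: every weight $w$ encodes a judgment about which problems matter, and that judgment is precisely where the traditions diverge. What it enforces is bookkeeping, with both columns visible for both traditions.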
This sounds simple. Its consequences are not.
The first consequence is that progress becomes comparative rather than absolute. There is no fixed benchmark against which the AI transition can be measured. There is only the question of whether the AI-mediated approach to work, learning, and creation solves more problems than the approach it is displacing. This comparison requires specifying the problems — not in the vague terms of "Is AI good?" or "Is AI bad?" but in the precise terms that allow evaluation. What specific problems does AI-assisted software development solve that manual development did not? What specific problems does it generate that manual development did not face? The answers to these questions are the raw material of progress evaluation, and they must be gathered empirically rather than settled ideologically.
The second consequence is the distinction between empirical problems and conceptual problems, a distinction central to Laudan's framework and indispensable for the AI discourse.
Empirical problems are questions about what happens in the world. What does AI do to productivity? To attention? To the distribution of capability? To the quality of creative output? To the well-being of the people who use it? These questions have answers that can, in principle, be investigated through observation, measurement, and controlled study. The Berkeley study that Segal documents in *The Orange Pill* is an attempt to answer some of them. The twenty-fold productivity multiplier in Trivandrum is evidence bearing on others. The adoption curves, the GitHub commit data, the SaaS valuation collapse — all of these are evidence relevant to empirical problems of the AI transition.
Conceptual problems are different. They arise not from the world but from within the frameworks deployed to explain the world. A conceptual problem exists when a tradition's internal commitments generate tensions that the tradition cannot resolve without modification. The triumphalist tradition faces a conceptual problem when it claims AI amplifies human capability while its most enthusiastic users report symptoms of compulsion rather than flourishing. If the amplification thesis is correct, the compulsion should not exist — or if it does exist, the tradition must explain how genuine amplification and genuine compulsion can coexist in the same user, which requires theoretical resources the tradition does not currently possess. The elegist tradition faces a conceptual problem when it defends the formative value of friction while acknowledging that much of the friction it valorizes was not formative but exclusionary — a gatekeeping mechanism that preserved depth for the privileged while denying access to everyone else.
These conceptual problems are not empirical questions waiting for more data. They are structural tensions within the traditions themselves, and they must be resolved through theoretical development rather than additional observation. More data about AI adoption rates will not resolve the triumphalist tradition's compulsion anomaly. More data about the loss of debugging intuition will not resolve the elegist tradition's access problem. The traditions must evolve their internal commitments — and this evolution is where progress, in Laudan's sense, either happens or fails to happen.
The third consequence of the problem-solving model is that it permits — indeed requires — the evaluation of traditions that operate across different domains. The AI transition poses problems that do not belong to any single discipline. The productivity questions belong to economics. The attention questions belong to cognitive science. The identity questions belong to philosophy. The institutional questions belong to political science. The parenting questions belong to developmental psychology. No single disciplinary tradition has the resources to address the full problem set.
Laudan was explicit that research traditions are not confined to individual sciences. They are general frameworks that shape how communities identify, approach, and evaluate problems. The triumphalist and elegist traditions cut across disciplines — each contains economists, technologists, educators, policymakers, and parents, united not by disciplinary training but by shared assumptions about what constitutes progress in the face of AI. A Laudanian evaluation must assess each tradition's capacity to handle problems across the full range of domains the transition affects. A tradition that solves the productivity problems brilliantly but cannot address the attention problems or the identity problems is less progressive than a tradition that handles all three, even if its solutions are less elegant in any single domain.
This is why *The Orange Pill*'s attempt to hold multiple domains in tension simultaneously — economics and philosophy, cognitive science and parenting, technology and meaning — is, by Laudan's standard, a structurally progressive move. Not because it succeeds in solving all the problems it identifies. It does not. But because it refuses to restrict the problem set to the domain where its solutions are strongest, which is what both the triumphalist and elegist traditions do when left to their own devices.
The triumphalist tradition, confined to its preferred domain of productivity and capability, is spectacularly successful. Expand the problem set to include attention ecology, identity, and the formative value of struggle, and its anomalies multiply. The elegist tradition, confined to its preferred domain of depth and meaning, is morally compelling. Expand the problem set to include access, democratization, and the barriers that the old friction imposed on the unprivileged, and its anomalies multiply in turn.
The problem-solving model demands that the evaluation encompass the full problem set, not the curated subset that makes any single tradition look good.
There is a further subtlety in Laudan's framework that bears directly on the AI discourse. Laudan distinguished between solved problems and unsolved problems, but he also identified a third category that is often more consequential than either: anomalous problems. An anomaly is not simply a problem that a tradition has not yet solved. It is a problem that the tradition's own commitments predict should not exist. Unsolved problems are normal; every tradition has them, and their presence does not indicate degeneration. Anomalies are different. They are evidence that something in the tradition's core commitments is wrong, that the framework generates predictions that the world contradicts.
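The distinction can be put in quasi-logical shorthand (the notation is an illustration of mine, not Laudan's; $C_T$ stands for the core commitments of a tradition $T$, and $o$ for an observed phenomenon):

$$
\begin{aligned}
o \text{ is unsolved for } T \quad &\iff \quad C_T \nvdash o \qquad \text{(the commitments are merely silent on } o\text{)} \\
o \text{ is anomalous for } T \quad &\iff \quad C_T \vdash \lnot o \ \text{ yet } o \text{ is observed} \qquad \text{(the commitments say } o \text{ should not happen)}
\end{aligned}
$$

An unsolved problem calls for more work. An anomaly calls for revision, because the framework has issued a prediction that the world has falsified.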
The compulsion-amid-amplification pattern is anomalous for the triumphalist tradition. If AI genuinely amplifies human capability — if the tools expand what people can do, make their work more satisfying, and free them from drudgery — then the prediction is that users should feel more capable and more satisfied. The finding that users feel more capable and less satisfied, more productive and more depleted, is not merely a problem the tradition has not yet addressed. It contradicts the tradition's own predictions. That makes it an anomaly.
The exclusion-amid-depth-preservation pattern is anomalous for the elegist tradition. If friction is formative and its removal is pathological, then the prediction is that the populations most deprived of friction — the people with the most access, the most tools, the smoothest workflows — should be the shallowest practitioners. But the evidence from *The Orange Pill* and elsewhere suggests that the people with the most access to AI tools are often the ones producing the most ambitious, most integrative, most genuinely novel work — precisely because the tool frees them from the implementation drudgery that previously consumed their cognitive bandwidth. The elegist tradition predicts that the removal of friction should produce shallowness. In many observable cases, it produces the opposite. That is an anomaly.
A progressive tradition addresses its anomalies. It develops new theoretical resources, modifies its commitments, expands its framework to accommodate what the world is showing it. A degenerative tradition ignores its anomalies, redefines them as non-problems, or blames the observers who report them.
The AI discourse of 2025 and 2026 shows both patterns. Some triumphalists have begun to develop the concept of "AI Practice" — structured interventions designed to prevent the productivity gains from calcifying into compulsive overwork. This is progressive development within the tradition: it acknowledges the anomaly and proposes solutions. Other triumphalists dismiss the compulsion reports as the complaints of people who lack the discipline to manage powerful tools, which is anomaly-dismissal rather than anomaly-resolution. Some elegists have begun to distinguish between formative friction and exclusionary friction, acknowledging that not all barriers are pedagogically valuable and that some of what was lost was not depth but gatekeeping. This too is progressive development. Other elegists treat any defense of the tools as capitulation, which is the retreat to core commitments that characterizes degeneration.
The problem-solving model does not determine which tradition will prevail. Traditions prevail by solving more problems, and whether the triumphalist or elegist tradition — or some synthesis that has not yet emerged — will ultimately prove more progressive depends on developments that have not yet occurred and evidence that has not yet been gathered.
What the model does provide is the standard by which the competition can be evaluated: not by ideological conviction, not by the persuasiveness of the rhetoric, not by the social status of the advocates, but by the auditable, revisable, empirically grounded question of which framework solves more problems while generating fewer anomalies.
The evaluation is underway. The data is thin. The responsible intellectual posture is not conviction but disciplined inquiry — the willingness to specify in advance what evidence would strengthen or weaken each tradition, and then to evaluate honestly as the evidence accumulates.
Progress is not a destination. It is a direction — the direction of increasing problem-solving capacity, maintained only through the permanent willingness to revise.
---
In the summer of 2025, Xingqi Maggie Ye and Aruna Ranganathan of UC Berkeley's Haas School of Business completed what remains the most sustained empirical observation of AI's effects on knowledge work. They spent eight months embedded in a two-hundred-person technology company, observing behavior, conducting interviews, analyzing workflows. Their findings, published in the *Harvard Business Review* in February 2026, occupy a peculiar and instructive position in the AI discourse: cited by triumphalists as evidence of productivity gains, cited by elegists as evidence of work intensification, and fully satisfying to neither.
Laudan was deeply skeptical of crucial experiments — single observations claimed to decide between competing theories. The history of science, he argued, demonstrates that individual experiments rarely settle disputes between well-developed research traditions, because the traditions differ not only in their predictions but in their interpretive frameworks, their background assumptions, and their standards for what counts as a confirming or disconfirming observation. The same experimental result can be absorbed by both traditions, each interpreting it as confirmation of its own commitments and disconfirmation of the other's. The resolution of inter-tradition disputes requires not a single decisive experiment but a sustained accumulation of problem-solving success across multiple domains — a process measured in decades, not in journal articles.
The Berkeley study exhibits exactly this pattern. Its empirical findings are shared by both traditions. Its interpretation is not.
Consider the study's first finding: AI does not reduce work; it intensifies it. Workers who adopted AI tools worked faster, took on more tasks, expanded into domains previously assigned to other roles. The triumphalist tradition reads this as evidence of capability expansion — precisely what its theory predicts. The tools amplified what workers could do, and the workers responded by doing more. The elegist tradition reads the same finding as evidence of auto-exploitation — precisely what its theory predicts. The tools made more work possible, and the internalized imperative to achieve converted possibility into compulsion. The datum is shared. The interpretation is tradition-dependent.
Consider the study's second finding: work seeps into pauses. Employees prompted AI on lunch breaks, in elevators, in the minutes between meetings. Time previously protected — informally, invisibly, but consequentially — as cognitive rest was colonized by AI-assisted productivity. The triumphalist tradition reads this as a temporary adjustment: workers are excited by a new tool and will eventually develop healthier usage patterns, the way email users eventually learned to set boundaries. The elegist tradition reads it as structural: the tool's availability and speed eliminate the natural friction that previously enforced rest, and no amount of adjustment will restore what friction provided passively. Again, the datum is shared. The interpretive framework is not.
Consider the study's third finding: multitasking became the norm and fractured attention. AI could run tasks in the background while workers focused on other things, but the workers could not fully focus because they needed to monitor the background processes. The triumphalist tradition reads this as a design problem — the tools will improve, the interfaces will become more transparent, the monitoring burden will decrease. The elegist tradition reads it as an epistemological problem — the fracturing of attention is inherent in any system that multiplies the demands on a finite cognitive resource, regardless of interface design.
In each case, the study provides data. In no case does the data settle the dispute.
This is not a failure of the study. It is a structural feature of disputes between research traditions at an early stage of competition. Laudan identified the conditions under which empirical evidence can resolve inter-tradition disputes, and those conditions are stringent. The traditions must agree on what counts as relevant evidence. They must agree on what predictions their theories generate. They must agree on the background conditions under which those predictions are tested. And they must agree on what constitutes a disconfirming result — what observation, if it occurred, would force the tradition to acknowledge that its framework had failed.
None of these conditions obtain in the AI discourse. The triumphalist and elegist traditions do not agree on what counts as relevant evidence. The triumphalist tradition privileges quantitative metrics — productivity, adoption, output volume. The elegist tradition privileges qualitative observations — the experience of depth, the phenomenology of understanding, the felt difference between knowledge that has been earned and knowledge that has been extracted. Neither tradition recognizes the other's preferred evidence as decisive.
They do not agree on what predictions their frameworks generate. The triumphalist tradition predicts that AI will increase productivity and expand capability. The elegist tradition predicts that AI will erode depth and produce pathological intensity. But neither tradition specifies its predictions precisely enough to be tested against the Berkeley data. "Increase productivity" is too vague — productivity in what dimension? measured how? over what timeframe? "Erode depth" is too vague — depth of what kind? in which practitioners? measured by what indicator?
They do not agree on what constitutes disconfirmation. The triumphalist tradition can absorb any evidence of burnout or compulsion by categorizing it as a temporary adjustment cost. The elegist tradition can absorb any evidence of genuine capability expansion by categorizing it as breadth mistaken for depth, quantity mistaken for quality. Both traditions have sufficient interpretive flexibility to accommodate virtually any observation, which means that no single observation — no matter how rigorous — can resolve the dispute.
Laudan argued that this interpretive flexibility is the central problem of inter-tradition evaluation. When traditions are flexible enough to accommodate any evidence, evidence alone cannot adjudicate between them. What can adjudicate is comparative problem-solving effectiveness over time — the sustained accumulation of solved problems and the sustained reduction of anomalies, evaluated across multiple domains and over extended periods.
This has immediate practical consequences for how the AI discourse should be conducted.
The Berkeley study is valuable not because it settles the dispute but because it identifies the specific empirical problems that must be solved. These problems can be stated with precision, and stating them with precision is itself a form of progress.
Problem one: the intensity question. AI-assisted work is more intense than unassisted work. Is this intensity formative or pathological? The study cannot answer this question because intensity, observed from outside, is ambiguous between flow and compulsion. Solving this problem requires longitudinal data — observations of the same workers over years rather than months — and it requires phenomenological data that the study's methodology does not capture: not what workers did, but what the experience of doing it was like, and how that experience changed their capacity for future work.
Problem two: the depth question. AI-assisted workers expanded into new domains. Did this expansion produce genuine interdisciplinary capability or superficial breadth? The study cannot answer this because it measured behavior, not understanding. An engineer who uses Claude to build a frontend feature has expanded her behavioral range, but whether she understands the frontend architecture well enough to make sound decisions about it — as opposed to merely generating code that works — is a question the study's methodology cannot address. Solving this problem requires evaluation instruments that measure understanding independently of output, which do not yet exist in standardized form.
Problem three: the distribution question. The productivity gains documented by the study accrued primarily to individual workers. Who captured the value of those gains? Did the gains translate into reduced working hours, or into increased output expectations? Did they flow to the workers themselves, to their employers, to the consumers of their output? The study documents what happened inside the organization. It does not document how the gains were distributed, which is ultimately the question that determines whether the transition is socially progressive or socially regressive.
Problem four: the counterfactual question. The study documents what happened when AI was introduced. It cannot document what would have happened if AI had been introduced differently — with different organizational structures, different norms, different expectations about what the tools were for. The finding that AI intensified work may reflect something inherent in the tools, or it may reflect something specific to the organizational culture in which the tools were deployed. This distinction matters enormously for policy. If the intensification is inherent, the prescription is to limit or structure exposure. If it is cultural, the prescription is to change the culture.
Each of these problems is solvable in principle but unsolved in fact. And the inconclusiveness is not a reason to defer evaluation — it is a reason to specify more precisely what evidence would advance the evaluation, and then to gather that evidence with the patience and rigor the situation demands.
Laudan was explicit that premature theoretical closure — the commitment to one tradition before the evidence warrants it — is the most common and most costly form of intellectual error in periods of transition. The history of science is littered with cases where a tradition that appeared overwhelmingly successful in the short term generated anomalies that proved fatal in the long term, and where a tradition that appeared to be failing rallied when its practitioners developed new theoretical resources to address its weaknesses.
The AI transition is too young, the data too thin, and the stakes too high for the kind of premature closure that both the triumphalist and elegist traditions are rushing toward. The triumphalist tradition's certainty that the gains will compound and the costs will diminish is not supported by the evidence — it is an extrapolation from short-term data interpreted through a framework that is structurally inclined to interpret any data as confirmation. The elegist tradition's certainty that the costs are irreversible and the gains are illusory is equally unsupported — it is an extrapolation from the same short-term data interpreted through a framework that is structurally inclined to interpret any data as disconfirmation.
The progressive response to inconclusive evidence is not agnosticism. It is the disciplined specification of what would count as evidence, followed by the honest evaluation of that evidence as it accumulates. It is the willingness to say: the data so far shows X, and X is consistent with both traditions, and the questions that would differentiate between them are questions one through four above, and the studies that would address those questions would require methods A, B, and C, applied over a timeframe of years rather than months.
This is unglamorous work. It does not produce viral posts or keynote invitations. It does not satisfy the appetite for conviction that the moment generates. But it is the only work that can produce genuine understanding of what the AI transition is doing to the people inside it, and genuine understanding is the prerequisite for the institutional responses — the dams — that will determine whether the transition proves progressive or catastrophic.
The evidence will come. The question is whether, when it arrives, the discourse will have preserved enough intellectual honesty to evaluate it — or whether the positions will have hardened so completely that no evidence can penetrate.
---
A conceptual problem, in Laudan's technical vocabulary, is not a gap in empirical knowledge. It is a tension within a theoretical framework — a case where the tradition's own commitments generate contradictions that the tradition cannot resolve without modifying those commitments. Empirical problems ask what the world is like. Conceptual problems ask whether the framework being used to describe the world is internally coherent. Both types of problems count toward the evaluation of a tradition's progressiveness, and Laudan was insistent — against a long tradition in the philosophy of science that privileged empirical adequacy above all — that conceptual problems are often the more revealing diagnostic. A framework can be empirically adequate and conceptually incoherent, and when it is, the incoherence will eventually produce empirical failures that the framework cannot accommodate.
The deepest conceptual problem of the AI transition is the indistinguishability of flow and compulsion.
Mihaly Csikszentmihalyi's research program on optimal experience identified the conditions under which human beings report the highest levels of satisfaction, engagement, and meaning. The conditions are specific: clear goals, immediate feedback, a match between challenge and skill, a sense of control over the activity, and the absorption of attention so complete that self-consciousness drops away and time distorts. The state Csikszentmihalyi called "flow" is not relaxation. It is not ease. It is intense, effortful, demanding engagement with something difficult — and it is, by every measure Csikszentmihalyi's research could capture, the state in which human beings are most fully alive.
Byung-Chul Han's diagnostic framework identifies a superficially similar but theoretically opposed phenomenon. The achievement subject — Han's term for the individual who has internalized the imperative to perform — works with the same intensity, the same absorption, the same inability to stop. But the phenomenology is different. The achievement subject is not engaged in flow. The achievement subject is engaged in auto-exploitation — a condition in which the whip and the hand that holds it belong to the same person, in which the compulsion to achieve is experienced as freedom because there is no external authority to rebel against, and in which burnout is the inevitable terminal state.
Both frameworks describe intense, sustained, voluntary engagement with challenging work. Both frameworks acknowledge that the experience is subjective — that the person in the state may not be able to distinguish, from within, whether they are in flow or in compulsion. And both frameworks generate predictions about what happens next. Csikszentmihalyi predicts that flow produces energy, renewed capacity, and the desire to return to the activity — that flow is developmental, building capability through engaged practice. Han predicts that auto-exploitation produces depletion, exhaustion, and eventually the flat affect of a nervous system that has been running at maximum capacity for too long.
The problem is that from the outside, the two states are identical. A camera pointed at a person in flow and a camera pointed at a person in the grip of compulsion record the same image: a person working intensely, absorbed in what they are doing, unable or unwilling to stop. The external behavioral signature is indistinguishable. The internal experience, by the subjects' own report, may be indistinguishable in the moment — the distinction becomes apparent only afterward, in the quality of the fatigue, in whether the person feels renewed or depleted, in whether the desire to return is anticipatory or anxious.
This indistinguishability is not an empirical problem waiting for better measurement. It is a conceptual problem — a tension between two well-developed theoretical frameworks that cannot be resolved by additional data about external behavior, because external behavior is precisely what the two frameworks predict will be identical. Resolving the problem requires either a new framework that subsumes both, or a diagnostic criterion that the two frameworks agree distinguishes the states they describe.
Neither currently exists.
The triumphalist tradition, when it engages with this problem at all, resolves it by fiat: the intense engagement that AI tools produce is flow, because the conditions Csikszentmihalyi specified — clear goals, immediate feedback, challenge-skill balance, sense of control — are present. AI tools provide immediate feedback. They enable clear goals. They maintain the challenge-skill balance by handling the routine and exposing the judgment layer. The triumphalist tradition points to the conditions and declares the case closed.
But this resolution is inadequate, for a reason Laudan's framework makes precise. A tradition that resolves a conceptual problem by ignoring one of the competing frameworks rather than accommodating it has not resolved the problem — it has suppressed it. The suppressed framework does not disappear. Its explanatory power remains, and its predictions continue to be confirmed by a subset of the evidence. Han's prediction that auto-exploitation will produce burnout is confirmed by the Berkeley data, by the spouse's viral essay, by Segal's own confession of compulsive late-night building sessions. Suppressing Han's framework does not make this evidence go away. It makes the triumphalist tradition unable to see it, which is the epistemic equivalent of a research tradition that defines its anomalies as non-problems — the classic signature of degeneration.
The elegist tradition resolves the problem by the opposite fiat: the intense engagement is compulsion, because the structural conditions Han identifies — the internalized imperative to achieve, the absence of external authority, the self-exploitation that masquerades as freedom — are present. AI tools amplify the imperative by removing the friction that previously imposed natural limits on how much work a person could do. The elegist tradition points to the conditions and declares the case closed.
This resolution is equally inadequate. It ignores the substantial evidence — both experimental and phenomenological — that flow is a real, distinguishable state with measurable cognitive and emotional characteristics that differ from compulsion. Csikszentmihalyi's research program extended across four decades, six continents, and thousands of subjects. The evidence that flow exists, that it is developmental, and that it produces outcomes distinguishable from compulsive engagement is not a theoretical commitment that can be dismissed by pointing to a competing framework. It is an empirical finding that the elegist tradition must accommodate or refute, and refuting it requires engaging with the evidence rather than substituting a different theoretical vocabulary.
Both traditions, in other words, resolve the conceptual problem by dismissing the competing framework rather than integrating it. And both resolutions are degenerative, because they reduce the tradition's problem-solving capacity by narrowing the evidence base it can address.
The progressive resolution — the one that would increase problem-solving capacity rather than decreasing it — would require a framework that can accommodate both flow and compulsion as real phenomena, specify the conditions under which each occurs, and provide a diagnostic criterion that distinguishes them. Such a framework does not yet exist in the AI discourse, which is why the conceptual problem remains the most consequential unsolved problem of the transition.
Segal proposes a diagnostic signal in *The Orange Pill*: the quality of the questions the person is asking. When the questions are generative — "What if we tried this? What would happen if we connected that?" — the state is more likely to be flow. When the questions are reactive — "What's next in the queue? How do I clear this backlog?" — the state is more likely to be compulsion. The signal is introspective rather than behavioral, which means it cannot be observed from outside, which means it cannot resolve the problem for researchers or policymakers. But it may resolve the problem for the individual practitioner, which is itself a form of progress — a partial solution to a problem that is otherwise intractable.
Evaluated by Laudan's standard, the proposed signal is progressive to the extent that it solves a problem that neither the flow framework nor the compulsion framework solves alone — the problem of real-time self-assessment during intense engagement. It is limited in that the solution is private, introspective, and unverifiable by external observers. It generates its own anomalies: What about the practitioner who asks generative questions compulsively? What about the practitioner who clears the backlog in a state of genuine flow because she finds optimization deeply satisfying? The signal does not resolve these cases, and their existence suggests that the framework requires further development.
This brings Laudan's analysis to a point that is both uncomfortable and productive. The flow-compulsion problem may not be fully resolvable within the current theoretical resources of the AI discourse — not because the problem is incoherent, but because the frameworks available for addressing it are insufficiently developed. Csikszentmihalyi's flow theory was developed through the study of activities with natural endpoints — rock climbing, chess, surgery — where the challenge-skill balance is externally constrained and the activity has a built-in conclusion. AI-assisted work has neither. The challenge scales infinitely. The skill scales with the tool. There is no natural endpoint, no summit, no checkmate. The conditions that Csikszentmihalyi identified as productive of flow may interact differently with an activity that has no inherent limit than with one that does.
Similarly, Han's theory of auto-exploitation was developed through the analysis of a cultural condition — the achievement society — that predated AI by decades. The specific characteristics of AI-assisted work — the immediacy of feedback, the collapse of the imagination-to-artifact gap, the conversation-like quality of the human-machine interaction — may produce a form of engagement that Han's framework was not designed to capture. Auto-exploitation, as Han describes it, is characterized by the absence of an external other to whom the exploitation can be attributed. But AI-assisted work introduces a conversational partner — not a person, not a consciousness, but something that responds, that holds context, that occasionally surprises. This partner may alter the phenomenology in ways that Han's framework, developed before such partners existed, cannot anticipate.
The deepest conceptual problems of a transition are often the ones that reveal the inadequacy of the theoretical resources developed before the transition occurred. The AI transition has produced a phenomenon — intense, sustained, productive engagement with a conversational machine partner that has no natural endpoint — that the existing frameworks were not designed to analyze. Flow theory was designed for bounded activities. Auto-exploitation theory was designed for a cultural condition without conversational AI. Neither framework anticipated the specific features of the phenomenon they are now being asked to explain.
This does not mean the frameworks are useless. It means they need development. The progressive response to a conceptual problem that exceeds the current theoretical resources is not to declare the problem insoluble but to develop new resources — new distinctions, new diagnostic criteria, new empirical indicators — that bring the problem within reach. That development is the work of the next decade of research on AI and human experience. It has barely begun, and the quality of the questions that guide it will determine whether the result is genuine understanding or the premature closure that passes for understanding when the pressure to choose a side exceeds the pressure to get it right.
---

Every research tradition contains what its practitioners cannot see. This is not a metaphor for ignorance. It is a structural feature of how traditions function — indeed, of how they must function in order to function at all. A research tradition that held all of its assumptions open to simultaneous scrutiny would be paralyzed. The capacity to take certain things for granted, to treat certain commitments as settled so that attention can be directed to the problems that remain unsettled, is what allows a tradition to be productive. The cost of that productivity is blindness to the assumptions that enable it.
Laudan made this point with characteristic precision in *Science and Values*. Theories, methods, and aims form what he called a "reticulated model" — a web of mutual adjustment in which changes at any level propagate to the others, but never all at once. Scientists do not simultaneously revise their theories, their methods for testing those theories, and their aims for what theories are supposed to accomplish. They hold some elements fixed while revising others, then hold the revised elements fixed while reconsidering what was previously stable. The web evolves, but never in its entirety. At any given moment, some commitments are treated as background — as the water the fish swims in — while others are foregrounded as objects of inquiry.
Segal's fishbowl metaphor in *The Orange Pill* captures the phenomenology of this structural feature with a vividness that the philosophical vocabulary sometimes lacks. The fishbowl is the set of assumptions so familiar that the practitioner has stopped recognizing them as assumptions. The scientist's fishbowl is empiricism — the commitment to observation and measurement as the ground of knowledge. The builder's fishbowl is the question "Can this be made?" The philosopher's fishbowl is "Should it be?" Each fishbowl reveals part of the world and hides the rest. Each is simultaneously enabling and constraining. And the effort Segal describes — pressing one's face against the glass to see the world beyond the water's refractions — is the effort of making background assumptions visible, which is the hardest intellectual work any tradition can undertake.
Laudan's framework adds something the metaphor alone does not provide: a criterion for evaluating whether a tradition's response to the fishbowl problem is progressive or degenerative. When a tradition encounters problems it cannot solve — problems that arise from outside its fishbowl, problems generated by developments the tradition's assumptions did not anticipate — it faces a choice. It can expand, modifying its assumptions to accommodate the new problems while preserving its capacity to solve the old ones. Or it can retreat, dismissing the new problems as illegitimate, redefining its boundaries to exclude them, treating the fishbowl's walls as the edges of reality rather than the edges of a particular perspective.
Expansion is progressive. Retreat is degenerative. The distinction is not between traditions that change and traditions that remain stable — both change and stability can be either progressive or degenerative depending on whether they increase or decrease problem-solving capacity. The distinction is between traditions that grow to meet new problems and traditions that shrink to avoid them.
The AI transition has cracked multiple fishbowls simultaneously, and the responses to those cracks illustrate both patterns.
The epistemological fishbowl has been cracked by the authorship problem. For most of the history of textual production, authorship was a reasonably stable concept. A person thought something, wrote it down, and the text bore a traceable relationship to the person's cognitive process. The relationship was never simple — editors intervened, collaborators contributed, cultural influences shaped what could be thought and how — but the concept of authorship was robust enough to serve as the basis for copyright law, academic credit, literary criticism, and the everyday sense that a text tells you something about the mind that produced it.
AI-assisted writing cracks this fishbowl. When Segal describes the process of writing *The Orange Pill* — the moments when Claude helped him say something better, the moments when Claude offered a structure that made his argument legible, the moments when Claude drew a connection that changed the direction of the argument — the concept of authorship becomes unstable in a way the old fishbowl cannot accommodate. The ideas originated with Segal. The expression was collaborative. The connections sometimes emerged from the interaction itself, belonging to neither party. Where does authorship reside? In the intention? In the expression? In the collaboration?
The literary tradition has two available responses. It can expand, developing a concept of authorship capacious enough to accommodate human-AI collaboration while preserving the features of the old concept that remain valuable — the connection between text and intention, the accountability of the author for the claims the text makes, the role of authorship in organizing credit and responsibility. Or it can retreat, declaring AI-assisted writing to be not-writing, excluding it from the category by definitional fiat, and preserving the old concept by narrowing its application.
Both responses are visible in the current discourse. The expansion response produces new vocabulary — "collaborative authorship," "augmented writing," the distinction between the ideas and their expression — that attempts to accommodate the new phenomenon without abandoning the useful features of the old concept. The retreat response produces gatekeeping — the insistence that AI-assisted text is categorically different from human text, that it lacks authenticity, that it is, in some essential sense, fraudulent. The retreat response preserves the fishbowl's coherence at the cost of excluding a rapidly growing category of textual production from its purview.
The economic fishbowl has been cracked by the repricing problem. For decades, the technology industry operated within a valuation framework that equated software with value: the capacity to write code was scarce, the products that code enabled commanded premium prices, and the companies that employed the code-writers were valued accordingly. The Death Cross of SaaS valuations documented in *The Orange Pill* represents the moment the market recognized that this framework no longer held — that the capacity to write code had been commoditized, that the value had migrated to the judgment layer above the code, and that the companies whose value proposition was "we wrote the code" were structurally overvalued.
The economic tradition can expand, developing a valuation framework that distinguishes between code-value and ecosystem-value, that recognizes judgment, institutional trust, and accumulated data as the durable assets of the AI era. Or it can retreat, treating the valuation collapse as a temporary correction, waiting for the "fundamentals" to reassert themselves, and applying the old framework to a world it no longer describes. The expansion response requires the tradition to acknowledge that its core assumption — that software production is the locus of value — was not a fact about the world but an assumption specific to a historical period. The retreat response treats the assumption as permanent and the contradiction as noise.
The pedagogical fishbowl has been cracked by the assessment problem. Educational institutions have operated for centuries within a framework that equates learning with the production of artifacts — essays, exams, problem sets — that can be evaluated to determine whether learning has occurred. The framework assumes a traceable relationship between the student's cognitive process and the artifact produced: if the essay demonstrates understanding, the student understands. AI breaks this assumption. A student can now produce an essay that demonstrates understanding without possessing it, because the understanding is in the tool rather than the student. The artifact no longer serves as a reliable proxy for the cognitive process that was supposed to produce it.
The pedagogical tradition can expand, developing assessment methods that evaluate the cognitive process directly rather than through its artifacts — evaluating the questions students ask rather than the answers they produce, assessing the capacity for judgment rather than the capacity for recollection, measuring understanding through dialogue rather than through documentation. Or it can retreat, banning AI from classrooms, requiring handwritten exams, treating the tools as threats to be excluded rather than phenomena to be accommodated. The expansion response requires the tradition to acknowledge that its assessment methods were proxies for learning, not learning itself, and that the proxies have been compromised. The retreat response treats the proxies as sacred and defends them against a development that has rendered them unreliable.
In each case, the pattern is the same. The fishbowl cracks. The tradition faces a choice between expansion and retreat. Expansion is harder, because it requires revising assumptions the tradition has treated as foundational. Retreat is easier, because it preserves the familiar at the cost of relevance. And the temptation to retreat is strongest in precisely those traditions where the assumptions run deepest — where the fishbowl has been in place so long that its walls are invisible.
Laudan observed this pattern repeatedly in the history of science. When the Aristotelian tradition encountered evidence that the heavens were not perfect and unchanging — sunspots, the phases of Venus, the moons of Jupiter — some Aristotelians expanded, incorporating the new observations into a modified cosmology. Others retreated, refusing to look through the telescope, declaring the observations artifacts of the instrument, defending the fishbowl's walls against the pressure of evidence. The retreaters preserved their coherence. The expanders preserved their relevance. History remembers which response was progressive.
But Laudan also observed something that complicates the narrative of expansion-as-progress. Expansion is not always successful. A tradition that expands too quickly, that modifies its core assumptions in response to every anomaly, risks losing the coherence that made it productive in the first place. The Ptolemaic astronomers who added epicycles to absorb each new observation were expanding — modifying their framework to accommodate new data — but the expansion was degenerative because it increased complexity without increasing problem-solving capacity. Each epicycle solved one problem and generated new questions about why the system required so many epicycles.
The risk of degenerative expansion is real in the AI discourse. A literary tradition that accommodates AI-assisted authorship by simply declaring everything to be authorship — that treats the distinction between writing and prompting as irrelevant, that abandons the concept's discriminative function in the name of inclusivity — has expanded past the point of usefulness. An economic tradition that accommodates the Death Cross by declaring everything to be an ecosystem play — that treats the distinction between genuine institutional infrastructure and mere incumbency as irrelevant — has expanded past the point of descriptive accuracy. An educational tradition that accommodates AI by abandoning assessment entirely — that treats the distinction between understanding and output as unmeasurable and therefore unimportant — has expanded past the point of pedagogical responsibility.
Progressive expansion is not unlimited expansion. It is the modification of assumptions that demonstrably increases problem-solving capacity while preserving the tradition's ability to make meaningful distinctions. The literary tradition needs a concept of authorship that can accommodate collaboration while still distinguishing between the person who directed the work and the tool that assisted — because the distinction matters for accountability, for credit, for the relationship between reader and text. The economic tradition needs a valuation framework that can distinguish between code-value and ecosystem-value while still measuring something — because undifferentiated claims that "everything is an ecosystem" are as vacuous as the old claims that "everything is software." The educational tradition needs assessment methods that can evaluate understanding in the presence of AI while still assessing something — because the alternative, abandoning evaluation altogether, serves no one.
The fishbowl problem is not the problem of being inside a fishbowl. Everyone is always inside one. The fishbowl problem is the problem of responding well when the glass cracks — of expanding enough to accommodate the new problems without expanding so far that the tradition loses the coherence that makes it capable of solving problems at all.
The AI transition has cracked fishbowls across every domain of human intellectual life simultaneously. The cracks are real. The pressure to respond is intense. And the quality of the responses — whether they are progressive expansions that increase problem-solving capacity or degenerative retreats that preserve coherence at the cost of relevance, or degenerative expansions that preserve relevance at the cost of coherence — will determine whether the traditions that structure human intellectual life emerge from this transition stronger or weaker.
The evaluation is comparative, empirical, and ongoing. No tradition's response is yet complete. The progressive traditions will be the ones that find the narrow path between retreat and overexpansion — that modify their assumptions enough to accommodate the new problems while preserving the discriminative power that makes the assumptions useful. That path is difficult to find, easy to lose, and identifiable only in retrospect. The work of finding it is the intellectual task of the present moment.
---
Every transition between research traditions leaves behind problems that the new tradition does not solve. Laudan called these residual problems, and his insistence on their significance was one of his most distinctive — and most uncomfortable — contributions to the philosophy of science. The discomfort arises because residual problems complicate the narrative of progress that both scientists and the public prefer. The preferred narrative is clean: the new theory is better than the old one, the transition is justified, progress has been made, and whatever was left behind was not worth keeping. Laudan demonstrated, through meticulous historical analysis, that this narrative is almost always incomplete. The new theory is usually better in aggregate. The transition is usually justified on balance. Progress has usually been made by the standard of overall problem-solving capacity. But things are left behind — problems the old tradition solved that the new tradition cannot — and those residual problems are real losses, not rounding errors in the aggregate accounting.
The concept of residual problems is the philosophical machinery needed to take the Luddites seriously without either dismissing them as irrational or endorsing their conclusion that the transition should be stopped.
The Luddites, as Segal describes them in *The Orange Pill*, were not ignorant. They were not afraid of progress in the abstract. They were skilled workers who understood, with a precision that bordered on the prophetic, what the power looms would do to them — to their wages, their communities, their children's futures, and the specific forms of knowledge that lived in their hands. They were correct about the costs. The framework knitters' wages collapsed. The communities built around craft expertise dissolved. The embodied knowledge of fiber tension and drape — knowledge that had been deposited over years of patient practice, that could not be articulated in words or transmitted through instruction — disappeared from the economy because the new production system did not need it and therefore did not preserve it.
The triumphalist narrative handles the Luddites by pointing to the aggregate outcome. The industrial revolution produced, over the course of a century, an expansion of material prosperity, technological capability, and human lifespan that the pre-industrial world could not have conceived. The Luddites were on the wrong side of history. Their suffering, while regrettable, was the cost of progress — a cost that the aggregate gains more than compensated.
Laudan's framework reveals the inadequacy of this response. A residual problem is not an unsolved problem in the ordinary sense — a problem the new tradition simply has not gotten around to addressing. It is a problem the new tradition's commitments prevent it from addressing. The industrial production system could not preserve the framework knitters' embodied knowledge, because preserving it would have required maintaining the very production methods that industrialization was designed to replace. The knowledge was not merely unpreserved. It was structurally unpreservable within the new system.
This structural feature — the impossibility of preserving within the new tradition what the old tradition solved — is what makes residual problems different from merely unsolved problems, and more troubling. Unsolved problems can be addressed with time, resources, and attention. Residual problems often cannot, because the conditions for their solution have been eliminated by the transition itself.
The AI transition is generating residual problems of precisely this kind.
Consider the embodied knowledge that Segal describes in his account of the senior engineer in Trivandrum. Over years of debugging, dependency management, and the mechanical work of implementation, this engineer had built what Segal calls "architectural intuition" — the capacity to feel that something was wrong in a codebase before he could articulate what. This intuition was not mystical. It was the product of thousands of hours of patient struggle, each hour depositing a thin layer of understanding that accumulated into something solid enough to stand on. The intuition was built through the friction of manual implementation — through the specific, unglamorous, time-consuming process of encountering errors, diagnosing them, and developing the pattern-recognition that allowed him to anticipate them.
Claude removes this friction. The errors are handled by the tool. The diagnosis is performed by the tool. The pattern-recognition is the tool's, not the engineer's. The output is correct — often more correct than what the engineer would have produced manually. But the process that would have built the intuition has been bypassed.
The question is whether this constitutes a residual problem in Laudan's sense — a problem that the new tradition's commitments prevent it from solving — or merely an unsolved problem that the tradition can address with appropriate intervention.
The answer depends on a distinction that the current discourse has not adequately drawn: the distinction between friction that was formative and friction that was merely present. Not all friction is formative. Much of the mechanical work of software development — resolving dependency conflicts, writing boilerplate configuration, managing the connective tissue between components — is not pedagogically valuable. It is tedious, repetitive, and teaches nothing that the first encounter did not teach. Removing this friction is not a loss. It is an unambiguous gain.
But some of the friction — perhaps ten minutes in a four-hour block, as Segal estimates — was genuinely formative. It was the moment when something unexpected happened, when a system behaved in a way the engineer did not predict, when the configuration revealed a connection between components that no documentation described. These moments were rare and embedded in hours of drudgery, invisible to anyone measuring from outside, and they were the moments that built the intuition that distinguished the senior engineer from the junior one.
The new tradition — AI-assisted development — removes both kinds of friction simultaneously. It cannot distinguish between the drudgery that teaches nothing and the formative moment embedded in the drudgery, because from the outside they look identical. A dependency conflict that takes thirty minutes to resolve manually is drudgery ninety-five percent of the time and formative five percent of the time, and neither the engineer nor the tool can predict in advance which instance will be which.
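The structure of that bind can be put in a toy expected-value form. Every number here is illustrative rather than measured: Segal's rough five percent, a nominal thirty-minute episode.

```latex
% Toy model of ex-ante indistinguishable friction episodes.
%   p = probability a given episode is formative (illustrative: 0.05)
%   V = pedagogical value deposited by a formative episode
%   c = time cost of resolving any episode manually (~30 minutes)
% Because formative and drudge episodes look identical in advance,
% the only available policies are keep-all friction or remove-all.
% Removing all friction saves c per episode but forfeits an expected
%   E[formation lost per episode] = p * V
% The selective policy (paying c only for the formative five percent)
% does not exist, because nothing identifies those episodes ex ante.
\mathbb{E}[\text{formation lost per episode}] \;=\; p \cdot V, \qquad p \approx 0.05
```

The point of the sketch is not the numbers but the missing option: the policy that would make the tradeoff easy, selective removal, is structurally unavailable.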
If the formative friction can be replaced by other means — through deliberate pedagogical exercises, through structured debugging practice, through the kind of "AI Practice" that the Berkeley researchers propose — then the loss is an unsolved problem, addressable within the new tradition's commitments. If the formative friction can only be built through the specific, unpredictable, embedded-in-drudgery encounters that manual implementation provides, then the loss is a residual problem — structurally unpreservable within the new tradition, because preserving it would require maintaining the very manual processes that AI-assisted development is designed to replace.
The answer is not yet known. The evidence is too thin, the transition too recent, and the alternative pedagogical methods too undeveloped to determine whether the formative friction of manual implementation can be reproduced in other forms. This uncertainty is itself a data point — evidence that the AI transition is generating residual problems whose severity cannot yet be assessed.
Laudan was explicit that residual problems do not make a transition irrational. A new tradition that solves significantly more problems than the old one is the rational choice even if it leaves some of the old tradition's problems unsolved. The industrial production system was rationally adopted despite the Luddites' residual problems, because the problems it solved — scale, cost, access — were more numerous and, in aggregate, more consequential than the problems it left behind.
But Laudan was equally explicit that residual problems create obligations. They are debts the new tradition owes — acknowledgments that the transition came at a cost, that the cost was borne by specific people and specific forms of knowledge, and that addressing the cost is not an afterthought but a condition of the transition's legitimacy.
The Luddites' debt was not repaid by the aggregate prosperity of the industrial age. It was repaid — partially, belatedly, and at enormous political cost — by the institutional structures that the industrial society eventually built: the eight-hour day, the weekend, child labor laws, the trade unions, the safety regulations. These structures did not stop industrialization. They redirected it. They insisted that the power flowing through the new system had to leave room for the people inside it.
The debt was repaid because people built the structures to repay it. Where the structures were not built, the debt compounded — in exploited labor, in destroyed communities, in generations of people whose suffering the aggregate metrics did not register.
The AI transition is accumulating debts of the same kind. The skilled practitioners whose expertise was optimized for the old problem set — the developers who spent years mastering the lower floors of the technology stack, the writers whose craft was built through the specific struggle of finding words, the analysts whose value was in the patient construction of models that AI can now generate in seconds — bear the cost of the repricing. Their expertise is not worthless. It is precisely the judgment, taste, and architectural instinct that the new economy claims to value most. But the market's recognition of this value lags behind the market's recognition of the old value's disappearance, and in the gap between the two recognitions, real people experience real displacement.
Segal's prescription — build dams, create transitional institutions, do not leave the displaced to bear the cost alone — is, in Laudan's framework, the acknowledgment that residual problems create obligations and that meeting those obligations is the condition of the transition's legitimacy.
The history of technological transitions provides both models and warnings. The transitions where dams were built early — where the institutional structures to address residual problems were designed and implemented during the transition rather than after it — produced broadly shared expansion. The transitions where dams were built late produced decades of suffering followed by belated institutional response. The transitions where dams were not built at all produced the Luddites' fate: displacement without recourse, suffering without remedy, and the permanent loss of knowledge that might, with appropriate institutional support, have been preserved or transformed.
The AI transition is in the early phase where the institutional choices are still being made. The residual problems are accumulating. The debts are being incurred. The question is whether this transition will be one where the dams were built in time or one where history records, with the detachment of retrospection, that the opportunity to build them was available and was not taken.
Residual problems are the philosopher's reminder that progress is never free. Every gain comes with a loss. The evaluation of whether the gain exceeds the loss is comparative, empirical, and never complete. But the obligation to address the loss — to build the structures that prevent residual problems from compounding into catastrophe — is not comparative. It is absolute. It obtains regardless of the magnitude of the gains, because the people who bear the cost of the transition are not abstract units in an aggregate calculation. They are specific individuals with specific knowledge and specific lives, and their claim on the new tradition's attention is not diminished by the tradition's success in solving other problems.
The Luddites understood this. Their response — breaking machines — was the wrong instrument. Their claim — that the transition owed them something — was not.
---
When a research tradition replaces one set of problems with another, Laudan's framework provides the machinery for evaluating whether the replacement constitutes progress. The evaluation is straightforward in principle and difficult in practice: a problem-shift is progressive when the new problems are more important, more productive, and more consequential than the old ones. It is degenerative when the new problems are trivial compared to what was lost, or when they prove permanently intractable, consuming the tradition's resources without yielding solutions.
The distinction between progressive and degenerative problem-shifts is among the most practically consequential tools in Laudan's philosophical repertoire, because it permits the evaluation of transitions that cannot be assessed by simpler criteria. A tradition that merely solves more problems is obviously progressive. A tradition that merely generates more anomalies is obviously degenerative. But many transitions — including the most consequential ones — do both simultaneously: they solve a large class of old problems while generating a new class of problems that did not previously exist. Whether such a transition is progressive or degenerative depends entirely on the relative importance and tractability of the old and new problem sets, a comparison that requires substantive judgment rather than mechanical counting.
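Laudan's verbal formulation in *Progress and Its Problems* can be rendered schematically. The rendering below is a sketch, and the weights in it are placeholders for precisely the substantive judgment that mechanical counting cannot supply.

```latex
% Schematic of Laudan's problem-solving effectiveness: the number and
% importance of the empirical problems a theory T solves, minus the
% number and importance of the anomalies and conceptual problems it
% generates. The weights w_i and v_j (the "importance" terms) are not
% supplied by the framework; assigning them is the substantive judgment
% the surrounding text describes.
E(T) \;=\; \sum_{i \,\in\, \text{solved}} w_i \;-\; \sum_{j \,\in\, \text{anomalies} \,\cup\, \text{conceptual}} v_j
```

A progressive problem-shift is then the claim that E(T_new) > E(T_old). Everything contested in the AI debate lives in the weights.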
Segal's ascending friction thesis — the claim that the removal of mechanical friction through AI does not eliminate difficulty but relocates it to a higher cognitive level — is a claim about problem-shifts, and Laudan's framework is the appropriate instrument for evaluating it.
The thesis draws its evidence from the history of computing abstraction. Each major abstraction layer in the development of software removed difficulty at one level and introduced it at another. Assembly language required the programmer to manage every register and memory address. Compilers abstracted this away, and the critics warned that programmers would lose their understanding of the machine. The critics were correct: most contemporary programmers cannot write assembly. But the programmers freed from assembly built operating systems and networked applications of a complexity that assembly-era programmers could not have conceived. The problem-shift was progressive: the problems that assembly programming solved — direct hardware control, memory management — were replaced by problems that were harder, more consequential, and more productive of valuable outcomes.
The pattern repeated with each subsequent layer. Frameworks abstracted away code structure, and the critics warned about lost architectural understanding. Cloud infrastructure abstracted away server management, and the critics warned about lost operational knowledge. In each case, the critics correctly identified a loss — a residual problem in Laudan's vocabulary — and in each case, the practitioners who ascended to the higher level produced work that was more ambitious, more integrative, and more valuable than what the lower level permitted.
AI-assisted development extends this pattern by abstracting away implementation itself. The programmer no longer writes the code. The programmer describes what the code should do, in natural language, and the tool handles the translation. The abstraction is more radical than any previous layer, because it crosses the boundary between formal and natural language — the boundary that has defined programming as a discipline since its inception. The problem-shift is correspondingly more radical: the problems of implementation — syntax, debugging, dependency management, the mechanical labor of converting design into running software — are replaced by the problems of judgment — what should be built, for whom, to what standard, with what tradeoffs.
The ascending friction thesis claims this is a progressive problem-shift. The judgment problems are harder than the implementation problems. They are more consequential — the decision of what to build matters more than the execution of the build. And they are more productive of valuable outcomes — a correct judgment about what should exist, executed competently by AI, produces more value than a correct implementation of an incorrect judgment, no matter how skillfully the implementation was performed.
The claim is plausible. It is also, at this stage, underdetermined by the evidence.
The first test of a progressive problem-shift is whether the practitioners who ascend to the new problem level are actually engaging with harder problems. The evidence from *The Orange Pill* is mixed. Segal describes engineers who, freed from implementation, began working on product vision, architectural strategy, and the integrative thinking that connects technical decisions to user needs and business models. These practitioners ascended. The problems they engaged with were genuinely harder and more consequential than the problems they left behind. The senior engineer who discovered that his remaining twenty percent — the judgment, the instinct, the taste — was worth everything is the paradigm case of a progressive problem-shift.
But Segal also acknowledges, drawing on the Berkeley study, that freed cognitive resources do not automatically flow to higher-level problems. They often flow to more of the same — additional features, additional optimization passes, additional tasks that happen to be available. The Berkeley researchers documented this pattern with empirical rigor: workers who were freed from implementation filled the freed time not with strategic thinking but with task expansion. The tool made more work possible, and the internalized imperative to achieve converted possibility into activity, regardless of whether the activity constituted an ascent to harder problems or merely a lateral expansion at the same level.
This is the degenerative alternative: a problem-shift in which the old problems are replaced not by harder, more consequential problems but by more of the same problems — more features, more optimizations, more output — at the same cognitive level. The worker is busier but not higher. The friction has been removed but nothing harder has taken its place. The freed cognitive resources dissipate into task-filling rather than ascending to judgment.
Laudan's framework provides the criterion for distinguishing between these outcomes, but it does not predict which will obtain. The criterion is problem-solving effectiveness: are the practitioners who ascended producing solutions to more consequential problems than they were solving before? Are the solutions more valuable — to users, to organizations, to the broader ecosystem? Are the problems they now face genuinely harder, requiring capabilities that the old problem set did not develop?
These questions are answerable in principle. They are not yet answered in fact, because the transition is too recent and the longitudinal data too sparse. What can be said with confidence is that both outcomes are real — that some practitioners are ascending and some are laterally expanding — and that the difference between them is determined not by the tool but by the structures surrounding the tool's use.
This is where the ascending friction thesis intersects with the institutional argument that runs through *The Orange Pill*. The problem-shift is progressive or degenerative depending on whether the organizational and cultural environment directs the freed resources toward judgment or toward task-filling. An organization that responds to AI-driven productivity gains by expecting more output at the same level — more features per sprint, more tickets closed per day — is producing a degenerative problem-shift. The old problems are solved, but nothing harder takes their place. The workers are faster but not higher.
An organization that responds by restructuring its expectations — by redefining the job around judgment rather than implementation, by creating protected time for the strategic thinking that the freed resources make possible, by measuring value rather than volume — is producing a progressive problem-shift. The old problems are solved, and harder, more consequential problems take their place. The workers are not just faster. They are operating at a different level.
The distinction maps onto the history of computing abstraction. Each previous abstraction layer produced both outcomes. Some practitioners ascended; others expanded laterally. The practitioners who ascended were typically those whose organizations and cultures supported the ascent — who were given the time, the expectation, and the institutional permission to engage with harder problems. The practitioners who expanded laterally were typically those whose organizations treated the abstraction as an opportunity to produce more at the same level — to use the compiler to write more assembly-equivalent code faster rather than to write fundamentally different software.
The pattern suggests that the ascending friction thesis is conditionally correct. The problem-shift is progressive when the institutional environment supports ascent. It is degenerative when the environment treats the abstraction as a productivity multiplier at the existing level rather than an elevation to a new one. The tool does not determine the outcome. The structures surrounding the tool do.
Laudan would note that this conditionality is itself a finding of consequence. It means that the evaluation of the AI transition cannot be conducted at the level of the technology alone. The same tool, deployed in different institutional environments, produces different problem-shifts — progressive in one context, degenerative in another. The evaluation must therefore be conducted at the level of the technology-plus-institution, which is a more complex object of analysis but a more accurate one.
It also means that the institutional choices being made now — the decisions about how to restructure work, how to redefine roles, how to measure value, how to allocate the cognitive resources that AI frees — are not secondary to the technology. They are constitutive of whether the technology produces progress. The tool is the river. The institution is the dam. And the character of the pool that forms behind the dam — whether it is a habitat for more ambitious, more consequential work or merely a wider, shallower version of the old channel — depends on where and how the dam is built.
The ascending friction thesis is not a prediction. It is a possibility — one that the evidence shows is real in some contexts and absent in others. The work of making it real in more contexts is institutional work, organizational work, cultural work. It is the work of building environments in which the freed resources flow upward rather than outward, in which the removal of old friction creates space for new friction that is harder and more worthy of the practitioners who face it.
Whether this work will be done — at sufficient scale, with sufficient speed, across enough of the organizations and cultures that the AI transition is reshaping — is the open question on which the progressiveness of the transition ultimately depends.
---
Laudan spent considerable energy identifying the conditions under which inquiry goes wrong — not through dishonesty or incompetence, but through structural features of the inquiry process that systematically bias results in one direction. The most insidious of these structural biases are what might be called methodological vices: practices that make confirmation easy and disconfirmation difficult, that produce the appearance of progress while concealing stagnation or regression, that satisfy the inquirer's desire for answers without subjecting those answers to the tests that would reveal their inadequacy.
A methodological vice is not a logical error. It is an environmental condition — a feature of the way inquiry is conducted that tilts outcomes toward a particular kind of result regardless of whether that result is warranted. The vice operates below the level of individual decisions. The inquirer may be scrupulous, honest, and rigorous at every step, and the outcome may still be systematically biased, because the bias is in the environment rather than the agent.
The aesthetics of the smooth, as documented through Byung-Chul Han's analysis and Segal's engagement with it in *The Orange Pill*, constitutes a methodological vice of precisely this kind — perhaps the most consequential one the AI transition has produced.
The vice operates through a specific mechanism: the asymmetry between the cost of producing output and the cost of evaluating it. AI tools produce text, code, analysis, and design with remarkable fluency and speed. The output is syntactically correct, stylistically polished, and structurally coherent. It arrives, as Segal notes, with a surface so smooth that the seams where the argument might fracture are nearly invisible. Producing this output costs seconds. Evaluating it — determining whether the polished surface conceals a genuine insight or a plausible fabrication, whether the argument holds under scrutiny or merely sounds as if it does — costs orders of magnitude more time and effort.
This asymmetry is the vice. It creates a systematic tilt toward acceptance, because the cost of accepting smooth output is low (a glance, a nod, the pleasant sensation of encountering something that reads well) while the cost of rejecting it is high (sustained attention, domain expertise, the willingness to slow down and test each claim against what is actually known). In any environment where time is scarce and output is abundant — which is to say, in any environment shaped by AI tools — the tilt toward acceptance dominates. Smooth output accumulates. Unverified claims propagate. The appearance of rigor substitutes for rigor itself.
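The tilt can be written as a one-line decision inequality. This is a sketch with hypothetical symbols, not a model drawn from any cited study.

```latex
% Toy decision model of the acceptance tilt.
%   c_v = cost of verifying one piece of smooth output
%   p   = probability the output contains a consequential error
%   L   = loss incurred if an accepted error propagates
% Verification is worth its cost only when p * L > c_v. AI tools push
% production cost toward zero while leaving c_v high and attention
% scarce, so for most individual items the inequality fails and
% acceptance dominates, even as the aggregate expected loss, N * p * L,
% grows with the volume N of output that no one verified.
\text{verify iff}\quad p \cdot L \;>\; c_v
```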
Segal provides the paradigm case in his account of the Deleuze fabrication. Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Gilles Deleuze — something about "smooth space" as the terrain of creative freedom. The passage was rhetorically elegant. It connected two threads beautifully. It read as insight. But the philosophical reference was wrong. Deleuze's concept of smooth space has almost nothing to do with how Claude had used it. The output was plausible, coherent, and false — and the smoothness of its surface was what made the falsity difficult to detect.
This is not a failure of the tool in any straightforward sense. Claude was not lying. It was performing the operation it was designed to perform: generating text that is contextually appropriate, syntactically correct, and consistent with the patterns in its training data. The pattern-matching produced a surface that looked like philosophical insight because it had the syntactic and structural features of philosophical insight. What it lacked was the semantic accuracy that would have made it genuine insight — and the distinction between the syntactic features and the semantic content is precisely what the smooth surface conceals.
Laudan would recognize this pattern from the history of science. Traditions that generate predictions so vague they cannot be refuted exhibit the same vice. Psychoanalysis, in Karl Popper's classic critique, could accommodate any clinical observation by adjusting its interpretive framework, which meant that no observation could count as evidence against it, which meant that it was never tested in any meaningful sense. The theory was smooth — it always fit, it always sounded right, it always had an explanation — and the smoothness was the vice, because it prevented the kind of confrontation with evidence that produces genuine knowledge.
AI-generated output exhibits a structural analogue of this vice. The output is smooth — it always reads well, it always sounds coherent, it always has an answer. And the smoothness prevents the kind of scrutiny that would distinguish genuine insight from pattern-matched plausibility. The user who accepts the output at face value — because it reads well, because it is faster than checking, because the cost of acceptance is low and the cost of verification is high — is in the same epistemic position as the psychoanalyst who accepts a theory that cannot be refuted: satisfied, productive, and systematically insulated from the errors that would force genuine learning.
The vice compounds over time. Each accepted output lowers the threshold for future acceptance. The user develops what might be called a smoothness tolerance — an increasing willingness to accept polished output without verification, born of the repeated experience that the output is usually good enough. "Usually good enough" is the epistemic equivalent of a carcinogen with a long latency period: the damage is invisible at any single exposure, and the cumulative effect becomes apparent only when the capacity for independent evaluation has atrophied to the point where the user cannot tell whether they are accepting insight or fabrication.
Segal describes this atrophy in his own practice. The tolerance for friction diminishes. The developer who has used AI for six months finds manual debugging not just tedious but intolerable. The writer who has worked with Claude for a year finds the blank page not just challenging but unbearable. The capacity that was built through friction — the capacity for independent judgment, for self-directed inquiry, for the specific kind of thinking that only happens when the answer is not immediately available — erodes through disuse. And the erosion is self-concealing, because the smooth output continues to satisfy the surface criteria of quality while the deep criteria, the ones that require independent judgment to evaluate, are precisely the criteria that the erosion has compromised.
The historical parallels are instructive. The printing press generated a flood of text that overwhelmed the existing mechanisms for quality control. The early decades of print produced an explosion of unreliable, unsourced, contradictory information that contemporaries experienced as a crisis of authority. The response — peer review, editorial standards, the institution of the scholarly press, the development of citation practices — took generations to develop. The mechanisms that now distinguish reliable printed text from unreliable printed text are institutional inventions that did not exist when the technology that necessitated them arrived.
The internet produced a second such flood. The mechanisms developed for print — editorial review, institutional gatekeeping, the authority of established publishers — were overwhelmed by the volume and speed of digital publication. The response — search engine ranking, algorithmic curation, fact-checking organizations, the emerging norms of digital literacy — is still developing, decades after the flood began. The gap between the technology's capacity to generate content and the culture's capacity to evaluate it remains wide.
AI-generated output promises a third flood, and the mechanisms developed for the second have not yet been fully deployed, let alone the new mechanisms that the third will require. The asymmetry between production and evaluation, already severe in the digital age, becomes catastrophic in an age where the production cost approaches zero and the output is polished enough to pass casual inspection. The existing mechanisms — human editorial judgment, peer review, domain expertise — operate at human speed. The output operates at machine speed. The gap will widen before it narrows.
Laudan's prescription for methodological vices was not to eliminate the practice that produces them — vices are structural features of the inquiry environment, not individual choices that can be corrected by willpower — but to build countervailing practices that restore the balance between confirmation and disconfirmation. The practice of smooth acceptance must be counterbalanced by the practice of deliberate scrutiny. The cost asymmetry between production and evaluation must be addressed by institutional structures that subsidize evaluation — that make it someone's job, with protected time and explicit incentive, to test the smooth output against the rough standard of what is actually known.
In scientific practice, this takes the form of replication, peer review, and the adversarial structure of academic discourse. In legal practice, it takes the form of cross-examination and the adversarial structure of trial proceedings. In journalism, it takes the form of editorial oversight, fact-checking, and the institutional norms that distinguish reporting from opinion.
AI-assisted work requires its own version of these countervailing practices. Segal's discipline of rejecting output that sounds better than it thinks is an individual practice of this kind. But individual practices are insufficient against structural vices. The smooth is not a problem that individual discipline can solve at scale, because the asymmetry between production and evaluation is structural rather than individual. It affects every user, in every context, regardless of their commitment to rigor.
What is needed is institutional — organizations that build verification into the workflow rather than relying on individual vigilance, educational systems that teach the specific skill of evaluating smooth output rather than merely producing it, professional norms that treat uncritical acceptance of AI-generated work with the same disapproval now reserved for plagiarism or fabrication.
These institutions do not yet exist at scale. Their absence is the most dangerous gap in the current response to the AI transition — more dangerous than the gaps in regulation, more dangerous than the gaps in retraining, because the methodological vice of the smooth operates on every user of AI tools simultaneously, corroding the capacity for independent judgment that the other responses depend on. A workforce that cannot evaluate AI output cannot be trained to use AI well. A citizenry that cannot distinguish plausible from true cannot participate meaningfully in the governance of AI. A culture that has lost the capacity for scrutiny has lost the foundation on which every other response must be built.
The smooth is not a temporary condition. It is an enduring feature of a technological environment that produces output faster than it can be evaluated, and it will intensify as the tools improve. The countervailing practices must be built with the understanding that they are not one-time interventions but permanent structural features of the intellectual environment — maintained, like Laudan's countervailing methodologies in science, not because the vice has been cured but because the vice is chronic and the treatment must be continuous.
The tools produce smooth surfaces. The discipline — individual, institutional, cultural — is to test those surfaces, constantly and without apology, against the rough standard of what is actually the case. That discipline is the most important intellectual practice of the AI era, and its absence is the most consequential methodological vice.
---

Laudan's framework is a machine for evaluation. It takes competing traditions, specifies their problem sets, counts their solutions and anomalies, and renders a comparative verdict. It does this with a rigor that no other framework in the philosophy of science matches, because it refuses both the positivist fantasy of a fixed standard and the relativist capitulation that no standard exists. It works. It has been applied productively to disputes in physics, biology, geology, and the social sciences. It clarifies what other frameworks obscure and disciplines what other frameworks leave vague.
It does not work on the twelve-year-old's question.
"What am I for?" — the question Segal places at the moral center of The Orange Pill — is not a problem in Laudan's technical sense. A problem, for Laudan, is a specific, identifiable challenge that a theory or research tradition is expected to address: an empirical phenomenon to be explained, a conceptual tension to be resolved, a predictive failure to be corrected. Problems have solutions, and solutions can be evaluated. The evaluation machinery — comparative problem-solving effectiveness, the ratio of solved problems to anomalies, the distinction between progressive and degenerative traditions — depends on the existence of solutions that can be compared.
The child's question does not have a solution. It is not the kind of thing that has a solution. It is the kind of thing that has responses — provisional, personal, revisable, never final — and the quality of the responses cannot be evaluated by Laudan's criteria, because there is no competing tradition whose response can be shown to solve more problems. There is only the question, sitting in the silence of a child's bedroom, demanding not an answer but a way of living with the uncertainty that the question opens.
This is not a failure of the child's question. It is a limitation of the problem-solving framework — a boundary beyond which the machinery of evaluation, however powerful, loses its grip. Laudan himself was aware of such boundaries. His later work, particularly *Science and Values*, acknowledged that the aims of inquiry — the goals that determine what counts as a problem and what counts as a solution — are themselves subject to revision, and that the revision of aims cannot be evaluated by the same criteria used to evaluate the revision of theories, because the criteria are derived from the aims. When the aims change, the criteria change with them, and the evaluation enters a domain where the usual machinery does not apply.
The AI transition has forced a revision of aims. Before AI, the aims of professional life in the knowledge economy were relatively stable: develop expertise, produce output, advance within a hierarchy defined by the scarcity of the skills you possess. The problems of professional life — how to write better code, how to draft more persuasive briefs, how to build more effective models — were problems in Laudan's sense, with solutions that could be evaluated by specifiable criteria. The research traditions that organized professional life — the developer tradition, the legal tradition, the analytical tradition — were coherent frameworks with defined problem sets and established standards.
AI disrupted the problem set. The skills that defined expertise were commoditized. The output that measured value was automated. The hierarchy that organized careers was destabilized. And the disruption reached past the professional into the existential, because for many people the professional identity was the existential identity. The question "What am I for?" is not a question about careers. It is a question about the relationship between capability and meaning — between what a person can do and why it matters that a person, rather than a machine, does it.
The triumphalist tradition answers: the person matters because the person directs. Human judgment, human taste, human caring — these are the inputs that AI amplifies, and they are irreplaceable because they arise from the lived experience of a conscious being with stakes in the world. The child is for the questions, for the wondering, for the capacity to look at a world full of answers and ask whether they are the right answers. This response solves the problem of professional purpose — it provides a role for the human in the AI-augmented workflow — but it does not solve the existential problem, because the child is not asking about her role in a workflow. She is asking about her place in a universe that appears, increasingly, to contain machines that can do what she can do.
The elegist tradition answers: the person matters because the person struggles. The embodied knowledge, the friction, the slow deposition of understanding through years of patient effort — these are what make the human contribution irreducible. The child is for the craft, for the mastery that can only be built through difficulty, for the specific depth that machines cannot replicate. This response preserves the dignity of human effort but does not solve the problem that the child is actually posing, because the child has watched the machine produce what took her hours in seconds, and telling her that her hours were more valuable because they were harder does not address the felt reality of having been outperformed by something that does not feel anything at all.
Neither response satisfies. Both solve a version of the problem that is not the problem the child asked. The child's question is prior to both traditions — it is the question that must be answered before either tradition's response can take hold, because without a prior sense of why human existence matters independently of what it can produce, both the triumphalist assignment of a directing role and the elegist defense of craft are responses to a question the child has not yet asked.
Laudan's framework can evaluate the competing responses. The triumphalist response solves the problem of purpose-in-workflow but generates the anomaly of existential insufficiency. The elegist response solves the problem of human dignity but generates the anomaly of empirical obsolescence. Neither response is fully progressive, because neither addresses the full problem. The evaluation is clear. But the evaluation does not produce a solution. It produces a diagnosis: the problem exceeds the resources of the available traditions.
This is, in Laudan's vocabulary, an unsolved problem of the first importance — a problem whose unsolved status indicates not that the traditions are failing but that the problem itself is of a kind that requires theoretical resources that do not yet exist. Such problems are the most consequential in the history of inquiry, because their eventual resolution — if it comes — reshapes the traditions that produced them. The problem of the relationship between heat and motion was unsolved for centuries before thermodynamics provided the framework to address it. The problem of the relationship between electricity and magnetism was unsolved before Maxwell's equations unified them. In each case, the unsolved problem was a marker of a gap in the available conceptual resources, and closing the gap required not better answers to existing questions but a new framework within which the questions could be reconceived.
The child's question may be of this kind. It may require a framework for understanding the relationship between human consciousness and artificial capability that does not yet exist — a framework that neither the triumphalist emphasis on direction nor the elegist emphasis on struggle can provide, because both frameworks take for granted a concept of human value that the AI transition has destabilized. The new framework, if it emerges, will need to specify what consciousness contributes to the universe independently of capability — what the candle in the darkness does that no machine, however capable, does. This specification will not come from the philosophy of science. It will come from the philosophy of mind, or from the phenomenology of consciousness, or from somewhere that the current intellectual landscape has not yet mapped.
Laudan's framework reaches its limit here, and the limit is instructive. A framework designed to evaluate competing traditions by their problem-solving effectiveness can identify the problem, can evaluate the competing responses, can diagnose the insufficiency of both — but it cannot solve a problem that exceeds the category of problems it was designed to address. The child's question is not a problem to be solved. It is a condition to be inhabited. The question demands not a theory but a practice — a way of living with the uncertainty it opens, of creating spaces in which the question can be explored without being foreclosed by premature answers.
Laudan would likely have been the first to acknowledge this. His framework was designed for the evaluation of inquiry, not for the conduct of existence. The distinction matters. Inquiry can be evaluated by its problem-solving effectiveness. Existence cannot. A life is not a research tradition. It does not succeed or fail by the number of problems it solves. It succeeds or fails by criteria that the problem-solving framework, for all its power, was not built to capture.
The progressive response to a problem that exceeds the framework is not to abandon the framework. It is to use the framework where it works — for the evaluation of the competing responses, for the identification of the anomalies each generates, for the diagnosis of what remains unsolved — and to acknowledge, without apology, where it stops. The framework stops at the threshold of existential questions. What lies beyond that threshold is the domain of other disciplines, other practices, other ways of knowing. The philosopher of science can point to the threshold. The philosopher of science cannot cross it.
What can be said from this side of the threshold is that the conditions in which the child's question is explored matter enormously. The question can be foreclosed — by premature answers, by the smooth optimization that substitutes activity for reflection, by the algorithms that fill every pause with content, by the cultural pressure to be productive rather than contemplative. The question can be held open — by educational practices that value questioning over answering, by cultural spaces that protect stillness, by parents who model the kind of wondering that the question demands.
The institutional structures that hold the question open are, in Laudan's framework, problem-solving structures — structures designed to address the specific problem of how to create environments in which existential questions can be explored. These structures can be evaluated by their effectiveness. Do the educational practices that emphasize questioning produce students who are more capable of engaging with uncertainty? Do the cultural spaces that protect stillness produce practitioners who are more capable of self-directed reflection? Do the parents who model wondering produce children who are more resilient in the face of existential uncertainty?
These are empirical questions, and they are answerable. The answers will not solve the child's question. But they will tell us whether the conditions for exploring it are being preserved or destroyed — and that is the question Laudan's framework can address, the question that lies just this side of the threshold, the question whose answer determines whether the next generation will have the space to engage with the question that matters most.
The child's question is the hardest problem of the AI transition. It is not a problem in Laudan's technical sense. It exceeds the resources of every research tradition currently operating in the discourse. And its unsolved status is not a mark of failure but a mark of depth — evidence that the AI transition has reached into the foundations of what it means to be human and found questions there that no amount of problem-solving can answer, only inhabit.
The progressive response is to inhabit them — carefully, honestly, without the premature comfort of a solution — and to build the structures that allow others to do the same.
---
The preceding nine chapters have applied Laudan's framework to the AI transition with a specific and limited ambition: not to declare a verdict but to provide the machinery for arriving at one. The machinery has identified the competing research traditions and their respective problem sets. It has distinguished between empirical problems and conceptual problems, between residual problems and anomalies, between progressive and degenerative problem-shifts. It has diagnosed the aesthetics of the smooth as a methodological vice. It has traced the boundaries of the framework itself, acknowledging where the machinery of evaluation gives way to questions it was not designed to address.
What remains is the question of application: given the analysis, what constitutes a rational response to the AI transition?
Laudan would resist the temptation to prescribe a specific course of action. His framework evaluates traditions; it does not generate policy. But it constrains the space of rational responses by eliminating the positions that fail the problem-solving test, and the constraints are themselves useful. A rationality of AI adoption that satisfies Laudan's criteria would have several features, each derived from the analysis above.
The first feature is conditionality. A progressive rationality of AI adoption does not adopt or reject AI categorically. It adopts conditionally — specifying the conditions under which adoption is progressive and the conditions under which it is degenerative, and evaluating those conditions as they change. The distinction between progressive and degenerative outcomes, as the analysis of ascending friction demonstrated, is not determined by the technology but by the institutional environment in which the technology is deployed. The same tool produces ascending friction in an organization that restructures expectations around judgment and degenerative task-expansion in an organization that treats the tool as a productivity multiplier at the existing level. Conditional adoption means specifying which outcome the adopting institution is pursuing and building the structures that make that outcome more likely.
This is more demanding than either blanket adoption or blanket rejection. Blanket adoption — the triumphalist position — requires only enthusiasm and a subscription. Blanket rejection — the elegist position — requires only conviction and a willingness to forgo the tools' benefits. Conditional adoption requires ongoing evaluation, institutional design, the willingness to revise, and the intellectual honesty to acknowledge when the conditions are not being met. It is the most rational position and the most difficult to sustain.
The second feature is the acknowledgment of residual problems. A progressive rationality treats the losses of the transition not as acceptable costs to be written off in the aggregate accounting but as obligations to be addressed through institutional intervention. The embodied knowledge that AI-assisted development may fail to build. The depth of understanding that frictionless execution may fail to develop. The professional identities that the repricing of problem-solving capacity may destroy. These are not casualties of progress. They are debts that the new tradition incurs, and a progressive rationality builds the structures to repay them.
The institutional structures are specifiable. In education, they take the form of pedagogical practices that develop the capacity for questioning, evaluation, and judgment independently of AI tools — practices that ensure students build the cognitive foundations that the tools require but do not provide. The teacher who grades questions rather than answers, who designs assignments that develop the capacity for uncertainty rather than the capacity for production, who creates spaces where the formative friction of intellectual struggle is preserved even as the mechanical friction of execution is removed — this teacher is building an institution designed to address a residual problem.
In organizations, the structures take the form of what the Berkeley researchers called "AI Practice" — protected time and space for the kinds of work that AI cannot perform and that the AI-assisted workflow tends to crowd out. Mentoring that develops judgment through slow, friction-rich interaction between experienced practitioners and novices. Strategic reflection that occurs in the absence of AI tools, where the pressure to optimize gives way to the slower work of questioning whether the optimization is aimed at the right target. Role definitions that are structured around judgment rather than implementation, that measure value by the quality of decisions rather than the volume of output.
In the broader cultural environment, the structures take the form of attentional ecology — the deliberate design of cognitive environments that counteract the methodological vice of the smooth. Spaces for boredom, which neuroscience identifies as the condition in which attention regenerates and imagination activates. Spaces for slowness, which is the condition in which deep understanding develops. Spaces for the kind of open-ended exploration that has no immediate productive application but builds the cognitive foundations on which all productive application depends.
These structures are not new inventions. They are adaptations of institutions that human societies have been building for centuries in response to previous transitions. The eight-hour day was an institutional response to the intensification that electrification produced. The weekend was an institutional response to the colonization of time that industrial production enabled. The public library was an institutional response to the information flood that the printing press generated. Each of these institutions was built during or after a transition that threatened to overwhelm the people inside it, and each redirected the transition's energy toward human flourishing rather than human exhaustion.
The AI transition demands equivalents. The specific forms will differ — the eight-hour day addressed physical exhaustion; the AI equivalent must address cognitive exhaustion, which has different characteristics and requires different interventions. But the principle is the same: institutional structures that redirect the transition's energy by creating boundaries, protecting spaces, and ensuring that the gains of the transition are distributed rather than concentrated.
The third feature of a progressive rationality is continuous evaluation. Laudan's framework is not a one-time assessment. It is an ongoing practice — the permanent commitment to evaluating the transition's problem-solving capacity as evidence accumulates, and the permanent willingness to revise the evaluation when the evidence warrants it. The triumphalist tradition's certainty that the gains will compound is not warranted by the current evidence. The elegist tradition's certainty that the losses are irreversible is equally unwarranted. The progressive rationality holds both conclusions open, specifying the evidence that would confirm or disconfirm each, and evaluating honestly as the evidence arrives.
The specific questions that the evaluation must address can be stated with precision. Is the ascending friction thesis being confirmed? Are practitioners who have been freed from implementation engaging with harder, more consequential problems, or are they expanding laterally into more of the same? Is the capacity for independent judgment being preserved, or is the methodological vice of the smooth eroding it? Are the residual problems being addressed by institutional structures, or are they compounding? Is the democratization of capability producing a broader base of genuine builders, or a broader base of people who can generate output without understanding it? Are the conditions for exploring the child's question being preserved, or are they being destroyed by the optimization that fills every pause with production?
These questions are answerable. Not immediately, not completely, but progressively — through the accumulation of evidence, the refinement of methods, and the intellectual patience that genuine inquiry demands. The progressive rationality does not wait for final answers before acting. It acts on the best available evidence while building the capacity to gather better evidence, and it revises its actions as the evidence improves.
The fourth feature, and the one that links Laudan's framework most directly to the practical concerns of the present moment, is the distribution of epistemic responsibility. A progressive rationality does not assign the work of evaluation to a single class of experts — not to the builders of AI tools, not to the regulators of AI tools, not to the philosophers of AI tools. It distributes the responsibility across every institution and every individual that the transition affects.
Builders bear the responsibility of evaluating whether the tools they create are producing progressive or degenerative problem-shifts, and of designing the tools to favor progression. This is not a responsibility that can be discharged by publishing safety research or appending ethical guidelines. It is a design responsibility — a commitment to building tools that preserve the conditions for independent judgment rather than undermining them.
Policymakers bear the responsibility of building the demand-side institutions — the educational reforms, the labor protections, the attentional ecology frameworks — that redirect the transition's energy toward human flourishing. The supply-side regulations currently dominating the policy conversation are necessary but insufficient. They constrain what the tools can do. They do not prepare citizens for what the tools will do regardless of the constraints.
Educators bear the responsibility of redesigning assessment and pedagogy for a world in which the old proxies for learning — the essay, the exam, the problem set — have been compromised by tools that can produce the proxy without the learning. This redesign is not optional and not postponable. Every semester of delay is a semester in which students develop dependency on tools that bypass the cognitive development the education was supposed to produce.
Parents bear the responsibility that Segal identifies as the most personal and most consequential: modeling the kind of engagement with uncertainty that the child's question demands. The parent who demonstrates caring — about quality, about purpose, about whether what is built serves someone beyond the builder — provides the child with something that no tool and no institution can provide: evidence that the question "What am I for?" has an answer that is worth living.
And individuals — every individual who uses AI tools, in any capacity, for any purpose — bear the responsibility of maintaining the capacity for independent judgment in an environment that structurally discourages it. The discipline of rejecting smooth output. The practice of questioning whether the plausible is true. The willingness to sit with uncertainty rather than reaching for the premature comfort of an answer. These are not optional luxuries for the philosophically inclined. They are survival skills for the epistemically responsible — the equivalent of washing hands in a pathogenic environment, a practice whose absence produces harm that is invisible at any single exposure and catastrophic in aggregate.
The progressive rationality of AI adoption is not a destination. It is a practice — maintained only through the permanent commitment to evaluating, revising, building, and acknowledging error. The framework does not guarantee progress. It provides the conditions under which progress is possible and the diagnostic tools for recognizing when those conditions are not being met.
Laudan's deepest insight was that progress is neither inevitable nor impossible. It is conditional — dependent on the quality of the evaluation, the honesty of the evaluators, and the willingness of the traditions involved to address their anomalies rather than suppress them. The AI transition will be progressive if the people inside it — builders, policymakers, educators, parents, individuals — maintain the commitment to evaluation that progressiveness requires. It will be degenerative if the commitment is abandoned in favor of the easier alternatives: the triumphalist certainty that everything will be fine, the elegist certainty that everything is lost, or the quietest and most dangerous alternative of all — the indifference that lets the transition unfold without anyone taking responsibility for its direction.
The machinery has been provided. The evaluation has been specified. The competing traditions have been identified, their problem sets catalogued, their anomalies diagnosed, their responses assessed. What remains is the work itself — the permanent, revisable, institutionally supported, individually maintained work of determining whether the AI transition is producing the progress it promises.
That determination is not the philosopher's to make. It belongs to everyone the transition touches, which is to say, everyone. The philosopher's contribution is the framework. The framework's contribution is clarity — the specific, operational, auditable clarity that distinguishes rational evaluation from ideological conviction.
Whether the clarity will be used is the open question. Laudan, who spent his career providing tools for rational evaluation and watching them be ignored by communities that preferred conviction, would have no illusions about the odds. He would also have insisted — as his entire body of work insists — that the odds are irrelevant to the obligation. The evaluation must be conducted regardless of whether it will be heeded, because the alternative is not a different kind of evaluation but no evaluation at all, and the absence of evaluation is the one condition under which degeneration is guaranteed.
Progress is not guaranteed. Degeneration is not inevitable. The difference between them is the quality of the attention we bring to the transition we are inside.
Laudan provided the tools. The attention is ours to supply.
---
The word that stopped me was not "progress." It was "residual."
I had been reading through this manuscript on a flight back from another round of meetings where the same conversation repeated itself — how many people can we cut, how fast can we ship, how much can the tool replace. The triumphalist arithmetic was in every room, clean and seductive, and I recognized it because I had run it myself more times than I want to admit. Laudan's framework gave me a name for what that arithmetic was missing. Not the people. I already knew it was missing the people. What Laudan gave me was the precise term for the type of loss the arithmetic could not see: residual problems. Losses that are not merely unsolved but structurally unpreservable within the system that produced the gains.
I thought about my engineer in Trivandrum — the one who spent two days oscillating between excitement and terror. The excitement was about what he could now build. The terror was about what he might stop being. The senior intuition he had accumulated over fifteen years of patient struggle, the ability to feel a codebase the way a doctor feels a pulse — Laudan's framework told me that this knowledge might not merely be at risk. It might be a residual problem in the strict sense: something the new tradition cannot reproduce because reproducing it would require maintaining the very friction the new tradition exists to eliminate.
That distinction hit harder than the loss itself. I had been telling myself the loss was temporary — that we would find new ways to build the intuition, that the ascending friction would take care of it. Laudan forced me to consider the possibility that some losses are not temporary but structural. Not because we failed to build the right institutions but because the institutions that would preserve the old knowledge are incompatible with the system that produced the new capability.
I do not yet know whether that is true for my engineer. The evidence is too thin and the transition too young. But the intellectual honesty of confronting the possibility — rather than assuming the ascending friction thesis covers every case — changed how I think about the choices in front of me.
What stayed with me longest was the limit. Laudan's framework does everything I need a framework to do when I am evaluating competing claims about AI — and then it stops, cleanly and without apology, at the threshold of my daughter's question. "What am I for?" cannot be evaluated by its problem-solving effectiveness. It is not a problem to be solved. And Laudan's willingness to acknowledge this — his framework's capacity to point at its own boundary and say *I do not work here, and something else must* — is what made me trust the framework everywhere it does work.
Because the frameworks I distrust are the ones that claim to work everywhere. The triumphalist framework that says AI is always an amplifier. The elegist framework that says friction is always formative. The smooth ones, in other words — the ones that never hit a wall, that always have an answer, that resolve every tension instead of sitting with the ones that cannot be resolved.
Laudan sits with them. His framework counts the solved problems, diagnoses the anomalies, evaluates the competing traditions, and then — at the place where the counting stops being the right tool — it stops counting and points.
What it points at is us. The people inside the transition. The ones who have to decide, not in theory but in practice, whether the ascending friction thesis will be confirmed by building institutions that support ascent or falsified by building institutions that reward lateral expansion. Whether the residual problems will be acknowledged as obligations or dismissed as acceptable costs. Whether the conditions for the child's question will be preserved or optimized away.
The evaluation is ours. The tools have been provided. What I take from Laudan is the discipline to use them — and the honesty to acknowledge where they end and something harder begins.
-- Edo Segal
The AI discourse is stuck. Triumphalists cite productivity gains as proof of progress. Critics cite burnout and lost depth as proof of collapse. Both sides marshal real evidence. Both are internally coherent. And neither can explain why the other persists — because they are not disagreeing about facts. They are disagreeing about what facts are supposed to demonstrate.
Larry Laudan spent his career building the framework for exactly this deadlock: competing traditions, shared evidence, irreconcilable conclusions. His problem-solving model of progress does not ask which side is right. It asks which framework solves more of the problems we actually face — and which one is generating anomalies it refuses to see. Applied to AI, his tools reveal what conviction conceals.
This book brings Laudan's razor to the most consequential technology transition in a generation — not to settle the argument, but to make it honest.

A reading-companion catalog of the 32 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Larry Laudan — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →