By Edo Segal
The thing nobody told me about building dams is that you can't see water pressure.
You can see water. You can see the dam. You can measure the height of the pool behind it. But the pressure — the force that will find every weak joint, every gap in the mud, every stick that was not seated properly — is invisible until the moment it isn't. Until the moment something gives.
I have been writing about dams for an entire book now. Building them, maintaining them, insisting that the river of intelligence requires structures around it if the flow is going to nourish rather than destroy. What I did not have, until I spent serious time with Robert K. Merton's sociology of science, was a way to see the pressure itself.
Merton spent his career mapping forces that operate on knowledge communities from the inside — forces that are structural, not personal, and therefore invisible to the people they act upon. The way credit concentrates around those who already have it. The way a false belief about the future can produce the very future it describes. The way institutions serve purposes nobody states and nobody measures, purposes that vanish when the institution is disrupted and leave behind a loss that no dashboard can detect.
These are not abstract concerns. They are the specific dynamics shaping who benefits from AI and who bears its cost, right now, in real organizations, in real careers, in real families. The self-fulfilling prophecy is not a thought experiment. It is the mechanism by which entire professional communities are either investing in their future or abandoning it based on beliefs that have not yet been tested against reality. The Matthew Effect is not an academic curiosity. It is the structural engine that determines whether the democratization I celebrate in *The Orange Pill* actually reaches the developer in Lagos or only amplifies the advantage of the developer in San Francisco.
I am a builder. I think in products, timelines, teams. Merton taught me to think in structures — the invisible architectures of incentive, norm, and accumulated advantage that determine what any tool actually does in the world, regardless of what it was designed to do. The technology is the river. The structures are the terrain it flows through. And the terrain, not the water, decides where the river goes.
This book is another lens for the climb. It will not make the view from the roof more comfortable. It will make it more honest. And honesty, I have learned the hard way, is the only foundation that holds.
— Edo Segal × Opus 4.6
Robert K. Merton (1910–2003) was an American sociologist widely regarded as one of the founders of the sociology of science and among the most influential social scientists of the twentieth century. Born Meyer Robert Schkolnick in Philadelphia to Jewish immigrant parents, he adopted the name Robert Merton as a teenager. He studied under Talcott Parsons at Harvard and spent the bulk of his career at Columbia University, where he taught for over five decades. Merton's major works include *Social Theory and Social Structure* (1949), which introduced the concepts of manifest and latent functions, the self-fulfilling prophecy, the strain theory of deviance, and the distinction between local and cosmopolitan influentials, and *The Sociology of Science* (1973), which formalized the normative structure of scientific communities (universalism, communalism, disinterestedness, and organized skepticism) and documented the phenomenon of multiple independent discovery. His 1968 essay on "The Matthew Effect in Science" — named after the Gospel of Matthew's observation that the rich get richer — became one of the most cited concepts in social science, applied far beyond its original domain to economics, education, and technology. Merton also coined the terms "role model," "unintended consequences," and "self-fulfilling prophecy" in their modern sociological usage. He received the National Medal of Science in 1994, the first sociologist to be so honored, and his frameworks for understanding how social structures shape knowledge production remain foundational to the study of science, technology, and institutional behavior.
On the morning of June 29, 1858, two papers were read aloud at the Linnean Society of London. The first was by Charles Darwin, who had been developing his theory of natural selection for more than twenty years but had never published it. The second was by Alfred Russel Wallace, a young naturalist working in the Malay Archipelago who had arrived at essentially the same theory independently, composing his version in a malarial fever on the island of Ternate. The two men had never collaborated. They had barely corresponded. They were separated by thousands of miles, decades of age, and entirely different social positions — Darwin the gentleman-naturalist with independent means, Wallace the self-educated specimen collector scraping together a living from butterfly sales. And yet the theory they produced was, in its essential architecture, the same theory.
This was not a coincidence. Robert K. Merton spent much of his career demonstrating why.
In a series of papers spanning three decades — most systematically in "Singletons and Multiples in Scientific Discovery" (1961) and the earlier "Priorities in Scientific Discovery" (1957) — Merton documented what he came to regard as the dominant pattern of scientific advance: not the singleton, the discovery made by one person alone, but the multiple, the discovery made independently by two or more researchers working without knowledge of each other's efforts. His inventory was formidable. Newton and Leibniz arriving at the calculus by different routes. Boyle and Mariotte independently formulating the gas law. Darwin and Wallace. Lavoisier and Scheele and Priestley, each discovering oxygen through different experimental programs. Alexander Graham Bell and Elisha Gray filing telephone patents on the same day — not the same week, not the same month, but the same day, February 14, 1876. Merton catalogued over 260 such cases and argued that this was not a catalogue of curiosities but evidence of a structural regularity in the production of knowledge.
The regularity demanded a structural explanation. Individual genius could not account for it — genius, by definition, is rare and idiosyncratic, and the probability of two rare and idiosyncratic minds arriving at the same insight by chance is vanishingly small. Merton's explanation was sociological rather than psychological: scientific discoveries are not produced ex nihilo by individual minds but are made possible by the accumulated knowledge and technique of a scientific community at a given moment. When the accumulated foundations reach a certain threshold — when the prerequisite concepts, methods, instrumentation, and data are in place — the next discovery enters what Stuart Kauffman would later call the "adjacent possible." It becomes structurally available to anyone working at the frontier with the relevant training and institutional support. The specific discoverer is determined by contingency: who happened to be in the right laboratory, who received the critical piece of data first, whose institutional position allowed them to pursue the line of inquiry. The discovery itself is determined by structure.
"The pattern of independent multiple discoveries in science is in principle the dominant pattern rather than a subsidiary one," Merton wrote. The emphasis on in principle is characteristically precise. Merton was not claiming that every discovery is made simultaneously by multiple people. He was claiming that the structural conditions for simultaneity are present far more often than the historical record of credited discoveries suggests, because priority disputes — the bitter contests over who discovered something first — tend to obscure the multiplicity by awarding credit to one claimant while erasing the others. The history of science, as Merton documented it, is littered with forgotten co-discoverers whose only misfortune was arriving at the strategic research site a few months too late.
The relevance to the artificial intelligence transition of 2025 is not merely analogical. It is structural.
Consider what was in place by the early 2020s. The transformer architecture, introduced in a 2017 paper by Vaswani and colleagues at Google — a paper whose eight co-authors exemplify the collaborative, institutional character of modern discovery. Massive training datasets compiled from the accumulated text of human civilization. Computational infrastructure sufficient to train models at a scale that would have been physically impossible a decade earlier — infrastructure that was itself the product of decades of investment in semiconductor fabrication, distributed computing, and the economics of cloud services. Reinforcement learning from human feedback, a technique for aligning model outputs with human preferences that drew on prior work in reward modeling, preference learning, and the psychology of evaluation. Each of these components had its own history, its own community of practitioners, its own trajectory of incremental advance.
No single organization produced these foundations. They were the accumulated product of thousands of researchers working across dozens of institutions over decades. Google contributed the transformer. Academic researchers across multiple universities contributed foundational work on attention mechanisms and sequence modeling. The open-source community contributed training frameworks and datasets. Hardware manufacturers contributed the chips. The cloud providers contributed the infrastructure. The entire edifice of modern AI was, in Merton's precise sense, a community achievement — a product of the cumulative, collaborative, institutionally structured process that Merton had documented as the engine of scientific advance.
When the foundations converged, the breakthrough became structurally inevitable.
Multiple organizations were converging on the same capabilities simultaneously. OpenAI, Google DeepMind, Anthropic, Meta, Mistral — each approaching the frontier from different starting points, with different architectures, different training strategies, different organizational philosophies, but converging on capabilities that were recognizably similar. Large language models that could engage in extended reasoning. Coding assistants that could translate natural language into working software. Multimodal systems that could process text, image, and audio within unified frameworks. The specific breakthroughs — the December 2025 threshold that Segal describes in *The Orange Pill*, the Claude Code moment that triggered what the technology industry came to call the SaaS Apocalypse — were contingent in their timing and form. They were structurally inevitable in their occurrence.
This is not technological determinism in the crude sense — the claim that technology develops according to its own internal logic regardless of human choices. Merton was explicit that structure determines what is discovered; contingency determines who discovers it, when, and how. The organizations that reached the frontier first did so through genuine excellence: superior talent acquisition, more effective institutional design, larger and better-allocated investment, organizational cultures that tolerated the high failure rates inherent in frontier research. These are real achievements, and they matter. But the frontier itself — the terrain on which these organizations competed — was produced by the accumulated work of a community far larger than any single firm.
The distinction matters for a reason that extends beyond academic precision. If the AI breakthrough were the product of individual genius — if it required a specific mind in a specific moment — then the appropriate response would be to identify and protect such minds. If the breakthrough is the product of structural forces that make the discovery available to any sufficiently equipped community at the frontier, then the appropriate response is entirely different. The question shifts from "Who are the geniuses?" to "What structures are in place, and what do those structures produce?"
Merton's analysis of priority disputes illuminates the emotional dimension of this structural reality. Scientists fight bitterly over priority — over who discovered something first — precisely because the reward structure of science assigns credit to individuals while the production structure of science is collective. The Nobel Prize goes to one or two or three people. The work that made the prize possible involved hundreds. The mismatch between the individual reward structure and the collective production structure generates what Merton called "ambivalence in the scientist": the simultaneous commitment to the communal ideal of shared knowledge and the individual desire for recognition.
The AI industry exhibits this ambivalence in amplified form. The companies that reached the frontier compete fiercely for credit: each announcement positions the announcing firm as the originator of a capability that was, in structural terms, converging across the field. The employees within those firms experience the ambivalence personally: they know their contribution is one strand in a vast collaborative web, but the career incentives reward them for claiming individual ownership of collective achievements. The venture capital ecosystem, which funds the research, demands narratives of individual genius because such narratives justify concentrated investment. The media, which disseminates the narratives, rewards clean stories of visionary founders over messy accounts of distributed, incremental, institutionally structured advance.
The mythology of the lone genius persists not because it is true but because it is useful — useful to the individuals who benefit from the credit, useful to the investors who need to justify the concentration of capital, useful to the media that need clean narratives, useful to the culture that prefers heroes to structures. Merton documented this mythology in the history of science and demonstrated its sociological function: it simplifies a complex collective process into a comprehensible individual story, and in doing so it serves the interests of those who benefit from the simplification.
But the mythology comes at a cost. When the AI transition is narrated as the product of individual genius, the policy response follows from the narrative: attract and retain the geniuses. When it is understood as the product of structural forces, the policy response is different and far more consequential: invest in the structures. Fund the research communities. Build the educational systems that produce competent practitioners at the frontier. Establish the norms and institutions that govern how the technology is developed and deployed. The genius narrative produces talent wars. The structural narrative produces institution-building. The two responses lead to very different outcomes for the distribution of AI's benefits.
Merton's analysis also reveals something uncomfortable about the rhetoric of inevitability itself. The claim that the AI breakthrough was structurally inevitable can function as what Merton would have recognized as a legitimating ideology — a narrative that naturalizes a particular outcome and thereby discourages examination of the choices that shaped it. If the breakthrough was inevitable, then the specific decisions that led to it — decisions about what to fund, what to prioritize, what safety measures to implement or defer, what communities to include or exclude from the development process — are rendered retrospectively natural, as though they were compelled by the logic of discovery rather than chosen by actors with interests and values.
Merton's sociology insists on maintaining the distinction between structural availability and institutional choice. The capability was structurally available. The specific form it took — which organizations developed it, under what governance structures, with what safety practices, at what speed, and for whose benefit — was the product of institutional choices that could have been made differently. The river, to borrow the language of *The Orange Pill*, was flowing toward this channel. But the specific dams built around it — or the failure to build them — were human decisions, made by specific people in specific institutional positions, serving specific interests.
The most consequential implication of Merton's analysis for the present moment is this: the next wave of AI capability is also structurally inevitable. The foundations for the next threshold — whatever it turns out to be — are being laid now, by the same distributed, cumulative, institutionally structured process that produced the current threshold. Multiple organizations are converging on the next frontier, and the specific breakthrough will be contingent in its timing and form but inevitable in its occurrence.
This means that the window for building the institutions that will govern the next transition is now. Not after the transition. Before it. The structures that determine whether a technological breakthrough produces broadly distributed benefit or concentrated advantage are not built in the aftermath of the breakthrough. They are built in the interval between breakthroughs — the period when the structural inevitability is visible to those who study the terrain but the specific form of the next transition has not yet been determined.
Merton spent his career studying how the structures of scientific communities shape the knowledge those communities produce. The AI community is, at this moment, in the process of solidifying its structures — its norms, its reward systems, its governance institutions, its relationship to the broader society. Those structures will determine what the next wave of AI capability actually does in the world, for whom, and at whose expense. The sociology of simultaneous discovery tells us the wave is coming. The sociology of institutions tells us that what the wave does when it arrives is not determined by the wave. It is determined by the dams.
---
The physicist working on blackbody radiation in Berlin in 1900 did not know he was standing at a strategic research site. Max Planck was solving what he considered a specific, bounded technical problem — the ultraviolet catastrophe, the failure of classical physics to predict the spectrum of radiation emitted by a heated body. His solution, the introduction of discrete energy quanta, was conservative in intent. He was trying to save classical physics, not overturn it. But the strategic research site does not care about the intentions of the researcher standing on it. It cares about the accumulated structure of knowledge that has made the site the place where the next discovery must emerge.
Robert K. Merton developed the concept of the strategic research site — which he sometimes called the "strategic research material" — to describe those locations in the landscape of knowledge where the conditions for discovery are maximally concentrated. The concept is not mystical. It is structural. A strategic research site emerges when multiple lines of investigation converge on a problem that is soluble with existing techniques but that has not yet been solved, because the convergence required to solve it has not yet occurred. The site is legible to those with the training to read the terrain. It is invisible to everyone else.
The convergence is what matters. A single line of investigation, no matter how advanced, does not create a strategic research site. Planck's problem required not only the mathematics of statistical mechanics but also the experimental data on blackbody spectra that had been accumulating in German laboratories for decades, the conceptual framework of thermodynamics that had been developed over the preceding century, and the specific institutional culture of Berlin physics that encouraged the application of mathematical formalism to experimental results. Remove any one of these lines and the site does not form. Planck does not arrive at the quantum. The ultraviolet catastrophe remains unsolved — until the lines converge elsewhere, at a different site, in a different mind, producing the same discovery under different circumstances. This is the structural logic of multiples applied to the geography of knowledge.
The AI moment of 2025 was a strategic research site of extraordinary scale — arguably the largest convergence of prerequisite lines of investigation in the history of technology. The lines that converged can be enumerated with some precision, and each had its own decades-long trajectory.
The first line was computational architecture. The transformer model, published in 2017, provided a mechanism for processing sequential data — particularly natural language — that was qualitatively superior to the recurrent neural networks and convolutional approaches that had preceded it. The self-attention mechanism at the transformer's core allowed the model to weigh the relevance of every element in a sequence against every other element simultaneously, rather than processing the sequence step by step. This was not a minor improvement. It was the architectural innovation that made scaling possible, because self-attention parallelizes in a way that recurrence does not. Without the transformer, the computational cost of training models on the datasets that existed by the early 2020s would have been prohibitive.
The second line was data availability. The accumulated text of human civilization — books, articles, conversations, code repositories, legal documents, medical records, patents, forum posts, social media, the entire digital sediment of a species that had been writing for five thousand years and typing for fifty — constituted a training corpus of unprecedented scale and diversity. The data was not collected for the purpose of training AI models. It was the byproduct of a civilization that had been externalizing its knowledge into written form since the Sumerians, accelerating through Gutenberg, and reaching torrential volume with the digitization of communication in the late twentieth century. The river of recorded human intelligence, to borrow a metaphor that the evidence increasingly supports as more than metaphor, had been accumulating for millennia. The AI models trained on it were, in a precise sense, trained on the cumulative output of human civilization.
The third line was hardware infrastructure. The graphics processing units originally designed for rendering video game graphics turned out to be architecturally suited for the matrix multiplications at the heart of neural network training. NVIDIA, which had spent two decades developing GPU technology for a market measured in tens of billions of dollars, found itself supplying the computational substrate for a market that would be measured in trillions. The cloud computing infrastructure built by Amazon, Google, and Microsoft over the preceding fifteen years — infrastructure designed to serve web applications and enterprise computing — provided the distributed systems necessary to train models across thousands of GPUs simultaneously. Each of these infrastructure layers had been built for purposes entirely unrelated to AI, and each turned out to be a prerequisite for the AI breakthrough.
The fourth line was algorithmic refinement: reinforcement learning from human feedback, constitutional AI, chain-of-thought prompting, instruction tuning — a family of techniques for aligning model outputs with human intentions that drew on prior work in reward modeling, preference learning, cognitive science, and the practical experience of companies deploying language models at scale. These techniques addressed the gap between a model that could generate text and a model that could generate useful text — text that followed instructions, avoided harmful outputs, and maintained coherence over extended interactions.
The convergence of these four lines — architecture, data, hardware, alignment — produced the strategic research site. No single line was sufficient. The transformer without the data would have been an elegant architecture with nothing to learn from. The data without the hardware would have been an untapped reservoir. The hardware without the alignment techniques would have produced powerful but ungovernable models. The alignment techniques without the architecture, data, and hardware would have been solutions in search of a problem. The strategic research site formed at, and only at, the intersection.
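The first of those lines is concrete enough to sketch. Below is a minimal illustration of the scaled dot-product self-attention described above, in a few lines of NumPy; the dimensions, matrices, and variable names are invented for the example and belong to no production model. The structural point is that every position's relevance to every other position is computed in one matrix operation, which is what lets the computation run in parallel rather than step by step.

```python
# Minimal, illustrative self-attention sketch (not any production model's code).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections (illustrative)."""
    Q = X @ Wq                                   # what each position is looking for
    K = X @ Wk                                   # what each position offers
    V = X @ Wv                                   # the content that gets mixed together
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # every position scored against every other, at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the full sequence
    return weights @ V                           # each output is a weighted blend of all positions

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (6, 4): one attended vector per position
```

The contrast with recurrence is the whole point: nothing in the sketch waits for the previous position to finish before the next one can be processed.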
Merton's analysis predicts what happened next: multiple organizations arrived at the site simultaneously. This is not because they were following each other — though competitive intelligence certainly played a role — but because the terrain itself directed them there. OpenAI, Google DeepMind, Anthropic, Meta, and others were each following the logic of their respective research programs, and the logic of those programs converged on the same site because the accumulated knowledge of the field made the site the obvious next destination.
The convergence was visible to those with the training to see it. Researchers at the frontier knew, by 2022 or 2023, that something was coming. The scaling laws — empirical relationships between model size, data volume, and capability that had been documented with increasing precision — pointed unmistakably toward a threshold. The question was not whether models would cross the threshold but when and in what specific form. The researchers were reading the terrain, and the terrain was legible.
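The legibility of that terrain is easier to appreciate with a scaling law written down. The sketch below uses the commonly reported functional form, a loss that falls as a power law in parameter count and training tokens; the coefficients are invented for illustration rather than taken from any published fit. What mattered to the researchers reading the terrain was the regularity of the curve: because the relationship was smooth, extrapolation was possible.

```python
# Illustrative only: a power-law scaling curve of the commonly reported shape.
# E, A, B, alpha, beta are made-up coefficients, not a published fit.
def predicted_loss(params, tokens, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    return E + A / params**alpha + B / tokens**beta

# Loss falls predictably as model size and data grow by orders of magnitude.
for params, tokens in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"{params:.0e} params, {tokens:.0e} tokens -> "
          f"predicted loss {predicted_loss(params, tokens):.2f}")
```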
But — and this is the qualification that Merton's sociology insists upon — the terrain was legible only to those with the training, institutional position, and resources to read it. Access to the strategic research site was not democratically distributed. The researchers who converged on it were those employed by a small number of well-funded organizations, concentrated in a handful of geographic locations, trained in a specific set of educational institutions, and connected through professional networks that functioned as information channels. The strategic research site was, in sociological terms, a restricted site: accessible to the initiated and invisible to everyone else.
This restriction produces a paradox that the AI discourse has not adequately confronted. The tools that emerged from the strategic research site have the potential to democratize capability, lowering the floor of who gets to build and create — a potential that Segal documents in *The Orange Pill* with specific cases from his own experience. But the site itself was accessible only to the already-advantaged: researchers with advanced degrees from elite institutions, employed at organizations with billions in capital, located in cities where the cost of living ensures that participation requires substantial prior economic advantage. The democratization of the output was produced by the concentration of the input.
Merton would recognize this pattern from his studies of the scientific community. The norms of science, as Merton described them — universalism, communalism, disinterestedness, organized skepticism — are aspirational ideals that coexist with, and are often contradicted by, the actual social structure of scientific practice. Universalism holds that knowledge claims should be evaluated on their merits regardless of the social characteristics of the claimant. In practice, claims from prestigious institutions receive more attention, more favorable review, and more rapid dissemination than equivalent claims from obscure ones. Communalism holds that scientific knowledge belongs to the community. In practice, patent law, proprietary research, and trade secrets restrict the flow of knowledge in ways that serve institutional interests rather than communal ones.
The AI community exhibits the same gap between normative aspiration and structural reality, and the gap is wider because the stakes are higher and the capital concentrations more extreme. The open-source AI movement — Llama, Mistral, the broader ecosystem of freely available models and tools — represents a genuine commitment to the communal norm: the belief that AI capability should be shared rather than hoarded, that the accumulated knowledge of human civilization that trained these models was communal property and that the models trained on it should be as well. But the open-source movement operates within a market structure that rewards proprietary advantage, and the most capable models — the ones at the strategic research site's cutting edge — remain proprietary, developed behind closed doors by organizations that combine communal rhetoric with competitive practice.
The concept of the adjacent possible, borrowed from Kauffman's complexity theory, maps onto Merton's strategic research site with illuminating precision. The adjacent possible is the set of configurations that are one step away from the current state of a system — the things that become achievable when you add one new element to the existing arrangement. Kauffman developed the concept for biological evolution, but it applies with equal force to the evolution of knowledge. The AI breakthrough of 2025 was in the adjacent possible of the accumulated knowledge base of 2024. It was not in the adjacent possible of 2014, because the prerequisite elements — the transformer architecture, the alignment techniques, the hardware infrastructure at sufficient scale — were not yet in place.
The adjacent possible explains why the breakthrough felt, to those at the frontier, simultaneously surprising and inevitable. Surprising because the specific capabilities that emerged — the quality of natural language reasoning, the capacity for extended coding, the fluency of multimodal processing — exceeded what most practitioners had predicted. Inevitable because the structural conditions were so obviously converging that the question of whether had long since been replaced by the question of when. This is the phenomenology of standing on a strategic research site: the sense that the ground beneath you is about to shift, combined with the inability to predict the precise form of the shift until it happens.
Merton's framework also illuminates the politics of credit that follow the breakthrough. When multiple organizations arrive at the strategic research site simultaneously, priority disputes erupt. Who got there first? Whose architecture was superior? Whose safety practices were more responsible? Whose deployment was more beneficial? These disputes are not trivial — real resources, real reputations, and real policy decisions follow from how they are resolved. But they are also, in Merton's analysis, structurally predictable consequences of a reward system that assigns individual credit for collective achievement.
The strategic research site does not belong to anyone. It is produced by the accumulated work of a community. The organizations that arrived there first were standing on foundations they did not build, trained by educational systems they did not create, funded by capital accumulated through economic structures they did not design, and using data produced by a civilization whose contributions span millennia. The credit they claim is real but partial. The foundations on which it rests are communal.
This is neither an accusation nor a diminishment. It is a sociological observation about the structure of knowledge production, and its implications for the AI transition are direct. If the breakthrough was produced by structures rather than by genius alone, then the response to the breakthrough must be structural as well. Investing in individual talent matters, but it is not sufficient. The structures that produce the talent, that accumulate the knowledge, that provide the institutional environment in which talent can operate effectively — these are the levers that determine whether the next strategic research site produces broadly distributed benefit or concentrated advantage.
The next strategic research site is forming now. The lines of investigation that will converge to produce the next threshold are already visible to those reading the terrain: agentic AI systems that operate autonomously over extended periods, multimodal reasoning that integrates perception and language at a deeper level, models with persistent memory and the capacity for genuine learning from experience. These capabilities are in the adjacent possible. They will be reached, because the structural conditions for reaching them are accumulating with the same inexorable momentum that Merton documented across centuries of scientific advance.
The question is not whether they will be reached. The question is what institutional structures will be in place when they are.
---
In 1932, the Last National Bank was solvent. Its assets exceeded its liabilities. Its loans were performing. Its capital reserves met every regulatory standard then in force. By every objective measure available to the banking regulators of the era, the Last National Bank was a healthy financial institution.
Then a rumor started. The bank, it was whispered, was in trouble. The rumor was false. But the depositors who heard it did not know it was false, and they acted on the basis of what they believed rather than what was true. They withdrew their deposits. Other depositors, seeing the withdrawals, inferred that the rumor must be correct — why else would so many people be withdrawing? — and withdrew their own deposits. The bank, drained of the liquidity it needed to operate, became insolvent.
The rumor had been false. The insolvency it predicted was real. But the insolvency was not the product of a pre-existing financial weakness. It was the product of the rumor itself. The belief created the reality it described.
Robert K. Merton formalized this pattern in 1948, in an essay that would become one of the most cited works in the history of social science. He called it the self-fulfilling prophecy: "a false definition of the situation evoking a new behavior which makes the originally false conception come true." The definition is precise and the precision matters. The self-fulfilling prophecy is not merely a case of a prediction coming true. It is a case of a prediction coming true because it was believed — because the belief altered behavior in ways that produced the very outcome the belief described. The prophecy does not merely predict the future. It constructs it.
Merton drew the concept from the earlier work of W.I. Thomas, whose theorem — "If men define situations as real, they are real in their consequences" — had established the principle that subjective definitions of situations have objective consequences regardless of their accuracy. Merton's contribution was to formalize the mechanism and demonstrate its operation across a wide range of social phenomena: racial discrimination, economic crises, educational outcomes, political contests. In each case, the pattern was the same: a belief about reality altered behavior in ways that reshaped reality to match the belief.
The AI displacement follows this mechanism with a precision that Merton, who took visible intellectual pleasure in demonstrating that social patterns repeat across domains, would likely have appreciated.
Consider the belief that is currently circulating through every professional community touched by AI: human expertise is becoming obsolete. The belief is stated with varying degrees of sophistication. In its crudest form: "AI is going to take everyone's jobs." In its more nuanced versions: "The skills I spent decades developing are being commoditized." "The implementation work that defined my career can now be done by a tool." "The competitive advantage of deep specialist knowledge is eroding as machines become competent generalists."
These beliefs are not straightforwardly true or false. They contain genuine elements of truth — Segal documents the real and measurable expansion of AI capability across multiple professional domains — mixed with projections, extrapolations, and anxieties that may or may not prove accurate. The interesting question, from a Mertonian perspective, is not whether the beliefs are true but what behaviors they motivate, because the behaviors will determine whether the beliefs become true.
If employers believe that AI makes human expertise less valuable, they will reduce investment in human development. Training budgets, which are among the first items cut in any cost-reduction exercise, will be cut further. Mentoring programs, which require senior practitioners to invest time in junior ones without immediate productivity returns, will be deprioritized. Junior employees will be given AI tools instead of experienced colleagues — not because the AI tools are better mentors, but because they are cheaper and because the belief that human expertise is losing value makes the investment in human mentoring feel like a poor allocation of resources.
The result: a generation of practitioners who develop neither the deep traditional expertise of the pre-AI era nor the hybrid competence that the AI era actually demands. The expertise that the employer believed was losing value actually atrophies — not because AI replaced it, but because the institutional response to the belief about AI starved it of the resources it needed to develop. The prophecy fulfills itself. The belief creates the reality.
Recent research has documented this mechanism operating in real time, confirming that the self-fulfilling prophecy is not merely a theoretical concern but an empirical phenomenon in AI-mediated systems. A study published in Information Systems Research demonstrated experimentally that disclosing algorithmic assessment scores to individuals steered their behavior in the direction of the revealed scores. Individuals who received high scores behaved in ways that confirmed the high score; individuals who received low scores behaved in ways that confirmed the low score. The scores were not descriptions of pre-existing reality. They were, in Merton's precise sense, definitions of a situation that produced the reality they purported to describe.
In healthcare, a scoping review published in Resuscitation identified four mechanisms through which machine learning systems create self-fulfilling prophecies in clinical settings. When an AI system predicts that a patient will require intubation, the clinical team — informed by the prediction — may intubate preemptively. The intubation confirms the prediction. Future training data includes the confirmed prediction. The model learns that patients with similar profiles require intubation. The cycle reinforces itself, and the question of whether the patient actually needed intubation — whether the prediction was accurate or whether the prediction produced the outcome — becomes empirically irrecoverable.
Scholars writing in Ethical Theory and Moral Practice identified the critical epistemic problem: "Such mistakes — along with other mistakes in predicting or in the larger practical endeavor — are easily overlooked when the predictions turn out true. Thus self-fulfilling prophecies prompt no error signals; truth shrouds their mistakes from humans and machines alike." The self-fulfilling prophecy is epistemically self-concealing. Because the belief produces the outcome it predicts, the outcome appears to validate the belief, and the validation discourages examination of whether the belief was accurate in the first place.
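A toy simulation makes the self-concealment concrete. The sketch below does not model any real clinical system; the thresholds and scores are arbitrary. It shows only the shape of the loop: when acting on the flag produces the predicted outcome, the prediction looks perfect whether or not the intervention was ever needed, and no error signal is generated.

```python
# Toy illustration of a self-fulfilling prediction loop; all numbers are arbitrary.
import random

random.seed(0)
TRUE_NEED = 0.90   # profile score above which a patient would truly need the intervention
MODEL_FLAG = 0.60  # arbitrary threshold the model happens to have learned

flagged = confirmed = truly_needed = 0
for _ in range(10_000):
    score = random.random()
    if score > MODEL_FLAG:                 # the model predicts the intervention is needed
        flagged += 1
        confirmed += 1                     # acting on the flag produces the predicted outcome
        truly_needed += score > TRUE_NEED  # what would have been needed without the flag

print(f"patients flagged:              {flagged}")
print(f"apparent prediction accuracy:  {confirmed / flagged:.0%}")
print(f"patients who truly needed it:  {truly_needed / flagged:.0%}")
# The 100% "accuracy" is the self-concealment: mistaken flags generate no error
# signal, and the confirmed outcomes flow back into the next round of training data.
```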
Applied to the AI displacement, this self-concealment operates at multiple levels simultaneously.
At the organizational level: The company that cuts its training budget because it believes AI will substitute for human expertise produces employees who are, in fact, less expert — confirming the belief that human expertise is losing value and justifying further cuts. The company points to the declining quality of its human workforce as evidence that the original decision was correct. The declining quality was produced by the decision.
At the professional level: The practitioner who stops developing her skills because she believes they are being commoditized finds, over time, that her skills are indeed less competitive — not because AI replaced them, but because she stopped investing in them. She points to her own diminished competitiveness as evidence that the original belief was correct. The diminished competitiveness was produced by the belief.
At the educational level: The university that reduces enrollment in computer science because it believes AI will eliminate programming jobs produces fewer graduates with the skills to direct AI effectively. The market for such graduates, unsupplied, turns increasingly to AI tools as substitutes for human judgment, confirming the belief that human skills are less needed. The scarcity of skilled humans was produced by the educational response to the belief about scarcity.
But Merton's analysis reveals something more subtle and more important than the pathology alone. The self-fulfilling prophecy is not a one-way mechanism. It can operate in reverse, and the reverse is just as powerful.
If organizations believe that AI enhances rather than replaces human expertise, they invest in hybrid development programs: training that builds traditional depth alongside AI fluency, that treats the tool as an amplifier of human judgment rather than a substitute for it. Their practitioners develop a compound competence that the market values more than either component alone. The belief that expertise has a future produces the institutional conditions under which expertise has a future.
If practitioners believe that their skills retain value in the AI-augmented landscape, they invest in deepening those skills while simultaneously learning to direct AI tools effectively. Their hybrid competence positions them as the most valuable members of their organizations — the people who can exercise judgment that AI cannot replicate and leverage AI to extend that judgment further than human effort alone could reach. The belief in the continuing value of expertise produces the behavior that sustains the value.
Segal's description of the Trivandrum training in *The Orange Pill* illustrates the positive self-fulfilling prophecy in action. The engineers were told, in effect, that their expertise would become more valuable, not less, when amplified by AI tools. This belief motivated a specific set of behaviors: engagement with the tools, willingness to experiment, investment in developing the judgment layer that would direct the tools effectively. The result confirmed the belief. The engineers became measurably more capable. Their expertise did not atrophy; it found a new and more potent expression.
The Luddites, by contrast, illustrate the negative self-fulfilling prophecy. Their belief that their craft was worthless in the new industrial order led them to refuse engagement with the tools that might have transformed their craft into something the new economy valued. Their refusal produced the very obsolescence they feared. The belief constructed the reality.
The implication is not that optimism is inherently more accurate than pessimism. The implication is that accuracy is, in some sense, beside the point. The beliefs that circulate through a professional community about the consequences of AI are not merely descriptions of a future that is independently determined. They are inputs into the determination of that future. They alter the institutional behaviors — the hiring decisions, the training investments, the educational priorities, the career choices — that collectively produce the future they describe.
This is why the discourse matters in a way that transcends mere commentary. The triumphalist who declares that AI will empower everyone is not merely predicting a future. He is constructing one — or attempting to — by motivating the institutional behaviors that would make empowerment real. The catastrophist who declares that AI will destroy expertise is not merely warning of a future. She is constructing one — by motivating the institutional behaviors that would make destruction real. The elegist who mourns the loss of craft is not merely grieving. He is constructing a narrative that, if widely adopted, will produce the very abandonment of craft he mourns.
Merton, characteristically, would resist the implication that the solution is simply to believe harder in the optimistic scenario. The self-fulfilling prophecy is not a power of positive thinking. It is a structural mechanism that operates through institutional behavior, not individual attitude. What matters is not whether you personally believe that your expertise has a future. What matters is whether the institutions you inhabit — the companies that employ you, the schools that train you, the professional communities that credential you, the governments that regulate the landscape — behave as though expertise has a future. Individual belief without institutional support is consolation, not adaptation.
The dam-building that Segal describes — the construction of institutional structures that redirect the flow of AI capability toward human flourishing — is, in Mertonian terms, the construction of the conditions under which the positive self-fulfilling prophecy can operate. The dam is not optimism. It is structure. It is the concrete institutional decision to invest in human development rather than to defund it, to build hybrid training programs rather than to replace human practitioners with tools, to protect the conditions under which expertise develops rather than to allow those conditions to erode under the pressure of cost reduction.
The prophecy will fulfill itself one way or the other. The structures we build determine which prophecy gets fulfilled.
---
The Gospel of Matthew, chapter 25, verse 29: "For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath." Robert K. Merton borrowed the verse in 1968 to name a pattern he had documented across the sociology of science, and the name he gave it — the Matthew Effect — has since become one of the most widely applied concepts in the social sciences, precisely because the pattern it describes is one of the most widely operative dynamics in social life.
Merton's original observation was specific to the scientific community. Eminent scientists, he found, receive disproportionate credit for work that involved significant contributions from lesser-known collaborators. When a Nobel laureate and an unknown postdoctoral researcher co-author a paper, the laureate receives the lion's share of the citations, the invitations, the media attention, and the career benefit. The paper's contribution to the laureate's reputation is large. Its contribution to the postdoctoral researcher's reputation is negligible. Not because the laureate's contribution was necessarily greater — Merton documented cases where the junior researcher had done the majority of the work — but because the social structure of scientific recognition amplifies existing reputation.
The mechanism is cumulative. The laureate, now more recognized, attracts more funding. More funding enables more research. More research produces more publications. More publications attract more citations. More citations enhance reputation further. The cycle compounds. Meanwhile, the postdoctoral researcher, receiving no recognition for equivalent work, attracts less funding, produces fewer publications, receives fewer citations, and finds her career trajectory diverging from the laureate's not because her work is inferior but because the initial conditions were different. The same contribution, filtered through different positions in the social structure, produces radically different outcomes.
"Unto every one that hath shall be given." The advantage compounds. The disadvantage compounds. The gap widens, and it widens not because of differences in talent or effort but because of the structure through which talent and effort are recognized and rewarded.
The AI transition is producing its own Matthew Effect, and the dynamics are visible at every level: individual, organizational, national, and civilizational.
At the individual level, consider two developers — one in San Francisco, one in Lagos. Both have access to Claude Code. Both have intelligence, ambition, and ideas. Segal argues, correctly, that AI lowers the floor of who gets to build. The developer in Lagos can now translate an idea into working software through conversation with a tool, a capability that was previously available only to those with years of specialized training or the capital to hire those who had it.
But the San Francisco developer has a fast, reliable internet connection. She has colleagues who share best practices over lunch. She has a professional network that circulates knowledge about effective prompting strategies, architectural patterns, and deployment techniques. She has access to venture capital that can turn a prototype into a funded company. She has a cultural context in which shipping a product is celebrated and supported by an ecosystem of advisors, mentors, and potential customers. She has legal infrastructure that protects her intellectual property, financial infrastructure that processes her payments, and social infrastructure that provides a safety net if her venture fails.
The developer in Lagos may have comparable talent — or superior talent, since the barriers to reaching the frontier from Lagos are higher, which means that those who reach it despite the barriers are likely among the most capable. But she operates within a structural environment that provides fewer of the complementary assets that turn raw capability into realized value. Unreliable power grids interrupt her workflow. Limited bandwidth slows her interactions with AI tools. Distance from capital markets means her prototype, however impressive, has a longer and more uncertain path to funding. The absence of a local professional community means she develops her skills in relative isolation, without the informal knowledge-sharing that accelerates learning in dense professional clusters.
The technology is the same. The structural environments are radically different. And the Matthew Effect predicts, with the reliability of a sociological law, that the gains from the technology will flow disproportionately to the developer whose structural environment provides the complementary assets that amplify the technology's benefits.
Recent scholarship has confirmed that AI amplifies this dynamic rather than disrupting it. A March 2026 analysis published in the Network Law Review examined how generative AI affects the distribution of influence in academic publishing — a domain Merton studied extensively — and found that the pattern is intensifying. "An output explosion amplifies the Matthew Effect," the authors wrote, "concentrates reputational gains among established scholars, and contributes to the emergence of increasingly stratified publication tracks." The mechanism the authors identified is revealing: researchers with strong existing foundations use AI to produce more and better work, because the AI amplifies what is already there. "AI accelerates the steps that were bottlenecks, but it cannot supply what it does not receive. A researcher who enters the process with a sharp research question, a command of the relevant literature, and a clear theoretical position uses AI to move faster."
The finding is as precisely Mertonian as any empirical result could be. The established scholar — the one who already "hath" — receives more from AI because she brings more to it. Her existing knowledge provides the substrate that AI amplifies. The unknown scholar — the one who "hath not" — brings less to the interaction and therefore receives less from it. The technology does not discriminate. The social structure through which the technology operates discriminates with the quiet efficiency of a system that was designed by no one and serves the interests of those who benefit from it without any individual needing to intend the outcome.
Jeff Pooley's analysis of AI as knowledge arbitrator extends the point. AI tools, Pooley argues, function as filtering systems: "to surface, to rank, to summarize, and to recommend." These verbs are not neutral operations. They are acts of selection, and the selection criteria are trained on the accumulated patterns of past recognition — patterns that embody the Matthew Effect in their very structure. "The models are trained on the scholarly past, and their filtering logic is inscrutable. As a result, they may smuggle in the many biases that mark the history of scholarship." The AI system, trained on a corpus in which established voices are overrepresented and marginal voices are underrepresented, reproduces and amplifies the existing distribution of recognition. The past's inequalities become the future's training data.
At the organizational level, the Matthew Effect operates through a mechanism that Merton would have recognized as structurally identical to the individual case: cumulative advantage through resource concentration. The established technology company with billions in capital can invest in AI integration at a scale that no startup can match. The investment produces productivity gains. The gains fund further investment. The cycle compounds. Meanwhile, the startup — which may have a superior product idea, a more innovative approach, a more compelling vision — cannot match the incumbent's investment in AI infrastructure, and the gap between the two organizations widens not because the incumbent's ideas are better but because the incumbent's resources are larger.
Segal describes the Death Cross — the moment when the AI market overtakes the SaaS market in aggregate value — and observes that the companies best positioned to survive the transition are those with the deepest ecosystems, the most accumulated data, the strongest institutional trust. These are, by definition, the companies that already had the most. The Death Cross does not punish incumbency. It punishes the absence of accumulated structural advantage. And accumulated structural advantage is what the Matthew Effect produces over time: a widening gap between those who entered the market early and from positions of strength and those who entered later or from positions of weakness.
At the national level, the Matthew Effect is operating on a scale that Merton, writing in the mid-twentieth century about individual scientists and their laboratories, could not have anticipated. The nations that lead in AI development — the United States and China, with smaller but significant clusters in the United Kingdom, France, Canada, Israel, and a handful of others — are the nations that entered the AI era with the largest existing concentrations of research talent, computational infrastructure, venture capital, and institutional capacity. These advantages compound. AI capability attracts talent, which produces more capability, which attracts more talent. The nations that lack these initial advantages — most of Africa, much of South Asia, Latin America — face a widening gap that the technology alone cannot close, because the gap is not primarily technological. It is structural.
The structural nature of the Matthew Effect is what makes it so resistant to technological disruption. Technology can lower barriers. AI can make it possible for the developer in Lagos to build what previously required a team in San Francisco. But the complementary assets that turn capability into realized value — capital, networks, infrastructure, institutions, cultural norms, legal frameworks — are structural, not technological. They accumulate slowly, through decades of institutional investment. They cannot be downloaded.
This does not mean democratization is illusory. Segal's argument that AI lowers the floor of who gets to build is empirically supported and morally significant. The floor has risen. People who were previously excluded from the building process by lack of training or capital can now participate. This is real, and it matters. But Merton's sociology insists on a distinction that the triumphalist narrative tends to blur: lowering the floor and leveling the playing field are not the same operation. The floor can rise while the ceiling rises faster. Access can expand while the benefits of access concentrate. The Matthew Effect does not prevent new entrants from joining the system. It ensures that the system's rewards flow disproportionately to those who entered it first and entered it with the most resources.
The practical implication is uncomfortable and essential. If the AI transition is governed by the Matthew Effect — if the benefits of AI capability compound for those who already possess advantage — then the democratization narrative, however accurate in its description of expanded access, is insufficient as a guide to policy. Expanded access without structural intervention to redirect the flow of benefits will produce a widening gap between the AI-advantaged and the AI-disadvantaged, even as the absolute capabilities of both groups increase. Everyone gets smarter, and the gap between them grows. The rising tide lifts all boats, and the yachts rise faster.
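The arithmetic behind that claim can be made concrete with a toy model. The sketch below is not drawn from Segal's data or from Merton's papers; it is a minimal simulation, under invented starting positions and growth rates, of two actors whose capability compounds in proportion to what they already hold. Both improve every year. The gap still widens.

```python
# Toy illustration of cumulative advantage (the Matthew Effect).
# The starting positions and growth rates are invented for illustration only:
# the "advantaged" actor converts each gain into further gains at a higher
# rate, because complementary assets (capital, networks, infrastructure)
# amplify every unit of new capability.

def simulate(years: int = 10,
             advantaged: float = 100.0,
             disadvantaged: float = 10.0,
             advantaged_rate: float = 0.30,
             disadvantaged_rate: float = 0.20) -> None:
    """Print the capability of both actors, and the gap between them, each year."""
    print(f"{'year':>4} {'advantaged':>12} {'disadvantaged':>14} {'gap':>10}")
    for year in range(years + 1):
        gap = advantaged - disadvantaged
        print(f"{year:>4} {advantaged:>12.1f} {disadvantaged:>14.1f} {gap:>10.1f}")
        # Both actors grow; the advantaged actor grows faster because its
        # returns compound on a larger base and at a higher rate.
        advantaged *= 1 + advantaged_rate
        disadvantaged *= 1 + disadvantaged_rate

if __name__ == "__main__":
    simulate()
```

Under these invented numbers, the disadvantaged actor more than sextuples its capability over the decade, yet the gap between the two actors grows more than fourteenfold. Change the rates and the magnitudes shift, but so long as advantage compounds faster for those who start with more, the pattern holds: everyone gets smarter, and the distance between them grows.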
The structural interventions that could counteract the Matthew Effect are Mertonian in character: interventions that modify the institutional environment rather than the technology itself. Investment in educational infrastructure in underserved regions. Capital allocation mechanisms that direct funding to founders outside the established clusters. Connectivity infrastructure that ensures the developer in Lagos has the same quality of access as the developer in San Francisco. Professional communities that bridge geographic and institutional divides, sharing the informal knowledge that currently circulates within privileged networks.
These interventions are not technological solutions. They are institutional ones. They are dams built in the river of cumulative advantage, designed to redirect some portion of the flow toward those who would otherwise be swept further downstream. Merton's sociology does not prescribe these specific interventions. It insists that the interventions must be structural, because the dynamics they address are structural. Individual talent, individual effort, individual access to a tool — these matter, but they operate within a structural environment that amplifies or diminishes their effects according to a logic that no individual controls.
The Matthew Effect is not a moral judgment. It is a structural description. It describes what happens in social systems where advantages compound, and it predicts, with the reliability of a pattern documented across centuries and domains, that the AI transition will produce concentration alongside democratization, that the gains will flow upward alongside their spread outward, and that the gap between the advantaged and the disadvantaged will widen unless specific, sustained, institutional effort is directed at narrowing it.
Merton borrowed his name for the pattern from a religious text, and the choice was not arbitrary. The passage from Matthew is a parable about stewardship — about what the master demands of those to whom much has been given. The question embedded in the Matthew Effect is not merely descriptive but, in its deepest register, ethical: What is the obligation of those who have received disproportionate benefit from a structural system to the system that produced their advantage, and to those within it who received less?
The AI community, standing on foundations built by a civilization's accumulated knowledge, trained on data produced by billions of people who will never share in the profits, deploying tools that concentrate advantage among those who already possess it — this community faces the question of the parable in its most urgent form. The answer will not be found in the technology. It will be found in the structures built around it.
Thomas Kuhn published *The Structure of Scientific Revolutions* in 1962, and the book became one of the most influential — and most misappropriated — works in the philosophy of science. Its central concept, the paradigm shift, entered common usage so rapidly and so loosely that Kuhn himself spent the latter part of his career trying to clarify what he had meant, with diminishing success. The phrase became a cliché, applied to everything from marketing strategies to breakfast cereals, its precision dissolved by overuse.
But Kuhn's argument was precise, and its precision matters here, because the AI transition is a paradigm shift in the strict Kuhnian sense — not in the diluted, metaphorical sense that management consultants deploy at quarterly retreats, but in the structural sense that Kuhn intended and that Merton's earlier work on the sociology of science had made possible.
Kuhn acknowledged the debt explicitly. His analysis of how scientific communities function — how they train practitioners, how they define legitimate problems, how they evaluate proposed solutions, how they resist challenges to their foundational assumptions — drew directly on Merton's prior work on the normative structure of science and the social mechanisms of knowledge production. Merton had established the sociological framework. Kuhn populated it with an epistemological argument: that scientific knowledge advances not through gradual accumulation but through periods of normal science (work within an established paradigm) punctuated by revolutionary episodes in which the paradigm itself is replaced.
The structure of a paradigm, in Kuhn's analysis, is more than a theory. It is a complete framework for professional practice: a set of shared assumptions about what counts as a legitimate problem, what methods are appropriate for addressing it, what standards of evidence apply, what training is required, and what constitutes competent performance. The paradigm defines not only what scientists believe but what they do — how they spend their days, what skills they develop, what questions they consider worth asking, what answers they consider satisfying.
Normal science operates within the paradigm. The practitioner does not question the framework; she works within it, solving the "puzzles" that the paradigm defines as soluble. The puzzles are genuine intellectual challenges — Kuhn was not dismissing normal science as trivial — but they are challenges whose terms are set by the paradigm itself. The practitioner knows what a solution looks like before she finds it, because the paradigm has defined the shape of acceptable solutions.
The crisis occurs when anomalies accumulate — when problems arise that the paradigm cannot solve within its own terms, when the puzzles start producing answers that violate the paradigm's expectations, when the gap between what the framework predicts and what practitioners observe grows too wide to ignore. The crisis does not resolve itself gradually. It resolves through revolution: the replacement of the old paradigm with a new one that accommodates the anomalies the old one could not.
And here is Kuhn's most controversial and most relevant claim: the two paradigms are incommensurable. The criteria for competent practice in the new paradigm are not translatable into the criteria of the old. The practitioners trained in the old paradigm cannot evaluate the new one on their own terms, because their terms are the old paradigm's terms. The revolution is not merely a change in what practitioners believe. It is a change in the standards by which belief is evaluated — a shift in the very meaning of competence.
The professional landscape that existed before the AI threshold of 2025 was a paradigm in Kuhn's precise sense. It had shared assumptions about what constituted professional competence, and those assumptions were so deeply embedded in institutional practice that they had become invisible — the water in which every professional swam.
The core assumption: expertise equals accumulated skill in implementation. The competent lawyer was the one who could draft a brief from memory, citing relevant precedents without needing to look them up, structuring arguments according to conventions internalized through years of practice. The competent developer was the one who could write clean, efficient code in multiple languages, debug complex systems through a combination of analytical reasoning and embodied intuition, navigate dependency hierarchies and architectural decisions with the fluency that only years of immersive practice could produce. The competent physician was the one who could diagnose by pattern recognition honed through thousands of clinical encounters, whose judgment had been deposited, layer by layer, through the specific friction of repeated exposure to difficult cases.
These assumptions defined not only what competent practice looked like but what training should produce, what hiring should select for, what promotion should reward, and what professional identity should be organized around. The medical residency, the legal apprenticeship, the software engineering career ladder — each was designed to develop the specific form of expertise that the paradigm valued: accumulated skill in implementation, built through years of supervised practice, demonstrated through increasingly complex performance.
The AI threshold introduced anomalies that the old paradigm could not accommodate. A junior developer using Claude Code shipped in a weekend what her senior colleague had estimated would take six months. The output was comparable in quality. The senior colleague's fifteen years of accumulated implementation skill — the very thing the paradigm defined as the basis of professional value — had been matched, in terms of output, by a tool that cost one hundred dollars a month. This was not a gradual erosion. It was an anomaly: an event that violated the paradigm's most fundamental assumption about the relationship between accumulated skill and productive capability.
The anomalies multiplied. A non-technical founder prototyped a revenue-generating product without writing code. A backend engineer built frontend features she had never been trained in. A designer implemented complete system features end to end. Each of these events was, within the old paradigm, impossible — not because the paradigm denied that such things could theoretically occur, but because the paradigm had no framework for evaluating them. The criteria for competent practice in the old paradigm — years of specialized training, demonstrated fluency in specific technical skills, the embodied intuition that only long practice produces — could not account for practitioners who produced competent work without possessing the competencies the paradigm deemed necessary.
This is incommensurability. The old paradigm and the new one are not arguing about the same thing on different terms. They are operating with different definitions of what competence means, what training should produce, what expertise consists of, and what professional value looks like. The senior developer who evaluates the junior developer's AI-assisted output using the old paradigm's criteria — Did she understand the code? Could she have written it herself? Does she possess the embodied intuition that years of debugging produce? — will find the output deficient, because the old paradigm's criteria are designed to measure something the new paradigm does not prioritize. The junior developer who evaluates her own output using the new paradigm's criteria — Does the code work? Does the product serve users? Was the problem identified correctly? Was the architectural judgment sound? — will find it satisfactory, because the new paradigm's criteria measure something the old paradigm took for granted.
Neither evaluation is wrong. They are incommensurable. They are measuring different things because they are operating within different frameworks for what matters.
Merton's sociology of science predicts what happens during paradigm transitions, and the prediction is not comforting. The practitioners trained in the old paradigm experience the transition as a loss of meaning, not merely a change of method. Their skills, built through years of investment that the old paradigm validated and rewarded, are not merely devalued by the market. They are rendered illegible — unintelligible within the new framework's terms. The master calligrapher does not merely lose his job when the printing press arrives. He loses the framework within which his skill was meaningful. The press does not merely produce letters faster. It redefines what the production of letters is for, and the redefinition excludes the specific values — the beauty of the hand, the discipline of the stroke, the intimacy between craftsman and material — that organized the calligrapher's professional identity.
Kuhn observed that paradigm shifts are resolved not by the conversion of old-paradigm practitioners but by the generational replacement of the community. The old practitioners do not adopt the new paradigm. They retire, or they are marginalized, or they continue working within the old framework in diminishing numbers while the new generation, trained from the outset within the new paradigm, populates the field. "A new scientific truth does not triumph by convincing its opponents and making them see the light," Kuhn wrote, quoting Max Planck, "but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
The AI transition may not follow Kuhn's generational timeline precisely — the speed of the technological change compresses the transition into years rather than decades — but the mechanism is recognizably the same. The practitioners who entered the profession before the threshold experience the transition as a crisis of meaning. The practitioners who enter after the threshold experience the new paradigm as simply the way things are. The incommensurability between the two groups is not a failure of communication. It is a structural feature of paradigm transitions.
Merton's contribution to this analysis is the insistence that paradigm transitions are not merely intellectual events. They are institutional events. The paradigm is embedded in training programs, hiring criteria, promotion standards, compensation structures, professional organizations, credentialing systems, and the thousands of institutional arrangements that translate abstract assumptions about competence into concrete social reality. When the paradigm shifts, all of these institutional arrangements must shift with it, and they do not shift smoothly. They resist, because institutions are designed for stability, not for revolution.
The resistance is not irrational. Institutions that changed their foundational assumptions every time a new technique appeared would be dysfunctional — incapable of the sustained, coherent practice that professional work requires. The medical school that restructured its curriculum every year in response to the latest technology would produce graduates who knew a little about everything and mastered nothing. The law firm that abandoned its training program every time a new tool emerged would have no institutional knowledge to transmit.
But resistance that is rational in normal times becomes pathological during genuine paradigm shifts. The medical school that refuses to integrate AI into its curriculum because the old paradigm defined competence as unaided clinical judgment will produce graduates who are less capable than their AI-augmented peers. The law firm that evaluates associates on their ability to draft briefs from memory when AI can produce competent briefs in minutes will lose its best talent to firms that evaluate on higher-order skills: judgment, strategy, client relationship, the capacity to identify legal problems that AI cannot recognize.
The institutional response to the paradigm shift determines whether the transition produces broadly distributed adaptation or concentrated advantage. Institutions that restructure early — that redefine competence, redesign training, reorganize evaluation — position their practitioners to thrive in the new paradigm. Institutions that resist — that cling to the old definitions, the old criteria, the old training models — produce practitioners who are increasingly mismatched with the demands of the landscape they inhabit.
Merton's sociology does not prescribe the specific institutional changes required. It insists that the changes must be institutional rather than individual, because the paradigm is not an individual belief. It is an institutional structure. The developer who personally recognizes that the paradigm has shifted but works within an organization that evaluates her by the old paradigm's criteria faces a structural contradiction that no amount of individual adaptation can resolve. She must either change the institution or change institutions.
The paradigm shift in professional competence — from accumulated implementation skill to judgment about what to implement — is not complete. It is underway. The old paradigm has not been fully replaced; it coexists, uncomfortably, with the new one, and the coexistence produces the specific disorientation that Segal describes as the silent middle. Practitioners who can see both paradigms simultaneously, who understand the value of the old and the necessity of the new, occupy the most structurally strained position in the transition. They are the ones who feel the incommensurability most acutely, because they can see what is being gained and what is being lost, and they know that the two cannot be measured on the same scale.
The resolution will be institutional. It will come through the redesign of training programs, hiring criteria, promotion standards, and professional norms — the concrete institutional arrangements that translate abstract paradigmatic assumptions into lived professional reality. The institutions that redesign wisely will produce the practitioners who thrive. The institutions that resist will produce the practitioners who struggle. And the gap between the two will be determined not by the technology but by the institutional response to the technology — which is, in the end, always where the sociology of knowledge has insisted the decisive action lies.
---
In 1936, a twenty-six-year-old Robert K. Merton published an essay that would become one of the foundational texts of modern sociology. "The Unanticipated Consequences of Purposive Social Action" was, in its structure, deceptively simple. Merton argued that when people act with a purpose — when they design a policy, build an institution, deploy a technology, implement a reform — the consequences of their actions routinely diverge from their intentions. The divergence is not an accident. It is not a failure of planning. It is a structural feature of complex social systems in which actions propagate through networks of interdependence in ways that no actor can fully predict.
Merton identified five sources of unintended consequences, and each is so precisely mirrored in the AI transition that the essay reads, nearly ninety years later, less like historical scholarship than like diagnosis.
The first source is ignorance. The actor cannot know all the relevant circumstances and all the consequences of an action in a complex system. The knowledge required to predict all consequences of an intervention exceeds what any individual or organization possesses, not because the actor is incompetent but because the system is complex. The number of variables, the density of their interconnections, and the nonlinear dynamics of their interaction guarantee that consequences will emerge that no amount of prior analysis could have anticipated.
The AI developers of the early 2020s were not ignorant in any colloquial sense. They were among the most technically sophisticated practitioners on the planet. But the systems they were building were being deployed into a social environment of staggering complexity — into organizations, labor markets, educational institutions, healthcare systems, legal frameworks, and cultural practices whose interdependencies far exceeded the developers' capacity to model. The engineers who built Claude Code knew that it would make developers more productive. They could not have predicted — because the social system through which the tool would propagate was too complex to model — that the productivity gain would be experienced by many users not as liberation but as intensification; that the freed-up hours would be filled not with strategic reflection but with additional tasks; that the tool's very effectiveness would make it psychologically impossible for some users to stop working. The consequences were unintended not because the developers were careless but because the consequences emerged from the interaction between the tool and the social system, and that interaction was, in principle, unpredictable in its specifics.
The second source is error. The actor's model of the system is inaccurate. Previous experience, which the actor uses to predict the consequences of current actions, may be an unreliable guide when the current situation differs from past situations in ways that are not immediately apparent. The AI developers' model of how their tools would be used was based on the history of previous productivity tools — a history in which tools had generally produced the effects their designers intended, with manageable side effects that could be addressed through iteration. This model was an error, not because it was uninformed but because the AI tools were qualitatively different from previous productivity tools in ways that invalidated the historical comparison. No previous tool had been capable of engaging in extended natural language conversation. No previous tool had been able to produce outputs across the full range of professional domains. The novelty of the tool meant that the historical model was a misleading guide to its consequences.
The third source, and the one Merton found most sociologically interesting, is what he called the imperious immediacy of interest. The actor's short-term interests override long-term considerations. The pressure to act — to ship the product, to capture the market, to beat the competition — produces a temporal myopia in which the immediate consequences of an action (revenue, market share, competitive advantage) loom larger than the distant consequences (social disruption, labor displacement, attentional erosion). The actor is not unaware of the long-term consequences. She simply cannot afford to weight them equally with the short-term ones, because the competitive environment penalizes delay.
A 2025 paper mapping Merton's five causes onto contemporary AI cases found that the imperious immediacy of interest is the single most pervasive driver of unintended consequences in AI deployment. The authors described how "AI does not present a wholly new governance challenge but rather a magnified version of an old sociological truth: our actions ripple outward in ways we cannot fully control." The magnification is produced by scale and speed: AI systems propagate through social networks faster than previous technologies, and their effects cascade through more interconnections, which means that the window between deployment and the emergence of unintended consequences is shorter than for any previous technology, while the scale of the consequences is larger.
The AI companies that shipped models at breakneck speed through 2024 and 2025 were not indifferent to the social consequences of their products. Many invested substantially in safety research, alignment techniques, and responsible deployment practices. But they operated within a competitive environment that punished delay and rewarded speed, and the imperious immediacy of competitive interest produced deployment timelines that were faster than the institutional capacity to evaluate consequences. The safety research was real. It was also, structurally, insufficient — not because the researchers were inadequate but because the competitive pressure compressed the timeline below the threshold at which adequate evaluation was possible.
The fourth source is basic values. The actor's fundamental commitments — to progress, to freedom, to efficiency, to innovation — may prevent her from seeing consequences that conflict with those commitments. The commitment functions as a cognitive filter, admitting evidence that confirms the value and excluding evidence that challenges it. The technology community's commitment to the value of efficiency — the deep, culturally embedded belief that making things faster and easier is inherently good — functioned as precisely such a filter during the AI transition. Evidence that AI tools intensified work rather than reducing it, that they eroded deep skill rather than enhancing it, that they produced compulsion rather than freedom — this evidence was available, documented in the Berkeley study and elsewhere, but it was systematically underweighted by a community whose basic values predisposed it to interpret efficiency gains as progress.
The fifth source is the self-defeating prophecy — a prediction that, precisely because it is made, motivates behavior that prevents the predicted outcome from occurring. This is the inverse of the self-fulfilling prophecy examined in Chapter 3, and it operates in the AI context when warnings about AI's negative consequences motivate institutional responses that prevent those consequences from materializing. The warning itself becomes the intervention. Researchers who warned about bias in language models motivated the development of bias mitigation techniques. Ethicists who warned about the concentration of AI power motivated the push toward open-source models. Sociologists who warned about labor displacement motivated retraining programs. In each case, the prediction was not wrong — the predicted consequence was genuinely possible — but the act of predicting it altered the institutional environment in ways that reduced its probability. The prediction defeated itself.
The unintended consequences of the AI transition are already visible, and they match Merton's taxonomy with diagnostic precision.
Work intensification: Tools designed to reduce workload have intensified it. The Berkeley researchers documented this empirically. AI did not free workers from drudgery. It expanded the scope of what each worker was expected to accomplish, filled previously protected pauses with additional tasks, and produced a condition of continuous, AI-augmented productivity that was more exhausting than the unaugmented work it was designed to replace. The intention was liberation. The consequence was intensification. The gap between the two was produced not by the technology's failure but by its interaction with organizational incentive structures that reward visible output and penalize apparent idleness.
Skill erosion: Tools designed to enhance capability have eroded it. When the friction of implementation is removed — when the developer no longer debugs, the lawyer no longer drafts, the physician no longer reasons through a differential diagnosis unaided — the embodied understanding that friction produced atrophies. The manifest function of the practice (producing competent output) is still served. Its latent function (developing the practitioner's deep understanding through struggle) is eliminated. The intention was augmentation. The consequence was atrophy. The gap was produced by the structural fact that the manifest and latent functions were served by the same process of human implementation: delegate the process and the output survives, but the understanding the process built does not.
Concentration of advantage: Tools designed to democratize capability have concentrated it. The Matthew Effect, examined in Chapter 4, channels the benefits of expanded access toward those who enter the system with existing advantages — capital, networks, infrastructure, institutional support. The intention was democratization. The consequence was concentration alongside democratization: a wider base of participants and a steeper gradient of rewards. The gap was produced by the structural reality that access to a tool and access to the complementary assets that make the tool valuable are distributed by different mechanisms, and the mechanisms that distribute complementary assets are far less egalitarian than the mechanisms that distribute the tool.
Merton's analysis does not counsel despair. It counsels vigilance. Unintended consequences cannot be eliminated, because they are produced by the structural complexity of the systems through which human action propagates. But they can be monitored for, detected early, and addressed through institutional structures designed for continuous correction rather than one-time intervention.
The dam, in Segal's metaphor, is not built once. It is maintained. And the reason it must be maintained is precisely Merton's point: the river constantly produces effects that no one predicted, that no one intended, that emerge from the interaction between the current and the terrain in ways that make prior plans insufficient. The builder who anticipates every consequence is deluded. The builder who monitors for consequences and corrects course when they appear is a sociologist — whether she knows it or not.
---
The concept of role strain — the condition that arises when the demands of a social role exceed the capacity of the individual occupying it — was Merton's contribution to understanding one of the most common and least visible sources of human distress. The concept appears straightforward, even obvious, until one examines what it actually implies about the relationship between individuals and the social structures they inhabit.
A social role, in Merton's analysis, is not a simple, unitary set of expectations. It is a role-set: a complement of role relationships that a person has by virtue of occupying a particular social position. The physician does not merely play the role of "doctor." She simultaneously occupies a role relationship with patients (healer), with nurses (collaborator and, in some institutional structures, supervisor), with hospital administrators (employee), with insurance companies (claimant), with regulatory bodies (licensee), with medical students (teacher), with pharmaceutical companies (prescriber), and with her own professional community (peer, colleague, competitor). Each of these role relationships carries expectations, and the expectations frequently conflict.
The patient expects the physician to spend as much time as necessary for a thorough examination. The administrator expects the physician to see a certain number of patients per hour. The insurance company expects the physician to prescribe the most cost-effective treatment. The patient expects the physician to prescribe the most effective treatment regardless of cost. The regulatory body expects meticulous documentation. The patient expects eye contact rather than screen time. Each expectation is legitimate within its own terms. Taken together, they exceed the physician's capacity to satisfy all of them simultaneously. The resulting strain is not a personal failure. It is a structural feature of the role.
Merton's insight was that role strain is produced by the social structure, not by the individual's inadequacy. The physician who feels torn between patient care and administrative demands is not failing to manage her time. She is experiencing the structural contradiction between two legitimate sets of expectations that her role requires her to satisfy and that cannot be simultaneously satisfied. The strain is inherent in the position, and it would be experienced by anyone occupying it, regardless of personal resilience, time management skill, or emotional fortitude.
The AI transition has produced role strain at a scale and intensity that Merton, writing about mid-twentieth-century professional life, could not have anticipated. The strain is produced by a specific structural contradiction: the professional is simultaneously expected to maintain the competencies of the old paradigm and develop the competencies of the new one, and the two sets of competencies make competing demands on finite cognitive resources.
Consider the senior software engineer — a figure who appears repeatedly in Segal's account of the AI transition, and who represents, in Mertonian terms, the paradigmatic case of AI-induced role strain. Before the threshold of late 2025, her role was defined by a clear set of expectations: deep knowledge of multiple programming languages, the ability to architect complex systems, fluency in debugging, familiarity with deployment infrastructure, and the embodied intuition that Segal calls "feeling a codebase the way a doctor feels a pulse." These competencies were built through years of intensive practice. They were validated by the market through compensation, by the profession through status, and by the individual through identity. The engineer did not merely have these skills. She was these skills, in the sense that her professional self-concept was organized around them.
The AI threshold introduced a new set of expectations without retiring the old ones. The engineer was now expected to direct AI tools effectively — to prompt well, to evaluate AI-generated output, to make architectural decisions at a level that presupposed the implementation would be handled by a machine, to think about product strategy and user experience and business model because the implementation that used to consume her cognitive bandwidth was no longer consuming it. These new expectations required competencies that her training had not developed: lateral thinking across domains, product judgment, the ability to describe intentions in natural language rather than formal syntax, comfort with ambiguity and imprecision.
The old competencies had not become worthless. They remained valuable as inputs to judgment — the deep understanding of systems architecture informed the quality of the prompts she constructed, the debugging intuition helped her evaluate whether AI-generated code would break under edge cases. But the old competencies were no longer sufficient. They had been demoted from defining competencies (the things that made her a senior engineer) to supporting competencies (the things that made her a better user of AI tools).
The demotion produced strain. Not because the new expectations were unreasonable — they were, individually, sensible responses to a changed landscape — but because the old expectations had not been formally retired. The organization still evaluated her, in part, on her implementation speed. Her colleagues still respected her, in part, for her coding fluency. Her identity was still organized, in substantial part, around the skills that the new paradigm was rendering secondary. She was expected to be the old thing and the new thing simultaneously, and the two things competed for the same finite resources of time, attention, and cognitive bandwidth.
Merton's analysis of role strain identifies several mechanisms through which individuals attempt to manage competing role demands, and each is visible in the AI transition.
The first mechanism is role compartmentalization: the attempt to satisfy different role expectations in different contexts, segregating the competing demands temporally or spatially. The engineer who writes code by hand in the morning ("to keep her skills sharp") and uses AI tools in the afternoon (to meet the new productivity expectations) is compartmentalizing. The strategy reduces the moment-to-moment experience of contradiction but does not resolve the underlying structural conflict. It doubles the cognitive load by requiring the practitioner to maintain two modes of practice rather than one.
The second mechanism is role hierarchy: the prioritization of one set of role expectations over others. The engineer who decides that AI-augmented productivity is the future and invests all her development time in prompt engineering, product thinking, and lateral capability — at the cost of maintaining her traditional coding skills — has hierarchized her roles. The strategy is adaptive in the long run if the new paradigm fully displaces the old. It is risky if the transition is slower or less complete than anticipated, leaving the practitioner vulnerable in the interim.
The third mechanism is role exit: the abandonment of the role entirely. Segal observes the phenomenon of senior engineers "moving to the woods" — reducing their cost of living and withdrawing from the profession in the expectation that their livelihoods would be destroyed. This is role exit in its most literal form: the resolution of role strain through the elimination of the role. The strategy resolves the strain but at the cost of everything the role provided — income, identity, community, the sense of professional purpose that had organized the practitioner's life.
The fourth mechanism, and the one Merton considered most sociologically significant, is institutional restructuring: the modification of the role itself to reduce or eliminate the contradictions between its constituent expectations. This is not an individual strategy. It is a collective one, requiring organizational or professional action to redefine what the role demands.
The Berkeley researchers' proposal for "AI Practice" — structured pauses, sequenced work, protected mentoring time — is, in Mertonian terms, a proposal for institutional restructuring aimed at reducing role strain. The proposal does not ask individual practitioners to manage the contradiction through personal resilience. It asks organizations to modify the structure of the role so that the contradiction is less acute: designating specific times for AI-augmented work and specific times for traditional practice, creating protected spaces for the kind of slow, friction-rich learning that AI tools otherwise eliminate, restructuring evaluation criteria to reflect the new paradigm's competencies rather than the old one's.
Segal's organizational experiment in Trivandrum represents another form of institutional restructuring. By framing the AI tools explicitly as amplifiers of existing expertise rather than replacements for it — by telling the engineers that their skills would become more valuable, not less, when augmented by AI — the intervention addressed role strain at the level of definition. The engineers were given a framework for understanding their new role that integrated rather than contradicted their existing professional identity. The framework did not eliminate the strain entirely — the transition was still disorienting, still required the development of new competencies, still produced the oscillation between excitement and terror that Segal describes — but it provided an institutional context that reduced the strain's most destructive effects.
The fight-or-flight response that Segal observes in the AI-affected professional community is, in Mertonian terms, the behavioral expression of role strain. The professionals who "fight" — who lean into the new tools, invest in developing new competencies, and attempt to expand their role to encompass both old and new expectations — are adopting the role hierarchy strategy, prioritizing the new paradigm's demands while drawing on the old paradigm's skills as supporting resources. The professionals who take "flight" — who withdraw, disengage, retreat to environments where the old paradigm still holds — are adopting the role exit strategy, resolving the strain by abandoning the role that produces it.
Neither response is pathological. Both are structurally predictable consequences of occupying a role that is being redefined faster than any individual can adapt. The strain is not produced by individual weakness. It is produced by the structural speed of the paradigm transition — a speed that exceeds the rate at which human identity can reorganize itself around new definitions of competence and value.
Merton's sociology directs attention away from individual coping and toward the institutional conditions that either intensify or alleviate strain. The organization that evaluates its engineers simultaneously by old-paradigm criteria (lines of code written, bugs fixed, implementation speed) and new-paradigm criteria (quality of architectural judgment, effectiveness of AI direction, breadth of cross-domain capability) is producing strain as a structural feature of its evaluation system. The organization that restructures its evaluation to reflect the new paradigm's competencies — that rewards judgment over implementation, questions over answers, breadth over narrow depth — reduces the strain by aligning the role's expectations with the landscape its practitioners actually inhabit.
The resolution of role strain is not psychological. It is institutional. The practitioner cannot think her way out of a structural contradiction. She can only manage its symptoms through individual coping mechanisms that are, in Merton's analysis, inherently partial and inherently costly. The resolution comes when the institutions that define the role — the organizations, the professional communities, the credentialing systems, the educational programs — restructure the role to eliminate or reduce the contradiction between its constituent expectations.
The AI transition will produce role strain for as long as the old paradigm and the new one coexist within the same institutional structures. The strain will be resolved, eventually, by the institutional restructuring that aligns professional roles with the new paradigm's requirements. The question is how long the coexistence lasts and how much human cost it extracts before the restructuring occurs. Merton's sociology cannot answer that question. It can insist that the answer depends on institutional choices rather than individual resilience, and that the institutions that restructure earliest will impose the least cost on the people inside them.
---
In the opening chapters of *Social Theory and Social Structure*, Robert K. Merton introduced a distinction that would become one of the most widely applied analytical tools in sociology. The distinction is between manifest functions — the stated, recognized, intended purposes of a social institution or practice — and latent functions — the unstated, unrecognized, often unintended consequences that the institution or practice also produces. The distinction is deceptively simple. Its implications are profound, because the latent functions of an institution are frequently more important to its persistence than the manifest ones, and their loss, when the institution is disrupted, is frequently more consequential than the disruption of the manifest function — precisely because the latent functions were unrecognized and therefore unprotected.
Merton's canonical example was the Hopi rain dance. The manifest function of the ceremony is to produce rain. By any empirical standard, it fails: the ceremony has no measurable effect on precipitation. An observer who evaluated the rain dance solely by its manifest function would conclude that it is useless — a superstitious relic that persists only through inertia and ignorance. But the observer would be wrong, because the rain dance also serves latent functions that are invisible to anyone evaluating it on its manifest terms. The ceremony brings the community together. It reinforces shared identity. It provides a collective activity that strengthens social bonds. It marks the agricultural calendar, coordinating planting decisions. It connects the present generation to ancestral traditions, providing a sense of continuity and belonging.
These latent functions are, sociologically, more important than the manifest one. The rain dance persists not because it produces rain but because it produces social cohesion. And if the ceremony were eliminated — by a well-meaning reformer who observed that it does not, in fact, produce rain — the community would lose not the rain but the social cohesion, and the loss would be felt as a diffuse, hard-to-name deterioration in communal life that the reformer could not have predicted because he was evaluating the institution by the wrong function.
The application to the AI displacement is immediate, and it illuminates the dimension of the transition that is most consequential and least visible.
Professional expertise, as it existed before the AI threshold, served manifest functions that are easily enumerated and measured. The lawyer's expertise produced legal briefs. The developer's expertise produced working code. The physician's expertise produced diagnoses. The architect's expertise produced building plans. Each manifest function is specifiable, evaluable, and — crucially — replicable by AI with increasing competence. The brief is drafted. The code compiles. The differential diagnosis is generated. The building is modeled. AI addresses the manifest functions of expertise with a facility that improves with each generation of the technology.
But expertise also served latent functions that its practitioners rarely articulated, that economic analysis almost never measured, and that the AI discourse has been systematically unable to see. These latent functions are more important to human flourishing than the manifest ones, and their erosion is the most consequential — and most invisible — cost of the AI transition.
The first latent function of expertise is identity. The professional's sense of self is organized around her competence. The developer is not merely a person who writes code. She is a person whose self-concept, whose sense of where she stands in the world, whose answer to the question "What do you do?" — which is, in contemporary culture, nearly synonymous with "Who are you?" — is organized around the capacity to write code. The lawyer is not merely a person who drafts briefs. The physician is not merely a person who diagnoses illness. Each is a person whose identity is constituted, in significant part, by the exercise of expertise.
When AI handles the manifest function — when the code is written, the brief is drafted, the diagnosis is generated — the practitioner retains the output but loses the identity. She is still productive. The dashboard metrics look green. But the specific thing she did that made her her — the thing that organized her professional self-concept, that provided an answer to the existential question of who she is and what she contributes — has been delegated to a machine. The output is preserved. The meaning is not.
Segal describes this as the elegist's grief — the quiet mourning of "something they could not quite articulate." Merton's framework supplies the articulation. The elegists are mourning a latent function: the identity-constituting dimension of professional practice that is invisible to any evaluation focused on output. Productivity metrics cannot measure identity. Revenue figures cannot capture meaning. The discourse that evaluates the AI transition by its effects on manifest functions — more output, faster, cheaper — is systematically blind to the erosion of latent functions that the manifest-function metrics cannot see.
The second latent function is community. The shared experience of mastering a difficult discipline creates bonds between practitioners that function as social structure. Developers who learned the same languages, struggled with the same frameworks, debugged the same categories of error, share a common experiential vocabulary that is the basis of professional community. They can communicate in shorthand. They understand each other's frustrations. They recognize each other's achievements because they know, from personal experience, how hard those achievements were.
When AI eliminates the shared struggle, it eliminates the experiential basis of the community. Practitioners who use different AI tools, who describe their work in different natural-language terms, who solve different categories of problems because the old categorization has been rendered obsolete — these practitioners may share an industry but they do not share the specific experiential bond that professional struggle produces. The latent function of community, built on shared difficulty, erodes when the difficulty is eliminated.
The third latent function is status. Years of training signal commitment, and the signal is legible to the market and to peers. The developer with twenty years of experience commands respect not only because her skills are superior — though they may be — but because the duration and difficulty of her training communicate something about her character: her persistence, her capacity for sustained effort, her willingness to invest years in mastery. The training is a signal in the economic sense, and the signal conveys information about the practitioner that transcends the specific skills the training produced.
When AI makes it possible for a junior practitioner to produce output comparable to a senior one, the signal is disrupted. The market can no longer use training duration and difficulty as a reliable indicator of capability, because capability is now a function of tool access and judgment rather than accumulated implementation skill. The senior practitioner's status, which was built partly on the signal value of her training, erodes — not because her underlying qualities (persistence, discipline, judgment) have diminished, but because the signal mechanism that communicated those qualities has been disrupted.
The fourth latent function is meaning. The relationship between the practitioner and the work — the specific intimacy of understanding a system she built by hand, the satisfaction of solving a problem through sustained struggle, the quiet pride of craft — is a source of existential meaning that transcends productivity. The craftsman who builds a table by hand does not value the table only as an object. He values it as an expression of his agency, his skill, his relationship with the material. The table is evidence that he can make things, that his actions have consequences in the physical world, that he is not merely a consumer of experience but a producer of it.
When AI handles the implementation, the practitioner loses not the product but the process — and the meaning lived in the process, not the product. The code that works, the brief that is filed, the diagnosis that is correct — these are products. The satisfaction of having produced them through one's own struggle is a process. The two are often confused because, before AI, they were inseparable: the product could not exist without the process. AI separates them, and in the separation, the meaning that lived in the process is left behind while the product is carried forward.
This is what Segal's engineer meant when he said he felt like a master calligrapher watching the printing press arrive. The calligrapher was not mourning the letters. The printing press would produce more letters, better distributed, more accessible. The calligrapher was mourning the relationship between his hand and the page — a relationship that was constitutive of his identity, embedded in his community, legible as a signal of his status, and experienced as a source of meaning.
The printing press served the manifest function (producing letters) more efficiently than the calligrapher. It could not serve the latent functions (identity, community, status, meaning) at all. And because the manifest function was the stated purpose of the institution, the reformers who celebrated the press's efficiency were blind to the latent functions that the calligrapher lost.
Merton's framework does not argue that latent functions should be preserved at any cost. It argues that they should be recognized — that institutional change should be evaluated not only by its effects on the manifest function but by its effects on the full range of functions, manifest and latent, that the institution serves. The reformer who eliminates the rain dance because it does not produce rain is making the specific error that Merton's framework is designed to prevent: evaluating an institution by its manifest function alone and therefore missing the latent functions whose loss will produce consequences the reformer cannot predict.
The AI discourse is making the same error at civilizational scale. Every evaluation of the AI transition that measures productivity, output quality, cost reduction, and efficiency gain is evaluating the transition by its manifest functions. Every such evaluation is systematically blind to the latent functions — identity, community, status, meaning — whose erosion is the most consequential human cost of the transition.
The cost is invisible because latent functions are, by definition, not what the institution claims to be doing. No professional says, "I write code in order to have an identity." No lawyer says, "I draft briefs in order to belong to a community." The latent functions are served through the manifest function, not instead of it, and they become visible only when the manifest function is disrupted and the latent functions — suddenly unserved — produce a diffuse, hard-to-name deterioration in human flourishing that the productivity metrics cannot detect.
The Berkeley researchers measured burnout, reduced empathy, diminished satisfaction. These are the symptomatic expressions of latent-function loss. The practitioners are producing more output than ever. The dashboards are green. And the human beings behind the dashboards are experiencing what Merton's framework predicts: the disorientation that follows when an institution's latent functions are eliminated because a reformer was measuring only the manifest ones.
The prescription that follows from Merton's analysis is not to resist the AI transition. It is to ensure that the latent functions currently served by professional practice are recognized, valued, and — where possible — served by alternative institutional arrangements when the old ones are disrupted. If expertise provided identity, what will provide identity when implementation is delegated to machines? If shared struggle provided community, what will provide community when the struggle is eliminated? If training duration provided status, what will signal commitment and character when training is compressed? If the process of creation provided meaning, what will provide meaning when the product arrives without the process?
These are not rhetorical questions. They are design problems. And they are design problems that the AI community — focused, as it structurally must be, on the manifest function of producing capable tools — is not equipped to solve alone. The solutions will come from the institutions that surround the tools: the organizations that employ practitioners, the educational systems that train them, the professional communities that credential them, the cultural narratives that give their work meaning.
The manifest functions of expertise will be served by AI with increasing competence. The latent functions will be served by human institutions — or they will not be served at all. The choice is institutional, not technological. And the institutions that recognize what is at stake — that see the latent functions as clearly as the manifest ones — will build structures that preserve what the technology alone cannot.
In 1942, with the world at war and the relationship between science and state power assuming a new and terrifying intimacy, Robert K. Merton published an essay that would define the sociology of science for the next half-century. "The Normative Structure of Science" identified four institutional imperatives — four norms — that Merton argued constituted the ethos of modern science: universalism, communism (later softened to "communalism" to avoid Cold War misreadings), disinterestedness, and organized skepticism. The norms were subsequently given the acronym CUDOS by the sociologist John Ziman, and they became the foundational framework for understanding how scientific communities govern themselves.
The norms are not descriptions of how scientists actually behave. Merton was too sophisticated a sociologist to confuse the normative with the empirical. Scientists are ambitious, competitive, jealous, petty, and capable of every variety of self-interested behavior that characterizes human beings in any institutional setting. The norms are institutional ideals — the standards against which the scientific community evaluates conduct and corrects deviations. They function not by eliminating bad behavior but by providing the community with a shared vocabulary for identifying bad behavior, sanctioning it, and distinguishing it from the behavior the community aspires to.
The distinction between norms and behavior is critical, because it is the distinction between a community that has standards and a community that meets them. No community fully meets its own standards. The value of the standards lies in the gap between aspiration and practice — in the community's capacity to recognize the gap and exert pressure to close it. A community without norms is not a community freed from hypocrisy. It is a community without the tools to identify what it should be doing differently.
Universalism holds that knowledge claims are evaluated by impersonal criteria — by the logic of the argument and the quality of the evidence — not by the social characteristics of the claimant. The race, nationality, religion, gender, or institutional affiliation of the scientist is irrelevant to the truth value of the claim. Universalism does not assert that scientists are actually impartial. It asserts that impartiality is the standard against which their judgments should be measured, and that departures from impartiality — the rejection of a finding because it comes from an unfavored institution, the acceptance of a finding because it comes from a prestigious one — are recognized as violations.
Communalism holds that scientific findings are the product of social collaboration and belong to the community rather than to the individual who produced them. Secrecy is the antithesis of the norm. Scientific knowledge is to be shared, published, made available for scrutiny and use by all qualified practitioners. The individual scientist may receive recognition for the discovery — indeed, the priority system that Merton analyzed at length depends on individual attribution — but the knowledge itself is communal property. The patent, the trade secret, the proprietary dataset — each represents a departure from the communal norm, a privatization of what the norm holds should be shared.
Disinterestedness does not require that scientists lack personal interests. It requires that the institutional structure of science be organized to minimize the influence of personal interest on the evaluation of knowledge claims. Peer review, replication, the public character of scientific argument — each is a structural mechanism designed to ensure that the community's conclusions are determined by evidence rather than by the interests of the individuals producing the evidence.
Organized skepticism holds that no claim is exempt from critical scrutiny. The authority of the claimant, the prestige of the institution, the political utility of the conclusion — none of these factors immunizes a claim from the community's obligation to test, challenge, and, where warranted, reject it. Organized skepticism is what distinguishes science from dogma: the institutionalized commitment to doubting one's own conclusions as rigorously as one doubts the conclusions of others.
These four norms, taken together, constitute what Merton regarded as the specific contribution of the scientific community to human civilization: not the content of any particular discovery, but the institutional structure that makes cumulative, self-correcting knowledge possible. The norms are the dams that the scientific community built around the river of inquiry — structures designed to redirect the flow of knowledge production toward cumulative understanding rather than private advantage, toward self-correction rather than dogmatic persistence, toward shared benefit rather than concentrated exploitation.
The AI community is in the process of building — or failing to build — its own normative structure, and the stakes of that construction are higher than for any previous knowledge community, because the technology's reach is broader, its deployment faster, and its consequences more immediate than those of any earlier scientific or technological development. The norms that the AI community adopts — the standards against which it evaluates its own conduct — will determine the character of the most powerful technology in human history. And the norms are being formed now, in real time, through the accumulation of institutional decisions, competitive pressures, regulatory responses, and cultural narratives that are shaping the community's self-understanding.
The tension between communalism and privatization is the most visible normative contest in the AI community, and it maps onto Merton's framework with diagnostic precision. The communal norm — that knowledge should be shared, published, made available for scrutiny and use — has a genuine and significant expression in the open-source AI movement. Meta's Llama models, Mistral's open-weight releases, and the broader ecosystem of freely available tools and training techniques represent a real commitment to the principle that AI capability should be communal rather than proprietary. The arguments for openness are Mertonian at their core: that the training data was communally produced (by billions of people writing on the internet), that the foundational research was communally funded (through public university systems and government grants), and that the technology's consequences are too broad to be governed by private interests alone.
The counter-pressure is equally real. The most capable frontier models — the ones at the cutting edge, the ones that define the strategic research site — remain proprietary, developed behind closed doors by organizations that combine communal rhetoric with competitive practice. The justification for closure is partly commercial (the models are expensive to train and the organizations need revenue to sustain operations) and partly safety-related (unrestricted access to the most capable models carries risks that the developers are not confident the open-source ecosystem can manage). But the effect, regardless of justification, is the privatization of knowledge that the communal norm holds should be shared.
Scholars examining the intersection of Mertonian norms and contemporary research practice have found that the tension is structural, not incidental. One study in *Accountability in Research* concluded that "although adherence to Mertonian values is desired and promoted in academia, such adherence can cause friction with the normative structures and practices" of the platform economy. The Mertonian ideal of communal knowledge production exists in an institutional environment that rewards proprietary control, and the friction between the ideal and the environment is producing a normative crisis whose resolution will determine the character of AI development for decades.
The tension between organized skepticism and accelerationist urgency is equally consequential, and it is playing out in real time in the AI safety debate. Organized skepticism — the commitment to subjecting every claim to critical scrutiny before accepting it — is the norm that should, in principle, govern the pace of AI deployment. No system should be released until its capabilities and risks have been thoroughly evaluated. No claim about a model's safety should be accepted without rigorous testing. No deployment should proceed without adequate understanding of its consequences.
In practice, the competitive environment compresses the timeline for skeptical evaluation below the threshold at which thorough scrutiny is possible. A scholar invoking Merton directly in *Interdisciplinary Science Reviews* warned against "people being so fooled by the hype that they do become slaves, not to incredibly intelligent computers but to stupid computers that are taken to be unquestionable authorities." The warning captures the inversion of organized skepticism: a community that should be institutionally committed to doubt is, under competitive pressure, institutionally committed to credulity — to accepting the claims of its own products at face value because the market rewards confidence and penalizes caution.
The disinterestedness norm is under pressure from the sheer scale of the financial interests involved. The AI industry is capitalized in the trillions. The personal fortunes of its leading practitioners are measured in billions. The organizations that produce AI research are, simultaneously, the organizations that commercialize it. The structural separation between the production of knowledge and the exploitation of knowledge — the separation that the disinterestedness norm is designed to protect — has collapsed in the AI industry more thoroughly than in any previous knowledge-producing community. The researcher who evaluates the safety of a model is employed by the organization that profits from the model's deployment. The peer who reviews the safety claim is often employed by a competing organization with its own commercial interests. The structural mechanisms that Merton identified as the guarantors of disinterested evaluation — mechanisms that depend on the independence of the evaluator from the consequences of the evaluation — are absent.
The universalism norm faces its own specific challenge. AI development is concentrated in a small number of countries, a small number of organizations, and a small number of demographic groups. The practitioner base is disproportionately male, disproportionately educated at a handful of elite institutions, and disproportionately located in a handful of geographic clusters. The knowledge claims that shape the technology's development are evaluated not by the universal criteria that universalism demands but by the specific criteria of a community whose composition is far from universal. The biases embedded in the training data — biases that reflect the historical dominance of certain languages, certain cultures, certain perspectives — are, in Mertonian terms, violations of the universalism norm: evaluations of knowledge that are shaped by the social characteristics of the community rather than by impersonal criteria.
Some scholars have argued that the Mertonian norms are not merely under pressure but have been structurally inverted in the platform economy. A framework termed DECAY — differentialism, egoism, capitalism, and advocacy — has been proposed as a description of the norms that actually govern contemporary knowledge production, in contrast to the norms that Merton identified as aspirational. Where Merton's CUDOS described what the scientific community aspires to, DECAY describes what the platform economy incentivizes: the differentiation of knowledge claims by the status of their claimants, the pursuit of individual advantage over communal benefit, the subordination of knowledge production to commercial interest, and the replacement of organized skepticism with organized advocacy for pre-determined conclusions.
The DECAY framework may be overstated as a description of the AI community's actual norms — there are genuine and significant commitments to safety, openness, and rigorous evaluation within the community, commitments that the DECAY framework risks erasing. But the framework captures a real structural dynamic: the pressure that commercial incentives exert on communal aspirations, the tension between the knowledge community's normative self-understanding and the institutional environment in which it operates.
Merton's contribution to this debate is not a prescription for which norms the AI community should adopt. It is the insistence that the question is normative — about values, not merely about capabilities — and that the norms a community adopts are among the most consequential structures it builds. The dams that redirect the river of AI development toward broadly distributed benefit or away from it are not primarily technological. They are normative. They are the standards against which the community evaluates its own conduct, the expectations it holds of its members, the sanctions it imposes on deviations, and the institutional mechanisms it constructs to align behavior with aspiration.
The AI community is building its norms now. Every decision about openness or closure, speed or caution, commercial interest or communal benefit, is a normative decision — a choice about what kind of community the AI community aspires to be. The norms will not be perfect. No community's norms are. But the community that has norms and falls short of them is in a fundamentally different position from the community that has no norms to fall short of. The former has the tools for self-correction. The latter has only the market, which optimizes for outcomes that may have nothing to do with the community's aspirations or the broader society's needs.
Merton documented what happens when a knowledge community's norms are strong enough to function as effective dams: cumulative, self-correcting knowledge that benefits the broader society. He also documented what happens when the norms are weak, captured, or absent: the concentration of benefit among those who control the production of knowledge, at the expense of those who depend on it.
The AI community's norms are not yet set. They are being contested, negotiated, and formed through every institutional decision, every regulatory debate, every competitive gambit, every open-source release, every safety evaluation that is conducted or deferred. The outcome of these contests will determine not what AI can do — the technology will continue to advance regardless — but what the community that produces it will demand of itself, and therefore what the technology will actually do in the world.
---
Sociology does not make predictions. This is a limitation that sociologists have spent considerable energy either defending or lamenting, depending on their temperament and their audience. The physicist can predict the trajectory of a projectile. The economist can, under constrained conditions, predict the movement of a price. The sociologist can do neither, because the systems sociology studies — human communities, institutions, professional cultures, the dense and reflexive networks of belief and behavior that constitute social life — are characterized by a specific kind of complexity that resists prediction: they are composed of actors who can become aware of the predictions being made about them and alter their behavior in response.
This reflexivity is not a defect of sociology. It is its subject. Merton's entire intellectual career was organized around the insight that beliefs about social reality are not merely descriptions of that reality but inputs into it — that the stories people tell about the world they inhabit shape the world they inhabit, in ways that make the relationship between description and reality fundamentally different from the relationship between a physicist's equation and the trajectory of a ball. The self-fulfilling prophecy is the formalization of this insight. The Thomas theorem — "If men define situations as real, they are real in their consequences" — is its philosophical foundation.
The AI transition is the most reflexive social event in living memory. Every description of the transition — every prediction, every analysis, every narrative about what AI will do to work, to expertise, to professional identity, to human flourishing — is simultaneously a description and an intervention. The prediction that AI will destroy expertise motivates institutional behavior that either preserves or destroys expertise. The prediction that AI will democratize capability motivates institutional behavior that either democratizes or concentrates capability. The prediction that AI will produce abundance motivates institutional behavior that either distributes or hoards the abundance.
The stories are not neutral. They are causal.
This means that a book about the sociology of the AI transition — this book — is itself an intervention in the social system it describes. The concepts it introduces — multiple discovery, strategic research sites, the self-fulfilling prophecy, the Matthew Effect, manifest and latent functions, role strain, normative structure — are not merely analytical tools for understanding what is happening. They are cognitive resources that, once available to the actors within the system, alter what those actors can see, what they can name, and therefore what they can address.
A practitioner who has no concept of role strain experiences the disorientation of the AI transition as a personal failure — a failure of resilience, adaptability, or competence. A practitioner who possesses the concept of role strain can see the disorientation as a structural feature of a position that imposes contradictory demands, and can direct her response at the structure rather than at herself. The concept does not eliminate the strain. It relocates the response from the individual to the institution, which is where Merton's sociology insists the response must be directed if it is to be effective.
A leader who has no concept of latent functions evaluates the AI transition by its effects on productivity, output quality, and cost — the manifest functions — and declares the transition a success when the dashboards are green. A leader who possesses the concept of latent functions can see that the dashboards may be green while the human beings behind them are experiencing the erosion of identity, community, status, and meaning that the dashboards cannot measure. The concept does not solve the problem. It makes the problem visible, which is the prerequisite for solving it.
A policymaker who has no concept of the Matthew Effect celebrates the democratization of AI capability and assumes that broader access to tools will produce broader distribution of benefits. A policymaker who possesses the concept can see that access and benefit are distributed by different mechanisms, that the mechanisms that distribute benefit are shaped by structures of existing inequality, and that the gains from broader access will flow disproportionately to those who entered the system with the most complementary assets. The concept does not prescribe a specific policy intervention. It insists that intervention is necessary if the distribution of benefits is to match the distribution of access.
A community leader who has no concept of normative structure watches the AI industry make decisions about openness and closure, speed and caution, commercial interest and communal benefit, and has no framework for evaluating those decisions beyond their market outcomes. A community leader who possesses the concept can see that each decision is a normative choice — a choice about what the community aspires to be — and can evaluate the decisions against the norms that Merton identified as the prerequisites for cumulative, self-correcting, broadly beneficial knowledge production.
These concepts are, in the language of *The Orange Pill*, dams. They are structures built in the river of the AI transition — not to stop the flow, which cannot be stopped, but to redirect it toward channels that produce broadly distributed benefit rather than concentrated advantage. The dams are cognitive before they are institutional: you must be able to see the river's dynamics before you can build the structures that redirect them. Merton's sociology provides the vision. The institutional construction is the work that follows.
But — and this is the qualification that Merton's intellectual honesty demands — the vision is not sufficient. Sociology can tell us that the AI transition follows patterns visible in previous transitions. It can tell us that the distribution of gains and losses will follow structures of existing inequality unless specific institutional interventions redirect them. It can tell us that unintended consequences will emerge, because they always emerge when purposive action propagates through complex social systems. It can tell us that self-fulfilling prophecies will shape institutional responses, because beliefs about social reality are inputs into social reality. It can tell us that the Matthew Effect will concentrate benefits, because cumulative advantage is a structural feature of every social system that has been studied. It can tell us that latent functions will be lost, because they are invisible to the metrics by which transitions are evaluated. It can tell us that role strain will produce distress, because contradictory role demands always produce distress. It can tell us that the normative structure of the AI community will determine the character of the technology's social impact, because norms determine what a community demands of itself.
Sociology cannot tell us whether the net effect will be positive or negative, because that determination depends on choices that have not yet been made by institutions that are still being built by people who are still arguing about what to build. The structures are not yet set. The norms are not yet established. The dams are not yet complete — or rather, they are being built right now, in real time, by every organization that adopts an AI policy, every school that redesigns a curriculum, every government that passes a regulation, every parent who sets a boundary, every builder who chooses to keep the team instead of cutting it, every professional community that redefines what competence means.
Merton's deepest insight — the one that runs beneath all his specific concepts, connecting the self-fulfilling prophecy to the Matthew Effect to the normative structure of science to the analysis of unintended consequences — is that social structures are human creations that take on a life of their own. People build institutions, and then the institutions shape people. The relationship is recursive. The structures we build to manage the AI transition will, once built, shape the possibilities available to the next generation of people who inhabit them. Build the wrong structures, and the next generation inherits constraints that no individual effort can overcome. Build the right ones, and the next generation inherits possibilities that no individual effort could have created alone.
The structures have not been decided. They are being decided now. By the choices of people who may or may not be reading this book, who may or may not possess the sociological concepts that would make the choices more informed, who may or may not have the institutional position to translate their choices into structural reality. The sociology of the AI transition is, in the end, the sociology of choices being made in real time, under conditions of uncertainty, with consequences that propagate through networks of interdependence in ways that no actor can fully predict, for people who are not yet born and whose lives will be shaped by structures they had no role in building.
Merton would not have called that conclusion optimistic. He would not have called it pessimistic. He would have called it sociological: a description of the actual conditions under which consequential decisions are made in complex social systems. The conditions are never ideal. The information is never complete. The consequences are never fully predictable. The actors are never fully disinterested.
What sociology can offer — what Merton's career was devoted to providing — is not the ability to predict the outcome but the ability to see the structures through which the outcome will be produced. To name the dynamics that are otherwise invisible. To make the self-fulfilling prophecies visible before they fulfill themselves. To identify the latent functions before they are lost. To map the Matthew Effect before it compounds beyond correction. To articulate the norms before the community's normative structure solidifies around defaults that serve the powerful at the expense of everyone else.
The seeing is the intervention. The naming is the first dam. What is built after the naming — what institutional structures are constructed, what norms are adopted, what policies are implemented, what choices are made — depends on the people who now possess the concepts and must decide what to do with them.
Merton spent his career insisting that sociology's job is to tell the truth about social structures rather than to comfort the people inside them. The truth about the AI transition is that its trajectory is being determined right now, by structures that are being built right now, according to norms that are being contested right now. The truth is uncomfortable, as sociological truths tend to be, because it places the weight of consequence on choices that are being made under conditions of radical uncertainty by actors who cannot know the full implications of what they are choosing.
The alternative to choosing under uncertainty is not choosing — which is itself a choice, and one whose consequences are determined by the same structural dynamics that govern every other choice. The actor who declines to participate in building the norms, designing the institutions, constructing the dams, does not thereby escape the river. She merely ensures that the structures are built by those who remained in the room.
The structures will be built. They are being built. The question that sociology poses — the question that Merton's entire body of work exists to sharpen — is not whether the structures will determine the outcome. They will. The question is whether the people building them can see clearly enough to build structures worthy of what is at stake.
---
The pattern I could not stop seeing was the bank run.
It is one thing to read Merton's description of the self-fulfilling prophecy as an intellectual concept. It is another to realize you are watching it unfold in real time across an entire industry. The bank was solvent. The rumor started. The depositors withdrew. The bank collapsed — not from any structural weakness, but from the behavior the rumor produced.
I watched this happen with expertise. The belief that human skill was becoming obsolete circulated through every Slack channel, every conference hallway, every dinner conversation I had in the winter of 2025. And I watched the belief begin to create the reality it described. Companies cutting training budgets because they believed AI would substitute for human development. Engineers stopping their own skill-building because they believed the skills were losing value. Universities questioning whether to maintain computer science programs because the market seemed to be signaling that implementation skill was a declining asset.
The bank was solvent. The expertise was real. The value was there. And the belief that the value was disappearing was beginning to make it disappear — not because the technology had replaced it, but because the institutional response to the belief was starving it of investment.
What Merton gave me, across these ten chapters, was not comfort. It was sight. The ability to see that the dynamics shaping the AI transition are not mysterious, not unprecedented, not beyond analysis. They are structural patterns that have operated across every major technological transition in recorded history. Cumulative advantage concentrating gains among the already-advantaged. Latent functions — identity, community, meaning — eroding invisibly while manifest functions are served more efficiently than ever. Role strain fracturing professionals who are caught between paradigms. Norms being formed, right now, that will determine what the AI community demands of itself for decades.
I wrote in *The Orange Pill* that AI is an amplifier, and that the most powerful question is whether you are worth amplifying. Merton showed me the other half of that equation: the amplifier operates through social structures, and the structures determine who gets amplified and who gets swept aside. Individual worthiness matters. But it operates within institutional environments that either multiply its effects or nullify them, and those environments are human constructions that could be constructed differently.
The dam-building I called for — the institutional structures that redirect the river toward broadly distributed flourishing — is not an optional appendix to the technological transition. It is the transition. The technology determines what is possible. The structures determine what actually happens. And the gap between the two is where every consequential choice lives.
Merton never wrote about artificial intelligence. He died in 2003, before the modern AI revolution began. But his frameworks describe the AI transition with a precision that startles me every time I return to them. Because he was studying the permanent features of how knowledge communities function, his insights travel across technological eras. The specifics change. The dynamics are structural.
The prophecy will fulfill itself, one way or the other. The gains will concentrate or distribute. The latent functions will be preserved or lost. The norms will hold or erode. None of this is determined by the technology. All of it is determined by us — by the structures we build, the norms we adopt, the choices we make in conditions of radical uncertainty for people we will never meet.
Merton's gift is the insistence that we can see the structures clearly enough to build them well. Not perfectly. Well enough. The rest is maintenance — the daily work of tending the dam against the river's constant pressure.
That work starts now.
-- Edo Segal
The AI revolution is not being shaped by the technology. It is being shaped by the invisible social structures through which the technology flows -- structures of credit, advantage, belief, and institutional norm that determine who benefits, who is displaced, and whose future gets built. Robert K. Merton spent sixty years mapping exactly these forces in scientific communities. His findings have never been more urgent.
This book applies Merton's foundational sociology -- the self-fulfilling prophecy, the Matthew Effect, manifest and latent functions, the normative structure of knowledge communities -- to the AI transition unfolding right now. It reveals how false beliefs about obsolescence can create real obsolescence, how the gains from democratized tools concentrate among the already-advantaged, and how the unstated purposes of professional expertise are being destroyed by reformers measuring only the stated ones.
Part of the Orange Pill series exploring AI's transformation of human capability through the world's most essential thinkers, this volume offers the structural sight that the technology discourse alone cannot provide. The river is flowing. The terrain decides where it goes.

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Robert K. Merton — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →