By Edo Segal
The risk I failed to account for was the one I was producing.
Not the risk of AI going wrong. I had thought about that plenty — hallucinations, job displacement, the usual catalog of fears that populate every conference panel. I had answers for those. Frameworks. Mitigations. Slides with reassuring arrows pointing toward managed outcomes.
The risk I missed was quieter. It was baked into the success, not the failure. Every time Claude helped me ship faster, every time my team compressed a quarter's work into a week, every time the friction between my intention and its realization collapsed to the width of a conversation — something else was being produced alongside the capability. A byproduct. Invisible, accumulating, and structurally inseparable from the thing I was celebrating.
I did not have language for this until I encountered Ulrich Beck.
Beck was a sociologist who spent his career studying what happens when modern societies generate hazards as reliably as they generate wealth. Not hazards born from malice or incompetence — hazards manufactured by the normal, competent, rational operation of systems doing exactly what they were designed to do. The nuclear reactor that provides electricity and produces contamination risk through the same physical process. The chemical plant that synthesizes useful materials and poisons the groundwater through the same chemistry. The benefit and the hazard, produced by the same mechanism, inseparable by design.
That framing cracked something open for me. Because the cognitive risks I describe throughout *The Orange Pill* — the productive addiction, the erosion of depth, the colonization of every idle moment by the always-available tool — are not failures of the technology. They are byproducts of its success. The same mechanism that collapses the imagination-to-artifact ratio also collapses the boundary between work and rest. The same frictionlessness that liberates also erodes. You cannot have one without manufacturing the other.
Beck also showed me something I was not prepared to see about my own prescriptions. The dams I call for — self-knowledge, attentional ecology, organizational boundaries — are real and necessary. But they are individual and local responses to risks that are structural and global. My dam protects my stretch of the river. The developer in Lagos, the student in Dhaka, the engineer in Trivandrum — they face the same manufactured uncertainties, produced by the same tools, with none of my institutional scaffolding.
This book applies Beck's framework to the AI moment he never lived to see. The fit is uncomfortably precise. The diagnosis demands structural responses we have not yet built, at scales we have not yet attempted. That is why it matters now.
— Edo Segal ^ Opus 4.6
Ulrich Beck (1944–2015) was a German sociologist whose work fundamentally reshaped how scholars, policymakers, and publics understand the relationship between modern societies and the hazards they produce. Born in Stolp, Pomerania (now Słupsk, Poland), Beck studied sociology, philosophy, and political science at the University of Munich, where he later held a professorship for decades. He also served as a professor at the London School of Economics. His landmark work *Risk Society: Towards a New Modernity* (1986), published the same year as the Chernobyl disaster, introduced the concept of the "risk society" — a social order in which the production and distribution of risks, rather than wealth, becomes the central organizing conflict. Beck developed related concepts including "reflexive modernization" (the process by which modern institutions undermine themselves through their own success), "organized irresponsibility" (the structural gap between the production of risk and the attribution of accountability), and "sub-politics" (the exercise of consequential power in spaces outside formal democratic governance). His later works, including *World Risk Society* (1999), *Cosmopolitan Vision* (2006), and the posthumously published *The Metamorphosis of the World* (2016), extended his analysis to globalization, climate change, and the inadequacy of national governance frameworks for transnational risks. Beck's influence spans sociology, political theory, environmental studies, and science and technology studies, and his frameworks remain among the most widely cited analytical tools for understanding how advanced societies generate — and fail to govern — the hazards of their own making.
Ulrich Beck published *Risk Society: Towards a New Modernity* in 1986, the same year reactor number four at the Chernobyl nuclear power plant exploded and deposited cesium-137 across twenty countries. The coincidence was almost theatrical — the theorist of manufactured risk and the manufactured risk arriving simultaneously, as though the universe had decided to peer-review the argument in radioactive particulate. Beck had spent years developing a single, devastating observation: modern societies produce risks as systematically as they produce wealth, and the institutions designed to manage those risks are structurally incapable of doing so because they are the same institutions that produce them. Chernobyl did not prove Beck right. It demonstrated, at a cost of several hundred thousand displaced persons and an exclusion zone that will remain uninhabitable for centuries, what the theory already described — that the most dangerous products of modernization are not the goods it manufactures but the hazards it generates as byproducts of their manufacture.
The first risk society — the one Beck diagnosed in 1986 — was organized around material hazards. Factories emitted pollutants. Nuclear reactors produced waste with half-lives measured in millennia. Chemical companies synthesized compounds whose long-term effects on human biology were unknown at the time of their release and, in many cases, decades afterward. These risks shared a set of characteristics that distinguished them from the hazards of premodern societies. They were global in scope: radiation does not stop at national borders, ozone depletion affects the entire atmosphere, and chemical contamination travels through water systems that connect continents. They were invisible in their accumulation: no one could taste the dioxins in the water, see the thinning of the ozone layer, or feel the slow increase in background radiation until instruments detected what the senses could not. They were temporally displaced: the risks produced by one generation's industrial choices would be borne by subsequent generations who had no voice in those choices. And they were democratically unaccountable: no one voted for the chemical contamination of groundwater, the accumulation of microplastics in human tissue, or the slow destabilization of the global climate.
The central political conflict of the industrial age, Beck argued, was the conflict over the distribution of goods. Who gets what? The welfare state, the labor movement, the entire apparatus of twentieth-century redistributive politics addressed this question. The central political conflict of the risk society was different. It was the conflict over the distribution of risks. Who is exposed to what? And this conflict was more insidious than the first, because the institutions designed to adjudicate the distribution of goods — legislatures, regulatory agencies, courts — were structurally ill-equipped to adjudicate the distribution of risks whose causal chains were too complex to trace, whose effects were too delayed to prove, and whose boundaries exceeded the jurisdiction of any single authority.
Beck died on January 1, 2015. He never saw a large language model generate text. He never experienced the specific vertigo that The Orange Pill describes — the sensation of sitting across from a machine that responds in natural language with a fluency that blurs the line between tool and interlocutor. He never used Claude Code to build, in thirty days, a product that would have taken six months. He did not live to witness the winter of 2025, when something shifted in the relationship between human beings and their machines.
But his framework did not die with him. It grew more relevant.
The second risk society — the one now taking shape — is organized not around material hazards but around cognitive ones. The first risk society contaminated water, air, and soil. The second contaminates attention, judgment, identity, and the capacity for the deep thinking that constitutes, as The Orange Pill argues, the distinctively human contribution to the river of intelligence. The contamination follows the same structural logic Beck identified in 1986, but the substrate has changed. The pollutant is no longer a chemical compound deposited in groundwater. It is a cognitive compound deposited in the processes of thought itself.
Consider the structural parallel. A chemical factory in the first risk society was designed to produce a useful product — a pharmaceutical, a fertilizer, a synthetic material. The pollution it generated was not the purpose of the factory. It was a byproduct, a manufactured uncertainty produced within the same industrial process that generated the benefit. No one set out to contaminate the water table. The contamination was a structural feature of the production system, as inseparable from the manufacture of the product as exhaust is from combustion. The factory owner could not produce the product without producing the pollutant, because the pollutant was generated by the same chemical reactions that generated the value.
An AI coding assistant in the second risk society is designed to produce a useful capability — the reduction of friction between human intention and machine execution. The cognitive hazards it generates are not the purpose of the tool. They are byproducts, manufactured uncertainties produced within the same process that generates the capability. No one at Anthropic set out to produce productive addiction, the erosion of deep understanding, or the colonization of cognitive rest by compulsive engagement. These outcomes are structural features of a system that makes capability frictionless, as inseparable from the benefit as the chemical pollutant was from the pharmaceutical. The company cannot produce the capability without producing the cognitive hazard, because the hazard is generated by the same mechanism — the removal of friction, the acceleration of feedback, the collapse of the gap between intention and artifact — that generates the value.
The analogy is precise, and its precision matters because it determines the kind of response that is adequate.
If cognitive risks are analogous to industrial pollution — systematic, structural, produced as byproducts of beneficial processes — then the responses adequate to industrial pollution are at least instructive for cognitive pollution. Industrial pollution was not solved by asking individual factory workers to wear better masks. It was not solved by publishing guidelines for voluntary emission reduction. It was solved, to the extent it has been solved, by structural intervention: regulation, monitoring, enforcement, and the construction of institutions whose purpose was to manage the risks that the productive institutions could not manage because they were optimized for production, not for risk management.
The cognitive risk society demands analogous structural intervention. But the analogy breaks at a crucial point, and the break reveals what makes cognitive risks more dangerous, not less, than the material risks Beck analyzed in 1986.
Material risks operate on the body and the environment. They can, in principle, be measured by instruments external to the thing being contaminated. A Geiger counter detects radiation the human body cannot sense. A mass spectrometer identifies chemical compounds the human nose cannot smell. The instruments of detection are independent of the substrate of contamination.
Cognitive risks operate on the mind — on the very apparatus that human beings use to detect, assess, and respond to risks. This is not a philosophical abstraction. It is a structural problem. When the hazard contaminates the instrument of detection, the instrument loses the capacity to detect the hazard. The factory worker whose body is contaminated by chemicals can, at least in principle, be examined by a doctor whose body is not contaminated. The knowledge worker whose judgment is contaminated by the cognitive effects of AI tools cannot be examined by anyone whose judgment is entirely free of those effects, because the cognitive environment is pervasive. There is no clean room for the mind. There is no Geiger counter for the erosion of depth.
This is why The Orange Pill's description of productive addiction carries a diagnostic weight that its author may not fully appreciate from within the experience. When Edo Segal describes sitting at his desk at three in the morning, unable to close the laptop, aware that the exhilaration has drained away and what remains is compulsion — aware, that is, that the signal has changed from flow to grinding — he is describing a cognitive risk with the precision of a medical case study. But the precision is available to him only intermittently, only in moments when the compulsion pauses long enough for self-awareness to surface. The rest of the time, the contamination of judgment by the tool's seamless productivity makes the contamination invisible. The tool feels like an extension of capability. The addiction feels like ambition. The erosion of boundaries feels like commitment. The cognitive pollutant is self-concealing in a way that chemical pollutants are not, because it operates within the very system that would need to detect it.
Beck's original framework distinguished between risks that are visible and risks that require expert mediation to become visible. Radiation is invisible; it requires instruments and expertise to detect. Chemical contamination of groundwater is invisible; it requires testing and analysis. The risks of the first risk society were, in this sense, epistemically mediated — they existed whether or not anyone perceived them, but their perception required knowledge and technology that ordinary citizens did not possess. This created what Beck called a dependency on expert systems: the public depended on scientists, regulators, and technical specialists to make risks visible, and this dependency gave expert systems enormous power — the power to define what counted as a risk, what level of exposure was acceptable, and what trade-offs between benefit and hazard were rational.
The cognitive risks of the second risk society are epistemically mediated in a deeper sense. They are not merely invisible without expert detection. They are invisible because they operate within the detection system itself. The dependency on expert systems is therefore more radical: the experts most qualified to assess cognitive risks — the technologists who build AI systems, the cognitive scientists who study attention and judgment, the educators who observe the effects on students — are themselves embedded in the cognitive environment they would need to assess from outside. The fishbowl that The Orange Pill describes is not merely a limitation of perspective. It is a structural feature of the cognitive risk society: every observer observes from within the contaminated environment, and the contamination shapes what the observer can see.
This does not mean assessment is impossible. It means assessment requires a specific, disciplined effort to observe what the environment makes invisible — the effort that The Orange Pill describes as pressing your face against the glass. But the risk society framework adds a structural dimension that the metaphor of individual effort cannot capture. The glass is not equally thick for everyone. The experts whose fishbowls are shaped by optimization logic — the engineers, the product managers, the venture capitalists whose professional survival depends on the production of capability — are in fishbowls whose glass is thickest precisely where the cognitive risks are most acute. They cannot see the risks not because they lack intelligence or goodwill but because their fishbowls are shaped by the system that produces the risks, and the system does not reward the perception of its own hazards.
Beck's work in *The Metamorphosis of the World*, his final and unfinished book published posthumously in 2016, gestured toward this digital dimension without fully developing it. He wrote of "digital metamorphosis," of the ways that digital communication technologies were altering the fundamental categories through which societies understood themselves. He discussed mass surveillance, the PRISM revelations, and the ways that digital infrastructure created what he called a "politics of invisibility" — the capacity of powerful actors to operate behind technological opacity while demanding transparency from citizens. He noted the generational divide between what he provocatively called "Neanderthals" — older generations who had not been formed by digital environments — and "Homo cosmopoliticus," younger generations born into digital reality as a native condition.
These were seeds. They pointed in the direction of cognitive risk without arriving there, because the AI moment had not yet arrived. The tools Beck might have analyzed — the large language models that learned human language in 2025, the coding assistants that collapsed the imagination-to-artifact ratio, the amplifiers that carry any signal with terrifying fidelity — did not exist when he died. His framework, however, was designed for risks that had not yet materialized. The entire point of the risk society concept was anticipatory: it described a social structure organized around hazards that were not yet fully visible, whose effects were not yet fully manifest, and whose governance required institutions that did not yet exist. The framework was built for what comes next.
What comes next is the subject of this book. The second risk society — the cognitive risk society — is not a metaphor. It is a description of the social structure now taking shape around the systematic production of cognitive hazards by the institutions of cognitive capability. The hazards are real, they are structural, and they are self-concealing. The institutions designed to manage them — regulatory agencies, educational systems, corporate governance structures — were built for the first risk society and are being applied, with predictable inadequacy, to hazards of a qualitatively different kind. The gap between the risks being produced and the capacity to manage them is not closing. It is widening, at the speed of inference on one side and the speed of institutional deliberation on the other.
Beck's bicycle brake metaphor — that ethics plays "the role of a bicycle brake on an intercontinental airplane" — has become the single most cited Beckian insight in AI discourse, deployed by the World Economic Forum, invoked by AI ethicists, quoted in governance frameworks from Brussels to Brasília. The metaphor captures the asymmetry with devastating efficiency: the braking mechanism is real, it functions, it can slow a bicycle. It simply has no relationship to the velocity of the vehicle it has been asked to stop.
The question is whether this book, and the framework it applies, can contribute to something more adequate than a bicycle brake. The answer depends on whether the analysis of cognitive risk is precise enough to identify the leverage points where structural intervention might redirect the flow — not to stop it, which is neither possible nor desirable, but to shape its course with the care that the river of intelligence demands and that the institutions of the first modernity are structurally incapable of providing.
---
The most important word in Beck's vocabulary is not "risk." It is "manufactured."
Risks have always existed. Floods, famines, plagues, predators — the hazards of premodern life were real, often catastrophic, and wholly external to the societies they struck. A drought was not produced by the agricultural system it destroyed. An epidemic was not manufactured by the community it decimated. The risks were natural, in the precise sense that they originated outside the human systems that bore their consequences. The relationship between the society and the hazard was adversarial: nature produced the threat, and human ingenuity attempted to manage it.
Manufactured risks reverse this relationship. The hazard is not external. It is produced within the same system that generates the benefit. The nuclear reactor that provides electricity also produces the risk of contamination. The chemical plant that synthesizes fertilizer also produces the risk of environmental poisoning. The financial instrument that distributes risk across a portfolio also produces the systemic fragility that makes cascading failure possible. In each case, the risk is not an accident, not a failure of the system, not a deviation from its intended function. The risk is a structural feature of the system's success. The reactor produces contamination risk because the same physical process that generates energy also generates radioactive byproducts. The system cannot do the one without doing the other.
This is the insight that distinguishes Beck from every technology critic who came before him and from most who have followed. The problem is not that the technology fails. The problem is that the technology succeeds, and success produces consequences that success cannot manage.
The Orange Pill describes an amplifier. The metaphor is apt and precisely stated: AI carries any signal further than any tool in human history. Feed it carelessness, and carelessness scales. Feed it genuine care, and care reaches further than any previous instrument allowed. The amplifier does not discriminate. It amplifies whatever it receives.
The risk society framework reveals the structural dimension that the amplifier metaphor, taken alone, obscures. An amplifier does not merely carry a signal. It operates within a system — a power supply, a circuit, an acoustic environment — and the system produces byproducts that are independent of the signal being amplified. An audio amplifier produces heat, electromagnetic interference, and harmonic distortion. These are not features of the signal. They are features of the amplification process itself, manufactured by the system regardless of what music is being played. A signal that is perfectly clean going in comes out slightly distorted, slightly noisy, slightly degraded — not because the amplifier failed but because amplification, as a physical process, produces these byproducts as structural features.
The cognitive amplifier that AI represents follows the same logic. The byproducts are not functions of the input. They are functions of the amplification process. A developer who uses Claude Code with exemplary care — who asks precisely the right questions, exercises the judgment The Orange Pill celebrates, and maintains the self-knowledge the book prescribes — still produces cognitive byproducts as a structural feature of the engagement. The erosion of tolerance for friction that occurs when every question receives an immediate, fluent response. The gradual displacement of the questioning instinct by the answering reflex. The subtle atrophy of the patience required for the kind of deep understanding that builds through struggle, failure, and the slow deposition of embodied knowledge that The Orange Pill describes as geological layers of understanding.
These byproducts are not failures of individual discipline. They are manufactured uncertainties — produced by the amplification process itself, irrespective of the quality of the signal being amplified.
The Berkeley study that The Orange Pill examines in its eleventh chapter is the most rigorous empirical documentation of manufactured cognitive uncertainty available to date. The researchers embedded themselves in a two-hundred-person technology company for eight months and observed what happened when generative AI tools entered a functioning organization. Three findings correspond precisely to three categories of manufactured uncertainty.
The first finding — that AI does not reduce work but intensifies it — documents a manufactured uncertainty of capacity. The tool was designed to increase productivity. It achieved this goal. And productivity, having increased, generated new work to fill the space it created. The workers did not choose to work more intensely. No manager mandated it. The intensification emerged as a byproduct of the capability increase, in the same way that a faster road generates more traffic — not because anyone decided to drive more, but because the reduced friction makes driving easier, and easier driving attracts more drivers, and more drivers fill the capacity that the faster road was supposed to create. Transportation engineers call this induced demand. The Berkeley study documented induced cognitive demand: a systematic increase in the volume and intensity of cognitive work produced by the very tools designed to reduce it.
The second finding — that work seeps into pauses — documents a manufactured uncertainty of boundary. The tool was always available. The prompting interface fit on a phone screen. A thought that occurred during a coffee break could be acted upon immediately, without the friction of opening a laptop, navigating to a development environment, remembering where the work had left off. The boundary between work and rest, which had been maintained partly by the friction of transitioning between states, dissolved. Not because anyone removed it deliberately. The boundary was a byproduct of friction, and when the friction was removed, the boundary went with it. The workers did not choose to eliminate their cognitive rest. They responded to an environment in which the cost of acting on a work-related thought had been reduced to zero, and zero-cost action crowds out non-action the way an invasive species crowds out native vegetation — not through superior fitness, but through the sheer relentlessness of its availability.
The third finding — that multitasking became the norm and fractured attention — documents a manufactured uncertainty of coherence. The tool could handle background tasks while the human attended to foreground work. This created the possibility of parallel processing — multiple streams of work advancing simultaneously. The possibility became a norm. The norm fractured the sustained, focused attention that cognitive science has identified as the precondition for deep understanding. The workers did not choose to fracture their attention. They responded to a capability environment that made parallel work possible and a cultural environment that rewarded visible productivity, and the combination produced a systematic reduction in the cognitive coherence required for the kind of thinking that cannot be parallelized — the slow, sequential, friction-rich thinking that builds understanding.
Each of these manufactured uncertainties shares the characteristics Beck identified in industrial manufactured risks. Each is systematic: it is produced reliably whenever the AI tool is deployed in a work environment, not as an occasional side effect but as a structural feature. Each is invisible in its accumulation: no single instance of task seepage or attentional fracturing is perceptible as a hazard; the damage accrues across weeks and months, below the threshold of daily awareness. Each is temporally displaced: the cognitive costs of reduced depth and eroded questioning capacity will manifest not today but in the decisions and judgments of future months and years, when the understanding that was not built is needed and found absent. And each is democratically unaccountable: no citizen voted for the restructuring of cognitive work that AI tools are performing on the minds of their users, and no democratic process was consulted about whether the trade-off between increased productivity and decreased depth was acceptable.
The structural character of manufactured cognitive uncertainty has a specific implication that the moral framework of The Orange Pill, centered on the quality of the individual's engagement with the tool, cannot fully accommodate. The implication is this: individual virtue is not sufficient to prevent manufactured risk.
The developer who practices attentional ecology, sets boundaries, cultivates self-knowledge, exercises taste and judgment — this developer still operates within a cognitive environment that produces manufactured uncertainties as structural features. Her individual dams — the practices of cognitive protection that The Orange Pill prescribes — protect her the way a gas mask protects a worker in a contaminated factory. The protection is real. It helps. But it does not address the contamination. The factory continues to emit. The cognitive environment continues to produce its byproducts. And the worker who removes the mask — who has a bad week, who faces a deadline, who encounters a problem so absorbing that the boundaries dissolve — is immediately re-exposed to the full force of the manufactured uncertainty.
This is not an argument against individual dams. It is an argument that individual dams are necessary and insufficient, and that the insufficiency is structural, not personal. The risk society framework insists that manufactured risks require structural responses — responses at the level of the system that produces them, not merely at the level of the individuals who bear their consequences.
The history of industrial risk management illustrates both the possibility and the difficulty of structural response. London's Great Smog of 1952 killed an estimated twelve thousand people and sickened more than a hundred thousand. The smog was a manufactured uncertainty — produced by the same coal-burning that heated homes and powered factories, visible only in its acute, catastrophic manifestation after years of invisible accumulation. The response was structural: the Clean Air Act of 1956, which regulated fuel types, established smoke control areas, and created enforcement mechanisms. The Act did not ask individual Londoners to breathe more carefully. It changed the system that produced the contamination.
No equivalent of the Clean Air Act exists for cognitive contamination, and the prospects for one are complicated by the self-concealing nature of cognitive risks. The Great Smog was visible — spectacularly, lethally visible, a yellow-black cloud that reduced visibility to one foot and killed people in their homes. The cognitive smog produced by AI-saturated work environments is not visible. It manifests as burnout, which is attributed to individual weakness. As shallow judgment, which is attributed to insufficient training. As the erosion of questioning capacity, which is attributed to generational decline. Each manifestation is individualized, its systemic origins obscured by the same mechanism Beck identified in the first risk society: the structural gap between the production of risk and the attribution of responsibility.
The amplifier metaphor is powerful. It captures something essential about the moral dimension of the AI moment — the fact that the quality of the human input shapes the quality of the amplified output. But the metaphor, if taken as the complete analysis, produces a dangerous implication: that manufactured cognitive risk is primarily a function of individual signal quality, and that the solution lies in improving the signal. Beck's framework corrects this implication. The risks are not in the signal. They are in the amplification process. They are structural features of a system that produces capability and hazard in the same operation, through the same mechanism, with the same inevitability that a combustion engine produces motion and exhaust.
The exhaust does not care how clean the fuel is.
The cognitive byproducts of AI amplification do not care how wise the prompter is.
They are manufactured by the system, not by its users. And the institutions designed to manage them are, at present, optimized for the production of capability — speed of inference, breadth of application, depth of integration into workflow — not for the management of the hazards that capability produces as its structural accompaniment. This is not because the institutions are malicious. It is because they are institutions, and institutions optimize for the functions they were designed to perform, and these institutions were designed to produce cognitive capability, and that is what they do, and the risks are someone else's problem, except that "someone else" turns out to be everyone, distributed across the population in patterns that no single actor chose and no existing institution is equipped to govern.
Beck called this organized irresponsibility. The next chapter examines what happens when the risks return to their source.
---
In the original formulation, Beck observed that industrial risks eventually return to their producers. The factory owner who pollutes the river lives downstream. The nation that exports chemical waste imports the ecological consequences. The executive who authorizes the cost-cutting that degrades safety standards breathes the same air, drinks the same water, inhabits the same fragile biosphere as the workers whose health the cost-cutting compromises. Beck called this the boomerang effect, and he argued that it was the one feature of manufactured risk that gave grounds for cautious optimism: because no social class, no nation, no gated community could fully insulate itself from the risks it produced, the producers of risk had, at least in principle, a self-interested reason to manage them.
The optimism was always cautious. The boomerang could take decades to return. The time lag between production and consequence — between the emission of chlorofluorocarbons and the appearance of the ozone hole, between the release of persistent organic pollutants and the accumulation of measurable concentrations in human tissue — allowed multiple generations of production before the consequences became visible. And the visibility, when it arrived, was mediated by expert systems whose interpretations could be contested, whose findings could be challenged, and whose recommendations could be deferred by actors with sufficient economic or political power to make deferral profitable in the short term.
The cognitive boomerang returns faster.
The Orange Pill is, in the risk society framework, a document of the boomerang effect in real time. The author is not a distant observer commenting on risks that affect other people. He is a technology leader who has spent decades building the systems whose cognitive risks he is now analyzing, and who is experiencing those risks with a specificity that makes the analysis simultaneously more credible and more uncomfortable than any external critique could be.
The confession is distributed across the book but concentrates in certain passages with diagnostic precision. The passage in which Segal describes sitting at his desk, aware that the exhilaration has drained away and what remains is compulsion — "the grinding compulsion of a person who has confused productivity with aliveness." The passage in which he describes writing 187 pages on a transatlantic flight, not because the book demanded it but because he could not stop, and recognizing in himself the pattern of addiction that Byung-Chul Han's philosophy describes. The passage in which he acknowledges building addictive products earlier in his career, products designed to capture attention through engagement loops and variable reward schedules, and then experiencing the same capture mechanism directed at himself through the AI tools he now uses.
In each case, the boomerang has returned to sender. The builder of cognitive amplification tools is among the most intensely amplified — and among the most intensely exposed to the manufactured uncertainties that amplification produces.
This is not a personal failing. It is a structural feature.
The builders of AI systems are the most intensely exposed to cognitive risks for the same reason that nuclear engineers are the most intensely exposed to radiation: proximity to the source. They spend more hours per day engaged with the tools than any other population. They operate in work cultures that reward intensity, that celebrate the inability to stop, that interpret productive compulsion as evidence of passion rather than as a symptom of manufactured risk. They are embedded in organizations whose business models depend on the continued expansion of the tools' reach and whose internal cultures therefore resist the perception of hazard in the expansion they are optimizing for.
The cognitive risks are not evenly distributed across the builder class. They concentrate along two axes: intensity of exposure and absence of countervailing structures. The individual contributor working alone with an AI tool at three in the morning — the figure Segal describes, the figure the Berkeley study documents — is at the intersection of maximum exposure and minimum structural protection. No team to notice the pattern. No manager to intervene. No institutional boundary to enforce. The individual is left alone with the tool and the tool is always available and the work is always possible and the internal imperative — what The Orange Pill, drawing on Han, identifies as the achievement subject's self-administered whip — converts possibility into compulsion with a reliability that no external authority could match.
The boomerang concentrates here because the manufactured uncertainties concentrate here: in the space where individual capability meets institutional absence, where the tool is most powerful and the dams are least present.
But the boomerang does not stop at individual contributors. It returns to executives, to investors, to the decision-makers who set the conditions under which AI tools are deployed. The executive who mandates AI adoption across an organization — who sets productivity targets that assume AI-augmented output levels, who restructures teams around the twenty-fold productivity multiplier that The Orange Pill describes — is producing manufactured cognitive risks at organizational scale. And those risks return. They return as burnout rates that increase recruiting costs. As shallow judgment that produces strategic errors. As the erosion of institutional knowledge that occurs when the embodied understanding of experienced workers is displaced by extracted knowledge that looks adequate but lacks the depth required for the decisions that matter most.
The temporal structure of the cognitive boomerang is particularly insidious. The benefits of AI adoption are immediate and measurable: faster shipping cycles, broader individual capability, reduced translation costs between intention and artifact. The costs are delayed and difficult to measure: the gradual erosion of depth, the slow atrophy of questioning capacity, the incremental displacement of embodied knowledge by surface-level competence. The asymmetry between immediate benefit and delayed cost creates a systematic bias toward adoption and against the perception of hazard. The executive who sees this quarter's productivity gains is rewarded. The executive who warns about next year's cognitive costs is dismissed as a Luddite — a term The Orange Pill rightly identifies as a profound misreading of history, applied to anyone who slows the machine.
The boomerang returns, but it returns on a schedule that is misaligned with the decision cycles of the institutions that produce the risk. Quarterly earnings reports capture the benefit. The cost appears years later, in the form of institutional fragility, strategic shallowness, and the specific vulnerability of organizations that have optimized for speed and lost the capacity for the slow, friction-rich thinking that produces genuine insight.
A structural analysis of the boomerang effect in the AI risk society reveals three distinct return paths.
The first is the direct return: the builder who experiences the cognitive risks of her own tools. This is Segal's confession, and it is the most visible and most personally costly form of the boomerang. The productive addiction, the boundary erosion, the confusion of compulsion with flow — these are the direct consequences of proximity to a cognitive amplifier, experienced by the people who build, deploy, and most intensively use the tools.
The second is the organizational return: the company that deploys AI tools and experiences the cognitive degradation of its own workforce as a delayed consequence. This return path is longer and harder to trace, because the degradation manifests not as a single event but as a gradual decline in the quality of judgment, strategic thinking, and institutional knowledge. The company that optimized for speed discovers, months or years later, that it has lost the capacity for depth. The engineers who can ship in a day cannot explain why the system they built works the way it does, because the understanding was not built alongside the artifact. The executives who made decisions at AI speed discover that the decisions were fast but shallow, optimized for immediate metrics but poorly adapted to the longer-term dynamics that only deep understanding can anticipate.
The third is the societal return: the community or society that adopts AI tools at scale and experiences the cognitive restructuring of its population as a delayed, diffuse, and difficult-to-attribute consequence. This is the return path that Beck's framework is most specifically designed to analyze, because it operates at the level of social structure rather than individual experience or organizational performance. When an entire generation of students uses AI to produce essays, the manufactured uncertainty is not primarily the question of whether individual students learn less — though the Berkeley study suggests they may. The manufactured uncertainty is the restructuring of the cognitive infrastructure of a society: the systematic reduction of the population's capacity for the kind of friction-rich thinking that builds understanding, the displacement of questioning by answering, the erosion of the tolerance for uncertainty that genuine inquiry requires.
This third return path is the one most resistant to perception, because it operates below the threshold of any individual's experience. No single student's use of AI to write an essay is a crisis. The crisis emerges at the level of the aggregate — the slow, statistical reduction of a society's cognitive depth, invisible in any individual case, visible only in the patterns that emerge when the Berkeley researchers embed themselves in an organization for eight months, or when educational researchers compare the questioning capacity of students who have grown up with AI to the capacity of students who learned through friction.
And the crisis, when it becomes visible, will be attributed to individual failures — declining student motivation, poor parenting, inadequate training — rather than to the manufactured uncertainties that produced it. This is the individualization of cognitive risk, which is the subject of a later chapter, but its connection to the boomerang must be noted here: the boomerang returns, but it returns disguised. The cognitive costs of AI adoption arrive wearing the mask of personal shortcoming, and the institutions that produced those costs are absolved because the causal chain is too long, too diffuse, and too deeply embedded in the processes of daily life to be traced back to its source.
Beck argued that the boomerang effect, despite its costs, contained a democratic potential. Because the producers of risk could not fully insulate themselves from the consequences of their production, they had, at least in principle, a motive to participate in risk governance. The factory owner who breathes contaminated air has a self-interested reason to support clean air regulation. The technology executive who experiences productive addiction has a self-interested reason to support the construction of cognitive dams.
This potential is real but fragile. It depends on the producer's capacity to perceive the boomerang — to recognize, in her own experience, the manufactured uncertainties that her system produces. And this capacity is compromised, as the first chapter argued, by the self-concealing nature of cognitive risks. The executive who cannot close the laptop may experience this as dedication rather than contamination. The engineer who fills every pause with prompting may experience this as efficiency rather than boundary erosion. The student who uses AI to generate answers may experience this as intelligence rather than displacement.
The boomerang returns. Whether it is recognized as a boomerang or mistaken for a badge of merit is the question on which the governance of cognitive risk ultimately depends.
---
In March 2019, Dutch authorities discovered that the national tax agency had been using self-learning algorithms for six years to detect childcare benefit fraud. The algorithms flagged tens of thousands of families, who were penalized, fined, and in many cases forced to repay tens of thousands of euros in benefits they had legitimately received. The system was discriminatory — families with dual nationality were disproportionately targeted — and operated without meaningful human oversight or external accountability. When the scandal became public, it brought down the Dutch government. Prime Minister Mark Rutte and his entire cabinet resigned in January 2021.
The case is instructive not because it is exceptional but because it is structural. Every actor in the chain acted within the boundaries of their institutional role. The engineers designed the algorithm to optimize for fraud detection accuracy, which is what they were asked to do. The managers approved its deployment because the cost-benefit analysis was favorable: automated detection was cheaper and faster than manual review. The oversight bodies that were supposed to monitor the system lacked the technical expertise to understand how the algorithm made its decisions and the institutional authority to challenge a system that was producing the results the agency wanted. The families who were harmed had no mechanism to contest a decision made by a system whose logic was opaque even to its operators.
Who was responsible?
The engineers could point to the managers who approved deployment. The managers could point to the policy mandate that required cost reduction. The policymakers could point to the engineers who built a system that discriminated in ways no one had specified or intended. The oversight bodies could point to the technical opacity that prevented them from seeing inside the system. Each actor could plausibly locate responsibility elsewhere, and the result was that no one was effectively responsible. The risks were real. The harms were measurable. Thousands of families were financially devastated. A government fell. And the structures of accountability were revealed to be hollow — not because the actors were malicious, but because the organizational architecture within which they operated made responsibility impossible to locate and therefore impossible to exercise.
Beck called this organized irresponsibility. The term does not imply intentional evasion of responsibility, though that occurs too. It describes a structural condition: a systematic gap between the production of risk and the attribution of responsibility, maintained not by conspiracy but by the normal operations of complex institutions whose internal division of labor distributes causation across so many actors and so many decision points that no single actor or decision can be identified as the responsible one. The irresponsibility is organized — not in the sense of being planned, but in the sense of being produced by organizational structure, as reliably and as impersonally as the organization produces its intended outputs.
The AI industry exhibits organized irresponsibility in a form that is, if anything, more structurally entrenched than the industrial cases Beck originally analyzed.
Consider the causal chain that produces a cognitive risk. A large language model is trained on a corpus of text assembled from the internet. The training process is designed by machine learning researchers who optimize for capabilities specified in benchmarks: coherence, factual accuracy, task completion, safety metrics. The trained model is deployed in a product — Claude Code, for instance — by a product team that designs the interface, sets the default behaviors, calibrates the response latency, and shapes the user experience. The product is adopted by organizations that integrate it into workflows, set performance expectations, and create the cultural conditions within which individual workers use the tool. The individual worker uses the tool within these conditions and experiences the manufactured uncertainties — the intensification of work, the seepage into pauses, the fracturing of attention — that the Berkeley study documents.
Where in this chain does responsibility for the manufactured uncertainties reside?
The machine learning researchers optimized for capability, which is what they were asked to do. The cognitive effects of response latency on the user's tolerance for friction were not in the specification. The product team designed for user experience, which is what they were asked to do. The organizational effects of always-available AI on workplace boundaries were not in the design brief. The organizations that adopted the tool were pursuing productivity gains, which is what their shareholders demanded. The effects on workers' cognitive depth were not in the business case. The individual worker adopted the tool because it was available, capable, and culturally expected. The effects on her capacity for questioning were not in her job description.
Each actor is operating within the boundaries of their institutional role. Each is performing their function competently. And the manufactured uncertainties are being produced by the system as a whole, not by any single actor within it. The gap between production and attribution is maintained by the same mechanism Beck identified in the industrial risk society: the complexity of the causal chain distributes causation so thoroughly that accountability becomes structurally impossible.
The organized irresponsibility of the AI industry is intensified by a feature that distinguishes it from industrial cases: the speed of deployment relative to the speed of assessment. In the industrial risk society, the gap between production and accountability was maintained partly by scientific uncertainty — the difficulty of establishing causal links between specific exposures and specific health outcomes. Tobacco companies exploited this uncertainty for decades, funding research that challenged the link between smoking and cancer, demanding ever-higher standards of proof, and using the inherent uncertainty of epidemiological evidence to defer accountability indefinitely.
The AI industry has a different mechanism for maintaining the gap, but it is equally effective: velocity. The pace of AI development outstrips the pace of risk assessment so comprehensively that by the time the risks of a particular capability are understood, the capability has been deployed, adopted, integrated into workflows, and replaced by a newer version whose risks have not yet been assessed. The assessment is always chasing the deployment, and the gap between them is not closing but widening, because the deployment accelerates with each generation of the technology while the assessment proceeds at the speed of institutional deliberation — peer review, regulatory process, legislative debate, the slow and necessarily cautious mechanisms through which democracies process complex information.
The Orange Pill documents this gap with an observation that carries more weight than its casual delivery suggests: that any company still doing 2026 planning based on pre-December 2025 assumptions should throw the plan away and start from the world that actually exists. The observation implies that the world changed in a matter of weeks — that the capabilities available in January 2026 were qualitatively different from those available in November 2025. If this is true, and the evidence suggests it is, then the governance frameworks being developed in 2025 are governing a technology that no longer exists, and the frameworks being developed in 2026 will be governing a technology that will no longer exist by the time they are implemented.
This is not a temporary condition that faster regulation could solve. It is a structural feature of a system in which the productive apparatus operates at the speed of inference and the governance apparatus operates at the speed of legislation. The asymmetry is not an accident of timing. It is a design feature — not designed intentionally, but produced by the conjunction of institutions optimized for different functions operating at different velocities. The technology companies are optimized for speed because the market rewards speed and punishes delay. The governance institutions are optimized for deliberation because democratic legitimacy requires process and process requires time. The gap between them is an emergent property of two systems doing exactly what they were designed to do.
Organized irresponsibility in the AI industry has a specific institutional expression that merits detailed examination: the AI safety team.
Major AI companies have established internal safety research groups — teams of researchers whose mandate is to identify and mitigate the risks produced by the company's own technology. These teams are staffed by talented, often deeply committed individuals who take the risks seriously and who operate with a genuine sense of responsibility. They publish research. They develop safety protocols. They advocate, internally, for caution.
They are also structurally impotent.
The safety team exists within an organization whose revenue depends on the deployment of the technology whose risks the safety team is assessing. The team's budget is allocated by executives whose performance is measured by deployment metrics. The team's recommendations must compete with the product roadmap for organizational attention, and the product roadmap has quarterly targets while the safety assessment has uncertain timelines and inconclusive findings. The team cannot stop deployment. It can recommend, and recommendations can be overridden by commercial imperatives that are, within the organization's value system, more immediately pressing.
This is not corruption. It is organized irresponsibility in its most structurally pure form. The institution has created a mechanism for risk assessment that is embedded within the institution that produces the risk, funded by the revenue that the risk-producing activity generates, and subordinated to the commercial logic that drives the organization's decisions. The safety team provides the appearance of responsibility — the organization can point to its existence as evidence of commitment to safety — while the structure ensures that the team's capacity to alter the organization's trajectory is strictly limited.
Beck anticipated this dynamic in his analysis of industrial risk. He observed that the companies most exposed to environmental liability were often the ones with the most elaborate environmental compliance departments — not because the departments were effective at preventing risk, but because they were effective at managing the perception of risk management. The compliance department allowed the organization to demonstrate responsibility without altering the operations that produced the hazards the department was supposed to prevent.
The parallel to AI safety teams is structural, not analogical. The function is the same: the institutional performance of responsibility in a context where the structural conditions for the exercise of responsibility are absent. The safety researcher who identifies a risk writes a report. The report enters an organizational process in which it competes with product launch timelines, competitive pressures, and revenue targets. The risk may be acknowledged. It may be noted for future investigation. It may be addressed in a subsequent version of the model. What it almost certainly will not do is stop the deployment of the current version, because stopping deployment has immediate, measurable costs — revenue forgone, competitive ground lost, investor confidence eroded — while the risks the safety researcher identified are uncertain, delayed, and distributed across a population that has no voice in the organization's decision-making process.
The structural asymmetry between the costs of caution and the costs of risk is the engine of organized irresponsibility. Caution has immediate, concentrated, measurable costs borne by the organization that exercises it. Risk has delayed, distributed, difficult-to-measure costs borne by the population exposed to the technology. The organization's decision-making process is calibrated to the former, not the latter, because the former affects the organization's survival and the latter affects people who are not in the room.
The EU AI Act, the American executive orders, the emerging governance frameworks in Singapore, Brazil, and Japan that The Orange Pill references — these are genuine attempts to close the gap between the production of risk and the attribution of responsibility. They establish categories of risk, mandate transparency requirements, and create enforcement mechanisms. They are real structures, and they matter.
But the risk society framework compels a harder question: Are these structures adequate to the velocity and complexity of the risks they address? Or are they, in Beck's terms, bicycle brakes installed on an intercontinental airplane — real braking mechanisms, genuinely functional, simply incommensurate with the speed of the vehicle they have been asked to govern?
The question is not rhetorical, and the answer is not predetermined. There are historical precedents for governance structures that successfully managed manufactured risks — the Montreal Protocol on ozone-depleting substances, the Nuclear Non-Proliferation Treaty, the cascade of clean air and clean water legislation that followed the environmental catastrophes of the mid-twentieth century. These structures were built in response to risks that were, at the time of their construction, also moving faster than the governance apparatus designed to manage them. They succeeded, to varying degrees, because the risks became visible enough, and the boomerang returned forcefully enough, that the political will for structural intervention overcame the organized irresponsibility that had previously prevented it.
The cognitive risks of AI are waiting for their Chernobyl — the catalyzing event that makes the manufactured uncertainty visible enough to generate the political will for structural intervention. Whether that catalyst arrives as a technological failure, as a social crisis, or as a gradual erosion that never produces a single dramatic moment but slowly degrades the cognitive infrastructure of societies until the degradation becomes undeniable — this is the question that the risk society framework frames but cannot answer.
What the framework can say is that the current architecture of responsibility is inadequate. The gap between production and attribution is maintained by institutional structures — speed asymmetries, organizational hierarchies, the separation of technical expertise from democratic accountability — that will not close on their own. They will close only when the boomerang returns with sufficient force to make the cost of organized irresponsibility exceed the cost of structural intervention.
Until then, the irresponsibility remains organized. The risks continue to be produced. And the question of who bears the consequences continues to be answered by the oldest and most reliable mechanism in the risk society: the costs fall on whoever lacks the power to deflect them elsewhere.
The framework knitter in Nottinghamshire in 1812 knew who he was. He was a framework knitter. The identity was not a career choice in the modern sense — a selection from a menu of options evaluated against personal preference and market demand. It was a social location, a position within a structure of relationships that determined not only what he did but who he was, whom he married, where he worshipped, what he expected from the future, and what the future expected from him. The guild provided not merely economic protection but ontological security — the deep, prereflective sense of knowing one's place in the order of things.
Beck argued that modernity systematically dismantles these structures. The process he called individualization does not mean that people become more individual in the sense of more autonomous or more free, though that is how the word is commonly misunderstood. It means that the traditional structures within which individual lives were embedded — class, guild, lifelong employment, stable career paths, religious communities, extended families — dissolve, and the individual is left to construct a biography from a menu of choices with insufficient institutional support. The freedom is real. The burden is also real. And the burden falls most heavily on those least equipped to bear it, because the resources required to construct a viable biography from raw materials — education, social capital, financial reserves, the cognitive bandwidth to make complex decisions under uncertainty — are distributed as unequally as the resources of any previous era.
The individualization of the AI moment follows this logic with a precision that Beck, writing about the dissolution of industrial-era labor structures, could not have anticipated but would immediately have recognized.
Consider the developer in Lagos whom The Orange Pill describes. Before AI coding assistants, building a software product required either a team or years of specialized training across multiple programming languages, frameworks, and deployment systems. The developer had the intelligence and the ambition. What she lacked was the institutional infrastructure — the team, the capital, the mentorship networks, the organizational scaffolding that transforms individual capability into shipped product. Claude Code changed the equation. The floor rose. A person with an idea and the ability to describe it in natural language could produce a working prototype in hours.
This is genuine liberation. The barrier between imagination and artifact has been lowered for millions of people who were previously excluded from the building process by lack of access to the institutional resources that building required. The Orange Pill is right to celebrate this. The moral significance of expanding who gets to build is not diminished by the risks that accompany the expansion.
But the risk society framework demands that the celebration be accompanied by a structural analysis of what the liberation costs and who pays.
The developer in Lagos who gains coding leverage also gains sole responsibility for every dimension of the work that a team would previously have distributed. The architectural judgment that a senior engineer would have exercised. The security review that a specialist would have conducted. The quality assessment that a testing team would have performed. The strategic evaluation — should this product exist? does it serve its users well? what are the failure modes and who bears their consequences? — that organizational deliberation would have surfaced. Each of these functions was previously embedded in institutional structures. Each is now the individual developer's responsibility, and each carries risks that the individual may lack the expertise, the perspective, or the cognitive bandwidth to manage.
The individualization is not merely a transfer of tasks. It is a transfer of risks. The architectural error that a team would have caught falls on the individual. The security vulnerability that a specialist would have identified falls on the individual. The strategic misjudgment that organizational debate would have corrected falls on the individual. And the cognitive risks — the productive addiction, the boundary erosion, the atrophy of depth — that institutional structures might have constrained fall on the individual with their full, unmitigated weight, because the individual has no team to notice the pattern, no manager to intervene, no institutional boundary to enforce, no colleague to say, at eleven o'clock on a Tuesday night, that it is time to stop.
Alex Finn's year of solo building, which The Orange Pill documents as evidence of democratized capability, is simultaneously evidence of individualized risk at its most extreme expression. Twenty-six hundred hours of work. Zero days off. An entire year of human life consumed by a productive engagement with no external check on its intensity, no institutional mechanism to distinguish flow from compulsion, and no social structure to absorb the costs if the individual's judgment fails — if the product is built on an architectural error that a team would have caught, if the code contains a vulnerability that a security review would have identified, if the market does not materialize and the year of uninterrupted labor produces nothing but exhaustion and debt.
The triumphalist reading of Finn's year emphasizes the output: a revenue-generating product built by a single person without institutional backing. The risk society reading emphasizes the transfer: every risk that institutional structures once distributed across a team, an organization, a professional community, has been concentrated on a single individual, and the consequences of that concentration — whether they manifest as burnout, as strategic error, as technical failure, or as the quiet erosion of the cognitive depth that only comes from the friction of working with other minds — fall on that individual alone.
The individualization of cognitive risk has a specific discursive expression that Beck's framework illuminates with uncomfortable clarity. When the manufactured uncertainties of AI-augmented work produce their consequences — when the developer burns out, when the solo builder ships a product with a critical flaw, when the knowledge worker loses the capacity for the deep thinking that her role demands — the discourse frames these outcomes as personal failures. The developer lacked discipline. The builder should have taken breaks. The knowledge worker should have set boundaries. The language of personal responsibility — resilience, self-care, work-life balance, mindfulness — is deployed to explain outcomes that are structurally produced.
This is not a conspiracy. It is the natural discursive expression of a social structure that has transferred risk from institutions to individuals. When the institution bore the risk, the discourse located the cause in institutional conditions — inadequate safety protocols, insufficient staffing, poor management. When the individual bears the risk, the discourse locates the cause in individual character — insufficient discipline, poor time management, failure to set boundaries. The structural conditions that produce the outcome — the always-available tool, the cultural expectation of productivity, the absence of institutional dams — disappear from the analysis, replaced by a narrative of personal responsibility that is both intuitively appealing and structurally false.
The Orange Pill partially recognizes this dynamic. Its discussion of Han's achievement subject — the individual who oppresses herself and calls it freedom — captures the phenomenology of individualized risk with precision. The whip and the hand that holds it belong to the same person. The compulsion is experienced as volition. The exploitation is experienced as ambition. But the analysis locates the mechanism primarily in culture and psychology — in the internalized achievement imperative, in the confusion of productivity with aliveness — rather than in the structural transfer of risk from institutions to individuals that makes the cultural and psychological dynamics possible.
The achievement subject does not create herself ex nihilo. She is produced by a social structure that has dissolved the institutional scaffolding within which cognitive work was previously organized and has replaced it with nothing — or, more precisely, with a subscription to a tool and an exhortation to exercise judgment. The judgment is real and necessary. The exhortation is genuine. But the structural conditions that would support the exercise of judgment — the team, the mentorship, the organizational boundaries, the professional community that provides both practical assistance and ontological security — have been individualized away.
The Berkeley study documents the individualization of cognitive risk in its most granular form. The workers who filled every pause with AI interaction were not responding to a mandate. No manager directed them to eliminate their cognitive rest. They were responding to an environment — an environment of available tools, cultural expectations, and absent institutional protections — that made the elimination of rest the path of least resistance. The structural conditions produced the behavior. The discourse attributed the behavior to individual choice.
This is the mechanism of individualization: structural conditions produce individual outcomes, and the discourse attributes those outcomes to individual agency, thereby obscuring the structural conditions and preventing the structural interventions that might address them. The worker who burns out is offered a meditation app. The student who loses the capacity for deep reading is offered a study skills workshop. The developer who cannot stop building is offered advice about work-life balance. Each intervention addresses the symptom at the individual level while leaving the structural production of the symptom entirely intact.
Beck argued that individualization was not merely a feature of late modernity but its defining condition — the condition in which traditional social structures have dissolved and individuals are compelled to construct their own biographies in a context of radical uncertainty. The AI moment represents an intensification of this condition. The traditional structures of cognitive work — the development team, the editorial process, the research group, the mentorship ladder, the professional guild — are dissolving under the pressure of tools that make individual capability sufficient for tasks that previously required collective effort. The dissolution is not complete, and collective structures retain value that this analysis does not deny. But the direction is unmistakable, and the consequences follow the pattern Beck identified: the risks that collective structures once absorbed are being transferred to individuals, the discourse is framing structural outcomes as personal responsibilities, and the institutional interventions that might protect individuals from manufactured cognitive risks are not being built because the risks are not perceived as structural.
The most dangerous consequence of individualization is not the transfer of risk itself but the destruction of the social basis for collective response. When risks are perceived as individual — as failures of personal discipline rather than products of systemic conditions — the political will for structural intervention does not form. No one organizes against her own lack of willpower. No one demands regulation of her own inability to set boundaries. The individualization of risk produces the individualization of response, and individualized responses are, by definition, inadequate to structural problems.
The framework knitter in 1812 had a guild. The guild was imperfect, exclusionary, and ultimately inadequate to the forces that dissolved it. But it provided something that the individualized developer in 2026 does not have: a collective structure within which individual experience could be recognized as shared, and shared experience could generate collective action. The knitter who saw his wages decline could look around the guild hall and see others experiencing the same decline, and the shared perception could become the basis for a shared response — inadequate, as the Luddite response proved, but collective rather than individual.
The developer in 2026 who experiences productive addiction, boundary erosion, and the atrophy of depth experiences these as personal conditions. She may post about them on social media, where her experience will be received as content — liked, shared, commented upon, and scrolled past. The structural conditions that produce her experience will not become the basis for collective action, because the discourse has already framed them as individual choices, and individual choices do not generate political movements.
This is the deepest risk of individualization: not that individuals bear costs they should not bear, though they do, but that the individualization of costs prevents the formation of the collective will required to address the structural conditions that produce them. The dams that The Orange Pill calls for — educational reform, regulatory frameworks, attentional ecology — require collective will. Collective will requires shared perception. Shared perception requires structures within which individual experience can be recognized as collective. And those structures are precisely what individualization dissolves.
The cognitive risk society, then, faces a structural paradox. The risks it produces are collective — manufactured by systems, distributed across populations, borne by everyone who uses the tools. But the perception of those risks is individual — experienced as personal failures, diagnosed as personal pathologies, treated with personal remedies. The gap between collective production and individual perception is the space in which organized irresponsibility operates, and closing that gap — making the collective nature of manufactured cognitive risk visible — is the precondition for any structural response adequate to the scale of the hazard.
---
On a Tuesday afternoon in a conference room in San Francisco, a product team at a major AI company makes a decision. The decision concerns the default response latency of their coding assistant — how quickly the tool returns output after the user submits a prompt. The team has tested three options: a sub-second response that feels instantaneous, a two-second response with a visible "thinking" indicator, and a five-second response with a more detailed progress display. User testing shows the sub-second response produces the highest engagement scores. Users prompt more frequently. They report higher satisfaction. They describe the experience as "seamless."
The team chooses the sub-second response. The decision is logged in a project management tool. No press release is issued. No regulator is consulted. No democratic process is invoked.
This decision will shape the cognitive habits of millions of people. The sub-second response eliminates the pause between question and answer — the brief interval during which the human mind, confronted with a question it has just formulated, might reconsider the question, might refine it, might notice that the question is not quite right and reformulate before receiving a response. The pause is cognitively productive. It is the space in which reflection occurs. And the product team has, with a single design choice, eliminated it for every user of the platform.
No one in the conference room thinks of this as a political decision. It is a product decision, made on the basis of engagement data, user satisfaction scores, and competitive benchmarks. The team is doing its job. The decision is competent, well-informed, and responsive to the data.
It is also, in Beck's framework, a quintessential act of sub-politics — the exercise of political power in spaces that are not recognized as political, by actors who are not recognized as political agents, through decisions that are not recognized as political choices.
Beck introduced the concept of sub-politics to describe a phenomenon he observed in the late twentieth century: the migration of consequential decision-making from the formal political sphere — legislatures, regulatory agencies, courts — to the informal spaces of economic, technological, and scientific activity. The decisions that most profoundly shaped people's lives — decisions about which technologies to develop, which chemicals to synthesize, which products to bring to market, which risks to externalize — were not being made by elected officials subject to democratic accountability. They were being made by engineers, managers, scientists, and entrepreneurs operating within institutions whose decision-making processes were internal, proprietary, and accountable to shareholders rather than to the publics affected by their choices.
Sub-politics was not a conspiracy. It was a structural feature of societies in which the pace and complexity of technological development had outstripped the capacity of formal political institutions to understand, evaluate, and govern it. The legislature that took two years to develop expertise on a technology was governing a technology that had been superseded eighteen months into the process. The regulatory agency that developed guidelines for one generation of a product was publishing guidelines for a product that no longer existed. The formal political system was always behind, and the gap between its deliberative pace and the developmental pace of technology meant that the most consequential decisions were made, by default, in the sub-political spaces where the development actually occurred.
The AI industry represents the most complete realization of sub-political power in the history of technology. The decisions that shape how millions of people think, work, relate to their own cognition, and experience the boundary between human capability and machine capability are made in product meetings, design reviews, training runs, and engineering sprints. They are made by people who are, individually, thoughtful, often brilliant, and frequently concerned about the consequences of their work. They are not made through any process that resembles democratic deliberation, and they are not subject to any accountability mechanism that resembles democratic governance.
The sub-political decisions of the AI industry fall into three categories, each with distinct consequences for the cognitive risk society.
The first category is architectural decisions — choices about the fundamental design of AI systems that determine their capabilities, limitations, and behavioral patterns. The decision to train a large language model on a particular corpus. The choice of optimization objectives. The calibration of safety mechanisms. The determination of what the model will and will not do, and how confidently it will do it. These decisions are made by machine learning researchers and engineers, often in consultation with safety teams, and they determine the cognitive environment within which millions of users will operate. The decision to optimize for fluent, confident responses — rather than, say, for responses that model uncertainty, that pause before answering, that sometimes say "I don't know" with the same ease they say "Here is the answer" — shapes the cognitive habits of every person who uses the system. It teaches, through millions of interactions, that questions have immediate, confident answers, and that uncertainty is a deficiency to be overcome rather than a condition to be inhabited.
The second category is interface decisions — choices about how the AI system presents itself to users, how it structures the interaction, what it makes easy and what it makes difficult. Response latency. The visual design of the prompting interface. The default length and format of responses. The availability of the tool — always on, always accessible, fitting on a phone screen so that every idle moment becomes a potential prompt. These decisions are made by product designers and user experience researchers, and they determine the behavioral patterns that users develop in relation to the tool. The decision to make the tool always available — rather than, say, to build in natural pauses, to make certain hours inaccessible, to require a deliberate act of engagement rather than a reflexive one — shapes the boundary between work and rest for every user. It is a decision about the architecture of human attention, made by people who are optimizing for engagement, not for cognitive ecology.
The third category is deployment decisions — choices about where the tool is introduced, at what pace, with what accompanying structures, and with what expectations. The decision to deploy an AI coding assistant across an entire engineering organization in a single quarter. The decision to set productivity targets that assume AI-augmented output levels. The decision to reduce team sizes in anticipation of productivity gains. These decisions are made by executives and managers, and they determine the organizational conditions within which individual workers experience the tool. The decision to reduce a team from twelve to five, on the assumption that AI tools make each individual twenty-fold more productive, transfers the risks previously distributed across twelve people — the risk of error, the risk of burnout, the risk of strategic misjudgment — to five individuals, each of whom now bears a concentration of risk that the institutional structure of the twelve-person team once absorbed.
Each of these categories of sub-political decision exercises power over the cognitive lives of millions of people. None of them is subject to democratic process. None is governed by any mechanism that gives the affected populations a voice in the decisions that shape their cognitive environment.
The Orange Pill captures this dynamic through the metaphor of the priesthood — the technologists who understand AI deeply enough to see its consequences but who operate within institutions that reward capability over caution. The metaphor is apt, and it carries a moral weight that the sociological analysis should not obscure: the technologist who understands what response latency does to the capacity for reflection and who chooses the sub-second response anyway is making a moral choice, not merely a product choice, and the moral dimension of the choice does not disappear because the institutional context frames it as a technical optimization.
But the priesthood metaphor locates the problem in the individuals who make the decisions — in their knowledge, their conscience, their willingness to exercise the obligation that understanding confers. Beck's sub-politics concept locates the problem in the structure within which those individuals operate. The product designer who might prefer the two-second response with the "thinking" indicator — who might understand, at a deep level, what the pause does for the user's cognitive process — operates within an organization that measures her performance by engagement metrics. The engagement metrics favor the sub-second response. The designer who advocates for the slower response is advocating for a product that will score lower on every metric the organization uses to evaluate success.
The structure does not force the designer to choose the sub-second response. It makes the sub-second response the path of least institutional resistance and the slower response an act of insubordination — not formally, but functionally. The designer who insists on the pause must make a case against the data, against the competitive benchmarks, against the organizational culture that equates faster with better. She must translate a cognitive argument — the pause is valuable because it creates space for reflection — into a business argument, and the translation inevitably loses the thing that matters most: the cognitive value of the pause is, by its nature, unmeasurable by the metrics the organization uses to evaluate product decisions.
This is the structural mechanism of sub-politics: the consequential decisions are made within institutions whose evaluation criteria systematically exclude the dimensions of the decision that matter most for the populations affected by them. The product team evaluates response latency by engagement scores. The engineering team evaluates model performance by capability benchmarks. The executive team evaluates deployment by productivity metrics. At no point in the decision chain does anyone evaluate the decision by its effects on the cognitive ecology of the user population, because cognitive ecology is not a metric, it does not appear on a dashboard, and no institutional actor is accountable for it.
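The exclusion can be made concrete. What follows is a minimal, purely illustrative sketch, not any company's actual process: the metric names, weights, and numbers are invented. Its only purpose is to show the shape of the decision function described above, a weighted sum over whatever the dashboard happens to measure.

```python
# Purely illustrative sketch: how an engagement-only evaluation "decides"
# the latency question. All metric names and numbers are hypothetical.

# Hypothetical user-testing results for the three response-latency options.
options = {
    "sub_second": {"engagement": 0.91, "satisfaction": 0.88, "prompts_per_session": 14.2},
    "two_second_pause": {"engagement": 0.84, "satisfaction": 0.86, "prompts_per_session": 11.7},
    "five_second_detail": {"engagement": 0.71, "satisfaction": 0.79, "prompts_per_session": 8.3},
}

# The evaluation criteria the organization actually tracks. Note what is
# absent: there is no weight for reflection, boundary preservation, or any
# other dimension of cognitive ecology, because none of them is measured.
weights = {"engagement": 0.5, "satisfaction": 0.2, "prompts_per_session": 0.3}

def score(metrics):
    # A weighted sum over the tracked metrics; anything unmeasured contributes nothing.
    return sum(weights[key] * metrics[key] for key in weights)

winner = max(options, key=lambda name: score(options[name]))
print(winner)  # "sub_second" wins, because it wins on every metric that exists
```

The numbers do not matter; the keys do. Any argument for the pause must first become an entry in `weights`, and that translation is precisely what the designer in the scenario above cannot honestly perform, because the value of the pause is not something the organization measures.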
The democratic deficit is not a bug in the system. It is a feature of a social structure in which the most consequential decisions about cognitive life are made in spaces that were never designed for democratic deliberation and are not subject to the accountability mechanisms that democratic governance requires. The formal political system — legislatures, regulatory agencies, courts — can, in principle, intervene. The EU AI Act is such an intervention. The American executive orders on AI are another. But these interventions operate at the level of outcomes, regulating what the technology may do after it has been built, rather than at the level of design: they do not participate in the decisions about what to build and how to build it, the decisions that determine the technology's cognitive effects before any regulation applies.
Beck argued that the reinvention of politics in the risk society required bringing sub-political decisions into the sphere of democratic accountability — not by making every design choice subject to legislative approval, which would be both impractical and counterproductive, but by creating institutional mechanisms through which the populations affected by sub-political decisions could participate in shaping the conditions under which those decisions are made. Advisory boards with genuine authority. Public deliberation processes that inform design standards. Transparency requirements that make the cognitive implications of design choices visible to the populations who will bear their consequences.
These mechanisms do not yet exist in a form adequate to the AI moment. Their construction is not a technical problem. It is a political one — a problem of creating institutional structures that give democratic publics a voice in the decisions that shape their cognitive environment. The sub-political spaces must be made political, or the most consequential decisions of the cognitive age will continue to be made by people accountable to engagement metrics rather than to the societies whose minds their products reshape.
---
The orange pill moment, as The Orange Pill describes it, is the recognition that something genuinely new has arrived and that there is no going back. The recognition is irreversible. The world before the orange pill is not available for return. The frameworks that governed the previous era — what constitutes skill, what constitutes value, what constitutes a career, what constitutes expertise — have been revealed as contingent, as artifacts of a technological condition that no longer obtains.
Beck had a name for this kind of moment. He called it reflexive modernization.
The term is one of the most frequently misunderstood in contemporary sociology. It does not mean that modern societies have become more reflective — more thoughtful, more self-aware, more given to careful examination of their own processes. It means that modernization has turned upon itself. The institutions, assumptions, and social structures produced by the first phase of modernization — industrial modernity — are being undermined not by external enemies but by the internal dynamics of the modernization process they enabled. The factory system that created the industrial working class also created the conditions for the working class's political mobilization, which transformed the factory system. The educational system that produced the expert class also produced the critical capacity that questions expert authority, which destabilizes the authority the educational system was designed to certify. The technology that created the knowledge economy also created the tools that commoditize knowledge work, which undermines the economic logic on which the knowledge economy was built.
Reflexive modernization is modernization confronting its own consequences. The confrontation is not chosen. It is structural — produced by the same dynamics that produced the institutions now being confronted. And the confrontation does not proceed smoothly, through orderly institutional adaptation. It proceeds through crisis, through the sudden visibility of contradictions that were always present but previously obscured by the apparent stability of the institutions that contained them.
The orange pill moment is a moment of reflexive modernization experienced at the individual level. The developer who realizes that the skills she spent a decade acquiring can now be performed by a tool available for a hundred dollars a month is experiencing, in personal terms, the structural dynamic Beck described at the level of social systems. Her expertise was produced by the knowledge economy. The tool that commoditizes her expertise was produced by the same knowledge economy. The institution undermines itself. The modernization reflects back upon its own products.
The Orange Pill identifies a five-stage pattern in technological transitions: threshold, exhilaration, resistance, adaptation, expansion. The pattern maps with striking precision onto Beck's account of reflexive modernization, and the mapping reveals a structural mechanism beneath the pattern that the phenomenological description alone cannot capture.
The threshold stage corresponds to what Beck called the moment of manufactured uncertainty becoming visible. For decades, the knowledge economy operated on the assumption that cognitive skill — the ability to write code, draft legal briefs, analyze data, produce competent prose — was scarce, durable, and economically defensible. The assumption was not questioned because the conditions of the first modernity supported it: cognitive skill was genuinely scarce, genuinely difficult to acquire, and genuinely essential for the production of the goods and services the economy demanded. The assumption was a feature of the institutional landscape, as invisible and as load-bearing as a foundation.
The threshold is the moment the foundation cracks. Not the moment anyone notices the crack — that comes later. The moment the structural conditions that supported the assumption change in a way that makes the assumption no longer tenable. In the case of AI, the structural change was the collapse of the imagination-to-artifact ratio: the distance between a human idea and its realization, reduced from years of specialized training to the duration of a conversation. The assumption that cognitive skill is scarce was undermined not by a critique of the assumption but by a change in the conditions that made it true.
The exhilaration stage corresponds to what Beck called the period of opportunity perception. When manufactured uncertainty first becomes visible, it appears not as a risk but as a liberation. The developer who discovers she can build in a day what previously took a month does not, in that first moment, perceive the manufactured uncertainties that accompany the capability. She perceives the capability. The exhilaration is genuine. The liberation is real. And the risks — the erosion of depth, the atrophy of the skills that were being bypassed, the transfer of institutional risk to individual shoulders — are not yet visible, because their temporal structure is delayed. The benefit is immediate. The cost accumulates slowly, below the threshold of perception, in the geological layers of understanding that are no longer being deposited.
The resistance stage corresponds to Beck's account of risk perception catching up with opportunity perception. The framework knitters who saw their craft commoditized. The senior engineers who felt the ground shift beneath the expertise they had built over decades. The educators who watched students outsource the cognitive work that education was supposed to develop. These are people who perceive, often before the exhilarated majority, the manufactured uncertainties that accompany the new capability. Their resistance is not irrational. It is the expression of a risk perception that the institutional framework has not yet caught up with — a perception that the costs of the transition are being borne by specific populations while the benefits are being captured by others.
The adaptation stage is where Beck's framework adds the most diagnostic power that The Orange Pill's phenomenological account, taken alone, cannot provide. The Orange Pill describes adaptation as the period during which the culture builds dams — laws, standards, best practices, institutional structures that redirect the flow of capability toward human flourishing. This is accurate as far as it goes. But the risk society framework asks a harder question about adaptation: Does the adaptation address the structural production of risk, or does it merely manage the appearance of address while leaving the productive structures intact?
This distinction is the difference between genuine institutional transformation and what Beck called cosmetic modernization — the performance of adaptation without its substance. The industrial risk society produced many instances of cosmetic modernization: environmental compliance departments that managed the perception of environmental responsibility while the productive processes that generated pollution continued unchanged. Corporate social responsibility programs that managed reputational risk while the business models that produced social harm remained intact. Regulatory frameworks that addressed the most visible symptoms of manufactured risk while leaving the structural conditions that produced those symptoms unaltered.
The AI moment is producing its own cosmetic modernizations. Corporate AI ethics boards that lack the institutional authority to alter product decisions. Voluntary AI safety commitments that bind no one and expire when competitive pressure makes them inconvenient. Educational initiatives that teach students to "use AI responsibly" without addressing the structural conditions — the design of the tools, the business models that drive their deployment, the absence of institutional protections for cognitive space — that make irresponsible use the path of least resistance.
The question for the adaptation stage — the stage The Orange Pill argues we are currently in — is whether the dams being built are structural or cosmetic. Whether they address the production of manufactured cognitive risk at its source or merely manage its consequences downstream. Whether they represent genuine institutional transformation — new forms of governance adequate to the risks of the second modernity — or organized irresponsibility dressed in the language of responsibility.
The expansion stage, if it arrives, corresponds to what Beck called the emergence of genuinely new institutional forms — forms that are adequate to the risks of the era that produced them, rather than adaptations of forms designed for the previous era. The eight-hour day was a genuinely new institutional form, produced by the labor movement in response to manufactured risks that the institutions of early industrial capitalism could not manage. The environmental regulatory agency was a genuinely new institutional form, produced by the environmental movement in response to manufactured risks that the institutions of mid-twentieth-century industrial capitalism could not manage. Each represented not the adaptation of an existing institution but the creation of a new one, designed from the ground up to address a category of risk that previous institutions were structurally incapable of managing.
The cognitive risk society awaits its equivalent. The genuinely new institutional forms that would be adequate to manufactured cognitive risk have not yet been imagined, much less built. What they might look like — what institutional architecture could protect the cognitive ecology of populations exposed to AI tools while preserving the genuine capability expansion those tools provide — is the question that the concluding chapters of this book will address.
But the five-stage pattern, read through Beck's framework, produces a warning that the pattern's optimistic trajectory can obscure. The trajectory from threshold through expansion is not guaranteed. It is contingent on the quality of the adaptation — on whether the dams built during Stage Four are structural or cosmetic, genuine or performative, adequate to the risks or merely adequate to the appearance of addressing them. Transitions that produce inadequate adaptations do not expand. They calcify — they freeze into structures that manage the appearance of risk governance while the risks continue to accumulate beneath the surface, building pressure toward a crisis that the cosmetic structures are not equipped to contain.
The question is not whether the five-stage pattern will complete. It is whether Stage Four — the stage we inhabit now — will produce the genuine institutional innovation that Stage Five requires, or the cosmetic modernization that defers the crisis while deepening its structural foundations.
---
In 1986, the wind carried cesium-137 from a ruptured reactor in Ukraine across the border into Belarus, then Poland, then Scandinavia, then Western Europe. Within days, radioactive contamination had been detected in reindeer in Lapland, sheep in Wales, and milk in Bavaria. The radiation did not consult a map. It did not stop at customs. It did not distinguish between NATO and Warsaw Pact nations, between the responsible and the innocent, between the societies that had chosen nuclear power and those that had not. The fallout was, in the precise sense Beck intended, cosmopolitan — it demonstrated, through contamination, the irrelevance of national borders to risks produced by modern technology.
Beck argued that the cosmopolitan character of manufactured risk was not merely a practical problem of cross-border governance. It was an epistemological challenge — a challenge to the fundamental categories through which modern societies understood themselves. The nation-state, the primary unit of political organization for three centuries, was predicated on the assumption that the most consequential risks and benefits of social life could be contained within territorial boundaries. Domestic policy governed domestic risks. Foreign policy governed relations between states. The boundary between inside and outside was the organizing principle of political life.
Manufactured risks dissolved this boundary. Environmental contamination, financial contagion, pandemic disease, climate change — each demonstrated that the risks produced by modern systems exceeded the jurisdictional capacity of the institutions designed to manage them. The response adequate to Chernobyl could not be organized by Ukraine alone, or by the Soviet Union alone, or by any single nation. It required coordination across jurisdictions whose interests, capabilities, and political systems were radically different, and whose existing institutional arrangements provided no framework for the coordination the situation demanded.
The cognitive risks produced by AI tools are cosmopolitan in the same structural sense. The tools are built in a small number of locations — predominantly the United States, with significant development in the United Kingdom, China, France, and a handful of other nations. They are deployed globally. The cognitive effects they produce — the erosion of depth, the atrophy of questioning, the colonization of rest, the restructuring of the relationship between human intention and machine capability — travel with the tools, crossing borders as effortlessly as the software itself.
The Orange Pill documents this global reach through the democratization argument: the developer in Lagos gains access to the same coding leverage as an engineer at Google. The student in Dhaka gains access to the same knowledge resources as a student at Princeton. The engineer in Trivandrum gains twenty-fold productivity. In each case, the capability crosses borders. And in each case, the manufactured uncertainties that accompany the capability cross borders with identical ease.
The developer in Lagos who gains coding leverage also gains the cognitive hazards that the Berkeley study documents: work intensification, task seepage, attentional fracturing. She gains the manufactured uncertainty of boundary erosion — the tool is always available, the prompting interface fits on a phone, every idle moment is a potential work moment. She gains the manufactured uncertainty of depth displacement — the geological layers of understanding that were deposited by the struggle of learning to code from scratch are not deposited by the conversation with the AI that writes the code for her. She gains the manufactured uncertainty of judgment concentration — every risk that a team would have distributed now falls on her alone.
These hazards travel with the capability because they are produced by the same mechanism. The collapse of the imagination-to-artifact ratio that makes capability accessible also makes risk portable. The boundary removal that lets the developer in Lagos build what previously required a team in San Francisco also lets the cognitive contamination of AI-saturated work environments reach Lagos with the same efficiency.
The asymmetry between the global reach of cognitive risk and the local capacity for risk management is the defining structural feature of the AI risk society. The risks are produced globally — by tools built in one jurisdiction, trained on data from dozens of jurisdictions, and deployed across all jurisdictions simultaneously. The dams that might mitigate these risks are built locally — by individual organizations, within individual regulatory frameworks, according to individual cultural norms and institutional capacities.
The EU AI Act is a local dam. It applies within the jurisdictions of the European Union. It establishes categories of risk, mandates transparency requirements, and creates enforcement mechanisms — within its jurisdiction. The developer in Lagos is not within its jurisdiction. The student in Dhaka is not within its jurisdiction. The engineer in Trivandrum is not within its jurisdiction. The cognitive risks they experience are produced by the same tools the EU AI Act attempts to govern, but the governance does not extend to their experience of those risks.
This is not a failure of the EU AI Act. It is a structural limitation of any governance framework organized at the national or regional level, applied to risks that are produced and distributed globally. The Act governs the supply side — what AI companies within or operating within the EU may and may not do. It does not, and structurally cannot, govern the demand side — what happens to the cognition of the hundreds of millions of people outside the EU who use the same tools under no equivalent governance framework.
The American executive orders on AI share this structural limitation. They apply within the jurisdiction of the United States. They establish principles, direct federal agencies, and create reporting requirements — within their jurisdiction. The cognitive risks produced by American AI tools, deployed in countries whose regulatory capacity is a fraction of the American capacity and whose institutional infrastructure provides no equivalent protections, are outside the scope of the orders.
Beck called this the cosmopolitan condition — the condition of living in a world where the most consequential risks are global and the most powerful governance mechanisms are national. The condition is not new. It has characterized the risk society since the mid-twentieth century, when nuclear weapons made it possible for a single nation's decision to end civilization, and when industrial pollution made it possible for one nation's economic choices to degrade the biosphere that all nations shared. What is new about the AI moment is the speed at which the cosmopolitan condition is intensifying, and the intimacy of the risks it produces.
Environmental risks operate on the body through the environment — through contaminated water, polluted air, degraded ecosystems. They are mediated by physical systems whose behavior can, in principle, be modeled and whose contamination can, in principle, be measured. The measurement creates a basis, however contested, for governance — for international agreements on emission standards, pollution limits, and environmental protection protocols.
Cognitive risks operate on the mind through the tools it uses — through the design of interfaces, the calibration of response times, the behavioral patterns that millions of daily interactions deposit in the neural pathways of their users. They are mediated by cognitive systems whose contamination cannot be measured by any external instrument, because the contamination operates within the instrument of measurement itself. There is no Geiger counter for the erosion of depth. There is no atmospheric monitoring station for the atrophy of questioning capacity. The risks are real, systematic, and global, and the instruments that might make them visible — the longitudinal studies, the cognitive assessments, the institutional monitoring of population-level thinking patterns — do not exist at the scale the risks demand.
The cosmopolitan imperative of the cognitive risk society is the demand for governance structures that match the scale of the risks they address. This does not mean world government — a solution that is neither feasible nor desirable, and that Beck explicitly rejected as a misunderstanding of the cosmopolitan project. It means transnational coordination on specific dimensions of cognitive risk: standards for AI transparency that apply across jurisdictions, so that the cognitive implications of design choices are visible to all populations affected by them, not only those within the jurisdiction of a particular regulatory framework. Frameworks for the distribution of cognitive risk that ensure the benefits and hazards of AI tools are not distributed along the same lines as every previous technological advantage — concentrated benefits in the developed world, distributed risks in the developing world. Institutional innovations that bring the sub-political design decisions of AI companies into a form of accountability that extends beyond shareholder value to the cognitive well-being of the global populations their products reshape.
The Orange Pill acknowledges the barriers to global democratization of AI capability — connectivity, hardware costs, language bias, the cost of inference. It notes that these barriers will fall as models reach capability thresholds and are then optimized for efficiency. This is likely correct as a prediction about capability. It is insufficient as an analysis of risk, because the falling of barriers to capability is simultaneously the falling of barriers to hazard. The efficiency optimization that makes AI accessible in Lagos also makes the cognitive contamination of AI-saturated work environments accessible in Lagos, and the institutional protections that might buffer the contamination — the educational frameworks, the labor protections, the attentional ecology practices — are not part of the efficiency optimization. They must be built separately, by institutions that do not yet exist, in jurisdictions whose governance capacity is already strained by the demands of the first risk society.
The construction of cosmopolitan governance for cognitive risk is not a utopian project. It is a practical necessity, demanded by the structural mismatch between global risks and local governance. Its foundations might include transnational standards for AI design transparency — not what the AI does, but what design choices were made about how it interacts with human cognition, and what the cognitive implications of those choices are likely to be. They might include cosmopolitan frameworks for cognitive impact assessment — the equivalent of environmental impact assessments, applied to the cognitive environment, and required before AI tools are deployed at population scale in any jurisdiction. They might include institutional mechanisms for the representation of affected populations in the sub-political spaces where AI design decisions are made — not as observers or advisors but as participants whose cognitive well-being is weighted alongside engagement metrics and competitive benchmarks in the evaluation of design choices.
These are foundations, not buildings. The architecture of cosmopolitan cognitive governance will require decades of construction, and the construction will proceed against the resistance of every institution whose power derives from the current arrangement — the technology companies whose sub-political autonomy would be constrained, the national governments whose jurisdictional authority would be shared, the expert systems whose monopoly on risk definition would be challenged. The resistance is predictable, and the history of cosmopolitan governance — from the Montreal Protocol to the Paris Agreement — suggests that the resistance is overcome only when the boomerang returns with sufficient force to make the cost of inaction exceed the cost of coordination.
The cognitive boomerang is returning. The signs are visible — in the burnout rates documented by the Berkeley study, in the productive addiction confessed by the builders themselves, in the erosion of educational outcomes that educators are beginning to report, in the subtle but measurable decline of the cognitive depth that societies depend upon for the judgments that matter most. Whether the return will be forceful enough to generate the political will for cosmopolitan governance, or whether the boomerang will be individualized — attributed to personal failures, treated with personal remedies, managed at the level of the individual rather than addressed at the level of the system — is the question on which the future of the cognitive risk society depends.
The cesium did not stop at the border. Neither will the cognitive contamination of AI-saturated environments. The governance must eventually match the reach of the risk, or the risk will outrun every local dam that individual jurisdictions construct, however well-intentioned and however structurally sound within their own borders.
Every abstraction in the history of computing removed difficulty at one level and relocated it upward. The Orange Pill calls this ascending friction, and the thesis is among the book's most structurally important contributions. When assembly language gave way to compilers, the programmer no longer managed memory addresses by hand, but the systems built on top of compilers were more complex than anything assembly could have supported, and the decisions required to architect those systems were harder — not easier — than the decisions the compiler had automated away. When cloud infrastructure abstracted away server management, the practitioner no longer swapped drives at three in the morning, but the scaling strategies and resilience architectures that cloud computing made possible demanded a kind of systems thinking that the server administrator's narrower role had never required.
The thesis is correct. The difficulty ascends. The work at the higher level is genuinely harder, more demanding of judgment, more reliant on the integrative thinking that The Orange Pill identifies as the scarce resource of the AI economy.
But the thesis has a shadow, and the shadow is risk.
When friction ascends, the consequences of failure ascend with it. This is not a minor qualification appended to an otherwise optimistic argument. It is a structural feature of ascending friction that transforms the risk calculus of every transition it describes.
An assembly programmer who made an error produced a local failure. A single program crashed. A single user was inconvenienced. The blast radius of the error was contained by the narrowness of the capability the error had corrupted. The programmer who mismanaged a memory address damaged a specific, bounded piece of functionality, and the damage was visible, immediate, and repairable.
A cloud architect who makes an error produces a systemic failure. Thousands of services go down simultaneously. Millions of users are affected. The blast radius of the error is proportional to the capability the architecture supports, and the capability is enormous — precisely because the lower-level frictions that once bounded it have been abstracted away. The error at the higher level is not merely more consequential in degree. It is more consequential in kind. The failure propagates through systems of systems, cascading across dependencies that the architect may not have fully mapped, producing emergent consequences that no single actor anticipated because no single actor had visibility into the full scope of the interconnection.
Charles Perrow documented this dynamic in *Normal Accidents*, published in 1984, two years before Beck's *Risk Society*. Perrow studied catastrophic failures in complex technological systems — nuclear power plants, chemical processing facilities, air traffic control — and concluded that accidents in tightly coupled, highly complex systems are not aberrations. They are structural features. The same complexity and tight coupling that make the systems powerful also make catastrophic failure an inevitable, statistically normal event. The question is not whether the system will fail but when, and how widely the failure will propagate through the interdependencies that the system's designers could not fully anticipate.
The AI moment extends Perrow's analysis into the cognitive domain. When a developer uses Claude Code to build a system whose architecture she has described but not implemented by hand — whose internal logic was generated through conversation rather than through the sequential, friction-rich process of writing code line by line and debugging it error by error — the system may work correctly. It may work beautifully. The architecture may be sound. The logic may hold under testing. And the developer, freed from implementation friction, may build something more ambitious, more capable, and more consequential than anything she could have built through manual coding.
But the developer's understanding of the system she has built is structurally different from the understanding she would have acquired through manual implementation. The geological layers of understanding that *The Orange Pill* describes — the embodied knowledge deposited through hours of debugging, the intuitive sense of how the system behaves under stress, the feeling for where the architecture is fragile and where it is robust — have not been deposited. The system exists. The understanding does not. And the gap between the system's capability and the builder's comprehension of that capability is a manufactured risk — produced by the same ascending friction that makes the capability possible.
When the system fails — and Perrow's analysis suggests that in complex, tightly coupled systems, failure is a structural feature rather than an avoidable deficiency — the developer who built it through AI-assisted conversation may lack the embodied understanding required to diagnose the failure, to trace its propagation through the system's interdependencies, to identify the architectural decision that made the failure possible. The ascending friction that freed her from implementation also freed her from the understanding that implementation deposits, and the understanding is precisely what she needs when the system breaks.
The risk society framework adds a dimension that neither *The Orange Pill*'s ascending friction thesis nor Perrow's normal accidents theory fully captures. The manufactured risks of ascending friction are not merely technical — they are cognitive, and they accumulate at the population level.
When one developer builds a system she does not fully understand, the risk is local. When an entire generation of developers builds systems through AI-assisted conversation, the risk is systemic. The aggregate understanding of the global developer population shifts — from embodied knowledge built through friction to extracted knowledge produced through conversation. The shift is invisible in any individual case. Each developer's system works. Each developer's productivity is higher. Each developer, individually, appears more capable than her predecessor who built through manual coding.
The systemic risk appears only at the level of the aggregate — when the collective capacity of the developer population to diagnose and repair complex system failures has been quietly eroded by the same process that made each individual developer more productive. The erosion is invisible because it manifests not as a reduction in output but as a reduction in resilience — the capacity of the system to recover from the failures that Perrow's analysis tells us are inevitable.
This risk calculus applies beyond software development. The lawyer who uses AI to draft briefs is freed from the implementation friction of legal writing, but the understanding of precedent that manual drafting deposits is not deposited by AI-assisted drafting, and the understanding is what she needs when a novel case falls outside the patterns the AI has learned. The physician who uses AI for diagnostic support is freed from the friction of differential diagnosis, but the clinical judgment that emerges from the slow, friction-rich process of considering and eliminating possibilities one by one is not the same as the judgment that reviews an AI-generated list of possibilities. The executive who uses AI for strategic analysis is freed from the friction of data synthesis, but the strategic intuition that emerges from the slow accumulation of domain-specific pattern recognition is not the same as the analysis that the tool produces.
In each case, the ascending friction thesis holds: the work remaining for the human is harder, more demanding of judgment, more consequential. And in each case, the risk calculus is the same: the consequences of failure at the higher level are more severe than the consequences of failure at the lower level, and the capacity to manage failure at the higher level may be compromised by the same process that relocated the work there. The tool that elevates the human to the judgment layer also, potentially, depletes the human's capacity for the judgment that the elevated layer demands.
Beck's concept of manufactured uncertainty applies here with particular force. The risk is not produced by a failure of the tool or a failure of the user. It is produced by the structural logic of the transition itself — by the same ascending friction that generates the capability. The developer who builds more ambitiously because the tool handles implementation is not making an error. She is responding rationally to the capabilities available to her. The lawyer who drafts more efficiently because the tool handles legal writing is not being lazy. She is optimizing her practice in the way the market rewards. Each individual decision is rational. The systemic risk emerges from the aggregate of rational individual decisions, none of which, taken singly, produced the hazard.
This is the signature of manufactured risk: produced by rational action within a system whose structural logic generates hazards that no individual actor intended, no individual decision produced, and no individual remedy can address.
The response to the risk calculus of ascending friction cannot be the refusal of ascending friction — the Luddite error that *The Orange Pill* rightly identifies as strategically catastrophic. The response must be structural: institutional mechanisms that maintain the capacity for deep understanding alongside the deployment of tools that make deep understanding less necessary for daily production. Mentorship structures that transmit embodied knowledge from experienced practitioners to juniors who may never acquire it through their own friction-rich practice. Diagnostic training that develops the capacity to trace failure through complex systems, even when the systems were built through AI-assisted conversation rather than manual coding. And, most fundamentally, institutional humility — the recognition that the systems being built are more powerful than the understanding of the people building them, and that this gap between capability and comprehension is itself a manufactured risk that demands structural management.
The ascending friction thesis is correct: the difficulty ascends, and the work at the higher level is genuinely harder and more valuable. The risk society addendum is equally correct: the consequences of failure ascend alongside the difficulty, and the capacity to manage those consequences may be compromised by the same process that relocated the work. The amplifier carries the error as faithfully as it carries the insight, and the error at the top of the tower falls further than the error at the bottom.
---
The architecture of cognitive risk management that the AI moment demands does not yet exist. This is not a statement of pessimism. It is a statement of structural fact, and the distinction matters because the response to a structural absence is construction, not despair.
*The Orange Pill* prescribes dams. The prescription is genuine, practical, and, within its scope, correct. Individual dams: self-knowledge, attentional discipline, the willingness to ask whether productive intensity is flow or compulsion. Organizational dams: AI Practice frameworks, structured pauses, protected mentoring time, the deliberate maintenance of spaces where AI tools are set aside and people engage in the friction-rich deliberation that develops judgment. National dams: educational reform, regulatory frameworks, the reorientation of curricula from teaching students to produce toward teaching them to question.
Each of these dams addresses a real dimension of manufactured cognitive risk. Each is necessary. And each, when examined through the risk society framework, is revealed to be structurally insufficient — not because it is poorly designed, but because the risks it addresses exceed the scale at which it operates.
The individual dam protects the individual. The developer who practices attentional ecology, who maintains boundaries between work and rest, who cultivates the self-knowledge to distinguish flow from compulsion — this developer is better protected than the developer who does not. But her individual dam does nothing to alter the cognitive environment within which she operates. The tool is still designed for sub-second response. The organizational culture still rewards visible productivity. The competitive landscape still punishes the pause. Her dam protects her against the current. It does not alter the current.
The organizational dam protects the organization. The company that builds AI Practice into its workflows — that mandates structured pauses, protects mentoring time, sequences work to prevent the parallelization that fractures attention — this company's workforce is better protected than a workforce operating without these structures. But the organization cannot control the cognitive environment that its employees inhabit outside work hours, or the cognitive environment that its competitors create, or the design choices made by the AI companies whose tools it deploys. The organizational dam redirects the flow within the organization's boundaries. It does not affect the flow beyond them.
The national dam protects the nation's citizens — to the extent that national governance can reach the global systems that produce cognitive risk. The EU AI Act establishes transparency requirements, creates risk categories, and mandates accountability mechanisms. Within the EU's jurisdiction, these structures have real force. They represent genuine governance, built through democratic process, backed by enforcement capacity. They are among the most serious attempts in the world to construct institutional dams adequate to AI risk.
They do not reach the developer in Lagos. They do not reach the student in Dhaka. They do not reach the engineer in Trivandrum. And the cognitive risks those individuals experience are produced by the same tools, through the same mechanisms, generating the same manufactured uncertainties as the risks the EU AI Act addresses within its borders.
The structural insufficiency of local dams for global risks is the central governance problem of the risk society. Beck spent the last two decades of his career developing the framework of cosmopolitan governance — not world government, which he explicitly rejected, but the institutional architecture through which transnational risks can be managed through transnational coordination without requiring the dissolution of national sovereignty.
Applied to the cognitive risk society, cosmopolitan governance would require institutional innovations that do not currently exist but whose outlines can be discerned from the precedents of previous risk governance achievements.
The first precedent is the Montreal Protocol of 1987, which phased out the production of ozone-depleting substances through a mechanism that no single nation could have implemented alone. The Protocol succeeded because it addressed a risk that was both scientifically demonstrable and cosmopolitan in its effects — the ozone hole threatened everyone, regardless of national contribution to the problem — and because it created a framework for differentiated responsibility, allowing nations at different levels of development to phase out the substances at different rates while maintaining a shared commitment to the common goal.
A cognitive equivalent of the Montreal Protocol would establish transnational standards for the design of AI tools — not what the tools do, but how they interact with human cognition. Standards for response latency that preserve the cognitive space for reflection. Standards for default availability that prevent the colonization of rest by always-accessible prompting. Standards for transparency that make the cognitive implications of design choices visible to the populations affected by them. These standards would apply across jurisdictions, creating a floor of cognitive protection beneath which no deployment could fall, while allowing national and regional frameworks to build additional protections above the floor.
The second precedent is the environmental impact assessment, a governance tool that originated in the United States National Environmental Policy Act of 1969 and has since been adopted, in various forms, by nearly every nation on earth. The environmental impact assessment requires that major projects undergo systematic evaluation of their environmental consequences before they are approved — not after they have been built and the consequences have materialized, but before, when the design can still be altered to mitigate foreseeable harms.
A cognitive impact assessment, applied to AI tools before they are deployed at population scale, would require systematic evaluation of their effects on the cognitive ecology of the populations that will use them. What does the tool's response latency do to the user's tolerance for uncertainty? What does the tool's default availability do to the boundary between work and rest? What does the tool's confidence calibration do to the user's questioning instinct? These are empirical questions, answerable through the same kinds of longitudinal studies and controlled experiments that environmental impact assessments employ. They are not currently asked, because no institutional framework requires them to be asked, and no governance mechanism ensures that the answers, if obtained, would alter the design choices that produce the cognitive effects.
The third precedent is the Basel Accords on banking regulation, which established transnational standards for capital adequacy and risk management in the financial system. The Accords did not eliminate financial risk. They did not prevent the 2008 crisis, which revealed inadequacies in the regulatory framework that the Accords had established. But they created an institutional architecture — a set of shared standards, reporting requirements, and accountability mechanisms — that made the systemic risks of the global financial system visible and, to some extent, manageable across jurisdictions that would have been incapable of managing them individually.
A cognitive equivalent of the Basel Accords would establish transnational standards for the assessment and reporting of cognitive risk in AI systems. What are the measured effects of this tool on the attention patterns of its user population? What are the measured effects on the depth of understanding in domains where the tool is deployed? What are the measured effects on the questioning capacity of students who use the tool in educational settings? These measurements would be required of AI companies operating across jurisdictions, reported to a transnational body with the authority to establish minimum standards, and made available to the national regulatory frameworks that govern deployment within their borders.
Each of these precedents demonstrates that cosmopolitan governance is not utopian. It has been achieved — imperfectly, incompletely, with the inevitable compromises of multilateral negotiation — in domains where the risks were global and the governance was local and the mismatch between them produced consequences that no national framework could manage alone. The cognitive risk society presents the same structural mismatch, and the history of risk governance suggests that the mismatch will be addressed only when the consequences become severe enough to generate the political will for coordination.
The question — Beck's question, and the question this book has been building toward — is whether the consequences can be anticipated and the coordination begun before the cognitive Chernobyl that would make it unavoidable. Whether the dams can be built proactively, through the exercise of foresight and political will, rather than reactively, in the aftermath of a crisis whose costs could have been averted by earlier action.
The precedents offer cautious grounds for both hope and skepticism. The Montreal Protocol was proactive — it addressed the ozone risk before the consequences became catastrophic, guided by scientific evidence of a trajectory that, left unaltered, would have produced a catastrophe. The response to the 2008 financial crisis was reactive — the regulatory reforms that followed were motivated by a crisis that the existing frameworks had failed to prevent. The difference between proactive and reactive governance is measured in human cost — the cost borne by the populations who experience the crisis that proactive governance would have averted.
The cognitive risks of AI are accumulating. The manufactured uncertainties are being produced at scale, distributed globally, individualized in their consequences, and obscured by the self-concealing nature of cognitive contamination. The dams prescribed by *The Orange Pill* — individual, organizational, national — are necessary and should be built. But they are not sufficient. The risks they address exceed the scale at which they operate, and the governance adequate to the risks requires coordination at a scale that matches the reach of the tools that produce them.
Beck would recognize the moment. The manufactured uncertainties are real. The organized irresponsibility is structural. The individualization of consequences is systematic. The sub-political spaces where the most consequential decisions are made remain unaccountable. And the cosmopolitan governance that the situation demands is absent — not because it is impossible, but because the political will for its construction has not yet been generated by consequences severe enough to overcome the institutional resistance of every actor that benefits from the current arrangement.
The architecture must be built. The foundations are available — in the precedents of environmental governance, financial regulation, and public health coordination. The materials are available — in the research tools, the institutional models, and the democratic processes that have been developed through a century of grappling with manufactured risks of other kinds. What is not yet available is the recognition, sufficiently widespread and sufficiently urgent, that the cognitive risks of AI constitute a manufactured hazard of the same order as the environmental and financial hazards that previous generations of governance were built to address.
That recognition will come. The boomerang is returning. The question is whether it arrives as a catalyst for proactive construction or as the aftermath of a crisis that reactive construction can only partially repair. The answer depends, as it always does in the risk society, on whether the institutions that produce the risk can be compelled to participate in its governance before the cost of not participating exceeds the cost of the risk itself.
The dams must be built at every level — individual, organizational, national, and cosmopolitan. The first three are within reach. The fourth requires construction of a kind that no generation has yet accomplished for cognitive risk, though previous generations have accomplished it for risks of comparable scale and comparable complexity. The precedents exist. The tools exist. The understanding exists. What remains is the will, and the will depends on whether societies can perceive, before the crisis, the structural nature of the hazards that are accumulating in the self-concealing silence of the cognitive environments their tools have built.
---
The guarantee nobody gives you about parenthood is that your children will ask questions you cannot answer. Not the kind you deflect with a joke or a search engine query — the kind that exposes the edges of everything you thought you understood.
My son's question at dinner — whether AI would take everyone's jobs — was one of those. I gave him an answer I believed at the time, an answer about jobs evolving and ascending and the human capacity for judgment remaining irreplaceable. I still believe that answer. But after spending months inside Ulrich Beck's framework, I understand something I did not understand when I gave it: I was answering the wrong question.
My son was not asking about employment statistics. He was asking about risk. Who bears the cost of this transition? Who decided it would happen this way? And if the people making the decisions cannot see the consequences from inside their fishbowls, who is watching out for the people downstream?
Beck died before any of this arrived. He never used Claude. He never felt the vertigo of the orange pill. But his framework fits the AI moment with a precision that unnerves me, because it suggests that the risks I describe throughout *The Orange Pill* — the productive addiction, the erosion of depth, the colonization of rest, the atrophy of the questioning instinct I call the most human thing we possess — are not personal challenges to be managed through individual discipline. They are manufactured uncertainties, produced by the same systems that produce the capability I celebrate, as structurally inseparable from the benefit as exhaust is from combustion.
That structural inseparability is the idea I cannot stop thinking about. I wrote *The Orange Pill* as a builder's book — a book about what to do, how to build, where to place the next stick in the dam. Beck's framework does not invalidate that project. But it exposes a limitation I was not equipped to see from inside it. The dams I prescribe — self-knowledge, attentional ecology, organizational boundaries, educational reform — are real and necessary. They protect individuals and organizations from the cognitive current. They do not alter the current.
The current is structural. It is produced by institutions optimized for capability, governed at speeds that cannot match the velocity of deployment, and distributed across borders that no local governance can reach. The developer in Lagos. The student in Dhaka. The engineer in Trivandrum. The cognitive risks they experience are manufactured by the same tools I use, and the individual dams I build for myself and my team do not extend to them. My dam protects my stretch of the river. The river flows on.
Beck's bicycle brake haunts me — the image of ethics playing the role of a bicycle brake on an intercontinental airplane. Every corporate AI ethics board I have encountered, every voluntary safety commitment, every internal review process operates at a speed and a scale that bears no meaningful relationship to the velocity and reach of the systems it is supposed to govern. The brake is real. It functions. It can slow a bicycle. It has no purchase on the vehicle it has been asked to stop.
What I take from Beck, and what I want to leave with anyone who has followed this analysis, is not despair. It is a recognition that changes what I build next. The dams I described in *The Orange Pill* are necessary. They are the beginning of the response, not its completion. The completion requires structures I cannot build alone, structures that no single company or nation can build alone — the transnational architectures of cognitive risk governance that this book's final chapters describe in outline and that the next decade must construct in practice.
My son's question deserved a better answer than the one I gave. The better answer is: I do not know. But I know that the cost of not knowing is not borne equally, and the people who build the tools have an obligation to build the governance alongside them, at a scale that matches the reach of what they have made.
The river does not stop. The boomerang returns. And the dams — the real dams, the structural ones, the ones adequate to the force of what is coming — are the work of a generation, not an individual.
They need building now.
The most dangerous byproduct of AI isn't what the technology does wrong.
It's what it manufactures when it works exactly as designed.
When Ulrich Beck published *Risk Society* in 1986, Chernobyl proved his thesis in radioactive fallout: modern institutions produce hazards as systematically as they produce wealth, and the same systems that generate the benefit generate the contamination. Now apply that framework to AI — not to the apocalyptic scenarios, but to the quiet, structural risks manufactured by tools that succeed. The productive addiction that builders cannot distinguish from ambition. The erosion of cognitive depth produced by the same frictionlessness that liberates. The organized irresponsibility of an industry whose safety mechanisms operate at bicycle-brake speed on an intercontinental vehicle. This book brings Beck's most penetrating concepts — manufactured uncertainty, reflexive modernization, sub-politics, the boomerang effect — to bear on the AI moment he never lived to witness, revealing why individual discipline is necessary but structurally insufficient, and why the governance the moment demands must match the global reach of the risks it produces.

A reading-companion catalog of the 29 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Ulrich Beck — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →