In self-organized critical systems, the triggering event — the specific grain that initiates an avalanche, the earthquake's final stress increment, the blog post that crashes a stock price — is almost irrelevant compared to the system's underlying critical state. The trigger is what happened. The cause is the global configuration that made a large cascade possible. Per Bak insisted on this distinction to counter the human tendency to attribute causation to the most temporally proximate event. When Anthropic's COBOL blog post triggered IBM's largest single-day stock decline in twenty-five years, the blog post was the trigger. The cause was the accumulated market anxiety about AI-driven software obsolescence, the critical state of investor confidence, the correlated repricing mechanisms across the sector. Understanding this distinction prevents the error of focusing on grain-level interventions (controlling which blog posts get published) when pile-level dynamics (the market's critical state) determine outcomes.
The trigger-versus-cause distinction is foundational to seismology, where earthquakes are understood as the release of accumulated tectonic stress rather than being 'caused' by whatever final increment pushed the fault past its failure threshold. A magnitude-8 earthquake releases energy that accumulated over decades or centuries. The final hour's worth of stress accumulation triggers the rupture, but attributing causation to that final hour would be a category error. The cause is the plate-tectonic system's continuous accumulation of strain; the trigger is the moment when accumulated strain exceeded the fault's frictional resistance. Bak generalized this understanding: in every self-organized critical system, the meaningful cause is the critical state, not the triggering perturbation.
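The sandpile model behind this vocabulary is easy to simulate directly. The sketch below is a minimal Python implementation of the standard Bak-Tang-Wiesenfeld rules (grid size, threshold, and drop count are illustrative choices, not from the text): every perturbation is identical, a single grain dropped at a random site, yet once the pile self-organizes to its critical state, the resulting avalanches span orders of magnitude.

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile on an N x N open grid (illustrative
# parameters). A site holding THRESHOLD or more grains topples, sending one
# grain to each of its four neighbors; grains fall off the edges.
N = 20
THRESHOLD = 4

def topple(grid):
    """Relax the grid until no site is at or above THRESHOLD.
    Returns the avalanche size: the total number of topplings."""
    size = 0
    unstable = [(r, c) for r in range(N) for c in range(N)
                if grid[r][c] >= THRESHOLD]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:   # may already have relaxed
            continue
        grid[r][c] -= THRESHOLD
        size += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < N and 0 <= nc < N:   # edge grains are lost
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return size

random.seed(0)
grid = [[0] * N for _ in range(N)]
sizes = []
for _ in range(20000):                # drop identical single grains
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    sizes.append(topple(grid))

# Discard the transient; in the critical regime, identical triggers
# produce avalanches ranging from zero topplings to system-spanning.
late = sizes[10000:]
print("max avalanche:", max(late),
      "median:", sorted(late)[len(late) // 2])
```

The trigger is the same every time; only the pile's configuration varies. That the size distribution is broad while the perturbation is constant is the simulation's version of Bak's point: the explanatory weight sits in the state, not the grain.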
The human cognitive bias toward attributing causation to triggering events produces systematically inadequate responses. After a financial crash, regulators focus on the triggering event (a specific trade, a piece of news, a rumor) and attempt to prevent similar triggers. This intervention is grain-level: removing specific grains, policing specific behaviors. It doesn't address the pile's critical state — the accumulated leverage, the correlated positions, the feedback loops that made a cascade inevitable. After the 2008 crisis, enormous regulatory energy went into preventing the specific mechanisms that triggered that crash (subprime mortgages, credit default swaps). Less energy went into addressing the financial system's self-organization toward criticality, which guarantees that some trigger, somewhere, will eventually initiate the next cascade.
For the AI transition, the distinction matters for both analysis and response. Focusing on Claude Code as 'the' breakthrough that changed everything misattributes causation. Claude Code was a grain landing on a pile at its critical angle. The breakthrough was the pile's state — fifty years of computing abstraction, thirty years of connectivity, a decade of deep learning, years of scaling-law research accumulating grain by grain until the system was poised for phase transition. If not Claude Code in December 2025, a different grain would have triggered a similar transition within months. The timing was specific to the grain; the inevitability was specific to the pile.
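The counterfactual claim here ("a different grain would have triggered a similar transition") can be probed directly in the sandpile model: freeze a single critical configuration and try every possible trigger against it. A minimal sketch, again using illustrative Bak-Tang-Wiesenfeld rules with arbitrary parameters:

```python
import copy
import random

# Illustrative Bak-Tang-Wiesenfeld sandpile (parameters are arbitrary).
N = 20
THRESHOLD = 4

def topple(grid):
    """Relax until stable; return the number of topplings (avalanche size)."""
    size = 0
    unstable = [(r, c) for r in range(N) for c in range(N)
                if grid[r][c] >= THRESHOLD]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < THRESHOLD:
            continue
        grid[r][c] -= THRESHOLD
        size += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < N and 0 <= nc < N:   # open boundaries
                grid[nr][nc] += 1
                if grid[nr][nc] >= THRESHOLD:
                    unstable.append((nr, nc))
    return size

# Drive a fresh pile into its self-organized critical state.
random.seed(1)
grid = [[0] * N for _ in range(N)]
for _ in range(20000):
    grid[random.randrange(N)][random.randrange(N)] += 1
    topple(grid)

# Probe: from the SAME frozen configuration, try every possible trigger site.
results = []
for r in range(N):
    for c in range(N):
        probe = copy.deepcopy(grid)   # each probe starts from the same state
        probe[r][c] += 1
        results.append(topple(probe))

big = sum(1 for s in results if s >= 10)
print(f"{big} of {len(results)} trigger sites start a 10+ toppling avalanche")
```

The question is never which single site "caused" the cascade: many different trigger sites launch sizable avalanches from the same frozen state. Remove any one of them and the others remain, which is the sandpile version of "if not Claude Code in December 2025, a different grain within months."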
The response implication: institutions attempting to govern AI by controlling specific triggering events (banning particular tools, restricting particular capabilities, regulating particular companies) are applying grain-level interventions to a pile-level phenomenon. The interventions might delay specific avalanches by removing specific grains, but they cannot change the pile's critical state. The pile continues accumulating grains through every organization and individual that isn't regulated. The correlation length ensures that grains dropped anywhere contribute to the global critical state. Effective governance must address pile-level dynamics: the competitive pressures driving capability accumulation, the economic incentives accelerating deployment, the educational gaps leaving populations unprepared for ongoing reorganization. Grain-level policies fail in critical systems. Pile-level institutions channel the cascades they cannot prevent.
The distinction between proximate and ultimate causes has philosophical roots in Aristotle's four causes, scientific roots in Darwin's distinction between how and why questions, and practical roots in engineering failure analysis where the triggering event (a spark, a crack, a gust of wind) is distinguished from the underlying conditions (fuel accumulation, material fatigue, structural resonance) that made catastrophic failure possible. Bak's contribution was showing that in self-organized critical systems, the triggering event contributes essentially nothing to explaining the scale of consequences — the entire explanatory weight falls on the system's critical state.
Trigger is proximate, state is ultimate. The final perturbation determines timing; the critical configuration determines magnitude — causally, the latter vastly outweighs the former.
Any trigger would do. At criticality, the specific identity of the triggering grain is nearly irrelevant — if not this one, another would have initiated a cascade of similar scale soon after.
Human bias toward triggers. Cognitive systems evolved to attribute causation to temporally proximate events, systematically misdirecting attention from underlying critical states to superficial triggers.
Grain-level policy fails. Interventions targeting specific triggers (banning tools, restricting capabilities) cannot affect pile-level dynamics determining whether avalanches occur and how they propagate.
Pile-level structures matter. Effective governance addresses the dynamics that produce and maintain criticality — competitive pressures, economic incentives, educational infrastructure — rather than policing which grains fall.