Accelerationism names a family of positions united by the claim that the appropriate response to technological and social change is intensification rather than restraint. The term has roots in late-twentieth-century French theory (Deleuze and Guattari's work on capitalism), was developed by the UK-based CCRU in the 1990s, and has since branched into distinct right-wing, left-wing, and centrist variants. Its contemporary Silicon Valley form, effective accelerationism (e/acc), treats AI development as the paradigmatic case in which acceleration is morally required and treats deceleration as the actual existential risk. Andreessen's Techno-Optimist Manifesto articulates this contemporary accelerationist position in its most compressed public form.
The term's genealogy runs through several distinct traditions. Deleuze and Guattari's Anti-Oedipus (1972) contained the enigmatic suggestion that capitalism's tendencies should be accelerated rather than resisted — a passage whose interpretation remains contested. The Cybernetic Culture Research Unit at Warwick in the 1990s, around Nick Land and Sadie Plant, developed a more explicit accelerationist framework that anticipated many later formulations. Left accelerationism — articulated by Nick Srnicek and Alex Williams in their 2013 manifesto — argues that technological acceleration under alternative social arrangements could produce post-capitalist outcomes.
Effective accelerationism, or e/acc, emerged in 2022 as a Silicon Valley-adjacent movement explicitly opposed to AI safety discourse. Its proponents, largely pseudonymous accounts on Twitter, argued that AI should be developed as fast as possible, that alignment concerns were either overblown or would be productively addressed through scale, and that deceleration was the real risk to humanity. The movement's aesthetic, combining thermodynamic metaphors with explicit hostility to regulation, positioned it as the natural opponent of the effective altruist AI safety community.
Andreessen's Techno-Optimist Manifesto can be read as the most prominent public statement of an accelerationist position from a figure with institutional power to implement it. The manifesto does not use the term accelerationism explicitly but adopts its framework: acceleration is morally required, deceleration is harmful, the enemies of technology are the enemies of humanity.
The accelerationist framework faces several sustained critiques. The distributional critique argues that acceleration's benefits and costs are asymmetrically distributed, and that treating acceleration as uniformly desirable privileges those who capture the gains over those who absorb the costs. The epistemic critique argues that acceleration forecloses the deliberation needed to determine whether specific technological paths serve broad human interests. The alignment critique, advanced by the AI safety community, argues that acceleration in specific high-stakes domains may produce irreversible outcomes whose costs make the framework self-refuting.
The Andreessen — On AI volume treats accelerationism as an incomplete framework rather than a mistaken one. It argues that acceleration is the correct default in most cases, but that this default requires the kind of honest accounting of costs that pure accelerationism resists.
The term and its underlying framework emerged across multiple traditions in the twentieth century — Marxist, post-structuralist, and cybernetic. The contemporary Silicon Valley form crystallized in 2022–2023 through anonymous Twitter accounts, Guillaume Verdon's public advocacy under the handle Beff Jezos, and Andreessen's October 2023 manifesto. By 2024, the movement had become an identifiable faction within the AI discourse, opposed to the AI safety community on most policy questions.
Acceleration as moral duty. The claim that technological intensification is not optional but required, because the counterfactual is perpetuation of existing deprivation.
Deceleration as risk. The inversion of standard AI safety framing: the argument that slowing technological development is itself the existential risk rather than the remedy for one.
Market as accelerator. The claim that markets are the most effective mechanism for producing acceleration, and that interference with market dynamics produces worse outcomes even when well-intentioned.
Enemy identification. The rhetorical move, central to e/acc and the Techno-Optimist Manifesto, of naming specific opponents — regulators, academics, safety researchers — as adversaries rather than interlocutors.
Thermodynamic framing. The distinctive e/acc aesthetic of treating acceleration as a physical tendency of the universe expressed through technological and economic activity — a framing whose rhetorical function exceeds its analytical content.
Accelerationism is currently the subject of intense debate across the AI policy, technology, and philosophical communities. Its defenders argue that its framework clarifies questions that other positions obscure. Its critics argue that its compressed form precludes the qualified argument the subject demands. The specific debate about whether the AI moment requires acceleration, deceleration, or some configuration of selective acceleration with institutional reform remains unresolved and will likely remain so for years.