The greengrocer's algorithm is the Václav Havel — On AI simulation's term for the rational decision procedure that every participant in a system of distributed compliance performs. The algorithm is simple: calculate the cost of compliance versus the cost of refusal, recognize that the asymmetry is extreme (compliance costs nothing or very little; refusal costs everything or enough to matter), and choose compliance. The choice is rational. The aggregate outcome is a system sustained not by force but by millions of distributed rational calculations, each arriving at the same conclusion. The algorithm operates automatically—it does not require deliberation. The greengrocer hangs the sign without thinking, the way he locks the door at night. The knowledge worker adopts AI tools without deliberation, because every colleague has adopted them, because productivity metrics reward their use, because professional survival depends on competitive capability, and because the calculation—adopt or decline—arrives at adopt with such reliability that the decision ceases to feel like a decision and becomes simply what one does.
The simulation applies the greengrocer's algorithm to the AI transition by identifying its operation in the decision-making of developers, teachers, parents, and organizational leaders. The developer calculates: If I adopt, my productivity increases, my metrics improve, my professional reputation is maintained. If I do not adopt, my productivity stagnates, my metrics decline, my colleagues regard me as behind. The calculation is not hypothetical. Every element is grounded in observable reality. The productivity gains are real. The professional consequences of non-adoption are real. The rational response is adoption. And because the response is rational, the developer experiences it as choice—she is not being forced, she is choosing the better option. The algorithm conceals the fact that the options themselves were structured by a system whose logic she did not design and cannot alter.
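The calculation the text describes can be sketched as a tiny decision procedure. This is a minimal illustrative sketch, not anything from Havel or the simulation itself: the function name, the numeric costs, and the payoff scale are all hypothetical, chosen only to make the asymmetry of consequences visible.

```python
def greengrocer_algorithm(cost_of_compliance: float,
                          cost_of_refusal: float) -> str:
    """Return the rational choice under an asymmetric cost structure.

    The procedure is trivial by design: compare the two costs and
    pick the cheaper option. The interest lies in the inputs, which
    the surrounding system has already structured so that one answer
    is always cheaper.
    """
    return "comply" if cost_of_compliance < cost_of_refusal else "refuse"


# The developer's inputs as the text frames them (hypothetical values):
# adoption costs little; non-adoption risks productivity, metrics,
# and professional reputation.
choice = greengrocer_algorithm(cost_of_compliance=0.1, cost_of_refusal=10.0)
print(choice)  # -> comply
```

The point of the sketch is that nothing in the function itself is coercive; the output is fully determined by an input asymmetry the chooser did not design.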
The algorithm's power resides in its distribution. No central authority runs the calculation for everyone. Each participant runs it independently, in private, arriving at the same conclusion through the same reasoning. The decentralization is what makes the system resilient: there is no single point of enforcement, no authority whose commands could be resisted, no visible oppressor against whom resistance could be organized. The oppression, such as it is, is structural—built into the arrangement of incentives rather than imposed by any agent. The developer who adopts AI is not being oppressed by her employer, by the technology companies, or by the market. She is responding to a structure that all of those entities participate in sustaining but that none of them individually controls.
The simulation's most precise diagnosis is the identification of what the algorithm conceals: the third option. The greengrocer's algorithm presents two choices—comply or refuse—and structures the consequences so that only one choice is viable. But Havel's framework reveals a third option the binary conceals: comply, and tell the truth about what compliance costs. The greengrocer can hang the sign and, in his private conversations, acknowledge that the sign is ritual. The developer can adopt the tools and, in her documentation, record what the tools have cost alongside what they have produced. This third option does not resolve the structural problem, but it preserves the capacity to perceive the problem—and perception, in Havel's framework, is the precondition for any genuine alternative.
The concept is the simulation's extension of Havel's analysis into the specific terms of the AI transition. Havel did not use the word "algorithm," but the structure he described—the rational calculation that produces distributed compliance—is algorithmic in the precise sense: a procedure that, given the same inputs (incentive structure, consequences, alternatives), reliably produces the same output (compliance). The simulation's contribution is to name the procedure explicitly and to demonstrate its operation in the contemporary context, where the incentives are not political but economic, and where the performance is not ideological loyalty but technological enthusiasm.
Asymmetry of consequences. The algorithm's core input—compliance costs nothing or little, refusal costs enough to matter—makes the calculation's outcome predictable and renders genuine alternatives structurally invisible.
Distributed rationality. Each participant runs the algorithm independently, arriving at the same conclusion through the same reasoning—creating a system sustained by decentralized compliance rather than centralized enforcement.
Automaticity through repetition. The algorithm, run daily over months and years, becomes automatic—the decision ceases to feel like a decision and becomes simply what one does, what anyone would do.
The concealed third option. The binary (comply or refuse) obscures the possibility of complying while maintaining clarity about what compliance costs—the practice of participation without performance that Havel called living in truth.
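The two properties above—determinism and distribution—can be combined in one small simulation. This is a hedged illustrative sketch under invented assumptions: the population size, the cost ranges, and the guarantee that every participant's stakes are asymmetric are all hypothetical parameters chosen to mirror the text's claim that independent private calculations converge without any coordination.

```python
import random


def run_algorithm(cost_comply: float, cost_refuse: float) -> str:
    # Deterministic: the same inputs always produce the same output.
    return "comply" if cost_comply < cost_refuse else "refuse"


# Hypothetical population of 10,000 participants. Each runs the
# calculation independently and in private, with individually varying
# stakes -- but the ranges are constructed (an assumption, matching the
# text's premise) so that refusal always costs more than compliance.
random.seed(0)
decisions = [
    run_algorithm(cost_comply=random.uniform(0.0, 1.0),
                  cost_refuse=random.uniform(5.0, 20.0))
    for _ in range(10_000)
]

# No central authority issued a command, yet every independent
# calculation arrives at the same conclusion.
print(set(decisions))  # -> {'comply'}
```

There is no enforcement step anywhere in the loop; the uniformity of the outcome is produced entirely by the shape of the inputs, which is the sense in which the oppression is structural rather than agentive.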