No constitution protects it. No bill of rights enumerates it. No political party defends it. The freedom to do nothing — to exist without producing, to occupy time without converting it into value — has no legal standing because the culture that would need to protect it does not recognize it as a freedom at all. It recognizes it as waste. Odell argues that this unrecognized freedom is the one most threatened by AI, not because AI eliminates the freedom directly but because AI makes the freedom feel worthless. When the tool can amplify everything you are, the choice to not use it feels like the choice to be less. A freedom that feels like diminishment is a freedom that no one exercises. The erosion is therefore invisible: nothing is taken, but the choice to not use the tool becomes, functionally, not a choice at all.
The argument rests on a specific analytic move: the separation of the capacity to refuse from the conditions under which refusal is possible. In liberal political theory, freedom is usually analyzed at the level of capacity — the law does not prohibit the action, therefore the action is free. Odell's argument is that this framing misses the conditions that make the capacity actually usable. The factory worker in 1910 was, in capacity terms, free to refuse the sixteen-hour day. In conditions terms, that refusal meant unemployment, and unemployment meant starvation. Freedom without the conditions to exercise it is not freedom; it is the appearance of freedom.
The AI case is structurally similar but differs in the nature of the coercion. There is no boss demanding that the builder work through lunch. The competitive pressure operates through the accumulated small choices of every other builder, each acting rationally, none intending to coerce anyone. The builder who refuses to use the tool falls behind the builder who does. The builder who maintains boundaries is outperformed by the builder who does not. The penalty is distributed rather than direct, which makes it invisible to frameworks designed to identify direct coercion.
Odell's invocation of the 2023 Hollywood writers' strike is the framework's concrete political exemplar. The writers did not individually refuse AI. They collectively refused, withdrew their labor simultaneously, and forced the studios to negotiate protections that preserved the dignity of human creative work. The protections were extracted through collective action, not granted by enlightened employers. The parallel to the history of labor protections — the forty-hour week, child labor laws, workplace safety — is explicit in Odell's framework.
The dignity Odell invokes at the Sydney Writers' Festival is not sentiment. It is a precise political claim: that human work has a value that cannot be reduced to output, and that the conditions under which the work is performed are themselves part of what is at stake. The writers were not primarily negotiating wages. They were negotiating the terms under which writing would continue to be a recognizable human activity.
The concept emerged from Odell's reading of labor history alongside her observation of the AI-era workplace. Key interlocutors include E.P. Thompson on the moral economy of the English working class, David Graeber on bullshit jobs, and the economist Juliet Schor on time use.
The concept's most public articulation came at the 2023 Sydney Writers' Festival, where Odell rejected the "it's going to happen sooner or later" framing of AI's inevitability and invoked the Hollywood writers' strike as evidence that technological trajectories are political rather than natural.
An unrecognized freedom. The freedom has no legal protection because the culture does not recognize non-production as a legitimate use of time.
Erosion by devaluation, not coercion. AI does not prevent refusal. It makes refusal feel like self-diminishment, which amounts to the same thing.
Structural not personal. The pressure operates through competitive dynamics, not through individual actors, making individual resistance insufficient.
Dignity as the operative standard. The question is not whether humans can still produce but whether they can still choose, with full agency, not to.
Collective protection as the only response. The history of labor rights suggests that such freedoms are protected only when they are collectively claimed and institutionally defended.
The argument has been challenged from both libertarian and accelerationist positions: libertarians deny that structural pressure counts as coercion, while accelerationists argue that the ability to produce at AI scale is an unambiguous human good that makes restrictions on its use regressive. Odell's reply is that neither position takes seriously the distinction between the tool's existence and the obligation to use it maximally, and that the absence of this distinction in contemporary discourse is itself a symptom of the problem.