Beauvoir identified bad faith as the central mechanism by which humans flee from the anguish of freedom. Because acknowledging freedom means accepting that we are the authors of our values and the source of meaning in a meaningless universe, the psychological pull toward determinism—'I had no choice'—is enormous. The AI transition intensifies this pull by providing sophisticated justifications: the innovation imperative, the competitive necessity of adoption, the claim that refusing AI is refusing the future. These narratives are not lies but systematic self-deceptions—they function to relieve the builder of the burden of asking whether the acceleration she participates in serves purposes she can defend.
The organizational form of bad faith appears when companies treat AI adoption as strategic necessity rather than strategic choice. Christensen's framework reveals that incumbents claiming 'we have no choice but to disrupt ourselves' are refusing to examine whether self-disruption serves long-term value or short-term survival instincts. Beauvoir would diagnose this as institutional bad faith—the collective refusal to acknowledge that organizational responses to AI involve genuine moral choices about who benefits, who bears costs, and what kind of work culture is being constructed. The claim 'the market demands it' converts a political-economic decision into a natural fact, precisely the operation bad faith performs.
The antidote is not heroic individualism but honest self-appraisal practiced at individual and institutional levels. The builder confronting her own bad faith must ask: Am I using this tool because it serves the work, or because it relieves me of a difficulty I would rather not face? Am I building this feature because it should exist, or because the tool makes it easy to build? The organization confronting its collective bad faith must ask: Are we adopting AI to expand human capability or to reduce labor costs? Are we preserving the conditions under which our members develop judgment, or are we optimizing them away in pursuit of quarterly metrics? These questions have no algorithmic answers. They require the sustained, uncomfortable, non-optimizable practice of examining our commitments and accepting responsibility for them.
Bad faith (mauvaise foi) was Sartre's concept, developed in Being and Nothingness (1943) as the denial of either facticity (our embeddedness in circumstances) or transcendence (our capacity to go beyond them). Beauvoir refined the concept by showing that bad faith is not merely individual psychology but is structurally induced by oppressive situations: women in patriarchal societies were encouraged into bad faith, taught to see their confinement as natural rather than constructed. Applied to AI, the insight is that technology discourses similarly naturalize what are actually choices, converting moral decisions into technical inevitabilities and thereby producing bad faith at civilizational scale.
Technology does not determine. AI shapes possibilities and probabilities but does not eliminate choice; the builder who claims 'I must' is refusing to acknowledge 'I choose' and thereby evading responsibility for the consequences.
Market as constructed. Market demands are not natural forces but human arrangements whose rules reflect power and can be contested; citing market necessity is bad faith when the speaker benefits from the market's current configuration.
Collective bad faith. Organizations adopt AI 'because everyone is' or 'to stay competitive' without examining whether the adoption serves articulated purposes—institutional self-deception that distributes responsibility until it disappears.
The anguish of acknowledgment. Recognizing one's choices as choices produces discomfort—the weight of responsibility, the risk of error—and bad faith functions to relieve this discomfort at the cost of honesty and moral seriousness.
Antidote through interrogation. The practice of asking 'Why am I building this?' and refusing answers that cite external necessity—demanding justifications that acknowledge values, trade-offs, and the agent's genuine freedom to choose otherwise.