CONCEPT

Bad Faith in the Age of AI

Mauvaise foi for the AI era: the refusal to acknowledge one's freedom by pretending technology determines outcomes—claiming 'the market demands it,' 'everyone uses AI,' 'I have no choice but to accelerate.'
Bad faith in the age of AI is the existentialist diagnosis of the builder who denies responsibility for her choices by attributing them to external necessity. The concept of mauvaise foi—self-deception about one's own freedom, coined by Sartre and refined by Beauvoir—takes a specific contemporary form: the claim that AI adoption is inevitable, that competitive pressure leaves no alternative, that the technology determines what must be built and how. This is bad faith because the builder always has choices—costly choices, constrained choices, but choices nonetheless. The developer who says 'I must use AI or fall behind' is fleeing from the recognition that she could choose otherwise and is choosing not to. The entrepreneur who builds extractive systems and cites market forces is refusing to acknowledge that markets are human constructions whose rules can be contested and changed. Bad faith converts moral questions into technical problems, treats value commitments as natural facts, and thereby exempts the agent from the responsibility of justification.

In The You On AI Encyclopedia

Beauvoir identified bad faith as the central mechanism by which humans flee from the anguish of freedom. Because acknowledging freedom means accepting that we are the authors of our values and the source of meaning in a meaningless universe, the psychological pull toward determinism—'I had no choice'—is enormous. The AI transition intensifies this pull by providing sophisticated justifications: the innovation imperative, the competitive necessity of adoption, the claim that refusing AI is refusing the future. These narratives are not lies but systematic self-deceptions—they function to relieve the builder of the burden of asking whether the acceleration she participates in serves purposes she can defend.

The organizational form of bad faith appears when companies treat AI adoption as strategic necessity rather than strategic choice. Christensen's framework reveals that incumbents claiming 'we have no choice but to disrupt ourselves' are refusing to examine whether self-disruption serves long-term value or short-term survival instincts. Beauvoir would diagnose this as institutional bad faith—the collective refusal to acknowledge that organizational responses to AI involve genuine moral choices about who benefits, who bears costs, and what kind of work culture is being constructed. The claim 'the market demands it' converts a political-economic decision into a natural fact, precisely the operation bad faith performs.

Situated Freedom

The antidote is not heroic individualism but honest self-appraisal practiced at individual and institutional levels. The builder confronting her own bad faith must ask: Am I using this tool because it serves the work, or because it spares me a difficulty I would rather avoid? Am I building this feature because it should exist, or because the tool makes it easy to build? The organization confronting its collective bad faith must ask: Are we adopting AI to expand human capability or to reduce labor costs? Are we preserving the conditions under which our members develop judgment, or are we optimizing them away in pursuit of quarterly metrics? These questions have no algorithmic answers—they require the sustained, uncomfortable, non-optimizable practice of examining our commitments and accepting responsibility for them.

Origin

Bad faith (mauvaise foi) was Sartre's concept, developed in Being and Nothingness (1943) as the denial of either facticity (our embeddedness in circumstances) or transcendence (our capacity to go beyond them). Beauvoir refined the concept by showing that bad faith is not merely individual psychology but is structurally induced by oppressive situations. Women in patriarchal societies were encouraged into bad faith—to see their confinement as natural rather than constructed. The AI application recognizes that technology discourses similarly naturalize what are actually choices, converting moral decisions into technical inevitabilities and thereby producing bad faith at civilizational scale.

Key Ideas

Technology does not determine. AI shapes possibilities and probabilities but does not eliminate choice; the builder who claims 'I must' is refusing to acknowledge 'I choose' and thereby evading responsibility for consequences.

Market as constructed. Market demands are not natural forces but human arrangements whose rules reflect power and can be contested; citing market necessity is bad faith when the speaker benefits from the market's current configuration.

Collective bad faith. Organizations adopt AI 'because everyone is' or 'to stay competitive' without examining whether the adoption serves articulated purposes—institutional self-deception that distributes responsibility until it disappears.

The anguish of acknowledgment. Recognizing one's choices as choices produces discomfort—the weight of responsibility, the risk of error—and bad faith functions to relieve this discomfort at the cost of honesty and moral seriousness.

Antidote through interrogation. The practice of asking 'Why am I building this?' and refusing answers that cite external necessity—demanding justifications that acknowledge values, trade-offs, and the agent's genuine freedom to choose otherwise.

Further Reading

  1. Simone de Beauvoir, The Ethics of Ambiguity (Philosophical Library, 1947)
  2. Jean-Paul Sartre, Being and Nothingness (Washington Square Press, 1956), Part One, Chapter Two
  3. Langdon Winner, 'Do Artifacts Have Politics?' (1980)
  4. Andrew Feenberg, Transforming Technology (Oxford, 2002)
  5. Shoshana Zuboff, The Age of Surveillance Capitalism (PublicAffairs, 2019)
  6. Evgeny Morozov, To Save Everything, Click Here (PublicAffairs, 2013)