In 2024, writing in Business Horizons, Tim Hannigan, Ian McCarthy, and André Spicer coined the term 'botshit' to describe the specific pathology that emerges at the interface between AI output and human consumption. Bullshit, in Frankfurt's precise sense, is discourse produced without concern for truth. Botshit is what happens when a human accepts such output as though it were truth-oriented and acts on it without performing the evaluative work that would distinguish plausible from accurate. The term captures a cascading epistemic failure: the system produces bullshit because it cannot care about truth; the human converts the bullshit into botshit by failing to care about truth in the specific context of evaluating the system's output.
The conversion from bullshit to botshit is not a moral failure. It is an attention failure — a consequence of the conditions under which AI output is consumed. The output arrives quickly. It is polished. It sounds authoritative. The builder is in a state of flow, moving fast, building with the specific momentum that AI tools make possible. The evaluative pause that would distinguish plausible from accurate — the moment of stepping back, checking the reference, asking 'Is this actually true?' — is a friction that the flow state resists.
Frankfurt's framework illuminates why this pause is so difficult to maintain. The builder in flow is a person whose will is aligned: the desire to build and the endorsement of the desire to build are unified, producing the phenomenology of freedom. The evaluative pause interrupts this alignment. It requires the builder to step out of the flow state, to activate the critical faculty that the flow state has temporarily suspended, to become a different kind of thinker — skeptical, careful, oriented toward truth rather than momentum. The pause is not just a time cost. It is a volitional shift, and the volitional shift requires effort that the flow state's momentum actively resists.
The builder who maintains the evaluative pause despite the resistance is doing something Frankfurt's framework values above almost everything else: maintaining the authority of the reflective self over the productive self, ensuring that second-order evaluation continues to govern first-order activity, keeping the person a person rather than sliding into the wanton's mode of unreflective production. The builder who does not maintain the pause — who accepts polished output, moves on, builds the next thing, and never checks whether the philosophical reference was accurate — is producing botshit.
Frankfurt's most disturbing insight, in this context, is that bullshit is self-reinforcing. The more bullshit a person consumes without detecting it, the more their capacity for detection atrophies. The framework within which truth and falsity are distinguished weakens with disuse, like any cognitive capacity not exercised. A builder who accepts AI-generated output uncritically for six months is not the same evaluator she was at the start. The muscle that asks 'Is this true?' has weakened. The standard of what counts as acceptable output has drifted toward plausibility and away from truth — not through any conscious decision but through gradual erosion of a capacity not being used.
Hannigan, McCarthy, and Spicer introduced the term in 'Beware of botshit: How to manage the epistemic risks of generative chatbots,' published in Business Horizons in 2024, extending Frankfurt's analytical framework into the domain of organizational decision-making. A parallel argument, that ChatGPT's output is bullshit in Frankfurt's sense, appeared the same year in Hicks, Humphries, and Slater's 'ChatGPT is Bullshit' (Ethics and Information Technology, 2024).
The term has since been adopted in education, journalism, and knowledge work more broadly as a vocabulary for naming a specific failure mode that previous terminology did not capture. 'Hallucination' misdescribes the phenomenon by implying perceptual error in a system that perceives nothing. 'Disinformation' implies an intent to deceive, and 'misinformation' implies falsehood, when the real problem is indifference to truth, which yields true and false statements alike. Botshit names the specific cascade through which truth-indifferent production meets truth-indifferent consumption and produces downstream action with no truth-orientation anywhere in the loop.
Attention failure, not moral failure. The conversion from bullshit to botshit is structural. It happens under conditions the flow state produces, not because the consumer lacks integrity.
The flow state as epistemic risk. The phenomenology of alignment that makes AI-augmented work pleasurable is also the phenomenology that resists the evaluative pause that critical reading requires.
Self-reinforcing erosion. Each uncaught instance of botshit consumption weakens the capacity to catch the next. The muscle atrophies with disuse.
The remedy is structural. Individual willpower cannot reliably overcome the flow state's resistance to evaluation. Practices and structures must do the work the will alone cannot sustain.