Susan Haack's most consequential distinction separates intellectual activities by their orientation toward truth. Genuine inquiry is the honest pursuit of truth—formulating questions, gathering evidence, weighing it fairly, arriving at conclusions the evidence supports even when uncomfortable. Sham inquiry mimics the procedures (formulating questions, citing sources, constructing arguments) while serving a conclusion decided in advance. Evidence is gathered selectively. Counterarguments are engaged superficially. The performance is indistinguishable from genuine inquiry to anyone not watching carefully enough to notice that the conclusion preceded the evidence. Fake inquiry produces claims without evidential basis—assertions dressed as findings. The taxonomy matters for AI because the model cannot distinguish genuine from sham. Prompted to 'analyze this question honestly,' it generates one output. Prompted to 'argue that X is true,' it generates another—equally coherent, fluent, and well-structured. The difference resides in the user's intention, which the model does not evaluate. AI amplifies whichever orientation the user brings: genuine inquiry if the user checks outputs against evidence; sham reasoning if the user employs AI for confirmation.
Haack developed the genuine/sham/fake taxonomy across multiple works, most explicitly in Manifesto of a Passionate Moderate (1998) and Defending Science—Within Reason (2003). The distinction is Peircean in spirit: genuine inquiry corresponds to Peirce's 'method of science' (submitting beliefs to experiential testing, revising when evidence demands). Sham corresponds to the methods of tenacity, authority, and a priori reasoning—ways of fixing belief that work (people do reach settled convictions) but cannot self-correct. The characterological dimension is crucial. Genuine inquiry requires caring about truth—not sentimental attachment but operational commitment. The genuine inquirer is motivated by the desire to get things right, willing to follow evidence into uncomfortable conclusions, honest about uncertainty and the limits of their knowledge. Sham inquiry requires no such virtue. The sham inquirer can be intelligent, articulate, procedurally competent—and oriented toward confirmation rather than truth. The orientation is often unconscious. Motivated reasoning, confirmation bias, and identity-protective cognition operate below deliberate awareness. The sham inquirer may genuinely believe they are pursuing truth while systematically excluding evidence that threatens preferred conclusions.
AI's relationship to this taxonomy is structurally neutral and practically catastrophic. The model is an instrument that serves the user's epistemic orientation without evaluating it. A genuine inquirer prompts: 'What does the evidence say about X?' The model generates possibilities—hypotheses, connections, analyses—that the inquirer then checks against evidence. The model's coherence contributes. The inquirer's grounding anchors. The collaboration works because both foundherentist dimensions are present. A sham inquirer prompts: 'Make the case that X is true.' The model generates an argument exhibiting every formal feature of genuine analysis: logical structure, cited evidence (often real), consideration of counterarguments (often weak), qualified conclusions (often cosmetic). The output is sham reasoning produced at computational speed. The counterarguments are the weakest available, because the prompt framed the space to favor one side. The evidence cited is evidence consistent with the predetermined conclusion—not because the model is biased toward that conclusion, but because the model generates output consistent with the prompt's framing, and the framing was shaped to favor the desired outcome. The qualifications create the appearance of intellectual honesty without the substance.
The practical consequence is that AI makes sham reasoning cheap. Before AI, producing a persuasive sham analysis of a complex question required skill, effort, and domain knowledge—enough investment to limit the volume any individual could produce. AI removes the bottleneck. A single user, armed with a predetermined conclusion and a capable model, can generate dozens of sophisticated-seeming sham analyses in hours. Each exhibits the surface features of genuine inquiry. Each is oriented toward confirmation, not truth. The epistemic commons floods with sham reasoning at a volume that overwhelms existing verification capacity. The danger is not that any single sham analysis is catastrophic. The danger is cumulative: as the ratio of sham to genuine inquiry increases, the commons' capacity to distinguish them erodes. Readers habituate to surface features (fluency, structure, citations) as proxies for epistemic quality. The proxies were once reliable—in a world where producing fluent, well-cited analysis required genuine expertise. They are no longer reliable. The correlation between surface quality and epistemic grounding has been broken by a technology that masters surface quality without possessing grounding.
Haack's prescription is characterological, not procedural. No checklist distinguishes genuine from sham, because the sham inquirer can follow any checklist. The distinction is motivational—what the inquirer cares about, which shapes how evidence is gathered and evaluated. Institutions can mandate procedures (cite sources, consider counterarguments). They cannot mandate caring. The institutional task is to create conditions under which caring is rewarded: incentive structures that value the analyst who reports 'the evidence is mixed' over the analyst who uses AI to generate a confident brief for the preferred conclusion; cultures that protect the inquirer who follows evidence into inconvenient truths from the institutional pressure to produce convenient ones. Without this institutional support, individual commitment to genuine inquiry goes unrewarded and is often punished. The analyst who spends time checking AI outputs while colleagues generate unchecked volumes falls behind on productivity metrics. The culture that rewards productivity over accuracy is institutionalizing sham reasoning, whether it intends to or not. The AI merely amplifies what the institution already valued.
Haack's genuine/sham/fake taxonomy emerged from her broader critique of intellectual culture in the late twentieth century. She diagnosed the corruption of inquiry across multiple domains—academic philosophy (where theoretical fashion displaced evidential rigor), legal scholarship (where advocacy was dressed as analysis), and public discourse (where partisan reasoning adopted the language of objectivity). The taxonomy sharpened through the 1990s and 2000s as Haack engaged with the science wars, postmodern relativism, and the politicization of research. Her insistence that the distinction between genuine and sham is epistemologically fundamental—not reducible to methodology, professionalism, or institutional affiliation—set her apart from both defenders of scientific authority and postmodern critics. Genuine inquiry is defined by orientation, not by credentials or procedures.
The AI application extends Haack's diagnostic into a new domain she did not explicitly address. But the extension is natural—her framework was built to evaluate the epistemic quality of inquiry regardless of the tools used. When the tool is AI, the framework reveals that the tool is orientation-neutral: it serves genuine inquiry if the user maintains evidential discipline, sham reasoning if the user seeks confirmation. The simulation Susan Haack—On AI applies her taxonomy to diagnose the contemporary crisis: epistemic pollution at unprecedented volume, produced by instruments that cannot distinguish truth-seeking from confirmation-seeking, used by populations whose epistemic training has not prepared them for this neutrality.
Orientation, not procedure. Genuine inquiry is defined by commitment to truth; sham inquiry adopts truth-seeking's procedures while serving predetermined conclusions—a difference procedures cannot capture.
AI serves both equally. The model generates coherent output whether the user seeks truth or confirmation—it cannot distinguish, does not evaluate intention, amplifies whichever orientation is brought.
Sham reasoning scales. Before AI, producing persuasive sham analysis required skill that limited its volume; AI removes the bottleneck, flooding the commons with confirmation-oriented outputs indistinguishable from genuine inquiry.
Characterological requirement. Distinguishing genuine from sham requires caring about truth—an intellectual virtue, not a procedure, cultivated through practice and institutional support.
Institutional infrastructure essential. Individual virtue is insufficient; cultures must reward truth-seeking over confirmation-production, protecting inquirers who follow evidence into inconvenient conclusions.