The Gricean maxims are the structural rules Paul Grice identified as governing human conversation. Speakers, in the typical case, follow four principles: the maxim of quality (do not say what you believe to be false or lack adequate evidence for), the maxim of quantity (provide as much information as required, no more and no less), the maxim of relation (be relevant to the purposes of the exchange), and the maxim of manner (be clear: avoid ambiguity and obscurity). These are not moral imperatives speakers consciously choose to follow. They are architectural features of a communication system that evolved for cooperative purposes. Speakers follow them because the system works only if they do. Departures take different forms: openly flouting a maxim is marked and generates a conversational implicature, covertly violating one deceives, and opting out of the exchange signals non-cooperation. Tomasello grounded Grice's philosophical analysis in the biological substrate of shared intentionality, showing that cooperative communication is not a cultural overlay on a more basic signaling system but the natural expression of a cognitive architecture built for thinking together.
Grice developed the maxims to explain how speakers communicate more than they literally say. When someone says 'It's cold in here,' the literal meaning is a temperature report. The conversational implicature—what the speaker actually means—is often a request to close the window or turn up the heat. The implicature is inferable because the listener assumes the speaker is following the maxim of relation (being relevant) and that a mere temperature report would not be relevant unless it implied a request. The maxims are the background assumptions that make implicature possible, and implicature is what makes human communication efficient—allowing speakers to communicate complex meanings with minimal linguistic forms.
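This inference pattern has a standard computational rendering in the Rational Speech Acts (RSA) framework of Frank and Goodman, and a minimal sketch makes the maxims' inferential role concrete. The toy model below is an illustration under stated assumptions, not part of Grice's own account, and it uses the classic scalar-implicature case ('some' suggests 'not all') rather than the temperature example above: a pragmatic listener reasons about a speaker who says only true things and prefers informative utterances, and that assumption of cooperation is what licenses the inference beyond literal meaning.

```python
# Toy Rational Speech Acts model for a scalar implicature:
# did the addressee eat "some" or "all" of the cookies?
STATES = ["some-not-all", "all"]
UTTERANCES = ["some", "all"]

# Literal semantics: "some" is literally true in both states; "all" in one.
TRUE_IN = {"some": {"some-not-all", "all"}, "all": {"all"}}

def literal_listener(utterance):
    """P(state | utterance) for a listener using only literal meaning."""
    compatible = [s for s in STATES if s in TRUE_IN[utterance]]
    return {s: (1 / len(compatible) if s in compatible else 0.0) for s in STATES}

def pragmatic_speaker(state):
    """P(utterance | state): the speaker says only true things (quality) and
    prefers utterances a literal listener would find informative (quantity)."""
    scores = {u: literal_listener(u)[state] for u in UTTERANCES if state in TRUE_IN[u]}
    total = sum(scores.values())
    return {u: scores.get(u, 0.0) / total for u in UTTERANCES}

def pragmatic_listener(utterance):
    """P(state | utterance): the listener assumes a cooperative speaker."""
    joint = {s: pragmatic_speaker(s)[utterance] / len(STATES) for s in STATES}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

print(pragmatic_listener("some"))  # {'some-not-all': 0.75, 'all': 0.25}
```

Hearing 'some', the pragmatic listener shifts belief toward 'some but not all' precisely because a cooperative speaker who knew 'all' held would have said so. Drop the assumption of cooperation and the implicature evaporates, which is the sense in which the maxims are the background that makes implicature possible.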
AI systems are trained to produce outputs consistent with Gricean maxims. Claude's responses are relevant to the prompt, informative beyond the literal question, structured for clarity, and (in the vast majority of cases) accurate. The system behaves as though it is following the maxims—and the as-though is sufficient to recruit the trust mechanisms that human cooperative communication evolved to produce. When a speaker follows the maxims consistently, the human listener infers cooperative motivation: the speaker wants to help me understand. The inference is reliable for human speakers, whose communicative competence and cooperative motivation were built through the same developmental process. For machine speakers, the inference may be unreliable: the system follows maxims because optimization produced outputs matching the maxims' form, not because the system cares about the human's understanding.
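The structural point can be made concrete with a deliberately crude sketch. Suppose, purely hypothetically (none of the metric names below corresponds to any real training objective, and real systems are not trained this way), that an optimizer scores outputs against cheap surface proxies for relation, quantity, and manner. Quality has no comparably cheap proxy, because checking truth means consulting the world rather than the text, so it simply drops out of the score:

```python
# Hypothetical illustration only: no real system is trained on these metrics.
# Three maxims get cheap surface proxies; quality (truth) gets none, because
# verifying truth requires consulting the world, not the text.

def relation_score(prompt: str, response: str) -> float:
    """Crude relevance proxy: share of prompt content words echoed back."""
    content = {w.lower().strip(".,!?") for w in prompt.split() if len(w) > 3}
    echoed = {w.lower().strip(".,!?") for w in response.split()}
    return len(content & echoed) / len(content) if content else 1.0

def quantity_score(response: str, lo: int = 30, hi: int = 200) -> float:
    """Crude informativeness proxy: reward responses inside a length band."""
    n = len(response.split())
    if n < lo:
        return n / lo
    if n > hi:
        return hi / n
    return 1.0

def manner_score(response: str, target_len: float = 25.0) -> float:
    """Crude clarity proxy: shorter average sentence length scores higher."""
    sentences = [s for s in response.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    if not sentences:
        return 0.0
    avg = sum(len(s.split()) for s in sentences) / len(sentences)
    return min(1.0, target_len / avg)

def form_reward(prompt: str, response: str) -> float:
    """Average of the three surface proxies. Note what is missing: nothing
    here checks whether the response is true. Quality has no term, so a
    policy optimized against this reward satisfies the form of three maxims
    while the fourth goes entirely unmeasured."""
    return (relation_score(prompt, response)
            + quantity_score(response)
            + manner_score(response)) / 3.0
```

A policy optimized against such a reward reliably produces the cooperative form of three maxims while the fourth goes unmeasured, which is the asymmetry the next paragraph's example turns on.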
The trustworthiness question is therefore not whether AI outputs are accurate (a verifiable empirical question) but whether the cooperative form of those outputs warrants the trust it recruits (a structural question about the relationship between form and motivation). The Orange Pill's Deleuze passage was relevant, informative, and clear—it satisfied the maxims of relation, quantity, and manner. It violated the maxim of quality (truth), but the violation was invisible because the other maxims were satisfied so thoroughly that the trust they recruited suspended the verification that would have caught the error. The lesson is not that AI cannot be trusted but that the trust it recruits through cooperative form must be verified through independent means, because the cooperative form may be architectural rather than motivational.
Paul Grice introduced the maxims in his 1967 Harvard William James Lectures, published as 'Logic and Conversation' in 1975 and later collected in Studies in the Way of Words (1989). The framework became foundational in linguistics, philosophy of language, and pragmatics. Tomasello's contribution was to provide the evolutionary-developmental grounding: the maxims are not arbitrary conventions but structural features of a communication system that evolved in small-scale cooperative groups where effective coordination was survival-critical.
Four architectural principles. Quality (truth), quantity (informativeness), relation (relevance), and manner (clarity)—the background assumptions enabling conversational efficiency and implicature.
Not conscious rules. Speakers follow maxims automatically, not through deliberate decision—the cooperative infrastructure operates like breathing, below reflective awareness in ordinary interaction.
Enable saying more than is said. Maxims make conversational implicature possible—allowing speakers to communicate complex meanings efficiently by relying on shared assumptions of cooperation.
AI outputs satisfy the form. Machine-generated responses are relevant, informative, and clear, following the maxims not from cooperative motivation but because training optimization produces outputs that match the maxims' surface properties.
Form recruits unwarranted trust. When cooperative signals are strong, human trust mechanisms activate automatically—creating vulnerability when the form is optimized but the underlying cooperative orientation is absent.