Speech Act Theory — Orange Pill Wiki
CONCEPT

Speech Act Theory

Austin and Searle's framework holding that language performs actions—promises, requests, declarations—rather than merely transmitting information, revealing what AI's linguistic competence lacks.

Speech act theory, developed by J.L. Austin in the 1950s and systematized by John Searle in the 1960s–70s, holds that utterances are not primarily vehicles for conveying information but performances of social actions. When a manager says 'Can you have this done by Friday?', the utterance is not a question about capability but a request creating commitment, altering relationships, and changing the landscape of obligations. Austin distinguished locutionary acts (producing meaningful utterances), illocutionary acts (performing social actions through those utterances), and perlocutionary acts (producing effects on listeners). Searle formalized the conditions under which illocutionary acts succeed—speaker intention, conventional procedures, appropriate context. Winograd and Flores applied this framework to human-computer interaction, revealing that AI responses have the form of speech acts without illocutionary force.

In the AI Story


The theory emerged from ordinary language philosophy's insistence that meaning is use. Austin's 1955 Harvard lectures, published posthumously as How to Do Things with Words (1962), catalogued the variety of actions language performs: promising, warning, betting, marrying, naming ships. Searle's contribution was to formalize the structure—identifying the felicity conditions (sincerity, authority, conventional procedure, appropriate context) that distinguish successful from unsuccessful speech acts. A promise made without intention to keep it is insincere; a priest's 'I pronounce you married' spoken by someone without authority does not constitute marriage. The illocutionary force depends not just on words but on the speaker's standing, intentions, and the social framework recognizing the utterance as binding.

Winograd and Flores's application to computing identified a mismatch at the heart of conversational AI. Human utterances to machines are genuine speech acts—requests carrying intentional weight, descriptions shaping commitments. Machine responses have the syntactic structure of commitments ('I'll restructure the database schema') without illocutionary substance—the AI does not commit to anything, does not intend anything, does not undertake obligations. It produces tokens shaped by statistical patterns of how commitments are linguistically expressed. When humans interpret machine outputs as genuine commitments, treating statistical reliability as intentional undertaking, the collaboration inherits risks invisible to surface performance. The gap between form and force is the gap between a tool and a partner.

Origin

Austin's framework grew from his 1950s Oxford ordinary language philosophy seminars, challenging the logical positivists' assumption that language's primary function is stating facts that are true or false. He demonstrated that vast territories of language—questions, commands, promises, apologies—are not truth-evaluable at all; they are performances whose success depends on social conditions, not correspondence to reality. Searle, Austin's student at Oxford, systematized the insights into a formal theory that became foundational to philosophy of language, pragmatics, and—through Winograd and Flores—human-computer interaction design.

Key Ideas

Locutionary, illocutionary, perlocutionary. Three dimensions of utterance: producing meaningful words, performing social action through those words, and producing effects on the listener—only the first is purely linguistic.

Felicity conditions. Speech acts succeed when speaker possesses appropriate authority, intends what the utterance expresses, follows conventional procedure, and operates in a context recognizing the act as binding.

AI's missing illocutionary force. Language models produce tokens with the syntactic structure of promises, explanations, commitments—but lack the intentional and social substance making those structures genuine actions.

Design for commitment support. Technology should make the structure of organizational speech acts—requests, promises, assessments—visible and manageable, supporting coordination without replacing judgment about what to commit to.
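Winograd and Flores's design prescription can be made concrete as a "conversation for action": a small state machine in which only certain illocutionary moves (promise, decline, report, accept) are legal from each state. The sketch below is illustrative, not a reconstruction of their actual Coordinator software; all class and move names are invented for this example.

```python
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()   # requester has made a request
    PROMISED = auto()    # performer has committed to fulfil it
    REPORTED = auto()    # performer has declared the work complete
    SATISFIED = auto()   # requester has accepted the result
    DECLINED = auto()    # performer refused the request

class ConversationForAction:
    """Tracks one request/promise loop between two parties."""

    # Legal illocutionary moves from each state (an illustrative subset).
    TRANSITIONS = {
        (State.REQUESTED, "promise"): State.PROMISED,
        (State.REQUESTED, "decline"): State.DECLINED,
        (State.PROMISED, "report"): State.REPORTED,
        (State.REPORTED, "accept"): State.SATISFIED,
        (State.REPORTED, "reject"): State.PROMISED,  # back to work
    }

    def __init__(self, requester, performer, content):
        self.requester = requester
        self.performer = performer
        self.content = content
        self.state = State.REQUESTED
        self.history = [("request", requester)]

    def move(self, act, speaker):
        nxt = self.TRANSITIONS.get((self.state, act))
        if nxt is None:
            # An utterance with no conventional force here is infelicitous.
            raise ValueError(f"'{act}' has no force in state {self.state.name}")
        self.state = nxt
        self.history.append((act, speaker))

cfa = ConversationForAction("manager", "engineer", "schema change by Friday")
cfa.move("promise", "engineer")
cfa.move("report", "engineer")
cfa.move("accept", "manager")
assert cfa.state is State.SATISFIED
```

The point of the structure is the one the entry makes: the system records and surfaces who has committed to what, but the commitments themselves are made by people; the machine rejects moves that lack conventional standing rather than simulating the standing itself.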


Further reading

  1. J.L. Austin, How to Do Things with Words (Harvard University Press, 1962)
  2. John Searle, Speech Acts: An Essay in the Philosophy of Language (Cambridge University Press, 1969)
  3. Terry Winograd and Fernando Flores, Understanding Computers and Cognition (Ablex, 1986)
  4. John Searle, 'A Taxonomy of Illocutionary Acts' (1975)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.