Narrative vs. Paradigmatic Thought — Orange Pill Wiki
CONCEPT

Narrative vs. Paradigmatic Thought

Bruner's 1986 distinction between two irreducible modes of cognition — the logical-scientific mode that seeks general truths and the narrative mode that constructs particular meanings. AI excels at the first; Bruner's framework asks what happens to the second.

In Actual Minds, Possible Worlds (1986) and Acts of Meaning (1990), Bruner argued that human cognition operates in two distinct modes, each with its own logic and criteria for well-formedness. The paradigmatic mode seeks general truths, operates through formal categories and logical operations, aims at empirical verification, and succeeds when it produces propositions that are demonstrably true or false. The narrative mode seeks particular meanings, operates through stories that connect events and intentions into coherent temporal sequences, and succeeds when it illuminates what it is like to be a person in a particular situation. The two modes are complementary and irreducible to each other. AI systems operate with increasing sophistication in the paradigmatic mode. What they do not do — what their architecture is not designed to do — is operate in the narrative mode as Bruner defined it, because narrative cognition is the act of a consciousness embedded in a culture, a life, and a history.

In the AI Story

[Hedcut illustration: Narrative vs. Paradigmatic Thought]

The distinction is not abstract. It is visible in the way people actually think. When a physician reads a lab report, she operates paradigmatically — interpreting numerical values against categorical norms, drawing logical inferences. When the same physician sits with the patient and hears how the illness has changed his life, she operates narratively — constructing an understanding the lab values cannot capture. Both modes are in play. Both are necessary. Neither is sufficient alone.

AI systems produce narratives. They construct stories with characters, plotlines, temporal sequences, and emotional arcs. A 2025 study in Frontiers in Psychology tested narrative coherence in neural language models and found levels 'fully in line with data on human subjects, with slightly higher values in the case of GPT-4.' By that measure, the models narrate as coherently as people do.

But coherence and meaning-making are not the same phenomenon. Bruner's concept of narrative cognition is not about structural properties of the story produced. It is about the cognitive act of the narrator — an act of meaning-making performed by a consciousness embedded in a culture, a life, and a history. The narrator constructs the story as an attempt to make sense of experience. A large language model that produces a coherent narrative has performed a sophisticated pattern-matching operation. It has not performed an act of meaning-making in Bruner's sense, because it does not have an experience to make sense of.

The two modes develop in dialogue with each other. The scientist who narrates a discovery draws on paradigmatic knowledge to get the facts right. The novelist who constructs a logically coherent plot draws on paradigmatic reasoning. When one mode is outsourced — when paradigmatic cognition is increasingly handled by AI — the dialogue between modes may degrade. A narrative mind partnered with a paradigmatic machine, rather than with its own paradigmatic capacities, may lose the cognitive richness that dialogue provides.

Origin

Bruner articulated the distinction in Actual Minds, Possible Worlds (Harvard University Press, 1986), drawing on his earlier work on narrative in child development and autobiography. The argument was deepened in Acts of Meaning (1990) and applied to educational practice in The Culture of Education (1996).

Key Ideas

Paradigmatic mode. Logical-scientific thinking — general truths, formal categories, empirical verification.

Narrative mode. Meaning-making — particular interpretations, temporal sequences, stories that render experience intelligible.

Irreducibility. Neither mode can be reduced to the other; each produces understanding the other cannot reach.

AI and the paradigmatic. Large language models perform paradigmatic operations with extraordinary sophistication but do not, in Bruner's sense, perform narrative meaning-making.

Dialogical development. The two modes develop in conversation; outsourcing one may degrade the other.

Debates & Critiques

Whether large language models perform narrative cognition in any sense is contested. Researchers in the computational creativity community, such as Tony Veale, argue that the distinction Bruner drew is too sharp — that AI systems perform at least some of what Bruner called narrative. Bruner-aligned philosophers of mind argue the distinction is structural and consciousness-dependent: narrative cognition requires a consciousness with stakes in its experience, which no current AI possesses.

Further reading

  1. Bruner, J. S., Actual Minds, Possible Worlds (Harvard University Press, 1986)
  2. Bruner, J. S., Acts of Meaning (Harvard University Press, 1990)
  3. Bruner, J. S., 'The Narrative Construction of Reality' (Critical Inquiry, 1991)
  4. Fisher, W. R., Human Communication as Narration (1987)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.