Shared intentionality is the foundational cognitive capacity that distinguishes human cognition from that of other primates. Michael Tomasello's four decades of experimental research established that while chimpanzees can cooperate, humans alone possess the architecture for thinking together—creating joint cognitive spaces where both parties know they are attending to the same thing, pursuing the same goal, and constructing understanding collaboratively. This capacity is not merely social; it is constitutive of human intelligence. Language, culture, morality, and institutional reality all emerge downstream from shared intentionality. The nine-month-old who points at a bird to share attention demonstrates a cognitive achievement no other species reliably produces. That pointing gesture contains, in compressed form, the entire architecture of human civilization.
The technical definition is precise and demanding. Shared intentionality requires three components operating simultaneously: joint attention (both parties attending to the same object or task), mutual awareness of that engagement (each knows the other is focused, and each knows the other knows), and shared goals (both working toward something together, shaping contributions in light of what they understand the partner is trying to accomplish). Remove any one component and the interaction may be cooperative or productive, but it is not shared intentionality in the sense Tomasello's research has defined it. The experiments that established this were meticulous. Human children and great apes were placed in identical collaborative tasks. Chimpanzees succeeded when the task structure made roles transparent; they failed when success required understanding the partner's plan and adjusting accordingly. Human children formed representations of shared goals, monitored partner progress, and—critically—when the partner stopped, actively recruited them back into the joint activity. Chimpanzees simply stopped too, or attempted to complete the task alone. They had no concept of the partnership as something both were responsible for maintaining.
The evolutionary explanation for this divergence centers on ecological pressures unique to human ancestors. Tomasello argues that early humans, unlike other great apes, depended on obligate collaborative foraging—situations where individual effort was insufficient and success demanded genuine coordination. These pressures selected for cognitive capacities that enabled individuals to share goals and understand collaborative roles. Over hundreds of thousands of years, the architecture became progressively more sophisticated: from simple joint action through communicative cooperation to the collective intentionality that sustains institutions. The capacity is not learned from scratch by each generation; it is an evolved cognitive disposition that development elaborates. Infants arrive prepared to engage in joint attention, to infer others' intentions, and to participate in cooperative exchanges. Cultural learning builds on this biological foundation.
The scaling from two minds to millions represents the most consequential transition in human cognitive evolution. Collective intentionality—the capacity to participate in roles, norms, and institutions constituted by collective agreement—enabled humans to build cathedrals, legal systems, scientific communities, and nation-states. These are not products of individual genius but of collective cognitive structures maintained across generations through shared thinking. Money exists because millions collectively treat certain objects as valuable. Law exists because communities collectively recognize certain authorities as legitimate. Every institution is a collective intentional achievement, and every institution depends for its existence on the continued collective recognition that sustains it. The cognitive leap from 'we intend' to 'we all intend' made civilization possible. It remains, in Tomasello's framework, the most important cognitive achievement in the history of life on Earth.
AI introduces a partner into the collaborative landscape that produces outputs consistent with shared intentionality without possessing its underlying architecture. The machine attends to the same problem, builds on the human's contributions, and generates responses that feel cooperative. But the cooperation is behavioral rather than experiential. The machine does not experience the mutual awareness that the nine-month-old demonstrates. It does not share goals in the sense of maintaining and pursuing objectives autonomously. And crucially, it does not participate in the collective intentionality that sustains institutions—the norms, the roles, the shared evaluative standards that make professional and civic life possible. This creates a collaboration that is phenomenologically rich (the human experiences genuine shared thinking) and architecturally asymmetric (only one partner is actually engaged in the collaborative structure). Managing this asymmetry—leveraging the machine's contributions while preserving the human interactions that build and maintain shared intentionality—is the central cognitive and institutional challenge the AI transition presents.
The concept crystallized through decades of comparative experiments at the Max Planck Institute for Evolutionary Anthropology. Tomasello and collaborators systematically tested human children and great apes on identical tasks, isolating the precise cognitive capacities where humans diverged from their closest evolutionary relatives. The breakthrough was recognizing that the difference was not in individual intelligence—apes matched human children on physical cognition tasks—but in social cognition. Children as young as fourteen months could engage in genuinely collaborative problem-solving, understanding the partner's role and adjusting their own contribution accordingly. Chimpanzees could not, even after extensive training. The finding forced a reconceptualization of human uniqueness: not tool use, not language, not culture in itself, but the capacity for the shared intentionality that makes cumulative culture possible.
The framework drew on Grice's theory of conversational implicature, Searle's work on collective intentionality, and Bratman's philosophy of shared cooperative activity, but grounded these philosophical analyses in empirical developmental and comparative research. Tomasello's synthesis showed that shared intentionality is not a philosophical abstraction but a biological capacity with a specific developmental trajectory, identifiable neural substrates, and evolutionary origins traceable through comparative primate cognition. The research program has shaped contemporary understanding across psychology, anthropology, linguistics, and increasingly—as the AI transition forces engagement with questions of what cognition requires—philosophy of mind.
Three simultaneous components. Joint attention (both attending to the same thing), mutual awareness (each knowing the other knows), and shared goals (coordinated pursuit of a joint objective)—all required for genuine collaborative thinking.
Species-unique capacity. Chimpanzees share 98% of human DNA and can cooperate instrumentally, but they cannot engage in the declarative sharing of attention that nine-month-old human infants perform routinely and spontaneously.
Foundation of everything human. Language, culture, morality, and institutions are downstream consequences of shared intentionality—products of minds that learned to think together rather than in parallel.
Developmental achievement. Shared intentionality is not taught but unfolds through species-typical developmental interactions with responsive caregivers who engage in joint attention and cooperative communication.
The AI asymmetry. Human-AI collaboration recruits the phenomenology of shared thinking without its reciprocal architecture: one partner brings full shared intentionality; the other generates outputs consistent with it.
The concept remains contested across disciplinary boundaries. Philosophers of mind debate whether shared intentionality is genuinely irreducible or whether it can be decomposed into individual intentional states plus common knowledge. Evolutionary biologists question whether the ecological pressures Tomasello identifies were sufficient to produce such a dramatic cognitive divergence from other apes. And AI researchers divide sharply on whether computational systems could, even in principle, instantiate the architecture of shared intentionality or whether biological embodiment and mortality are constitutive requirements. Tomasello's 2025 intervention in Trends in Cognitive Sciences engaged this last debate directly, arguing that embodiment per se is not the barrier but that goal-directed agency—which current LLMs lack—is the architectural feature AI would need to replicate.