Normative understanding is the cognitive achievement that enables humans to participate in rule-governed social life. Beginning around age three, children do not merely follow rules; they enforce them, protesting violations even when not personally affected. A three-year-old will correct another child who 'plays the game wrong,' admonish an adult who 'cheats,' and insist on fair distribution of resources. This normative capacity is not taught in any straightforward sense—children are not explicitly instructed in the concept of normativity. It emerges from participation in joint activities where shared goals generate mutual expectations about proper contribution. The expectations carry force: participants should contribute fairly, should follow agreed procedures, should reciprocate cooperative effort. This 'should' is the origin of human morality in Tomasello's naturalistic account—not divine command, not rational principle, but a natural outgrowth of the cooperative cognitive structures shared intentionality creates.
The experimental evidence is specific and cross-culturally robust. In studies across diverse populations, children who participated in collaborative activities shared rewards more equally with their partner than children who achieved the same outcome working in parallel. The collaboration itself generated the norm of fairness—the mutual expectation that joint efforts should produce joint benefits. The norm was not modeled by adults or explicitly taught. It emerged from the structure of the shared activity. This finding supports Tomasello's thesis that human morality has evolutionary-developmental roots in the cooperative practices that shared intentionality makes possible.
Normative enforcement is a collective achievement. In human communities, norms are maintained through reciprocal monitoring—each person checks others' behavior against the shared standard and signals when behavior falls short. The monitoring is distributed across the community, operates continuously, and is largely automatic. Violations trigger immediate social responses—disapproval, correction, exclusion—that enforce the norm without requiring centralized authority. This distributed enforcement is what makes norms stable across large populations and what distinguishes human normative systems from the dominance hierarchies that structure other primate societies. Dominance is maintained by individual power; norms are maintained by collective agreement.
AI collaboration creates a one-sided normative structure. The human brings norms—expectations of accuracy, honesty, quality—and monitors the machine's outputs against those standards. The machine does not enforce norms in return. When Claude produces confidently wrong output, the error is not a normative violation (the machine is not being dishonest) but a statistical failure (the pattern-matching produced an inaccurate result). The human must catch the error through individual vigilance, because the machine does not participate in the normative framework that would make mutual monitoring possible. This unilateral enforcement is unprecedented and demanding. In human collaboration, the cognitive load of quality assurance is shared; in human-AI collaboration, it falls entirely on the human side.
The quiet erosion that Chapter 8 of the Tomasello volume diagnoses is the gradual weakening of norms when they are no longer reciprocally enforced. The mechanism is not dramatic but incremental: when you work extensively with a partner that does not hold you to standards, the standards begin to feel optional. Not through conscious decision but through the atrophy of the social-enforcement muscle. Individual discipline can maintain standards, but it is a weaker force than social enforcement because it requires continuous effortful attention, whereas social enforcement operates automatically. The professional whose cognitive work occurs primarily with AI and only secondarily with human colleagues is not abandoning professional standards deliberately. The standards erode through the displacement of the social context that maintained them.
Tomasello's account of normative understanding synthesized philosophical analyses of normativity (Kant, Searle, Korsgaard) with empirical developmental research showing that normative capacity emerges in early childhood through participation in collaborative activities. The framework appeared most fully in A Natural History of Human Morality (2016), where Tomasello traced the evolutionary origins of moral psychology from the second-personal morality of small-scale cooperation to the objective morality of large-scale institutional life.
Emerges from joint activity. Children develop normative expectations about fairness, reciprocity, and proper contribution through participation in collaborative activities whose shared goals give those expectations their normative force.
Enforcement is reciprocal. In human communities, norms are maintained through distributed monitoring—each person checking others and being checked—creating a stable equilibrium that AI collaboration cannot replicate.
Foundation of morality. Tomasello's naturalistic account locates moral origins not in rational principle or divine command but in the cooperative structure of shared intentionality—the 'should' of joint commitment.
AI does not enforce. Machine outputs may violate human norms (accuracy, rigor, honesty) without triggering the social consequences that enforcement requires; the errors are statistical, not normative, and the asymmetry places the full monitoring burden on the human.
Erosion through displacement. When the social context that reciprocally enforces norms is replaced by machine interaction that does not enforce them, the norms weaken through atrophy—not abandonment but the gradual relaxation of standards no longer socially maintained.