The personal intelligences are Gardner's paired capacities of interpersonal intelligence (understanding other people) and intrapersonal intelligence (understanding oneself). In his 2025 Viblio interview, Gardner drew a sharp line: AI systems may master the 'technical' intelligences — linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial — but 'I have a great deal of difficulty in accepting AI as a significant participant in the personal intelligences. Because WHO would those selves or others actually BE?' In Gardner's argument, the personal intelligences require the embodied developmental history of a mortal creature among other mortal creatures, a condition no current AI inhabits. That distinction determines which forms of human contribution the AI age cannot replicate.
The pairing reflects a structural connection Gardner identified from the beginning: understanding others and understanding oneself are not separate capacities but two faces of the same underlying cognitive architecture — the capacity to model minds, whether another's or one's own. Damage to specific prefrontal regions impairs both simultaneously; development in one reinforces the other.
Gardner's 2025 position is precise. 'AI systems can simulate those experiences, but only individuals with flesh and blood — and with a finite life span of no more than a century — can truly experience them.' The finitude is constitutive. The awareness that the other person will die — that they are, like you, temporary and vulnerable — is what gives interpersonal engagement its moral weight. The awareness that one's own time is limited is what makes self-examination existentially consequential.
The simulation question matters practically. AI systems produce text that performs interpersonal and intrapersonal sensitivity convincingly. Users report feeling understood, supported, even emotionally met. Gardner's framework holds that the performance is real but the capacity is not — the system has no self that understands another self, no other-awareness built on shared vulnerability.
This distinction shapes the book's argument about which professions will remain most essentially human. Therapy, teaching, parenting, moral counsel, and the leadership of teams through genuine uncertainty each require personal intelligences the amplifier cannot supply. Each will be transformed but not replaced by AI tools that can simulate, but not inhabit, the capacities these professions deploy.
Gardner's original 1983 treatment paired interpersonal and intrapersonal intelligences deliberately, noting their structural connection. His 2025 elaboration responded directly to the AI transformation, specifying why these two capacities in particular resist amplification.
Structural pairing. Understanding others and understanding oneself are two faces of the same cognitive architecture.
Mortality as constitutive. The finitude of life is what gives personal-intelligence engagement its moral weight.
Simulation vs. possession. AI can produce outputs that perform personal intelligence without possessing the cognitive substrate.
The 'who' question. Gardner's pointed question — WHO would those selves or others actually be — identifies what the machine lacks.
Professions at the boundary. Therapy, teaching, moral counsel remain essentially human because they require personal intelligences.