Dacher Keltner is Professor of Psychology at UC Berkeley, where he has taught since 1996, and directs the Berkeley Social Interaction Lab. His research focuses on the science of emotion, power, and prosocial behavior, with particular emphasis on awe. His two-component model of awe, developed with Jonathan Haidt and published in 2003, established the framework that has generated hundreds of subsequent studies across cultures. His major works include Born to Be Good: The Science of a Meaningful Life (2009) and Awe: The New Science of Everyday Wonder and How It Can Transform Your Life (2023). Keltner is the founding faculty director of the Greater Good Science Center, served as scientific consultant to Pixar on Inside Out and Inside Out 2, and is Chief Scientific Advisor at Hume AI, where he works to integrate emotion science into AI development.
There is a parallel reading that begins with Keltner's institutional positioning rather than his intellectual contributions. The Greater Good Science Center, founded in 2001, emerged during Silicon Valley's growing interest in scientizing virtue — the moment when tech platforms discovered that quantified prosociality could become a product category. Keltner's trajectory from Berkeley professor to Chief Scientific Advisor at Hume AI and West Co. follows a well-worn pattern: academic expertise lending legitimacy to commercial AI development, emotional science becoming the substrate for affective computing, research on human flourishing converted into optimization targets for machine learning systems.
The two-component model of awe — vastness plus accommodation — becomes particularly concerning when operationalized by AI companies. What Keltner describes as ego-reduction and prosocial behavior looks different when engineered at scale by platforms. The 'small self' research translates readily into user engagement metrics; the twenty-four vocal emotions and twenty-eight facial states become surveillance categories; the Possible Worlds Theory positions imagination as something to be modeled, measured, and eventually guided by recommendation systems. The refusal of the 'false binary between celebration and resistance' may itself be the problem — not because critique is inherently superior, but because participation in AI development from inside eliminates the possibility of genuinely independent assessment. The ecology of wonder becomes the productization of wonder, and the scientist who can measure awe becomes the consultant who can manufacture it.
Keltner was raised in a small Mexican town and across rural California by counterculture parents — a biographical detail that shaped his interest in emotion as both scientific object and meaningful human experience. He earned his PhD from Stanford and completed postdoctoral training under Paul Ekman, the pioneer of modern emotion research, bringing to Berkeley the Ekmanian emphasis on cross-cultural universality combined with sociological attention to power and institutional context.
His research program has operated at the unusual intersection of rigorous empirical method and philosophical ambition. He has measured awe through facial coding, physiological signatures, self-report, and behavioral experiments, while simultaneously engaging the philosophical tradition running through Burke, Kant, and William James. The combination is what makes his framework applicable to questions — like the AI transition — that exceed any single methodological approach.
His engagement with AI is notable for its refusal of the false binary between celebration and resistance. As Chief Scientific Advisor at Hume AI and founding scientific advisor at West Co., he participates in AI development rather than standing outside and critiquing its results. His collaboration with former student Alan Cowen produced the computational emotion research underpinning empathic AI — the discovery that the human voice conveys at least twenty-four emotions without words, that facial expressions map onto at least twenty-eight distinct emotional states, that these mappings hold across cultures and can be modeled by machine learning.
His recent work on imagination — the Possible Worlds Theory published in the Annual Review of Psychology in 2025 — argues that imagination is central to human social life and that play, spirituality, morality, and art are all exercises of the capacity to construct possible worlds. This adds a further element to the ecology of wonder: AI challenges us to define and protect imagination, not because AI will replace it, but because a culture optimized for efficiency may stop exercising it.
Keltner earned his PhD from Stanford in 1989, joined UC Berkeley in 1996, founded the Greater Good Science Center in 2001, and has been Chief Scientific Advisor at Hume AI since the company's founding by Alan Cowen. His public work includes the Greater Good podcast, the Greater Good Magazine, and extensive collaboration with arts and entertainment including Pixar.
Empirical awe. Operationalized what Burke, Kant, and James described philosophically.
Two-component model. Vastness plus accommodation, published with Haidt in 2003.
Small self research. Documented the specific ego-reduction and prosocial consequences of awe.
Empathic AI. Participated in AI development through Hume AI rather than standing outside it.
Possible Worlds Theory. Recent work on imagination as central to social life and threatened by optimization cultures.
The question of intellectual integrity versus institutional capture depends entirely on which aspect of the work we're examining. On the empirical research program (100% Keltner): the two-component model, the cross-cultural studies, the physiological measurements represent genuine scientific progress — awe moved from philosophical concept to measurable phenomenon with replicable methods and predictive power. On the philosophical ambition (80% Keltner): the integration of Burke, Kant, and James with modern emotion science is rare and valuable, though the contrarian reading correctly notes that operationalization always loses something essential to the original concepts.
On institutional positioning (60% contrarian): the trajectory from Berkeley to Hume AI does follow Silicon Valley's pattern of academicizing product development, and the potential for research findings to become manipulation tools is real. But the arbitrating insight is that non-participation offers no protection — AI companies will build affective computing systems whether or not emotion scientists advise them. The relevant question isn't purity but influence over how these systems are constructed. On the 'small self' and empathic AI work (50/50): these represent both genuine insight into human emotional architecture and genuine risk of surveillance infrastructure. The weighting depends on governance — the same research that could enable mass emotional manipulation could also establish boundaries against it.
The synthetic frame the field actually needs: recognize that emotion science has become dual-use research. Keltner's value lies not in standing outside the AI transition but in carrying empirical rigor and philosophical seriousness into its construction. The ecology of wonder is threatened less by his participation than by the absence of people like him from these decisions.