The BetterUp research partnership, announced in April 2026, represents Brown's most direct empirical engagement with the question of whether AI deployment produces its promised performance gains, and with what determines the answer. The emerging data challenges the dominant narrative that AI adoption correlates with performance improvement. Whether AI improves organizational performance, the research finds, depends less on how much leaders use the technology than on the kind of culture they create around it. Organizations with high trust, psychological safety, and the relational behaviors Brown's BRAVING framework specifies see significant performance gains from AI deployment. Organizations with low trust, defensive cultures, and armored leadership see returns that are marginal, negligible, or negative, despite equivalent investment in the tools themselves.
The strategic implications reframe the AI investment calculus. The billions flowing into AI capabilities will not pay off without corresponding investment in the human foundations (trust, development, and culture) that determine whether the tools actually improve performance. Brown's formulation, quoted in the Fortune coverage: "I don't blame the C-suite for wanting to believe it's about skills because that's easier than creating a deep sense of mattering and courage and trust and agency. But you think building trust is expensive? Try not having trust. That's going to cost you everything."
The mechanism the research illuminates connects to the broader Orange Pill thesis that AI is an amplifier. An amplifier's output quality depends on the quality of the signal it receives. Low-trust cultures produce low-quality signals: workers concealing difficulties, gaming metrics, suppressing legitimate concerns, performing adoption while sabotaging execution. The amplifier faithfully carries the corrupted signal to scale, producing outputs that look impressive in metrics reports but deliver diminished value at the point of use. High-trust cultures produce high-quality signals: honest assessments, genuine creative risk, willingness to flag problems, collaborative iteration. The amplifier carries these signals to scale as well, producing the performance gains the technology promises.
The research connects to the AI shaming findings by revealing how organizational culture predicts whether individual shame responses dominate or dissolve. In low-trust cultures, workers suppress AI use to avoid the shame of being seen as needing help, and that suppression carries a measurable performance cost. In high-trust cultures, AI use is normalized as professional practice, analogous to consulting a reference book, and the shame vector that drives suppression never activates. The difference is not a matter of cultural preference; it is a measurable organizational outcome.
The partnership between Brené Brown Education and Research Group and BetterUp was announced in April 2026, with initial findings reported in Fortune. The research continues the empirical trajectory established by Brown's Dare to Lead research program, extending its frameworks into the specific domain of AI-mediated organizational performance.
Culture over capability. Organizational culture predicts AI performance outcomes more reliably than investment in the tools themselves.
Amplifier quality. The AI amplifier carries whatever signal organizational culture produces — corrupted or genuine.
Shame vector activation. Low-trust cultures activate the AI shaming vector; high-trust cultures dissolve it.
Investment reframe. Technical AI investment without corresponding investment in relational infrastructure systematically underperforms.
Strategic imperative. Trust-building is not a humanistic preference but a financial requirement for AI ROI.