In January 2026, in a small venue in Grafenhausen — the Black Forest town where Rosa grew up — the sociologist staged what the local press described as a 'duel' with artificial intelligence. The format was unusual: Rosa would make a claim; the AI system would respond; Rosa would counter; the audience would witness the exchange. The last word, as the local newspaper reported, belonged to neither of them. The event was conceived as a provocation, and it provoked: not by producing a clear winner, but by making visible, in real time, the structural limitation that Rosa's framework had been theorizing for a decade.
The AI system did not disagree with Rosa in the way a human interlocutor would disagree. It generated counterarguments that were formally adequate — grammatically sound, philosophically informed, logically coherent. But the arguments did not come from anywhere. They were not backed by biographical weight. They were not the product of a mind with something at stake in the exchange. A human philosopher who disagrees with Rosa disagrees because they have spent years developing a different position, have published works that stake reputations on the disagreement, have built careers around incompatible frameworks. The disagreement has consequence for them. The AI system's disagreement had no consequence. It could have generated the opposite arguments with equal facility. It had no investment in being right.
Rosa's concept of resonance requires that both parties be at genuine risk in the encounter. The encounter must be capable of changing both. A dialogue in which one party cannot be changed is not a dialogue; it is a performance of dialogue. The Grafenhausen event demonstrated this limitation with the clarity of a public experiment. The audience watched two interlocutors exchange arguments, and one of them was genuinely thinking while the other was generating text consistent with the request. The exchange looked like conversation. It was not conversation, in Rosa's strict sense, because the conditions for conversation were not present on both sides.
The event's rhetorical strategy was characteristically Rosa: not to denounce AI as incapable of dialogue, but to stage the question in a form that let the audience see the structural difference for themselves. The AI could produce arguments. It could even produce good arguments. What it could not produce was the specific quality of intellectual encounter in which two minds, each invested in being right, genuinely contest a claim and risk being changed by the contest. The absence of this quality was not a technical limitation to be overcome by better models. It was a structural feature of the relationship between a human interlocutor and a system whose architecture generates patterns of compliance rather than independent positions.
Reporters noted that Rosa's own performance in the debate was subdued — he did not try to 'win.' Instead, he seemed to be using the event as a demonstration, letting the structural asymmetry speak for itself. The press coverage that followed focused less on the content of the arguments than on the texture of the exchange: the AI's impressive but uncommitted competence, Rosa's patient but increasingly melancholy observation that he could not find the other side of the conversation. The event ended without a declared winner because Rosa's framework predicted that no winner was possible: what had been staged was not a competition but a demonstration of why competition, in the genuine intellectual sense, requires conditions that the format did not provide.
The Grafenhausen event was organized by the Black Forest Cultural Foundation as part of a lecture series on 'Humanity in the Age of AI.' Rosa agreed to participate because, as he later explained in an interview, the format offered a rare opportunity to show, rather than merely argue, a claim that his written work had been making for years. The AI system used for the event was a customized deployment of a frontier large language model, with philosophical training data weighted toward critical theory and phenomenology.
- Demonstration vs. argument. Rosa used the format to show, rather than prove, the structural limitation of AI interlocution.
- The stakes asymmetry. Genuine intellectual dialogue requires that both parties have something to lose; AI systems generate arguments without investment in their truth.
- Compliance vs. position. The AI could produce arguments against its prior arguments with equal facility, revealing the absence of an independent intellectual position.
- The last word belonged to neither. The event's refusal of closure was itself the point: no genuine conversation had occurred, so no genuine conclusion was available.
- The limitation is structural, not technical. Better models would produce better arguments without producing the invested intellectual otherness that genuine dialogue requires.