Interactional expertise is Collins's name for the competence he discovered he had developed in gravitational wave physics after decades of immersion: he could discuss the physics with the fluency of a practitioner, pass as a physicist in conversation, and even survive a formal test in which judges could not reliably distinguish his answers from those of actual physicists. But he had never built an instrument, run an experiment, or contributed original data. His expertise was interactional, not contributory. The distinction is the single most useful concept for understanding what large language models do and do not possess.
Collins developed the concept through the peculiar experience of being, effectively, a test case for himself. Over forty years of attending gravitational wave physics conferences, reading every published paper, interviewing hundreds of physicists, he had acquired something real — a genuine fluency in the domain that practitioners themselves recognized. But he had never done physics. The conceptual question was what to call this. The answer was interactional expertise: real expertise, acquired through linguistic participation in the community's discourse, not through contribution to its practice.
Large language models possess interactional expertise at a scale Collins could not have imagined. A model trained on the entire corpus of a domain's literature has read more than any individual practitioner and can discuss the domain with a comprehensive fluency that would pass Collins's Imitation Game in many fields simultaneously. This is a genuine achievement. Collins does not dismiss it. In his 2021 lecture "Must Intelligent Machines Be Social Machines?" he called it "a first but impressive step" toward reproducing aspects of human linguistic competence.
The critical question is what interactional expertise is sufficient for. When the task requires discussing a domain fluently — summarizing literature, explaining concepts, helping a practitioner find connections across fields — interactional expertise may be exactly the right tool. When the task requires contributing original knowledge to the field, evaluating novel claims against the community's evolving standards, or exercising the social judgment that distinguishes competent from excellent practice, interactional expertise hits its ceiling. The machine can discuss the work with the fluency of a participant. It cannot do the work with the judgment of one.
The Surrender, in Collins's 2018 framing, is the cultural pathology of mistaking interactional for contributory expertise — of accepting the machine's fluent output as evidence of understanding, when the fluency is precisely the kind of competence that does not require understanding. The asymmetry is insidious: verifying that an interactional output is substantively correct requires the very contributory expertise the machine lacks. Users who possess that expertise can evaluate the output. Users who lack it cannot, and the machine's fluency becomes, over time, its own evidence of substance.
Collins introduced the concept in Rethinking Expertise (2007), co-authored with Robert Evans, as part of a broader Periodic Table of Expertises. The concept crystallized Collins's decades of reflection on his own ambiguous position vis-à-vis the gravitational wave community. The Imitation Game methodology — in which a panel of judges attempts to distinguish a non-expert interactional expert from a genuine contributory expert — provided the empirical tool for validating the distinction.
Real expertise. Interactional expertise is not fake or superficial. It is a genuine achievement acquired through sustained linguistic engagement with a community.
Not contributory. Interactional expertise does not include the capacity to contribute original knowledge, which requires social participation in the community's practices, not merely its discourse.
The LLM condition. Large language models possess interactional expertise at unprecedented scale and across unprecedented breadth, making them genuinely useful while exposing them to systematic failure where contributory expertise is required.
Evaluative asymmetry. Only someone with contributory expertise can reliably evaluate whether an interactional output is substantively correct — which means users who most need the machine's help are least equipped to verify it.
Critics have argued that the interactional/contributory distinction is too sharp — that sufficiently deep interactional engagement shades into contributory competence, and that the boundary is fuzzier than Collins allows. Collins's response has been to insist on the Imitation Game as the empirical test: if specialist judges can reliably distinguish the interactional expert from the contributory one on tasks that require social judgment, the distinction is real.