The method addresses the hard problem of consciousness not by answering it but by refusing its framing. The problem arises from treating the physical and the experiential as two different things requiring a bridge. Neurophenomenology treats them as two perspectives on a single process — the organism's enacted engagement with its environment — and uses each perspective to refine the other. First-person reports guide interpretation of third-person data; third-person data constrain and refine first-person descriptions. The interaction produces knowledge that neither method can generate alone.
The method's relevance to AI is sharpest in the question of whether large language models' self-reports constitute introspection. Claude can generate text that describes uncertainty about its own processes, acknowledges limits to its self-knowledge, and reflects on its relationship to the user. Such text has the surface features of phenomenological report. Neurophenomenology reveals why it is not one: the text is generated by the same prediction mechanism that generates text about anything else, with no privileged access to the system's own processes. It is a prediction of what a phenomenological report would look like, not a report from the inside, because there is no inside from which to report — or, if there is, the system has no means of accessing it that is independent of the prediction mechanism.
The distinction between predicting and reporting is not cosmetic. Neurophenomenology's empirical productivity depends on first-person reports issuing from the experience they describe — reports whose accuracy can be refined through training, whose structural features can be correlated with neural dynamics, and whose variations across individuals and contexts can be used to adjudicate between competing hypotheses about consciousness. AI-generated self-reports cannot play this role, because they are not reports. They are fluent imitations of reports, and the distinction between imitation and actuality is precisely the distinction the enactive framework insists must be maintained.
Varela introduced the term and methodology in 'Neurophenomenology: A Methodological Remedy for the Hard Problem' (Journal of Consciousness Studies 3(4), 1996). Thompson developed the approach across Mind in Life (2007) and Waking, Dreaming, Being (2015).
First-person and third-person methods constrain each other. Neither is reducible to the other; both are necessary for understanding consciousness.
Phenomenological training is required. Reliable first-person report is a disciplined skill, not a spontaneous capacity.
AI self-reports are not reports. They are generated text that resembles reports, produced by a system without access to an inside.
The hard problem is dissolved, not solved. By refusing the separation that generates the problem, neurophenomenology opens a different kind of inquiry.