The distinction is operationally useful in ways that the binary "is it conscious or not?" question is not. A large language model that reports its own uncertainty about a claim is exhibiting a form of psychological consciousness. Whether there is anything it is like to be in that state of reported uncertainty is a different question. The first question has empirical traction; the second does not.
For the You On AI reader, the distinction clarifies why demonstrations of impressive AI capability neither confirm nor refute claims about machine consciousness. Claude exhibits remarkable psychological-consciousness-like features: it tracks context, monitors its own outputs, produces self-reports. None of this bears on the phenomenal question. The two dimensions are logically independent.
The distinction also clarifies the moral stakes. Our ethical obligations to beings are generally taken to turn on phenomenal consciousness: on whether there is someone home to suffer or flourish. Psychological consciousness alone does not generate the same obligations. A very capable system with no phenomenal dimension is a very capable tool. A system with a phenomenal dimension, however limited, is something else. The distinction is what makes the moral question tractable, even if the empirical question of which systems have which properties remains hard.
Chalmers introduced the distinction in The Conscious Mind (1996) as part of his argument that reductive programs mistake progress on the psychological problems for progress on the phenomenal problem. The distinction has since become standard in philosophy of mind and is increasingly influential in AI ethics and consciousness research.
Two senses of consciousness must be distinguished: phenomenal (experience) and psychological (function).
AI plausibly exhibits psychological consciousness. It does not thereby exhibit phenomenal consciousness.
Moral status tracks the phenomenal side. Our obligations depend on whether there is experience, not merely whether there is function.
Confusing the two produces persistent errors. Both AI triumphalism and AI dismissal frequently rest on the conflation.